1 Prerequisites

1.1 Create the NDH User

1. In EasyOps, look up the EasyUser service easyuser_url: http://tfnode3.local:8176 (2.7.png)

2. Create the user through the EasyUser service

# Replace the URL with the easyuser_url obtained in step 1; the call returns success
curl -u admin:admin -d '{"username": "ndh_test_user","password":"abc123456"}' -X POST 'http://tfnode3.local:8176/secure/users' -H 'Content-Type:application/json'

# List existing users to confirm the user was created
curl -u admin:admin http://tfnode3.local:8176/secure/users

# Download the keytab for the new user
curl -u admin:admin -O http://tfnode3.local:8176/secure/keytab/ndh_test_user.keytab
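
Optionally, the downloaded keytab can be inspected before it is used (a minimal check, assuming a Kerberos client providing klist is installed on this host):

# Optional: list the principals contained in the downloaded keytab
klist -kt ndh_test_user.keytab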

1.2 YARN Resource Pool Configuration

1. Log in to EasyOps and open the YARN service (6.4.png)

2. Create a label and add resources to it (6.5.png, 6.6.png)

3. Create the resource pool. Note:

  • When adding or modifying queues, make sure the capacities of all existing queues sum to 100%
  • The maximum AM ratio must be a value between 0 and 1
  • When adding or modifying queues that share the same label, make sure the label capacities across those queues also sum to 100%; otherwise the configuration sync will fail (6.7.png, 6.8.png)

4. Save and sync the configuration (6.9.png, 6.10.png)

5. Open the YARN UI and check that the queue has taken effect (6.11.png, 6.12.png); for an optional command-line cross-check, see the snippet below
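
As an optional cross-check from the command line (a sketch, assuming the ResourceManager web address is xnode4.local:8088 as seen later in the job logs; replace it with your own RM address), the scheduler REST API should list the new queue:

# Query the Capacity Scheduler info from the ResourceManager REST API and list the queue names
curl -s 'http://xnode4.local:8088/ws/v1/cluster/scheduler' | grep -o '"queueName":"[^"]*"'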

1.3 Hive Test Database Setup

1. Log in to the machine where the Hive Client is deployed

2. Run kinit, authenticating as the hive user

# Run klist to look up the principal
klist -kt /etc/security/keytabs/hive/hive.service.keytab
# Then run kinit with the principal obtained
kinit -kt /etc/security/keytabs/hive/hive.service.keytab hive/xnode2.local@BDMS.COM

6.13.png

3. Create the test database and table with the hive command

create database if not exists ndh_test_db;
create table if not exists ndh_test_db.ndh_base_tbl(id int, name string);
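
The objects can then be confirmed with a quick non-interactive check (a minimal sketch, assuming the hive CLI is on the PATH of this client):

# Confirm that the test database and table exist
hive -e "show databases like 'ndh_test_db'; show tables in ndh_test_db;"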

6.14.png

1.4 Cascading Authorization via Access

1. Look up the Access URL
access_url: "http://172.30.4.108:16180" (2.8.png)

2. Look up the clusterId of the HDFS cluster that Access depends on: clusterId: easyops-cluster (2.9.png)

3. Call the Access API to grant database permissions: grant the create permission on database ndh_test_db to ndh_test_user

# Build the request body; replace clusterId with the HDFS clusterId looked up above
{"clusterId": "easyops-cluster","users": ["ndh_test_user"],"db":"ndh_test_db"}

# Build the request URL; replace http://172.30.4.108:16180 with the Access URL looked up above
http://172.30.4.108:16180/openapi/permission/v1/data-auth/grantForDB

# Send the POST request with curl
curl -d '{"clusterId": "easyops-cluster","users": ["ndh_test_user"],"db":"ndh_test_db"}' -X POST 'http://172.30.4.108:16180/openapi/permission/v1/data-auth/grantForDB' -H 'Content-Type:application/json'

Verify the cascading authorization:

  • Check that the Hive policy was created (2.10.png)
  • Query the authorization info of hive database ndh_test_db (2.11.png, 2.12.png)
  • Check that the HDFS policy was created by the cascade (2.13.png)
  • Query the authorization info of the database path /user/warehouse/ndh_test_db.db (2.14.png, 2.15.png)

4. Call the Access API to grant table permissions: grant all permissions on table ndh_test_db.ndh_base_tbl to ndh_test_user

# Build the request body; replace clusterId with the HDFS clusterId looked up above
{"clusterId": "easyops-cluster","product": "","users": ["ndh_test_user"],"authItems": [{"dataResource": {"serviceType": "HIVE","db": "ndh_test_db","tbl":"ndh_base_tbl","col":"*","associate":true},"dataAuthActionItems": [{"action": "select","expireTime": 1662453185000},{"action": "update","expireTime": 1662453185000},{"action": "alter","expireTime": 1662453185000},{"action": "drop","expireTime": 1662453185000}]}],"operator": "pb"}

# Build the request URL; replace http://172.30.4.108:16180 with the Access URL looked up above
http://172.30.4.108:16180/openapi/permission/v1/data-auth/grant

# Send the POST request with curl
curl -d '{"clusterId": "easyops-cluster","product": "","users": ["ndh_test_user"],"authItems": [{"dataResource": {"serviceType": "HIVE","db": "ndh_test_db","tbl":"ndh_base_tbl","col":"*","associate":true},"dataAuthActionItems": [{"action": "select","expireTime": 1662453185000},{"action": "update","expireTime": 1662453185000},{"action": "alter","expireTime": 1662453185000},{"action": "drop","expireTime": 1662453185000}]}],"operator": "pb"}' -X POST 'http://172.30.4.108:16180/openapi/permission/v1/data-auth/grant' -H 'Content-Type:application/json'

2 Component Service Functional Tests

2.1 HDFS Service Functional Test

2.1.1 Prerequisites

1. Log in to the server where the HDFS Client is deployed

2. Run kinit to authenticate

# Run klist to look up the principal
klist -kt /etc/security/keytabs/ndh_test_user.keytab
# Then run kinit with the principal obtained
kinit -kt /etc/security/keytabs/ndh_test_user.keytab  ndh_test_user@BDMS.COM

6.1.png 6.2.png

2.1.2 Verification Steps

1. List the HDFS file directories

hdfs dfs -ls /

2. Create a file on HDFS and append data to it

hdfs dfs -touchz  /user/ndh_test_user/test_file.txt
echo "<Text to append, ndh test: word count case>" | hdfs dfs -appendToFile - /user/ndh_test_user/test_file.txt

3. Download the file from HDFS

hdfs dfs -get /user/ndh_test_user/test_file.txt
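
Optionally, the content can be checked right away (a quick sanity check matching the expected result below):

# Compare the HDFS copy and the downloaded local copy
hdfs dfs -cat /user/ndh_test_user/test_file.txt
cat test_file.txt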

2.1.3 Expected Results

1. The commands execute normally

2. After creation, the file is visible in the HDFS file system (6.3.png)

3. The file from the previous step is downloaded successfully and its content matches what was written

6.4.png

2.2 YARN Service Functional Test

2.2.1 Prerequisites

1. Log in to the server where the YARN Client is deployed

2. Run kinit to authenticate

# Run klist to look up the principal
klist -kt /etc/security/keytabs/ndh_test_user.keytab
# Then run kinit with the principal obtained
kinit -kt /etc/security/keytabs/ndh_test_user.keytab  ndh_test_user@BDMS.COM

6.1.png 6.2.png

3. Locate the YARN Client deployment directory

Look up the EasyOps deployment root directory (BASE_DIR), e.g. /home/bdms (6.3.png, 6.4.png). Enter BASE_DIR and walk down to the YARN Client deployment directory (20221018113442217309170ee is the component instance name and differs per machine), e.g. /home/bdms/yarn/default_yarn/client/20221018113442217309170ee

# Replace /home/bdms with the EasyOps deployment root directory (BASE_DIR) obtained above
[root@xnode3 ~]# cd /home/bdms/
[root@xnode3 bdms]# ls
hadoop  hdfs  impala  java  knox  kyuubi  logs  monitor  nginx_ha  spark2  yarn  zookeeper
[root@xnode3 bdms]# cd yarn
[root@xnode3 yarn]# ls
default_yarn   package_shared
[root@xnode3 yarn]# cd default_yarn/
[root@xnode3 default_yarn]# ls
client  historyserver  nodemanager  resourcemanager
[root@xnode3 default_yarn]# cd client/
[root@xnode3 client]# ls
20221018113442217309170ee
# If the component was installed more than once, pick the most recently created directory
[root@xnode3 client]# cd 20221018113442217309170ee/
[root@xnode3 20221018113442217309170ee]# ls
config  current  data  keytab  logs  pid
[root@xnode3 20221018113442217309170ee]# pwd
/home/bdms/yarn/default_yarn/client/20221018113442217309170ee

2.2.2 Verification Steps

1. Run the MapReduce example wordcount job

# If the job has been run before, delete the input/output directories first
hdfs dfs -rmr -skipTrash /tmp/input
hdfs dfs -rmr -skipTrash /tmp/output 

hdfs dfs -mkdir -p /tmp/input
hdfs dfs -touchz /tmp/input/ndh_test_file.txt
echo "<Text to append, ndh test: word count case>" | hdfs dfs -appendToFile - /tmp/input/ndh_test_file.txt

# Set EXAMPLE_JAR_DIR to the YARN Client deployment directory located above
EXAMPLE_JAR_DIR=/home/bdms/yarn/default_yarn/client/20221018113442217309170ee
source ${EXAMPLE_JAR_DIR}/config/hadoop-env.sh
# The job queue is the queue created in the prerequisites
hadoop jar ${EXAMPLE_JAR_DIR}/current/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.0.jar wordcount -Dmapreduce.job.queuename=ndh_test /tmp/input /tmp/output
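
Once the job finishes, the word counts can be inspected directly from the output directory (a quick sanity check):

# List and print the wordcount output
hdfs dfs -ls /tmp/output
hdfs dfs -cat /tmp/output/part-r-*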

2. While the job is running, open the YARN UI and check for the running MR job named word count (6.5.png, 6.6.png, 6.7.png)

2.2.3 Expected Results

1. The job completes successfully

2. The job can be viewed normally in the MR HistoryServer (6.8.png, 6.9.png)

2.3 Hive Service Functional Test

2.3.1 Prerequisites

1. Log in to the server where the Hive Client is deployed

2. Run kinit, authenticating with the ndh_test_user keytab

# Run klist to look up the principal
klist -kt /etc/security/keytabs/ndh_test_user.keytab
# Then run kinit with the principal obtained
kinit -kt /etc/security/keytabs/ndh_test_user.keytab ndh_test_user@BDMS.COM

As shown below (6.10.png, 6.11.png)

3. In EasyOps, look up the HiveServer2 connection string, for example:

hive_link: "jdbc:hive2://xnode3.local:2182,xnode4.local:2182,xnode2.local:2182/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;principal=hive/_HOST@BDMS.COM" (6.12.png, 6.13.png)

2.3.2 Verification Steps

1. Basic Hive functionality

Connect to HiveServer2 with beeline and run the following SQL

# Connect with beeline
beeline -u "jdbc:hive2://xnode3.local:2182,xnode4.local:2182,xnode2.local:2182/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;principal=hive/_HOST@BDMS.COM?mapreduce.job.queuename=ndh_test"
# After the connection succeeds, run each of the following SQL statements
use ndh_test_db;  
create table hive_tbl_01(id int,name string,address string);
insert into table hive_tbl_01 select 1,"joey","zj";
select * from hive_tbl_01;
select count(1) from hive_tbl_01;
drop table hive_tbl_01;

Execution record (some log lines filtered):

[root@xnode3 ~]# beeline -u "jdbc:hive2://xnode3.local:2182,xnode4.local:2182,xnode2.local:2182/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;principal=hive/_HOST@BDMS.COM?mapreduce.job.queuename=ndh_test"
Connecting to jdbc:hive2://xnode3.local:2182,xnode4.local:2182,xnode2.local:2182/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;principal=hive/_HOST@BDMS.COM?mapreduce.job.queuename=ndh_test
log4j:WARN No appenders could be found for logger (org.apache.hive.jdbc.Utils).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Connected to: Apache Hive (version 2.3.8-1.1.0)
Driver: Hive JDBC (version 2.3.9)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 2.3.9 by Apache Hive
0: jdbc:hive2://xnode3.local:2182,xnode4.loca> use ndh_test_db;
INFO  : Compiling command(queryId=hive_20221025222450_9e73cb27-151a-4ebc-b206-c95d76836ff2);USER_NAME:ndh_test_user;IP:172.30.2.213;QUERY_STRING: use ndh_test_db
INFO  : Semantic Analysis Completed
INFO  : Returning Hive schema: Schema(fieldSchemas:null, properties:null)
INFO  : Completed compiling command(queryId=hive_20221025222450_9e73cb27-151a-4ebc-b206-c95d76836ff2);USER_NAME:ndh_test_user;IP:172.30.2.213;Time taken: 0.072 seconds
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Executing command(queryId=hive_20221025222450_9e73cb27-151a-4ebc-b206-c95d76836ff2); USER_NAME=ndh_test_user;IP=172.30.2.213;QUERY_STRING:use ndh_test_db
INFO  : Starting task [Stage-0:DDL] in serial mode
INFO  : Completed executing command(queryId=hive_20221025222450_9e73cb27-151a-4ebc-b206-c95d76836ff2);USER_NAME=ndh_test_user;IP=172.30.2.213;Time taken: 0.04 seconds
INFO  : OK
No rows affected (0.197 seconds)
0: jdbc:hive2://xnode3.local:2182,xnode4.loca>
0: jdbc:hive2://xnode3.local:2182,xnode4.loca> create table hive_tbl_01(id int,name string,address string);
INFO  : Compiling command(queryId=hive_20221025222450_860859f9-02ae-44b3-86bb-3fd958789b09);USER_NAME:ndh_test_user;IP:172.30.2.213;QUERY_STRING: create table hive_tbl_01(id int,name string,address string)
INFO  : Semantic Analysis Completed
INFO  : Returning Hive schema: Schema(fieldSchemas:null, properties:null)
INFO  : Completed compiling command(queryId=hive_20221025222450_860859f9-02ae-44b3-86bb-3fd958789b09);USER_NAME:ndh_test_user;IP:172.30.2.213;Time taken: 0.164 seconds
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Executing command(queryId=hive_20221025222450_860859f9-02ae-44b3-86bb-3fd958789b09); USER_NAME=ndh_test_user;IP=172.30.2.213;QUERY_STRING:create table hive_tbl_01(id int,name string,address string)
INFO  : Starting task [Stage-0:DDL] in serial mode
INFO  : Completed executing command(queryId=hive_20221025222450_860859f9-02ae-44b3-86bb-3fd958789b09);USER_NAME=ndh_test_user;IP=172.30.2.213;Time taken: 0.084 seconds
INFO  : OK
No rows affected (0.261 seconds)
0: jdbc:hive2://xnode3.local:2182,xnode4.loca>
0: jdbc:hive2://xnode3.local:2182,xnode4.loca> insert into table hive_tbl_01 select 1,"joey","zj";
INFO  : Compiling command(queryId=hive_20221025222451_511cb5fa-4405-4cf7-957f-15ec1eedca51);USER_NAME:ndh_test_user;IP:172.30.2.213;QUERY_STRING: insert into table hive_tbl_01 select 1,"joey","zj"
INFO  : Semantic Analysis Completed
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:_c0, type:int, comment:null), FieldSchema(name:_c1, type:string, comment:null), FieldSchema(name:_c2, type:string, comment:null)], properties:null)
INFO  : Completed compiling command(queryId=hive_20221025222451_511cb5fa-4405-4cf7-957f-15ec1eedca51);USER_NAME:ndh_test_user;IP:172.30.2.213;Time taken: 0.482 seconds
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Executing command(queryId=hive_20221025222451_511cb5fa-4405-4cf7-957f-15ec1eedca51); USER_NAME=ndh_test_user;IP=172.30.2.213;QUERY_STRING:insert into table hive_tbl_01 select 1,"joey","zj"
WARN  : Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
INFO  : Query ID = hive_20221025222451_511cb5fa-4405-4cf7-957f-15ec1eedca51
INFO  : Total jobs = 3
INFO  : Launching Job 1 out of 3
INFO  : Starting task [Stage-1:MAPRED] in serial mode
INFO  : Number of reduce tasks is set to 0 since there's no reduce operator
INFO  : number of splits:1
INFO  : Submitting tokens for job: job_1666075306696_0016
INFO  : Executing with tokens: [Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:easyops-cluster, Ident: (token for ndh_test_user: HDFS_DELEGATION_TOKEN owner=ndh_test_user, renewer=yarn, realUser=hive/xnode3.local@BDMS.COM, issueDate=1666707891779, maxDate=1667312691779, sequenceNumber=28, masterKeyId=9), Kind: HIVE_DELEGATION_TOKEN, Service: HiveServer2ImpersonationToken, Ident: 00 0d 6e 64 68 5f 74 65 73 74 5f 75 73 65 72 0d 6e 64 68 5f 74 65 73 74 5f 75 73 65 72 1a 68 69 76 65 2f 78 6e 6f 64 65 33 2e 6c 6f 63 61 6c 40 42 44 4d 53 2e 43 4f 4d 8a 01 84 0f 88 16 f4 8a 01 84 1e fb 2a f4 8e 1e e1 15]
INFO  : The url to track the job: http://xnode4.local:8088/proxy/application_1666075306696_0016/
INFO  : Starting Job = job_1666075306696_0016, Tracking URL = http://xnode4.local:8088/proxy/application_1666075306696_0016/
INFO  : Kill Command = /home/bdms/yarn/default_yarn/client/20221018113442217309170ee/current/bin/hadoop job  -kill job_1666075306696_0016
INFO  : Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
INFO  : 2022-10-25 22:25:06,970 Stage-1 map = 0%,  reduce = 0%
INFO  : 2022-10-25 22:25:18,630 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.61 sec
INFO  : MapReduce Total cumulative CPU time: 2 seconds 610 msec
INFO  : Ended Job = job_1666075306696_0016
INFO  : Starting task [Stage-7:CONDITIONAL] in serial mode
INFO  : Stage-4 is selected by condition resolver.
INFO  : Stage-3 is filtered out by condition resolver.
INFO  : Stage-5 is filtered out by condition resolver.
INFO  : Starting task [Stage-4:MOVE] in serial mode
INFO  : Moving data to directory hdfs://easyops-cluster/user/warehouse/ndh_test_db.db/hive_tbl_01/.hive-staging_hive_2022-10-25_22-24-51_141_7072788565978475619-1/-ext-10000 from hdfs://easyops-cluster/user/warehouse/ndh_test_db.db/hive_tbl_01/.hive-staging_hive_2022-10-25_22-24-51_141_7072788565978475619-1/-ext-10002
INFO  : Starting task [Stage-0:MOVE] in serial mode
INFO  : Loading data to table ndh_test_db.hive_tbl_01 from hdfs://easyops-cluster/user/warehouse/ndh_test_db.db/hive_tbl_01/.hive-staging_hive_2022-10-25_22-24-51_141_7072788565978475619-1/-ext-10000
INFO  : Starting task [Stage-2:STATS] in serial mode
INFO  : Table ndh_test_db.hive_tbl_01 stats: [numFiles=1, numRows=1, totalSize=10, rawDataSize=9]
INFO  : MapReduce Jobs Launched:
INFO  : Stage-Stage-1: Map: 1   Cumulative CPU: 2.61 sec   HDFS Read: 5291 HDFS Write: 89 SUCCESS
INFO  : Total MapReduce CPU Time Spent: 2 seconds 610 msec
INFO  : Completed executing command(queryId=hive_20221025222451_511cb5fa-4405-4cf7-957f-15ec1eedca51);USER_NAME=ndh_test_user;IP=172.30.2.213;Time taken: 28.657 seconds
INFO  : OK
No rows affected (29.15 seconds)
0: jdbc:hive2://xnode3.local:2182,xnode4.loca>
0: jdbc:hive2://xnode3.local:2182,xnode4.loca> select * from hive_tbl_01;
INFO  : Compiling command(queryId=hive_20221025222520_c08b09a4-3c9c-4373-9a9d-5dcd80316a05);USER_NAME:ndh_test_user;IP:172.30.2.213;QUERY_STRING: select * from hive_tbl_01
INFO  : Semantic Analysis Completed
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:hive_tbl_01.id, type:int, comment:null), FieldSchema(name:hive_tbl_01.name, type:string, comment:null), FieldSchema(name:hive_tbl_01.address, type:string, comment:null)], properties:null)
INFO  : Completed compiling command(queryId=hive_20221025222520_c08b09a4-3c9c-4373-9a9d-5dcd80316a05);USER_NAME:ndh_test_user;IP:172.30.2.213;Time taken: 0.348 seconds
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Executing command(queryId=hive_20221025222520_c08b09a4-3c9c-4373-9a9d-5dcd80316a05); USER_NAME=ndh_test_user;IP=172.30.2.213;QUERY_STRING:select * from hive_tbl_01
INFO  : Completed executing command(queryId=hive_20221025222520_c08b09a4-3c9c-4373-9a9d-5dcd80316a05);USER_NAME=ndh_test_user;IP=172.30.2.213;Time taken: 0.0 seconds
INFO  : OK
+-----------------+-------------------+----------------------+
| hive_tbl_01.id  | hive_tbl_01.name  | hive_tbl_01.address  |
+-----------------+-------------------+----------------------+
| 1               | joey              | zj                   |
+-----------------+-------------------+----------------------+
1 row selected (0.479 seconds)
0: jdbc:hive2://xnode3.local:2182,xnode4.loca>
0: jdbc:hive2://xnode3.local:2182,xnode4.loca> select count(1) from hive_tbl_01;
INFO  : Compiling command(queryId=hive_20221025222520_23c739ab-f970-4371-bf43-3c75bc6d4c25);USER_NAME:ndh_test_user;IP:172.30.2.213;QUERY_STRING: select count(1) from hive_tbl_01
INFO  : Semantic Analysis Completed
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:_c0, type:bigint, comment:null)], properties:null)
INFO  : Completed compiling command(queryId=hive_20221025222520_23c739ab-f970-4371-bf43-3c75bc6d4c25);USER_NAME:ndh_test_user;IP:172.30.2.213;Time taken: 0.285 seconds
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Executing command(queryId=hive_20221025222520_23c739ab-f970-4371-bf43-3c75bc6d4c25); USER_NAME=ndh_test_user;IP=172.30.2.213;QUERY_STRING:select count(1) from hive_tbl_01
INFO  : Completed executing command(queryId=hive_20221025222520_23c739ab-f970-4371-bf43-3c75bc6d4c25);USER_NAME=ndh_test_user;IP=172.30.2.213;Time taken: 0.0 seconds
INFO  : OK
+------+
| _c0  |
+------+
| 1    |
+------+
1 row selected (0.301 seconds)
0: jdbc:hive2://xnode3.local:2182,xnode4.loca>
0: jdbc:hive2://xnode3.local:2182,xnode4.loca> drop table hive_tbl_01;
INFO  : Compiling command(queryId=hive_20221025222521_d851f513-dc69-47b5-b3a2-1c247b6c68f3);USER_NAME:ndh_test_user;IP:172.30.2.213;QUERY_STRING: drop table hive_tbl_01
INFO  : Semantic Analysis Completed
INFO  : Returning Hive schema: Schema(fieldSchemas:null, properties:null)
INFO  : Completed compiling command(queryId=hive_20221025222521_d851f513-dc69-47b5-b3a2-1c247b6c68f3);USER_NAME:ndh_test_user;IP:172.30.2.213;Time taken: 0.059 seconds
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Executing command(queryId=hive_20221025222521_d851f513-dc69-47b5-b3a2-1c247b6c68f3); USER_NAME=ndh_test_user;IP=172.30.2.213;QUERY_STRING:drop table hive_tbl_01
INFO  : Starting task [Stage-0:DDL] in serial mode
INFO  : Completed executing command(queryId=hive_20221025222521_d851f513-dc69-47b5-b3a2-1c247b6c68f3);USER_NAME=ndh_test_user;IP=172.30.2.213;Time taken: 0.141 seconds
INFO  : OK
No rows affected (0.21 seconds)
0: jdbc:hive2://xnode3.local:2182,xnode4.loca>
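
For scripted regression runs, the same checks can also be driven non-interactively with beeline -e (a minimal sketch reusing the connection string above):

# Non-interactive variant of the basic check
beeline -u "jdbc:hive2://xnode3.local:2182,xnode4.local:2182,xnode2.local:2182/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;principal=hive/_HOST@BDMS.COM?mapreduce.job.queuename=ndh_test" -e "use ndh_test_db; select count(1) from ndh_base_tbl;"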

2.3.3 Expected Results

1. All SQL statements execute successfully.

2.4 Spark Service Functional Test

2.4.1 Prerequisites

1. Log in to the server where the Spark Client is deployed

2. Run kinit, authenticating with the ndh_test_user keytab

# Run klist to look up the principal
klist -kt /etc/security/keytabs/ndh_test_user.keytab

# Then run kinit with the principal obtained
kinit -kt /etc/security/keytabs/ndh_test_user.keytab ndh_test_user@BDMS.COM

As shown below (6.14.png, 6.15.png)

3. Locate the Spark Client deployment directory

Look up the EasyOps deployment root directory (BASE_DIR), e.g. /home/bdms (6.16.png, 6.17.png)

Enter BASE_DIR and walk down to the Spark Client deployment directory (202210181134422225d2ebefe is the component instance name and differs per machine), e.g. /home/bdms/spark2/default_spark2/client/202210181134422225d2ebefe/

# Replace /home/bdms with the EasyOps deployment root directory (BASE_DIR) obtained above
[root@ndh10 bdms]# cd /home/bdms/
[root@ndh10 bdms]# ls
hadoop  hdfs  impala  java  knox  kyuubi  logs  monitor  nginx_ha  spark2  yarn  zookeeper
[root@ndh10 bdms]# cd spark2
[root@ndh10 spark2]# ls
default_spark2  package_shared
[root@ndh10 spark2]# cd default_spark2/
[root@ndh10 default_spark2]# ls
client  jobhistoryserver
[root@ndh10 default_spark2]# cd client/
[root@ndh10 client]# ls
202210181134422225d2ebefe
# If the component was installed more than once, pick the most recently created directory
[root@ndh10 client]# cd 202210181134422225d2ebefe/
[root@ndh10 202210181134422225d2ebefe]# ls
config  current  data  keytab  logs  package  pid  ranger-0.5.4-1.1.3.5-spark-plugin
[root@ndh10 202210181134422225d2ebefe]# pwd
/home/bdms/spark2/default_spark2/client/202210181134422225d2ebefe

2.4.2 Verification Steps

1. Verify running a JAR job in Spark client mode; the queue must be specified with --queue

# Set SPARK_EXAMPLE_JAR_DIR to the Spark Client deployment directory located above
SPARK_EXAMPLE_JAR_DIR=/home/bdms/spark2/default_spark2/client/202210181134422225d2ebefe
spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode client --driver-memory 1g --executor-memory 1g --executor-cores 1 --queue ndh_test ${SPARK_EXAMPLE_JAR_DIR}/current/examples/jars/spark-examples_*.jar 100

2. Verify running a JAR job in Spark cluster mode; the queue must be specified with --queue

# Set SPARK_EXAMPLE_JAR_DIR to the Spark Client deployment directory located above
SPARK_EXAMPLE_JAR_DIR=/home/bdms/spark2/default_spark2/client/202210181134422225d2ebefe
spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster --driver-memory 1g --executor-memory 1g --executor-cores 1 --queue ndh_test ${SPARK_EXAMPLE_JAR_DIR}/current/examples/jars/spark-examples_*.jar 100
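
Optionally, both SparkPi runs can be cross-checked from the YARN command line (a minimal check using the yarn CLI on the same client; the example application name is "Spark Pi"):

# Confirm the SparkPi applications finished in YARN
yarn application -list -appStates FINISHED | grep "Spark Pi"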

3. Verify spark-sql execution; the queue must be specified with --queue

# Launch spark-sql
spark-sql --queue ndh_test

# Run the following SQL; ndh_test_db was already created in the prerequisites
use ndh_test_db;
create table spark_tbl_01(id int,name string,address string);
insert into table spark_tbl_01 select 1,"joey","zj";
select * from spark_tbl_01;
select count(1) from spark_tbl_01;
drop table spark_tbl_01;

Execution output:

spark-sql> use ndh_test_db;
Time taken: 0.086 seconds
spark-sql> create table spark_tbl_01(id int,name string,address string);
Time taken: 0.684 seconds
spark-sql> insert into table spark_tbl_01 select 1,"joey","zj";
2022-10-25T22:01:53.893+0800: [GC (Metadata GC Threshold)
Desired survivor size 53477376 bytes, new threshold 3 (max 15)
[PSYoungGen: 431034K->43212K(1073152K)] 465081K->82547K(1416192K), 0.0683764 secs] [Times: user=0.11 sys=0.09, real=0.06 secs]
2022-10-25T22:01:53.962+0800: [Full GC (Metadata GC Threshold) [PSYoungGen: 43212K->0K(1073152K)] [ParOldGen: 39334K->56988K(222720K)] 82547K->56988K(1295872K), [Metaspace: 122808K->122798K(1163264K)], 0.1926002 secs] [Times: user=0.67 sys=0.05, real=0.20 secs]
Time taken: 8.103 seconds
spark-sql> select * from spark_tbl_01;
1        joey        zj
Time taken: 1.181 seconds, Fetched 1 row(s)
spark-sql> select count(1) from spark_tbl_01;
1
Time taken: 0.822 seconds, Fetched 1 row(s)
spark-sql> drop table spark_tbl_01;
Time taken: 0.627 seconds
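
The same statements can also be run non-interactively, which is convenient for regression scripts (a minimal sketch; the queue name matches the prerequisites):

# Non-interactive spark-sql check against the pre-created test database
spark-sql --queue ndh_test -e "use ndh_test_db; select count(1) from ndh_base_tbl;"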

2.4.3 Expected Results

1. All jobs execute successfully.

2. Completed jobs can be viewed normally in the Spark HistoryServer

2.5 Kyuubi Service Functional Test

2.5.1 Prerequisites

1. Log in to the server where the Spark Client is deployed

2. Run kinit, authenticating with the ndh_test_user keytab

# Run klist to look up the principal
klist -kt /etc/security/keytabs/ndh_test_user.keytab

# Then run kinit with the principal obtained
kinit -kt /etc/security/keytabs/ndh_test_user.keytab ndh_test_user@BDMS.COM

As shown below (6.18.png, 6.19.png)

3. Obtain the Kyuubi JDBC connection string: in EasyOps, look up the Kyuubi service connection string, for example:
kyuubi_jdbc_url: "jdbc:hive2://xnode3.local:2182,xnode4.local:2182,xnode2.local:2182/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi-cluster;" (6.20.png, 6.21.png)

2.5.2 Verification Steps

1. Basic Kyuubi functionality:

Connect to Kyuubi with beeline and run the following SQL

# Connect with beeline
beeline -u "jdbc:hive2://xnode3.local:2182,xnode4.local:2182,xnode2.local:2182/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi-cluster;principal=hive/_HOST@CDH.HUATAI.COM;#spark.yarn.queue=root.ndh_test"

# After the connection succeeds, run each of the following SQL statements (ndh_test_db was already created in the prerequisites)
use ndh_test_db;
create table kyuubi_tbl_01(id int,name string,address string);
insert into table kyuubi_tbl_01 select 1,"joey","zj";
select * from kyuubi_tbl_01;
select count(1) from kyuubi_tbl_01;
drop table kyuubi_tbl_01;

Execution record (some log lines filtered):

[root@xnode3 ~]# beeline -u "jdbc:hive2://xnode3.local:2182,xnode4.local:2182,xnode2.local:2182/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi-cluster;principal=hive/_HOST@CDH.HUATAI.COM;#spark.yarn.queue=root.ndh_test"
Connecting to jdbc:hive2://xnode3.local:2182,xnode4.local:2182,xnode2.local:2182/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi-cluster;principal=hive/_HOST@CDH.HUATAI.COM;#spark.yarn.queue=root.ndh_test
log4j:WARN No appenders could be found for logger (org.apache.hive.jdbc.Utils).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Connected to: Apache Kyuubi (Incubating) (version 1.4.1-incubating)
Driver: Hive JDBC (version 2.3.9)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 2.3.9 by Apache Hive
0: jdbc:hive2://xnode3.local:2182,xnode4.loca> use ndh_test_db;
2022-10-25 22:14:47.713 INFO credentials.HadoopFsDelegationTokenProvider: getting token owned by ndh_test_user for: DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_465453460_206, ugi=ndh_test_user (auth:PROXY) via hive/xnode3.local@BDMS.COM (auth:KERBEROS)]]
+---------+
| Result  |
+---------+
+---------+
No rows selected (0.456 seconds)
0: jdbc:hive2://xnode3.local:2182,xnode4.loca> create table kyuubi_tbl_01(id int,name string,address string);
2022-10-25 22:14:54.578 INFO credentials.HadoopCredentialsManager: Send new credentials with epoch 0 to SQL engine through session 53ab3c3f-a179-43a5-b1a4-5fa9f70d5fb6
2022-10-25 22:14:54.625 INFO credentials.HadoopCredentialsManager: Update session credentials epoch from -1 to 0
+---------+
| Result  |
+---------+
+---------+
No rows selected (0.824 seconds)
0: jdbc:hive2://xnode3.local:2182,xnode4.loca>
0: jdbc:hive2://xnode3.local:2182,xnode4.loca> insert into table kyuubi_tbl_01 select 1,"joey","zj";
2022-10-25 22:14:54.653 INFO operation.ExecuteStatement: Processing ndh_test_user's query[55ed97c3-6e2d-4895-9a94-6d7e811a3140]: INITIALIZED_STATE -> PENDING_STATE, statement: insert into table kyuubi_tbl_01 select 1,"joey","zj"
2022-10-25 22:14:54.658 INFO operation.ExecuteStatement: Processing ndh_test_user's query[55ed97c3-6e2d-4895-9a94-6d7e811a3140]: PENDING_STATE -> RUNNING_STATE, statement: insert into table kyuubi_tbl_01 select 1,"joey","zj"
2022-10-25 22:14:59.388 INFO operation.ExecuteStatement: Processing ndh_test_user's query[55ed97c3-6e2d-4895-9a94-6d7e811a3140]: RUNNING_STATE -> FINISHED_STATE, statement: insert into table kyuubi_tbl_01 select 1,"joey","zj", time taken: 4.731 seconds
+---------+
| Result  |
+---------+
+---------+
No rows selected (4.761 seconds)
0: jdbc:hive2://xnode3.local:2182,xnode4.loca>
0: jdbc:hive2://xnode3.local:2182,xnode4.loca> select * from kyuubi_tbl_01;
2022-10-25 22:15:00.655 INFO operation.ExecuteStatement: Processing ndh_test_user's query[da7e9025-3be4-458e-829b-01d796493248]: RUNNING_STATE -> FINISHED_STATE, statement: select * from kyuubi_tbl_01, time taken: 1.224 seconds
+-----+-------+----------+
| id  | name  | address  |
+-----+-------+----------+
| 1   | joey  | zj       |
+-----+-------+----------+
1 row selected (1.256 seconds)
0: jdbc:hive2://xnode3.local:2182,xnode4.loca>
0: jdbc:hive2://xnode3.local:2182,xnode4.loca> select count(1) from kyuubi_tbl_01;
2022-09-23 09:50:35.358 INFO operation.ExecuteStatement: Processing hive's query[b68579be-bfd0-44ab-9ebf-520199ef5876]: RUNNING_STATE -> FINISHED_STATE, statement: select count(1) from kyuubi_tbl_01, time taken: 0.719 seconds
2022-10-25 22:15:01.640 INFO operation.ExecuteStatement: Query[effb801c-53ad-4c7b-adf1-0fe4f934756b] in FINISHED_STATE
2022-10-25 22:15:01.641 INFO operation.ExecuteStatement: Processing ndh_test_user's query[effb801c-53ad-4c7b-adf1-0fe4f934756b]: RUNNING_STATE -> FINISHED_STATE, statement: select count(1) from kyuubi_tbl_01, time taken: 0.939 seconds
+-----------+
| count(1)  |
+-----------+
| 1         |
+-----------+
1 row selected (0.974 seconds)
0: jdbc:hive2://xnode3.local:2182,xnode4.loca>
0: jdbc:hive2://xnode3.local:2182,xnode4.loca> drop table kyuubi_tbl_01;
+---------+
| Result  |
+---------+
+---------+
No rows selected (0.244 seconds)
0: jdbc:hive2://ndh10.huatai.com:2182,ndh11.h>

2.5.3 Expected Results

1. All SQL statements execute successfully.

2.6 Impala Service Functional Test

2.6.1 Prerequisites

1. Log in to the server where the Hive Client is deployed

2. Run kinit, authenticating with the ndh_test_user keytab

# Run klist to look up the principal
klist -kt /etc/security/keytabs/ndh_test_user.keytab

# Then run kinit with the principal obtained
kinit -kt /etc/security/keytabs/ndh_test_user.keytab ndh_test_user@BDMS.COM

As shown below (6.22.png, 6.23.png)

3. In EasyOps, look up the Impala beeline connection string, for example:
impala_link: "jdbc:hive2://xnode3.local:2182,xnode4.local:2182,xnode2.local:2182/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=impala-ha/hiveserver2;principal=impala/_HOST@BDMS.COM" (6.24.png, 6.25.png)

2.6.2 Verification Steps

1. Basic Impala functionality

Connect to Impalad with beeline and run the following SQL

# Connect with beeline
beeline -u "jdbc:hive2://xnode3.local:2182,xnode4.local:2182,xnode2.local:2182/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=impala-ha/hiveserver2;principal=impala/_HOST@BDMS.COM"

# After the connection succeeds, run each of the following SQL statements
use ndh_test_db;  
create table impala_tbl_01(id int,name string,address string);
insert into table impala_tbl_01 select 1,"joey","zj";
select * from impala_tbl_01;
select count(1) from impala_tbl_01;
drop table impala_tbl_01;
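
As with Hive, the check can also be scripted non-interactively (a minimal sketch using the impala_link from the prerequisites):

# Non-interactive variant of the Impala check
beeline -u "jdbc:hive2://xnode3.local:2182,xnode4.local:2182,xnode2.local:2182/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=impala-ha/hiveserver2;principal=impala/_HOST@BDMS.COM" -e "use ndh_test_db; show tables;"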

2.7 HBase Service Functional Test

2.7.1 Prerequisites

1. Log in to the server where the HBase Client is deployed

2. Run kinit to authenticate

# Run klist to look up the principal
klist -kt /etc/security/keytabs/hbase/hbase.keytab

# Then run kinit with the principal obtained
kinit -kt /etc/security/keytabs/hbase/hbase.keytab hbase/xnode2.local@BDMS.COM

6.26.png

3. Locate the HBase Client deployment directory

Look up the EasyOps deployment root directory (BASE_DIR), e.g. /home/bdms (6.27.png, 6.28.png). Enter BASE_DIR and walk down to the HBase Client deployment directory (20221018113442223fde7b95f is the component instance name and differs per machine), e.g. /home/bdms/hbase/default_hbase/client/20221018113442223fde7b95f

# Replace /home/bdms with the EasyOps deployment root directory (BASE_DIR) obtained above
[root@xnode2 ~]# cd /home/bdms/
[root@xnode2 bdms]# ls
elasticsearch  hbase  hdfs  hive  java  kafka  kerberos  ldap  logs  monitor  ntesmysqlpaas  spark2  yarn  zookeeper
[root@xnode2 bdms]# cd hbase/
[root@xnode2 hbase]# ls
default_hbase  package_shared
[root@xnode2 hbase]# cd default_hbase/
[root@xnode2 default_hbase]# ls
client  regionserver
[root@xnode2 default_hbase]# cd client/
[root@xnode2 client]# ls
20221018113442223fde7b95f
[root@xnode2 client]# cd 20221018113442223fde7b95f/
[root@xnode2 20221018113442223fde7b95f]# ls
config  current  data  keytab  logs  package  pid
[root@xnode2 20221018113442223fde7b95f]# pwd
/home/bdms/hbase/default_hbase/client/20221018113442223fde7b95f

2.7.2 Verification Steps

1. Connect to the HBase service via hbase shell

HBASE_HOME=/home/bdms/hbase/default_hbase/client/20221018113442223fde7b95f
export HBASE_CONF_DIR=${HBASE_HOME}/config
${HBASE_HOME}/current/bin/hbase shell

2. Verify basic HBase functionality

# 1. Create the namespace:
create_namespace 'test_ndh_hbase'

# 2. Create the table:
create 'test_ndh_hbase:student','id','address','info'

# 3. Insert data:
put 'test_ndh_hbase:student', 'Elvis', 'id', '22'
put 'test_ndh_hbase:student', 'Elvis','info:age', '26'
put 'test_ndh_hbase:student', 'Elvis','info:birthday', '1988-09-14 '
put 'test_ndh_hbase:student', 'Elvis','info:industry', 'it'
put 'test_ndh_hbase:student', 'Elvis','address:city', 'beijing'
put 'test_ndh_hbase:student', 'Elvis','address:country', 'china'

# 4. Scan the table
scan 'test_ndh_hbase:student'

# 5. Drop the table
disable 'test_ndh_hbase:student'
drop 'test_ndh_hbase:student'
exists 'test_ndh_hbase:student'

# 6. Drop the namespace
drop_namespace  'test_ndh_hbase'

Execution record:

[root@xnode2 bin]# HBASE_HOME=/home/bdms/hbase/default_hbase/client/20221018113442223fde7b95f
[root@xnode2 bin]# export HBASE_CONF_DIR=${HBASE_HOME}/config
[root@xnode2 bin]# ${HBASE_HOME}/current/bin/hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/bdms/hdfs/package_shared/hadoop-3.3.0-1.3.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/bdms/hbase/package_shared/hbase-2.2.6/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell
Use "help" to get list of supported commands.
Use "exit" to quit this interactive shell.
For Reference, please visit: http://hbase.apache.org/2.0/book.html#shell
Version 2.2.6, r88c9a386176e2c2b5fd9915d0e9d3ce17d0e456e, Tue Sep 15 17:36:14 CST 2020
Took 0.0053 seconds
hbase(main):001:0> create_namespace 'test_ndh_hbase'
Took 1.1733 seconds
hbase(main):002:0> create 'test_ndh_hbase:student','id','address','info'
Created table test_ndh_hbase:student
Took 1.3291 seconds
=> Hbase::Table - test_ndh_hbase:student
hbase(main):003:0>
hbase(main):004:0*
hbase(main):005:0* put 'test_ndh_hbase:student', 'Elvis', 'id', '22'
put 'test_ndh_hbase:student', 'Elvis','info:age', '26'
put 'test_ndh_hbase:student', 'Elvis','info:birthday', '1988-09-14 '
put 'test_ndh_hbase:student', 'Elvis','info:industry', 'it'
put 'test_ndh_hbase:student', 'Elvis','address:city', 'beijing'
Took 0.2563 seconds
put 'test_ndh_hbase:student', 'Elvis','address:country', 'china'
hbase(main):006:0> put 'test_ndh_hbase:student', 'Elvis','info:age', '26'
Took 0.0084 seconds
hbase(main):007:0> put 'test_ndh_hbase:student', 'Elvis','info:birthday', '1988-09-14 '
Took 0.0088 seconds
hbase(main):008:0> put 'test_ndh_hbase:student', 'Elvis','info:industry', 'it'
Took 0.0064 seconds
hbase(main):009:0> put 'test_ndh_hbase:student', 'Elvis','address:city', 'beijing'
Took 0.0081 seconds
hbase(main):010:0> put 'test_ndh_hbase:student', 'Elvis','address:country', 'china'
Took 0.0087 seconds
hbase(main):011:0>
hbase(main):012:0* scan 'test_ndh_hbase:student'
ROW                                      COLUMN+CELL
 Elvis                                   column=address:city, timestamp=1666719524530, value=beijing
 Elvis                                   column=address:country, timestamp=1666719525159, value=china
 Elvis                                   column=id:, timestamp=1666719524389, value=22
 Elvis                                   column=info:age, timestamp=1666719524430, value=26
 Elvis                                   column=info:birthday, timestamp=1666719524465, value=1988-09-14
 Elvis                                   column=info:industry, timestamp=1666719524502, value=it
1 row(s)
Took 0.0411 seconds
hbase(main):013:0> disable 'test_ndh_hbase:student'
drop 'test_ndh_hbase:student'
exists 'test_ndh_hbase:student'
Took 0.8262 seconds
hbase(main):014:0> drop 'test_ndh_hbase:student'
Took 0.4985 seconds
hbase(main):015:0> exists 'test_ndh_hbase:student'
Table test_ndh_hbase:student does not exist
Took 0.0120 seconds
=> false
hbase(main):016:0>
hbase(main):017:0* drop_namespace  'test_ndh_hbase'
Took 0.1972 seconds
hbase(main):018:0>
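
For scripted regression runs, the same statements can also be fed to a non-interactive shell (a minimal sketch reusing the HBASE_HOME and HBASE_CONF_DIR set above):

# Run a single command without entering the interactive shell
echo "list_namespace" | ${HBASE_HOME}/current/bin/hbase shell -n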

2.8 Kafka Service Functional Test

2.8.1 Prerequisites

1. In EasyOps, open Kafka Manager to obtain the configuration

Zookeepers xnode3.local:2182 xnode4.local:2182 xnode2.local:2182/kafka
Brokers xnode2.local:9092,xnode3.local:9092,xnode4.local:9092

6.29.png 6.30.png 6.31.png

2. Log in to the server where a Kafka broker is deployed and run kinit to authenticate

# Run klist to look up the principal
klist -kt /etc/security/keytabs/kafka/kafka.service.keytab

# Then run kinit with the principal obtained
kinit -kt /etc/security/keytabs/kafka/kafka.service.keytab kafka/xnode3.local@BDMS.COM

6.32.png

3. Locate the Kafka broker deployment directory. Look up the EasyOps deployment root directory (BASE_DIR), e.g. /home/bdms (6.33.png, 6.34.png). Enter BASE_DIR and walk down to the Kafka broker deployment directory (202210181134422111b6b1df3 is the component instance name and differs per machine), e.g. /home/bdms/kafka/default_kafka/broker/202210181134422111b6b1df3

# Replace /home/bdms with the EasyOps deployment root directory (BASE_DIR) obtained above
[root@xnode3 ~]# cd /home/bdms/
[root@xnode3 bdms]# ls
easy_ranger        elasticsearch  hbase  hive  kafka     knox    ldap  monitor        spark2  zookeeper
easy_ranger_admin  hadoop         hdfs   java  kerberos  kyuubi  logs  ntesmysqlpaas  yarn
[root@xnode3 bdms]# cd kafka/
[root@xnode3 kafka]# ls
default_kafka
[root@xnode3 kafka]# cd default_kafka/
[root@xnode3 default_kafka]# ls
broker
[root@xnode3 default_kafka]# cd broker/
[root@xnode3 broker]# ls
202210181134422111b6b1df3
[root@xnode3 broker]# cd 202210181134422111b6b1df3/
[root@xnode3 202210181134422111b6b1df3]# ls
config  current  data  keytab  logs  monitor  package  pid
[root@xnode3 202210181134422111b6b1df3]# pwd
/home/bdms/kafka/default_kafka/broker/202210181134422111b6b1df3
[root@xnode3 202210181134422111b6b1df3]#

2.8.2 Verification Steps

1. Create a topic and produce data

KAFKA_HOME=/home/bdms/kafka/default_kafka/broker/202210181134422111b6b1df3
# Create the topic; the ZooKeeper list comes from prerequisite step 1, joined with ','
${KAFKA_HOME}/current/bin/kafka-topics.sh --create --zookeeper xnode3.local:2182,xnode4.local:2182,xnode2.local:2182/kafka --replication-factor 1 --partitions 1 --topic test_kafka_topic

# Produce data; the broker list comes from prerequisite step 1, joined with ','
${KAFKA_HOME}/current/bin/kafka-console-producer.sh --broker-list xnode2.local:9092,xnode3.local:9092,xnode4.local:9092 --topic test_kafka_topic
# Type any input data

6.35.png

2. Log in to another broker node and consume the data

# Locate the Kafka broker deployment directory on this node
KAFKA_HOME=/home/bdms/kafka/default_kafka/broker/202210181134422111b6b1df3
# Consume data; the bootstrap-server list comes from prerequisite step 1, joined with ','
${KAFKA_HOME}/current/bin/kafka-console-consumer.sh --bootstrap-server xnode2.local:9092,xnode3.local:9092,xnode4.local:9092 --topic test_kafka_topic --from-beginning

6.36.png
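
Optionally, the topic metadata can be checked from any broker node (a minimal check, using the same ZooKeeper connect string as the create command):

# Describe the topic to confirm partitions and replicas
${KAFKA_HOME}/current/bin/kafka-topics.sh --describe --zookeeper xnode3.local:2182,xnode4.local:2182,xnode2.local:2182/kafka --topic test_kafka_topic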

2.8.3 Expected Results

1. All commands execute normally; the topic is created and data is produced and consumed successfully

2.9 Knox Service Functional Test

1. Verify the HDFS UI

  • In EasyOps, open the Knox service instance, go to Quick UI and click HDFS UI (6.37.png)

  • Check that the HDFS UI opens normally (6.38.png)

2. Verify the YARN UI

  • In EasyOps, open the Knox service instance, go to Quick UI and click YARN UI (6.39.png)

  • Check that the YARN UI opens normally (6.40.png)

3. Verify the JobHistory UI

  • In EasyOps, open the Knox service instance, go to Quick UI and click JobHistory UI (6.41.png)

  • Check that the JobHistory UI opens normally and that historical jobs are displayed (6.42.png)

4. Verify the SparkHistory UI

  • In EasyOps, open the Knox service instance, go to Quick UI and click SparkHistory UI (6.43.png)
  • Check that the SparkHistory UI opens normally and that historical Spark jobs are displayed (6.44.png)

2.10 Kudu Service Verification

2.10.1 Prerequisites

Following the Impala verification steps above, a beeline connection to Impala has already been established

Obtain the Kudu master info: master_addresses: "nfnode2.local:7051,nfnode3.local:7051,nfnode4.local:7051" (2.16.png)

2.10.2 Verification Steps

Basic Kudu functionality

Connect to Impalad with beeline, create a Kudu table, and run the following SQL

# Set SPARK_CLIENT_HOME to the Spark Client deployment directory located above
SPARK_CLIENT_HOME=/usr/easyops/spark2/default_spark2_client
source ${SPARK_CLIENT_HOME}/config/spark-env.sh

# Connect with beeline
${SPARK_CLIENT_HOME}/current/bin/beeline -u "jdbc:hive2://xjnode3.local:2182,xjnode4.local:2182,xjnode5.local:2182/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=impala-ha/hiveserver2-default_impala;principal=impala/_HOST@BDMS.COM"

# After the connection succeeds, run each of the following SQL statements

-- Create the Kudu table; replace kudu.master_addresses with the Kudu master addresses looked up above
CREATE TABLE ndh_test_db.test_ndh_kudu (
    id bigint,
    name string,
    cnt bigint,
    primary key(id)
)
PARTITION BY HASH(id) PARTITIONS 10
COMMENT 'kudu test table'
STORED AS KUDU
TBLPROPERTIES ('kudu.master_addresses'='xjnode2.local:7051,xjnode4.local:7051,xjnode3.local:7051');
-- Insert data
insert into ndh_test_db.test_ndh_kudu values (1, 'user1', 1111), (2, 'user2', 2222), (3, 'user3', 33333);
select * from ndh_test_db.test_ndh_kudu;
delete from ndh_test_db.test_ndh_kudu where id = 1;
select * from ndh_test_db.test_ndh_kudu;
-- Drop the table
drop table ndh_test_db.test_ndh_kudu;
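
If the kudu command-line tool is available on a node (an assumption; it is not part of the steps above), the table can also be verified outside of Impala:

# List Kudu tables via the Kudu CLI, master addresses as obtained in the prerequisites
kudu table list nfnode2.local:7051,nfnode3.local:7051,nfnode4.local:7051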

2.10.3 Expected Results

All SQL statements execute successfully.

2.11 Router Service Verification

2.11.1 Prerequisites

1. Log in to the server where the Router Client is deployed

2. Run kinit to authenticate

# Run klist to look up the principal
klist -kt /etc/security/keytabs/router/hdfs.keytab
# Then run kinit with the principal obtained
kinit -kt /etc/security/keytabs/router/hdfs.keytab hdfs/nbnode2.local@CDH.163.COM

2.11.2 Verification Steps

  • Router basic functionality regression
    ROUTER_CLIENT_HOME=/usr/easyops/router/default_router_client
    source ${ROUTER_CLIENT_HOME}/config/hadoop-env.sh
    # Create a directory through the Router
    ${ROUTER_CLIENT_HOME}/current/bin/hdfs dfs -mkdir /tmp/test_router_dir
    # Write a file through the Router
    ${ROUTER_CLIENT_HOME}/current/bin/hdfs dfs -touchz /tmp/test_router_dir/ndh_test_file.txt
    echo "<Text to append, ndh test: word count case>" | ${ROUTER_CLIENT_HOME}/current/bin/hdfs dfs -appendToFile - /tmp/test_router_dir/ndh_test_file.txt
    # Fetch the file through the Router
    ${ROUTER_CLIENT_HOME}/current/bin/hdfs dfs -get /tmp/test_router_dir/ndh_test_file.txt
    # View the file content
    ${ROUTER_CLIENT_HOME}/current/bin/hdfs dfs -cat /tmp/test_router_dir/ndh_test_file.txt
    # Delete the directory
    ${ROUTER_CLIENT_HOME}/current/bin/hdfs dfs -rmr /tmp/test_router_dir
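
If the RBF admin tool is available on this client (an assumption; it may require HDFS superuser privileges), the mount table the Router resolves against can also be listed:

# Optional: list the Router mount table entries
${ROUTER_CLIENT_HOME}/current/bin/hdfs dfsrouteradmin -ls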

2.11.3 Expected Results

All commands execute successfully

2.12 Flink Service Functional Test

2.12.1 Prerequisites

1. Log in to the server where the Flink client is deployed

2. Run kinit to authenticate

# Run klist to look up the principal
klist -kt /etc/security/keytabs/router/hdfs.keytab
# Then run kinit with the principal obtained
kinit -kt /etc/security/keytabs/router/hdfs.keytab hdfs/nbnode2.local@CDH.163.COM

2.12.2 Verification Steps

  • Flink functionality regression
    YARN_CLIENT_HOME=/usr/easyops/yarn/default_yarn_client
    FLINK_CLIENT_HOME=/usr/easyops/flink/default_flink_client
    source ${YARN_CLIENT_HOME}/config/hadoop-env.sh
    # Recreate the input directory and test file (paths resolve on hdfs://easyops-cluster)
    ${YARN_CLIENT_HOME}/current/bin/hdfs dfs -rmr /tmp/flink-input
    ${YARN_CLIENT_HOME}/current/bin/hdfs dfs -rmr /tmp/flink-output
    ${YARN_CLIENT_HOME}/current/bin/hdfs dfs -mkdir /tmp/flink-input
    ${YARN_CLIENT_HOME}/current/bin/hdfs dfs -touchz /tmp/flink-input/ndh_test_file.txt
    echo "<Text to append, ndh test: word count case>" | ${YARN_CLIENT_HOME}/current/bin/hdfs dfs -appendToFile - /tmp/flink-input/ndh_test_file.txt
    # Submit the Flink job
    ${FLINK_CLIENT_HOME}/current/bin/flink run -m yarn-cluster -yjm 1024m -ytm 1024m -c org.apache.flink.examples.java.wordcount.WordCount ${FLINK_CLIENT_HOME}/current/examples/batch/WordCount.jar --input hdfs://easyops-cluster/tmp/flink-input/ndh_test_file.txt --output hdfs://easyops-cluster/tmp/flink-output/
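
After the job completes, the output can be inspected for a quick sanity check (the exact file layout under /tmp/flink-output depends on the job parallelism):

# List the WordCount output; depending on parallelism this is a single file or a directory of files
${YARN_CLIENT_HOME}/current/bin/hdfs dfs -ls hdfs://easyops-cluster/tmp/flink-output
# Print the content (adjust the path to match what -ls shows)
${YARN_CLIENT_HOME}/current/bin/hdfs dfs -cat hdfs://easyops-cluster/tmp/flink-output/*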

2.12.3 Expected Results

All commands execute successfully with no errors