DEMO: Connecting to Hive/Impala via JDBC

Applicable module

Client

Details

Connecting to Hive/Impala via the Java API

Notes:
1. This Hive connection/authentication procedure cannot be used inside a UDF.
2. Kerberos authentication normally resolves services by host name, so configure the hosts of the whole cluster consistently on the client side and always use host names, not IPs, in the connection string. If you hit the error below when connecting, check this configuration first (a resolution-check sketch follows the stack trace):

```
javax.security.sasl.SaslException: GSS initiate failed
...
Caused by: org.ietf.jgss.GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - LOOKING_UP_SERVER)
...
Caused by: sun.security.krb5.KrbException: Server not found in Kerberos database (7) - LOOKING_UP_SERVER
...
Caused by: sun.security.krb5.Asn1Exception: Identifier doesn't match expected value (906)
```
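Before adjusting Kerberos settings, it can help to confirm that the HiveServer2/ZooKeeper hosts resolve consistently in both directions, since the client derives the `hive/_HOST` service principal from the server's canonical host name. A minimal check, where the demo hostname stands in for your own:

```
import java.net.InetAddress;

public class HostCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder host; replace with your HiveServer2 / ZooKeeper host name
        String host = "demoxx.jdlt.163.org";
        InetAddress addr = InetAddress.getByName(host);
        // Forward resolution: host name -> IP
        System.out.println(host + " -> " + addr.getHostAddress());
        // Reverse resolution: IP -> canonical host name. This is the name used
        // for _HOST substitution, so it should match the cluster's host
        // configuration rather than a bare IP.
        System.out.println(addr.getHostAddress() + " -> " + addr.getCanonicalHostName());
    }
}
```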
**Usage example**

##### Dependency configuration

The corresponding pom.xml dependencies (on mvn package, the maven-assembly-plugin below produces the hjdbc-1.0-SNAPSHOT-jar-with-dependencies.jar used in the run command later):

```
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.example</groupId>
    <artifactId>hjdbc</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-jdbc</artifactId>
            <version>2.1.1</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <!-- <version>2.5.3</version> -->
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
```
##### Configuration file

Connection properties for the demo that connects to Hive via ZooKeeper service discovery using the Java API:

```
# ZooKeeper-based JDBC connection string
jdbcUrl=jdbc:hive2://demoxx.jdlt.163.org:2182,demoxx.jdlt.163.org:2182,demoxx.jdlt.163.org:2182/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;principal=hive/_HOST@BDMS.163.COM
```

The connection string can be found in the Mammut platform database: select * from mammut.pf_hive_cluster\G
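If you keep the connection string in a properties file like the one above, it can be loaded at runtime with java.util.Properties. A minimal sketch; the file name jdbc.properties is an assumption, adjust it to your own layout:

```
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class JdbcConfig {
    // Reads the jdbcUrl key from a properties file in the working directory.
    // "jdbc.properties" is an assumed name; use whatever your project ships.
    public static String loadJdbcUrl() throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream("jdbc.properties")) {
            props.load(in);
        }
        return props.getProperty("jdbcUrl");
    }
}
```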
HiveJdbc.java
##### Code example

```
package com.neteas;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import java.sql.*;

public class HiveJdbc {
    //private static String url2 = "jdbc:hive2://demoxx.jdlt.163.org:2182,demoxx.jdlt.163.org:2182,demoxx.jdlt.163.org:2182/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;principal=hive/_HOST@BDMS.163.COM";
    private static Connection conn = null;
    private static PreparedStatement ps = null;
    private static ResultSet rs = null;
    private static String HIVE_DRIVER = "org.apache.hive.jdbc.HiveDriver";



    /**
     * @description: Connect to HiveServer2 via JDBC with Kerberos authentication
     */
    public Connection getConnection(String url2, String keytabfile, String personkey) {
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "Kerberos");
        // krb5.conf and the keytab file are expected in the working directory
        System.setProperty("krb5_ini", System.getProperty("user.dir") + "/krb5.conf");
        System.setProperty("hive_keytab", System.getProperty("user.dir") + "/" + keytabfile);
        System.setProperty("java.security.krb5.conf", System.getProperty("krb5_ini"));
        //System.setProperty("sun.security.krb5.debug", "true");
        UserGroupInformation.setConfiguration(conf);
        try {
            // Log in with the given principal and keytab
            UserGroupInformation.loginUserFromKeytab(personkey, System.getProperty("hive_keytab"));
            Class.forName(HIVE_DRIVER);
            conn = DriverManager.getConnection(url2, "", "");
        } catch (Exception e) {
            e.printStackTrace();
        }
        return conn;
    }

    /**
     * @description: Create a database
     */
    public void createDatabase() {
        try {
            conn.prepareStatement("CREATE database test").execute();
            System.out.println("database created");
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    /**
     * @description: Query data
     */
    public void getAll(Connection conn) {
        String sql = "select * from mammut.test3";
        System.out.println(sql);
        try {
            ps = conn.prepareStatement(sql);
            rs = ps.executeQuery();
            // Number of columns in the result
            int columns = rs.getMetaData().getColumnCount();
            // Print each row
            while (rs.next()) {
                for (int i = 1; i <= columns; i++) {
                    System.out.print(rs.getString(i) + "\t");
                }
                System.out.println();
            }
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    /**
     * @description: Switch to a database and list all of its tables
     */
    public void showTables(Connection conn, String database) {
        try {
            // Switch to the target database
            System.out.println("entering database *********");
            ps = conn.prepareStatement("use " + database);
            ps.execute();
            // List all tables
            ps = conn.prepareStatement("show tables");
            rs = ps.executeQuery();
            // Print the result set
            int columns = rs.getMetaData().getColumnCount();
            while (rs.next()) {
                for (int i = 1; i <= columns; i++) {
                    System.out.print(rs.getString(i) + "\t");
                }
                System.out.println();
            }
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    /**
     * @description: Execute an INSERT OVERWRITE DIRECTORY statement
     */
    public void execute() {
        String sql = "INSERT overwrite directory '/user/hue/learn_oozie/mazy_hive_1/output'\n" +
                "row format delimited fields terminated by \"\\t\"\n" +
                "SELECT sid,sname FROM mytable LIMIT 10";
        System.out.println(sql);
        try {
            ps = conn.prepareStatement(sql);
            ps.execute();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    /**
     * @description: Create a table
     */
    public void createTables() {
        String createSql = "create table if not exists default.mytable(id String, name String) row format delimited fields terminated by \",\" stored as textfile";
        try {
            conn.prepareStatement(createSql).execute();
            System.out.println("table created");
        } catch (SQLException e) {
            e.printStackTrace();
            System.err.println("failed to create table");
        }
    }

    /**
     * @description: Load data into a table
     */

    public void loadTable() {
        String loadSql = "load data inpath \"/user/liuyzh/mytable.txt\" into table mytable";
        try {
            conn.prepareStatement(loadSql).execute();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    /**
     * @description: Drop a table
     */
    public void deleteTable() {
        try {
            conn.prepareStatement("DROP table default.mytable").execute();
            System.out.println("table dropped");
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    /**
     * @description: Drop a database
     */
    public void dropDatabase() {
        String sql = "drop database test";
        try {
            conn.prepareStatement(sql).execute();
            System.out.println("database dropped");
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }


    /**
     * @description: Close the ResultSet, PreparedStatement, and Connection
     */

    public void closeConnect() {
        if (rs != null) {
            try {
                rs.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
        if (ps != null) {
            try {
                ps.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
        if (conn != null) {
            try {
                conn.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }

}
```
Test.java
##### Code example

```
package com.neteas;

import java.sql.Connection;

public class Test {
    public static void main(String[] args) {
        String url2=args[0];
        String keytabfile=args[1];
        String personkey=args[2];
        String database=args[3];
        System.out.println("url="+url2);
        System.out.println("keytabfile="+keytabfile);
        System.out.println("personkey="+personkey);
        System.out.println("database="+database);
        HiveJdbc jd = new HiveJdbc();

        Connection con = jd.getConnection(url2,keytabfile,personkey);
        System.out.println(con);
        jd.showTables(con,database);

    }
}
```

Note: run the program with the following command:

```
java -cp hjdbc-1.0-SNAPSHOT-jar-with-dependencies.jar com.neteas.Test 'jdbc:hive2://demoxx.jdlt.163.org:2182,demoxx.jdlt.163.org:2182,demoxx.jdlt.163.org:2182/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;principal=hive/_HOST@BDMS.163.COM' 'qianzhaoyuan.keytab' 'bdms_qianzhaoyuan/dev@BDMS.163.COM' 'mammut'
```

First argument: the JDBC URL
Second argument: the personal keytab file name
Third argument: the keytab principal (can be listed with klist -kt qianzhaoyuan.keytab)
Fourth argument: the database name
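The demo stores the Connection, PreparedStatement, and ResultSet in static fields and closes them by hand in closeConnect(). For anything beyond a quick test, try-with-resources is the safer pattern, since it closes the resources even when a query throws. A minimal sketch reusing the getConnection helper above:

```
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class SafeQuery {
    public static void main(String[] args) throws Exception {
        HiveJdbc jd = new HiveJdbc();
        // args: JDBC URL, keytab file name, principal (same as Test.java)
        try (Connection conn = jd.getConnection(args[0], args[1], args[2]);
             PreparedStatement ps = conn.prepareStatement("show databases");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        } // conn, ps, and rs are closed automatically here
    }
}
```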


1. Testing in the platform
(1) Create the script
Create hivejdbc.bash, a wrapper around the java -cp command shown above (the file must be UTF-8 encoded with Unix line endings).
(2) Upload the script and packages under Resources & Configuration
Upload the keytab authentication file (a personal or a project keytab will work; this example uses a personal one for safety)
and krb5.conf (taken from /etc/krb5.conf on the platform deployment server).
(3) Run the script and check the result in the run logs.

(Figures 1–4: screenshots of the script creation, resource upload, and run-log steps above)

2. Running on a server
(1) Upload the keytab authentication file (a personal or a project keytab will work; this example uses a personal one for safety),
krb5.conf (taken from /etc/krb5.conf on the platform deployment server),
and the executable jar, placing them all in the same directory.
(2) Run the jar command, as shown in the figures.

(Figures 5–6: screenshots of the jar run command and its output)


Author: qianzhaoyuan