37. Connecting to Kafka from Java

37.1 Environment

  • IntelliJ is installed and running normally
  • Maven environment is working
  • RedHat 7.2
  • CM and CDH version 5.11.2
  • Kafka 2.2.0-0.10.2
  • Create the topic: test3 with a replication factor of 3 and 3 partitions
[ec2-user@ip-172-31-22-86 ~]$ kafka-topics --create --zookeeper ip-172-31-22-86.ap-southeast-1.compute.internal:2181 --replication-factor 3 --partitions 3 --topic test3
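To confirm the replication factor and partition count, the topic can be described against the same ZooKeeper:

[ec2-user@ip-172-31-22-86 ~]$ kafka-topics --describe --zookeeper ip-172-31-22-86.ap-southeast-1.compute.internal:2181 --topic test3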
  • krb5.conf configuration (taken directly from the CDH cluster's Kerberos configuration)
# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 default_realm = CLOUDERA.COM
 #default_ccache_name = KEYRING:persistent:%{uid}

[realms]
 CLOUDERA.COM = {
  kdc = ip-172-31-22-86.ap-southeast-1.compute.internal
  admin_server = ip-172-31-22-86.ap-southeast-1.compute.internal
 }

[domain_realm]
 .ip-172-31-22-86.ap-southeast-1.compute.internal = CLOUDERA.COM
 ip-172-31-22-86.ap-southeast-1.compute.internal = CLOUDERA.COM
  • Kerberos keytab file
    • Use kadmin to generate a keytab for the Kerberos account; the fayson.keytab file is written to the current directory.
[ec2-user@ip-172-31-22-86 ~]$ sudo kadmin.local
Authenticating as principal hdfs/[email protected] with password.
kadmin.local:  xst -norandkey -k fayson.keytab [email protected]   
...
kadmin.local:  exit
[ec2-user@ip-172-31-22-86 ~]$
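Before wiring the keytab into the Java client it is worth a quick local sanity check; klist and kinit are the standard Kerberos tools for this:

[ec2-user@ip-172-31-22-86 ~]$ klist -ket fayson.keytab
[ec2-user@ip-172-31-22-86 ~]$ kinit -kt fayson.keytab [email protected]
[ec2-user@ip-172-31-22-86 ~]$ klist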
  • jaas-cache.conf configuration file
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/Volumes/Transcend/keytab/fayson.keytab"
  principal="[email protected]";
};
  • Configure the cluster host information in the hosts file of the current development environment
    • Add the entries to the /etc/hosts file
  • Fayson is using an AWS environment, so the public IPs are mapped to the hostnames.
    • If your development environment can reach the Hadoop cluster directly, map the Hadoop internal IPs to the hostnames instead (a sketch of the entries follows).
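A sketch of the /etc/hosts entries, assuming the broker hostnames used later in the code; the public IPs are placeholders and must be replaced with the actual addresses of your cluster:

<public-ip-1>   ip-172-31-22-86.ap-southeast-1.compute.internal
<public-ip-2>   ip-172-31-21-45.ap-southeast-1.compute.internal
<public-ip-3>   ip-172-31-26-102.ap-southeast-1.compute.internal
<public-ip-4>   ip-172-31-26-80.ap-southeast-1.compute.internal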

37.2 Procedure

  • Create a Java Maven project in IntelliJ
  • Add the Kafka API Maven dependency to the pom.xml configuration file

<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-clients</artifactId>
  <version>0.10.2.0</version>
</dependency>
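kafka-clients logs through SLF4J, so without a logging binding on the classpath the client stays silent; adding one (slf4j-log4j12 is shown here as an assumption, any binding works) makes the SASL/Kerberos handshake much easier to debug:

<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-log4j12</artifactId>
  <version>1.7.25</version>
</dependency>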

  • Write the message producer code
package com.cloudera;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;
/**
 * Created by fayson on 2017/10/24.
 */
public class MyProducer {
    public static String TOPIC_NAME = "test3";
    public static void main(String[] args){
        System.setProperty("java.security.krb5.conf", "/Volumes/Transcend/keytab/krb5.conf");
        System.setProperty("java.security.auth.login.config", "/Volumes/Transcend/keytab/jaas-cache.conf");
        System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");
//       System.setProperty("sun.security.krb5.debug","true");

        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "ip-172-31-21-45.ap-southeast-1.compute.internal:9092,ip-172-31-26-102.ap-southeast-1.compute.internal:9092,ip-172-31-26-80.ap-southeast-1.compute.internal:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.kerberos.service.name", "kafka");
        Producer<String, String> producer = new KafkaProducer<>(props);
        for (int i = 0; i < 10; i++) {
            String key = "key-" + i;
            String message = "Message-" + i;
            ProducerRecord<String, String> record = new ProducerRecord<>(TOPIC_NAME, key, message);
            producer.send(record);
            System.out.println(key + "----" + message);
        }
        producer.close();
    }
}
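Note that producer.send() above is asynchronous and the loop never checks delivery results. A minimal variant of the send call with a completion callback (Callback and RecordMetadata are part of the 0.10.2 producer API, fully qualified here so the snippet can replace the plain send() inside the loop as-is):

producer.send(record, new org.apache.kafka.clients.producer.Callback() {
    @Override
    public void onCompletion(org.apache.kafka.clients.producer.RecordMetadata metadata, Exception exception) {
        if (exception != null) {
            // Delivery failed, e.g. an authentication or broker error
            exception.printStackTrace();
        } else {
            System.out.println("Sent to partition " + metadata.partition() + " at offset " + metadata.offset());
        }
    }
});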
  • Write the message consumer code
package com.cloudera;

import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.TopicPartition;
import java.util.Arrays;
import java.util.Properties;

/**
 * Created by fayson on 2017/10/24.
 */
public class MyConsumer {
    private static String TOPIC_NAME = "test3";
    public static void main(String[] args){
        System.setProperty("java.security.krb5.conf", "/Volumes/Transcend/keytab/krb5.conf");
        System.setProperty("java.security.auth.login.config", "/Volumes/Transcend/keytab/jaas-cache.conf");
        System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");

        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "ip-172-31-21-45.ap-southeast-1.compute.internal:9092,ip-172-31-26-102.ap-southeast-1.compute.internal:9092,ip-172-31-26-80.ap-southeast-1.compute.internal:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "DemoConsumer");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
        props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.kerberos.service.name", "kafka");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        TopicPartition partition0 = new TopicPartition(TOPIC_NAME, 0);
        TopicPartition partition1 = new TopicPartition(TOPIC_NAME, 1);
        TopicPartition partition2 = new TopicPartition(TOPIC_NAME, 2);


        // Manually assign all three partitions of test3 to this consumer
        consumer.assign(Arrays.asList(partition0, partition1, partition2));
        while (true) {
            try {
                Thread.sleep(10000L);
                System.out.println();
                ConsumerRecords<String, String> records = consumer.poll(Long.MAX_VALUE);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println("Received message: (" + record.key() + ", " + record.value() + ") at offset " + record.offset());
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
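The consumer above pins all three partitions with assign(), which bypasses consumer-group rebalancing. When per-partition control is not needed, the more common pattern is subscribe(), letting the group coordinator distribute partitions; a minimal sketch of that variant, using the same Properties as above:

consumer.subscribe(Arrays.asList(TOPIC_NAME));
while (true) {
    // poll(long) is the 0.10.2-era API; the timeout is in milliseconds
    ConsumerRecords<String, String> records = consumer.poll(1000L);
    for (ConsumerRecord<String, String> record : records) {
        System.out.println(record.key() + " -> " + record.value() + " @ " + record.offset());
    }
}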
  • Testing the code
    • Run the consumer program to consume messages from all partitions of topic test3
      • It starts successfully and waits for messages from test3
    • Run the producer program to produce messages to topic test3
      • The messages sent to topic test3
    • Check the messages read by the consumer program
  • Summary
    • When connecting directly from Java code in a development environment to a Kerberos-enabled Kafka cluster, the krb5.conf and jaas.conf configurations must be loaded into the program's runtime environment (see the JVM-option sketch after this list).
      • As for authenticating with a Kerberos password instead of a keytab, Fayson does not know how to do that either.
    • The topic used in the test has 3 partitions; if the full broker list is not configured in bootstrap.servers, some messages can be lost.
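Instead of hard-coding System.setProperty() calls, the same configuration can be supplied as standard JVM options when launching the program (the paths are the same ones used in the code above):

java -Djava.security.krb5.conf=/Volumes/Transcend/keytab/krb5.conf \
     -Djava.security.auth.login.config=/Volumes/Transcend/keytab/jaas-cache.conf \
     -Djavax.security.auth.useSubjectCredsOnly=false \
     com.cloudera.MyProducer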
