Basic usage of the Kafka GUI client (Kafka Tool), download: http://www.kafkatool.com/download.html
1 Install Zookeeper
1.1 Download the package:
https://archive.apache.org/dist/zookeeper/zookeeper-3.4.13/zookeeper-3.4.13.tar.gz
1.2 Extract it, e.g. to E:\zookeeper-3.4.13\zookeeper-3.4.13
1.3 Open zookeeper-3.4.13\conf and rename zoo_sample.cfg to zoo.cfg
1.4 Create data and log folders under the install path, then open zoo.cfg and point the following entries at them (adjust to wherever you extracted the package)
1.5 dataDir=E:\\zookeeper-3.4.13\\zookeeper-3.4.13\\data
dataLogDir=E:\\zookeeper-3.4.13\\zookeeper-3.4.13\\log
1.6 Add the following environment variables:
ZOOKEEPER_HOME: E:\zookeeper-3.4.13\zookeeper-3.4.13 (the Zookeeper directory)
Path: append ";%ZOOKEEPER_HOME%\bin;" to the existing value
1.7 Run Zookeeper: open cmd and execute zkServer.cmd (on Windows use the .cmd script, not the Linux zkServer.sh; it is on the Path after step 1.6). A successful start shows the server binding to client port 2181.
2 Install Kafka
2.1 Download the binary package: https://mirror.bit.edu.cn/apache/kafka/2.4.1/kafka_2.11-2.4.1.tgz
(Be careful not to download the source release, otherwise startup fails with: Error: Could not find or load main class kafka.Kafka.
If you still see this error, double-check which package you downloaded.)
2.2 Extract it, e.g. to E:\kafka_2.11-2.4.1\kafka_2.11-2.4.1
2.3 Open E:\kafka_2.11-2.4.1\kafka_2.11-2.4.1\config
2.4 Open server.properties and change log.dirs to log.dirs=./logs
2.5 Open cmd
2.6 Change into the Kafka directory: E:\kafka_2.11-2.4.1\kafka_2.11-2.4.1
2.7 Run: .\bin\windows\kafka-server-start.bat .\config\server.properties
3 Verify the installation
3.1 Create a topic
In E:\kafka_2.11-2.4.1\kafka_2.11-2.4.1\bin\windows, run:
kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test2
3.2 Start a producer
In the same directory, run:
kafka-console-producer.bat --broker-list localhost:9092 --topic test2
3.3 Start a consumer
In the same directory, run:
kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic test2 --from-beginning
Type a message in the producer window; if it appears in the consumer window, the installation works.
4 Spring Boot integration with Kafka
4.1 Create a new Spring Boot project
pom.xml
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jdbc</artifactId>
    </dependency>
    <dependency>
        <groupId>org.mybatis.spring.boot</groupId>
        <artifactId>mybatis-spring-boot-starter</artifactId>
        <version>2.1.2</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.kafka</groupId>
        <artifactId>spring-kafka</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-devtools</artifactId>
        <scope>runtime</scope>
        <optional>true</optional>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <scope>runtime</scope>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <optional>true</optional>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
        <exclusions>
            <exclusion>
                <groupId>org.junit.vintage</groupId>
                <artifactId>junit-vintage-engine</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.springframework.kafka</groupId>
        <artifactId>spring-kafka-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>com.google.code.gson</groupId>
        <artifactId>gson</artifactId>
        <version>2.8.2</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
</dependencies>
<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>
4.2 Configuration file (application.properties)
server.servlet.context-path=/krykafka
server.port=8082
spring.datasource.url=
spring.datasource.username=
spring.datasource.password=
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
#============== kafka ===================
# Kafka broker address(es); multiple brokers can be listed, comma-separated
spring.kafka.bootstrap-servers=localhost:9092
#=============== producer =======================
spring.kafka.producer.retries=0
# maximum size, in bytes, of a batch of messages sent in one request
spring.kafka.producer.batch-size=16384
spring.kafka.producer.buffer-memory=33554432
# serializers for the message key and value
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
#=============== consumer =======================
# default consumer group id
spring.kafka.consumer.group-id=test-hello-group
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.enable-auto-commit=true
spring.kafka.consumer.auto-commit-interval=100
#spring.kafka.listener.missing-topics-fatal=false
# deserializers for the message key and value
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
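These spring.kafka.* properties are picked up by Spring Boot's auto-configuration, which builds the producer/consumer factories and the KafkaTemplate for you. Purely to illustrate what the producer-side properties end up configuring, here is a minimal hand-written equivalent; the class name is illustrative and the beans below are not needed in this project.

package com.kry.kafka.config;

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

// Not required: Spring Boot already builds these beans from the spring.kafka.* properties.
// Shown only to make explicit what the properties above configure.
@Configuration
public class ManualKafkaProducerConfig {

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}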
4.3 Create KafkaReceiver
package com.kry.kafka.config;

import java.util.Optional;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

/**
 * @Author hf
 * @Date 2020/3/21 9:43
 * @Version 1.0
 */
@Component
public class KafkaReceiver {

    private static Logger logger = LoggerFactory.getLogger(KafkaReceiver.class);

    // Consume messages from the test2 topic
    @KafkaListener(topics = {"test2"})
    public void listen(ConsumerRecord<?, ?> record) {
        Optional<?> kafkaMessage = Optional.ofNullable(record.value());
        if (kafkaMessage.isPresent()) {
            Object message = kafkaMessage.get();
            logger.info("----------------- record = " + record);
            logger.info("------------------ message = " + message);
        }
    }
}
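The listener above takes a wildcard ConsumerRecord. Because the properties configure String deserializers, a listener can also declare ConsumerRecord<String, String> and read the record's metadata directly. A minimal sketch; the class name and the separate group id are illustrative, not part of the original project:

package com.kry.kafka.config;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class KafkaTypedReceiver {

    private static final Logger logger = LoggerFactory.getLogger(KafkaTypedReceiver.class);

    // With StringDeserializer configured, key and value arrive as Strings, and the record
    // carries topic/partition/offset metadata. A distinct groupId keeps this listener in its
    // own consumer group, so both it and KafkaReceiver receive every message.
    @KafkaListener(topics = "test2", groupId = "typed-demo-group")
    public void listen(ConsumerRecord<String, String> record) {
        logger.info("topic={}, partition={}, offset={}, value={}",
                record.topic(), record.partition(), record.offset(), record.value());
    }
}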
4.4 Create KafkaProducer
The producer below calls kafkaTemplate.send("test2", message), which only works because the test2 topic was created earlier with:
kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test2
If, for example, the topic test1 has not been created and you call kafkaTemplate.send("test1", message), the application fails with the error below:
Topic(s) [test1] is/are not present and missingTopicsFatal is true
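Two ways to avoid that error: uncomment spring.kafka.listener.missing-topics-fatal=false in application.properties (shown commented out above), or let spring-kafka's KafkaAdmin create the topic at startup by declaring a NewTopic bean. A minimal sketch of the second option, assuming the same single-broker setup; the class name and the partition/replication values are illustrative:

package com.kry.kafka.config;

import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class KafkaTopicConfig {

    // Spring Boot auto-configures a KafkaAdmin from spring.kafka.bootstrap-servers;
    // at startup it creates any NewTopic beans that do not already exist on the broker.
    @Bean
    public NewTopic test2Topic() {
        return new NewTopic("test2", 1, (short) 1);
    }
}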
package com.kry.kafka.config;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

import com.google.gson.Gson;
import com.google.gson.GsonBuilder;

/**
 * @Author hf
 * @Date 2020/3/21 9:42
 * @Version 1.0
 */
@Component
public class KafkaProducer {

    private static Logger logger = LoggerFactory.getLogger(KafkaProducer.class);

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    private Gson gson = new GsonBuilder().create();

    // Send a couple of test messages to the test2 topic
    public void send() {
        for (int i = 0; i < 2; i++) {
            String message = "Monkey King, Monkey King, Master has been captured by a demon!!!";
            logger.info("sending message --> message = {}", message);
            kafkaTemplate.send("test2", message);
        }
    }
}
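The Gson field above is declared but never used in send(). As a sketch of how it could serialize an object payload, and how the send result can be inspected, here is a hedged variant assuming spring-kafka 2.x (where KafkaTemplate.send returns a ListenableFuture); the class and the Msg payload type are illustrative only:

package com.kry.kafka.config;

import java.util.UUID;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

import com.google.gson.Gson;
import com.google.gson.GsonBuilder;

@Component
public class KafkaJsonProducer {

    private static final Logger logger = LoggerFactory.getLogger(KafkaJsonProducer.class);

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    private final Gson gson = new GsonBuilder().create();

    // Hypothetical payload type, only for illustration
    static class Msg {
        String id;
        String text;
        Msg(String id, String text) { this.id = id; this.text = text; }
    }

    // Serialize the payload to JSON with Gson, send it, and log the broker's answer
    // via the ListenableFuture callback returned by KafkaTemplate.send.
    public void sendJson() {
        String json = gson.toJson(new Msg(UUID.randomUUID().toString(), "hello from gson"));
        kafkaTemplate.send("test2", json).addCallback(
                result -> logger.info("sent to partition {} at offset {}",
                        result.getRecordMetadata().partition(),
                        result.getRecordMetadata().offset()),
                ex -> logger.error("send failed", ex));
    }
}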
4.5 Test method
package com.kry.kafka.config;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
public class KafkaTestController {

    @Autowired
    private KafkaProducer producer;

    // Trigger the producer and return a simple marker string
    @RequestMapping("/testSendMsg")
    @ResponseBody
    public String testSendMsg() {
        producer.send();
        return "success";
    }
}
4.6 Call the endpoint to verify: with the application running, open http://localhost:8082/krykafka/testSendMsg in a browser; it returns "success" and the messages sent by KafkaProducer appear in the KafkaReceiver log output.