Installing Kafka on Linux

1. Download Kafka

The version used here is kafka_2.13-2.8.2.

## Create and enter the download directory
mkdir -p /home/software/kafka/
cd /home/software/kafka/
## Download the archive
wget https://archive.apache.org/dist/kafka/2.8.2/kafka_2.13-2.8.2.tgz
## Extract it
tar -zxf kafka_2.13-2.8.2.tgz
## Enter the directory
cd kafka_2.13-2.8.2
## Create a symbolic link
ln -s /home/software/kafka/kafka_2.13-2.8.2 /opt/kafka
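The symlink keeps /opt/kafka pointing at whichever version is installed, so an upgrade only means repointing the link. A minimal sketch of that layout, run in a scratch directory so it is safe to try anywhere (the real paths are the ones above):

```shell
# Sketch of the versioned-directory plus symlink layout, using a scratch
# directory instead of the real /home/software and /opt paths.
base=$(mktemp -d)
mkdir -p "$base/kafka_2.13-2.8.2/bin"
# "kafka" -> versioned directory; an upgrade just repoints this link
ln -s "$base/kafka_2.13-2.8.2" "$base/kafka"
# readlink -f resolves the link to the versioned directory
readlink -f "$base/kafka"
```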

2. Create service users

## Create the zookeeper user and give it its data directory
## (/opt/kafka stays world-readable, which is all ZooKeeper needs to run the scripts)
useradd -r -s /bin/false zookeeper
mkdir -p /var/lib/zookeeper
chown -R zookeeper:zookeeper /var/lib/zookeeper

## Create the kafka user and give it the install and log directories
useradd -r -s /bin/false kafka
mkdir -p /tmp/kafka-logs
chown -R kafka:kafka /opt/kafka /tmp/kafka-logs
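After creating the accounts, you can sanity-check that they exist and cannot log in. A small sketch, assuming `getent` is available; the helper name `svc_user_ok` is made up here:

```shell
# Hypothetical helper: succeed only if the account exists and its login
# shell is /bin/false or a nologin shell (field 7 of the passwd entry).
svc_user_ok() {
  getent passwd "$1" | awk -F: '$7 ~ /(false|nologin)$/ { found = 1 } END { exit !found }'
}

for u in zookeeper kafka; do
  if svc_user_ok "$u"; then
    echo "$u account OK"
  else
    echo "$u account missing or has a login shell"
  fi
done
```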

3. Create the /feature node in ZooKeeper

## Open a ZooKeeper shell
/opt/kafka/bin/zookeeper-shell.sh localhost:2181
## Create the /feature node (keep the JSON free of spaces; the shell splits its input on whitespace)
create /feature {"features":{},"version":1,"status":1}
## Verify the result
get /feature
## Delete the node (only if you need to reset it)
delete /feature
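The same steps can be scripted by piping commands into zookeeper-shell.sh instead of typing them interactively. A sketch; only the command string is built and printed here, and the actual pipe to a live ZooKeeper is left commented:

```shell
# Build the commands for the /feature node. zookeeper-shell.sh tokenizes
# its input on whitespace, so the JSON payload must contain no spaces.
FEATURE_JSON='{"features":{},"version":1,"status":1}'
ZK_CMDS=$(printf 'create /feature %s\nget /feature\n' "$FEATURE_JSON")
echo "$ZK_CMDS"
# To run it against a live ZooKeeper:
# echo "$ZK_CMDS" | /opt/kafka/bin/zookeeper-shell.sh localhost:2181
```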

4. Configure ZooKeeper

ZooKeeper can keep its default configuration; this step just adds a systemd unit so the service is easier to manage.

## Create zookeeper.service
vi /etc/systemd/system/zookeeper.service

Then paste the following:

[Unit]
Description=Apache Zookeeper (Kafka Built-in)
After=network.target

[Service]
Type=simple
User=zookeeper
Group=zookeeper
ExecStart=/opt/kafka/bin/zookeeper-server-start.sh /opt/kafka/config/zookeeper.properties
ExecStop=/opt/kafka/bin/zookeeper-server-stop.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target

5. Configure Kafka

# Edit the configuration file
vi /opt/kafka/config/server.properties

Change the following:

## For local-only access:
listeners=PLAINTEXT://127.0.0.1:9092
advertised.listeners=PLAINTEXT://127.0.0.1:9092

## For remote access, bind to all interfaces but advertise a routable address;
## advertised.listeners must not be 0.0.0.0, since clients use it to connect back:
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://<server-ip>:9092

## ZooKeeper connection string:
zookeeper.connect=localhost:2181
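These edits can also be scripted. A sketch using sed on a fixture copy of server.properties, so it is safe to run anywhere; 192.168.1.10 stands in for the server's routable address, and you should back up the real file before editing it:

```shell
# Build a small fixture with the stock commented-out listener lines.
conf=$(mktemp)
cat > "$conf" <<'EOF'
#listeners=PLAINTEXT://:9092
#advertised.listeners=PLAINTEXT://your.host.name:9092
zookeeper.connect=localhost:2181
EOF

# Uncomment and set both listener settings. 192.168.1.10 is a placeholder
# for the broker's routable address (never 0.0.0.0 in advertised.listeners).
sed -i \
  -e 's|^#\?listeners=.*|listeners=PLAINTEXT://0.0.0.0:9092|' \
  -e 's|^#\?advertised.listeners=.*|advertised.listeners=PLAINTEXT://192.168.1.10:9092|' \
  "$conf"
cat "$conf"
```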

Create kafka.service:

vi /etc/systemd/system/kafka.service

Then paste the following:

[Unit]
Description=Apache Kafka Server
After=network.target zookeeper.service

[Service]
Type=simple
User=kafka
Group=kafka
ExecStart=/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties
ExecStop=/opt/kafka/bin/kafka-server-stop.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target

6. Enable start on boot

# Reload the systemd configuration
sudo systemctl daemon-reload
# Start ZooKeeper
sudo systemctl start zookeeper
# Check its status
sudo systemctl status zookeeper
## Enable start on boot
sudo systemctl enable zookeeper
## View the logs
journalctl -u zookeeper -xe --no-pager | tail -n 20

# Start Kafka
sudo systemctl start kafka
# Check its status
sudo systemctl status kafka
## Enable start on boot
sudo systemctl enable kafka
## View the logs
journalctl -u kafka -xe --no-pager | tail -n 20
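Beyond the status output, it helps to confirm the broker actually accepts connections before running anything against it. A sketch using bash's /dev/tcp; the topic name `smoke-test` is made up, and the guarded command only runs when the port is really open:

```shell
# Return 0 if something is listening on host:port (bash's /dev/tcp device).
port_open() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

if port_open 127.0.0.1 9092; then
  echo "broker reachable on 9092"
  # Optional round trip once the broker is up:
  # /opt/kafka/bin/kafka-topics.sh --create --topic smoke-test \
  #   --bootstrap-server 127.0.0.1:9092 --partitions 1 --replication-factor 1
else
  echo "broker not reachable on 9092"
fi
```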

7. Common errors and fixes

7.1 Feature ZK node at path: /feature does not exist

Fix: see step 3.

7.2 Failed to process feature ZK node change event. The broker will eventually exit.

Fix: see step 3.

7.3 kafka.common.InconsistentClusterIdException: The Cluster ID xxxx-xxx-xxxx doesn't match stored clusterId Some(v9GFB2tWT6Krlaxgl86ufg)

This error means the broker found a cluster ID on startup that does not match the one stored in its data directory. Possible fixes:

Option 1: edit meta.properties

# 1. Locate meta.properties in the Kafka data directory
# (under the directory set by log.dirs; often /var/lib/kafka/data/meta.properties)
sudo find / -name meta.properties

# 2. Edit the file and set cluster.id to the Cluster ID from the error message (xxxx-xxx-xxxx)
sudo vi /var/lib/kafka/data/meta.properties
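Option 1 can also be scripted. A sketch that rewrites cluster.id in a fixture copy of meta.properties, so it runs anywhere; the IDs are the placeholders from the error message above, and on the real host the file lives under the directory set by log.dirs:

```shell
# Build a meta.properties fixture with the stale stored cluster ID.
meta=$(mktemp)
cat > "$meta" <<'EOF'
version=0
broker.id=0
cluster.id=v9GFB2tWT6Krlaxgl86ufg
EOF

# NEW_ID stands in for the Cluster ID printed in the broker's error message.
NEW_ID="xxxx-xxx-xxxx"
sed -i "s|^cluster.id=.*|cluster.id=$NEW_ID|" "$meta"
cat "$meta"
```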

Option 2: reset the cluster ID in ZooKeeper
Applies when ZooKeeper was freshly initialized and resetting the cluster ID is acceptable.

# 1. Delete the /cluster/id node in ZooKeeper
echo "delete /cluster/id" | /opt/kafka/bin/zookeeper-shell.sh localhost:2181

# 2. Let Kafka generate a new cluster ID (delete the local meta.properties first)
sudo rm /var/lib/kafka/data/meta.properties

Option 3: wipe the data and reinitialize
Applies to test environments, or when data loss is acceptable.

# 1. Stop Kafka
sudo systemctl stop kafka
# 2. Clear the data directory
sudo rm -rf /var/lib/kafka/data/*
# 3. Restart Kafka (a new cluster ID is generated automatically)
sudo systemctl start kafka

Finally, check the broker status:

sudo systemctl status kafka
tail -f /opt/kafka/logs/server.log
