13: Kafka Distributed Deployment

1: Kafka Overview:

At its core Kafka is a messaging middleware; the official site now calls it "a distributed streaming platform":
A streaming platform has three key capabilities:

1.Publish and subscribe to streams of records, similar to a message queue or enterprise messaging system.
2.Store streams of records in a fault-tolerant durable way.
3.Process streams of records as they occur.

Kafka is generally used for two broad classes of applications:

1.Building real-time streaming data pipelines that reliably get data between systems or applications
2.Building real-time streaming applications that transform or react to the streams of data

2: Kafka vs. Flume:

Flume is a distributed, reliable, and available system for efficiently collecting, aggregating, and moving large amounts of log data from many different sources to a centralized data store.

Flume: a single process, containing Source, Channel, and Sink.
Kafka: three kinds of processes:

1. producer: writes records in
2. broker: the server (storage) process, the part that needs configuring
3. consumer: e.g. Spark Streaming, Flink, Structured Streaming

Kafka itself is written in Scala.
A few concepts: a Topic separates different business systems; on disk, each topic becomes its own set of directories.

3: Kafka Distributed Deployment:

3.1 Preparation for the distributed install:

3.1.1: Install ZooKeeper (CDH 5.7.0) first

ZooKeeper was already installed earlier during the HA setup; verify its status with: bin/zkServer.sh status

3.1.2: Install Scala 2.11:

tar -xzvf scala-2.11.8.tgz -C ../app
chown -R hadoop:hadoop scala-2.11.8
ln -s scala-2.11.8 scala

Add the environment variables to /etc/profile (shown here with cat):
cat /etc/profile

export SCALA_HOME=/home/hadoop/app/scala-2.11.8
export PATH=$PATH:$SCALA_HOME/bin
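The two export lines above can be appended and checked like this. A minimal sketch: it uses a temporary file as a stand-in for /etc/profile so nothing on the system is touched (that stand-in is an assumption of the sketch, not the article's procedure).

```shell
# Append the Scala variables to a profile file and verify PATH picks them up.
# A temp file stands in for /etc/profile (assumption for this sketch).
profile=$(mktemp)
cat >> "$profile" <<'EOF'
export SCALA_HOME=/home/hadoop/app/scala-2.11.8
export PATH=$PATH:$SCALA_HOME/bin
EOF
source "$profile"
echo "$PATH" | grep -q 'scala-2.11.8/bin' && echo "SCALA_HOME is on PATH"
```

On a real node you would append to /etc/profile itself, then run `source /etc/profile` and confirm with `scala -version`.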

3.2 Kafka Installation and Deployment

3.2.1: Kafka version selection:

Kafka is not bundled with CDH 5.7.0; within CDH, Kafka lives on a separate branch (its own parcel).
kafka_2.11-0.10.0.1.tgz
2.11: the Scala version
0.10.0.1: the Kafka version
http://mirror.bit.edu.cn/apache/kafka/

3.2.2: Kafka deployment:

tar -xzvf kafka_2.11-0.10.2.2.tgz -C ../app
ln -s kafka_2.11-0.10.2.2 kafka
First: Kafka data lands on the Linux disk, so start by creating a storage directory: mkdir logs
Next: configure the broker process:

# The id of the broker. This must be set to a unique integer for each broker.

broker.id=0 (assign a unique id to each machine, in sequence)

Things to watch out for on Aliyun (Alibaba Cloud):
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1/2/3
host.name=<Aliyun internal/private IP>
advertised.host.name=<Aliyun public IP>
advertised.port=9092
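Since every broker gets the same file except for its id and addresses, one way to avoid hand-editing each node is to stamp a shared template with sed. A minimal sketch; the template placeholders (BROKER_ID etc.) and the use of temp files are assumptions of the sketch, and the IPs reused here are the internal/public pairs from this article.

```shell
# Sketch: stamp a per-broker server.properties fragment from a shared template.
tmpl=$(mktemp)
cat > "$tmpl" <<'EOF'
broker.id=BROKER_ID
host.name=INTERNAL_IP
advertised.host.name=EXTERNAL_IP
advertised.port=9092
log.dirs=/home/hadoop/app/kafka/logs
EOF
out=$(mktemp)
# Broker 2 on the second node: substitute its id and address pair.
sed -e 's/BROKER_ID/2/' \
    -e 's/INTERNAL_IP/172.17.4.17/' \
    -e 's/EXTERNAL_IP/39.105.123.53/' "$tmpl" > "$out"
grep '^broker.id=2' "$out"
```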

# A comma separated list of directories under which to store log files

log.dirs=/home/hadoop/app/kafka/logs

The default pseudo-distributed config points at a ZooKeeper on the local machine.

# root directory for all kafka znodes.

zookeeper.connect=39.105.98.82:2181,39.105.123.53:2181,39.106.106.185:2181/kafka (a chroot directory makes later cleanup easy)
zookeeper.connect=172.17.4.16:2181,172.17.4.17:2181,172.17.217.124:2181/kafka
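The `/kafka` suffix on the connect string is a ZooKeeper chroot: every znode Kafka creates lives under that one directory. A small sketch showing how the chroot is parsed out and why it helps cleanup:

```shell
# The chroot is everything after the last host:port in the connect string.
conn='172.17.4.16:2181,172.17.4.17:2181,172.17.217.124:2181/kafka'
chroot="/${conn##*/}"
echo "$chroot"
# With a chroot, wiping all of Kafka's znodes is a single command in zkCli.sh:
#   rmr /kafka        (ZooKeeper 3.4) or deleteall /kafka (3.5+)
# Without one, Kafka's znodes are scattered across the ZooKeeper root.
```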

Then check whether it starts and runs:

See whether the script is on the PATH: which kafka-server-start.sh. Usually it is not found, so invoke it with the full relative path:
nohup bin/kafka-server-start.sh config/server.properties &
tail -F nohup.out
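Besides tailing nohup.out, you can confirm the broker is up by polling its port (9092) until it opens. A bash-only sketch (it relies on bash's /dev/tcp feature, which is an assumption; the function name is hypothetical):

```shell
# Poll a TCP port until it opens or the timeout (in seconds) expires.
wait_for_port() {
  local host=$1 port=$2 timeout=$3 i=0
  while [ "$i" -lt "$timeout" ]; do
    # bash opens /dev/tcp/<host>/<port> only if something is listening
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    sleep 1; i=$((i+1))
  done
  return 1
}
# After "nohup bin/kafka-server-start.sh ..." you would run:
#   wait_for_port 127.0.0.1 9092 30 && echo "broker is up"
```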


4: Using Kafka on CDH:

Adjust the relevant configuration; note that when changing the broker ID, /var/local/kafka/data/meta.properties must be updated to match.

[root@hadoop003 ~]# more /var/local/kafka/data/meta.properties

#Sat Feb 23 21:29:58 CST 2019
version=0
broker.id=48
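If the id in meta.properties does not match the configured broker.id, the broker refuses to start, so the two must be changed together. A sketch of the sync step, run against a temp copy rather than the real file (the new id 51 is a hypothetical value):

```shell
# Sketch: keep meta.properties in sync with a new broker.id.
# On a real CDH node the file is /var/local/kafka/data/meta.properties;
# a temp copy is used here so the sketch is self-contained.
meta=$(mktemp)
printf 'version=0\nbroker.id=48\n' > "$meta"
new_id=51   # hypothetical new id
sed -i "s/^broker.id=.*/broker.id=$new_id/" "$meta"
grep '^broker.id=' "$meta"
```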

Create a topic:
kafka-topics \
  --create \
  --zookeeper 172.17.4.16:2181,172.17.4.17:2181,172.17.217.124:2181/kafka \
  --replication-factor 3 \
  --partitions 3 \
  --topic kunming

Query it:
kafka-topics \
  --list \
  --zookeeper 172.17.4.16:2181,172.17.4.17:2181,172.17.217.124:2181/kafka

kafka-topics \
  --describe \
  --zookeeper 172.17.4.16:2181,172.17.4.17:2181,172.17.217.124:2181/kafka \
  --topic kunming

Three partitions, three replicas:
19/02/25 17:23:08 INFO zkclient.ZkClient: zookeeper state changed (SyncConnected)
Topic:kunming PartitionCount:3 ReplicationFactor:3 Configs:
Topic: kunming Partition: 0 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3
Topic: kunming Partition: 1 Leader: 2 Replicas: 2,3,1 Isr: 2,3,1
Topic: kunming Partition: 2 Leader: 3 Replicas: 3,1,2 Isr: 3,1,2
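The Replicas column follows Kafka's round-robin assignment: partition p's replica list is the broker list rotated by p (Kafka also adds a random start offset, which in this run happened to land on broker 1). A small sketch reproducing the rotation seen above (the `assign` function is illustrative, not a Kafka tool):

```shell
# Reproduce the replica rotation shown by --describe above.
assign() {
  local brokers=(1 2 3) n=3 p r reps
  for p in 0 1 2; do
    reps=""
    for r in 0 1 2; do
      # replica r of partition p sits on the broker list rotated by p
      reps="$reps${brokers[$(( (p + r) % n ))]},"
    done
    echo "Partition: $p Replicas: ${reps%,}"
  done
}
assign
```

Spreading leaders this way (1, 2, 3 across the three partitions) balances read/write load over all brokers.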

The corresponding file directories on disk:
[root@hadoop002 data]# ll
total 24
-rw-r--r-- 1 kafka kafka 0 Feb 23 21:25 cleaner-offset-checkpoint
drwxr-xr-x 2 kafka kafka 4096 Feb 25 17:10 kunming-0
drwxr-xr-x 2 kafka kafka 4096 Feb 25 17:10 kunming-1
drwxr-xr-x 2 kafka kafka 4096 Feb 25 17:10 kunming-2
-rw-r--r-- 1 kafka kafka 54 Feb 25 16:54 meta.properties
-rw-r--r-- 1 kafka kafka 40 Feb 25 17:15 recovery-point-offset-checkpoint
-rw-r--r-- 1 kafka kafka 40 Feb 25 17:16 replication-offset-checkpoint
[root@hadoop002 data]# pwd
/var/local/kafka/data
[root@hadoop002 data]#
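The listing shows the rule stated earlier: each partition of a topic is one directory under log.dirs, named `<topic>-<partition>`. A sketch recreating that layout under a temp directory (purely illustrative; real partition directories are created by the broker and contain segment and index files):

```shell
# Sketch: the on-disk layout Kafka creates, one directory per partition.
datadir=$(mktemp -d)   # stands in for /var/local/kafka/data
topic=kunming
for p in 0 1 2; do
  mkdir "$datadir/$topic-$p"
done
ls "$datadir"
```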
