Kafka Security Authentication: Kerberos & SCRAM

SASL/SCRAM Dynamic Authentication

Configuring SASL/PLAIN authentication implements access control for Kafka, but it has a limitation: users can only be configured in the KafkaServer section of the JAAS file, so once Kafka has started, no new users can be added dynamically. SASL/SCRAM authentication supports adding users and assigning permissions at runtime. The installation steps follow.
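Unlike PLAIN, SCRAM never stores the plaintext password: ZooKeeper holds only a salt, an iteration count, and two derived verifiers (the salt, iterations, stored_key, and server_key fields that kafka-configs.sh --describe prints later in this section). A minimal Python sketch of the RFC 5802/RFC 7677 derivation behind those fields, using a fixed demo salt so the result is reproducible (Kafka generates a random salt per user):

```python
import base64
import hashlib
import hmac

def scram_sha256_verifier(password: str, salt: bytes, iterations: int = 4096) -> dict:
    """Derive the SCRAM-SHA-256 verifier fields a broker stores (RFC 5802/7677)."""
    # SaltedPassword := PBKDF2-HMAC-SHA256(password, salt, iterations)
    salted = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    client_key = hmac.new(salted, b"Client Key", hashlib.sha256).digest()
    stored_key = hashlib.sha256(client_key).digest()  # StoredKey = H(ClientKey)
    server_key = hmac.new(salted, b"Server Key", hashlib.sha256).digest()
    return {
        "salt": base64.b64encode(salt).decode(),
        "stored_key": base64.b64encode(stored_key).decode(),
        "server_key": base64.b64encode(server_key).decode(),
        "iterations": iterations,
    }

# Fixed demo salt; the password is the admin password used in the steps below.
creds = scram_sha256_verifier("admin-sec", b"xcd663d2p7n1y1mizbsgzvv5j")
print(creds)
```

kafka-configs.sh performs this kind of derivation before writing the user record to ZooKeeper, which is why credentials can be added at runtime without restarting brokers.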

Initialization

① Start the Zookeeper service

[root@CentOS zookeeper-3.4.6]# ./bin/zkServer.sh start zoo.cfg
JMX enabled by default
Using config: /usr/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

② Extract the Kafka installation package

③ Create SCRAM credentials

1. Create the inter-broker communication user admin (this user must be created before enabling SASL, otherwise the broker fails to start)

[root@CentOS kafka_2.11-2.2.0]# ./bin/kafka-configs.sh --zookeeper CentOS:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-sec],SCRAM-SHA-512=[password=admin-sec]' --entity-type users --entity-name admin
Completed Updating config for entity: user-principal 'admin'.

2. Create the producer user: producer

[root@CentOS kafka_2.11-2.2.0]# ./bin/kafka-configs.sh --zookeeper CentOS:2181 --alter --add-config 'SCRAM-SHA-256=[password=producer-sec],SCRAM-SHA-512=[password=producer-sec]' --entity-type users --entity-name producer
Completed Updating config for entity: user-principal 'producer'.

3. Create the consumer user: consumer

[root@CentOS kafka_2.11-2.2.0]# ./bin/kafka-configs.sh --zookeeper CentOS:2181 --alter --add-config 'SCRAM-SHA-256=[password=consumer-sec],SCRAM-SHA-512=[password=consumer-sec]' --entity-type users --entity-name consumer
Completed Updating config for entity: user-principal 'consumer'.

4. View SCRAM credential information

  • View credentials for all users
[root@CentOS kafka_2.11-2.2.0]# ./bin/kafka-configs.sh --zookeeper CentOS:2181 --describe --entity-type users
Configs for user-principal 'admin' are SCRAM-SHA-512=salt=eGNkNjYzZDJwN24xeTFtaXpic2d6dnY1ag==,stored_key=l4FUWp9mV5gjT2NQT0ehFoZ6xp2UVWo9uzdoqCMTkHwM/QeJLL18ox6Xj4hDe3RBb4nv/RjGsJgKkXHd+cURNg==,server_key=QMAjOMaLnrbzwyJwlXaPFK81HuIQzS9NJJGrQewKlpHO/7oq7Pc8BAxMApyGjv7THFpzcLiFarspyvPJeG1V2w==,iterations=4096,SCRAM-SHA-256=salt=N2FyaWdpenRiYzczeWUwdXpidGN5N2NlYQ==,stored_key=q1rarCTxAZgLT14da2BGoKJ+AR80rqkRSCCH6q+wNC8=,server_key=34mFNBMYr5S8xznga6/N7eWPB16fRgM/uXh1A7Mp9NU=,iterations=4096
Configs for user-principal 'producer' are SCRAM-SHA-512=salt=bTg2dmExaDlucGdrOTh0bGp3dzVleDJzNg==,stored_key=OIKvp1ZqEBYh6l6W6DAaVGoff7qpSQ6QW21TH2k8Flt5V3IpUXXAjq9zkE8M1QHB5dTDaIxudYpDsJrr5sdbgw==,server_key=s2tMQOEb7aR7fFpkGFmy/OOqsDqy/Os32JbCUj3Crd/bXwQsbez5Bp661bliQVze8db9cBNOnvWGrf3smDJQNg==,iterations=4096,SCRAM-SHA-256=salt=NWJpajRncXR2MW8wOW04NzNqanM0YTI0Yg==,stored_key=HamFB9o2XMNzDyNhCCkBfDo73rwF9spdM3joIui7nZY=,server_key=4Sbnk6WwXwSHAB4BU8JRQBTAobvgW60ZSH84rOtWuy8=,iterations=4096
Configs for user-principal 'consumer' are SCRAM-SHA-512=salt=ZDU4aHVlaHA2dWI4cjVsMGkzOXNvcjJxNQ==,stored_key=XeSS7hyl3BKHPUWEM7giYr3Vps3bb/vBR5knsU7omjmloI3Qpdr8cfqkVEDd2dO11GWyzp2v2T0V8LX1BMEEeg==,server_key=e87Y3Jrmwsk+VjW4OBtY/Xn8MP4Jx4ZWenZkXQfe6B3JZiVUdJhlahtU6dIokSU+0dCzTfMu/JXDYipr29f2RA==,iterations=4096,SCRAM-SHA-256=salt=dHR1ZDFja2JrMTVoNWE5ZHp4Y3p2cjB6ZA==,stored_key=421ZqStlkF+UEJwDpHPVUnY5MJzSO3sjg0wU6QkSWd4=,server_key=9LpRUhwcVZzWFL2YBWZ4sbQFYKC4b6krNshnI5IBsDI=,iterations=4096
  • View credentials for a specific user
[root@CentOS kafka_2.11-2.2.0]# ./bin/kafka-configs.sh --zookeeper CentOS:2181 --describe --entity-type users --entity-name admin                                         
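The salt values in the describe output are plain base64 and can be inspected directly. For example, decoding the admin SCRAM-SHA-512 salt copied verbatim from the output above:

```python
import base64

# admin's SCRAM-SHA-512 salt, copied from the --describe output above
salt_b64 = "eGNkNjYzZDJwN24xeTFtaXpic2d6dnY1ag=="
salt = base64.b64decode(salt_b64)
# Decodes to a 25-byte random ASCII string generated by the broker tooling.
print(salt, len(salt))
```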

5. Delete SCRAM credentials

[root@CentOSB kafka_2.11-2.2.0]# ./bin/kafka-configs.sh --zookeeper CentOSB:2181 --alter --delete-config 'SCRAM-SHA-512' --delete-config 'SCRAM-SHA-256' --entity-type users --entity-name admin

Server-side Configuration

1. Create a kafka_server_jaas.conf file in the $KAFKA_HOME/config directory

KafkaServer {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="admin"
    password="admin-sec";
};
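Alternatively, recent Kafka versions also accept the login module inline in server.properties (configured per listener and per mechanism), which avoids the separate JAAS file and the -Djava.security.auth.login.config flag set in the next step. A sketch, assuming the SASL_PLAINTEXT listener and SCRAM-SHA-256 mechanism used in this section:

```properties
# Inline alternative to kafka_server_jaas.conf; the listener name and
# mechanism in the key must match listeners= and sasl.enabled.mechanisms=
listener.name.sasl_plaintext.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="admin" \
    password="admin-sec";
```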

2. Edit the bin/kafka-run-class.sh file as follows

[root@CentOSB kafka_2.11-2.2.0]# 
...
# JVM performance options
if [ -z "$KAFKA_JVM_PERFORMANCE_OPTS" ]; then
  KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -Djava.awt.headless=true -Dzookeeper.sasl.client=false  -Djava.security.auth.login.config=/usr/kafka_2.11-2.2.0/config/kafka_server_jaas.conf $KAFKA_JAAS"
fi

3. Update config/server.properties with the following

...
listeners=SASL_PLAINTEXT://CentOS:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
sasl.enabled.mechanisms=SCRAM-SHA-256
allow.everyone.if.no.acl.found=false
super.users=User:admin
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
...
zookeeper.connect=CentOS:2181

4. Start the Kafka service

[root@CentOSB kafka_2.11-2.2.0]# ./bin/kafka-server-start.sh -daemon config/server.properties

Client Configuration

1. Configure the kafka-topics.sh script

  • Create config.properties, producer.properties, and consumer.properties in the $KAFKA_HOME/config directory, each containing:
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-sec";
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
  • Create test-topic for the tests that follow
[root@CentOS kafka_2.11-2.2.0]# ./bin/kafka-topics.sh --bootstrap-server CentOS:9092 --command-config config/config.properties --create --topic test-topic --partitions 2 --replication-factor 1 
[root@CentOS kafka_2.11-2.2.0]# ./bin/kafka-topics.sh --bootstrap-server CentOS:9092 --command-config config/config.properties --describe --topic test-topic
Topic:test-topic        PartitionCount:2        ReplicationFactor:1     Configs:segment.bytes=1073741824
        Topic: test-topic       Partition: 0    Leader: 0       Replicas: 0     Isr: 0
        Topic: test-topic       Partition: 1    Leader: 0       Replicas: 0     Isr: 0
  • Grant the producer user Write permission on test-topic
[root@CentOS kafka_2.11-2.2.0]#  ./bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=CentOS:2181 --add --allow-principal User:producer --operation Write --topic test-topic
  • View the ACL list
[root@CentOS kafka_2.11-2.2.0]#  ./bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=CentOS:2181 --list

Current ACLs for resource `Topic:LITERAL:test-topic`:
        User:producer has Allow permission for operations: Write from hosts: *

  • Remove access permissions for a resource
[root@CentOS kafka_2.11-2.2.0]#  ./bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=CentOS:2181 --remove --resource-pattern-type LITERAL --topic test-topic
Are you sure you want to delete all ACLs for resource filter `ResourcePattern(resourceType=TOPIC, name=test-topic, patternType=LITERAL)`? (y/n)
y
  • Allow access from a specific IP address
[root@CentOSA kafka_2.11-2.2.0]# ./bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=CentOS:2181 --add --operation Write --allow-host 192.168.42.128  --allow-principal User:producer --topic test-topic
  • Deny access from a specific IP address (blacklist)
[root@CentOSA kafka_2.11-2.2.0]# ./bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=CentOS:2181 --add --operation Write --deny-host 192.168.42.128  --deny-principal User:producer --topic test-topic
  • Test access
[root@CentOS kafka_2.11-2.2.0]# ./bin/kafka-console-producer.sh --broker-list CentOS:9092 --producer.config config/producer.properties --topic test-topic
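The three client properties above map directly onto client-library options. A hypothetical sketch in the keyword-argument form used by the kafka-python library (an assumption: kafka-python >= 2.0.0, which added SCRAM support; the broker address and credentials are the ones created earlier):

```python
# Hypothetical kafka-python settings mirroring config/producer.properties;
# keys like sasl_plain_username are kafka-python KafkaProducer kwargs.
producer_conf = dict(
    bootstrap_servers="CentOS:9092",
    security_protocol="SASL_PLAINTEXT",
    sasl_mechanism="SCRAM-SHA-256",
    sasl_plain_username="producer",
    sasl_plain_password="producer-sec",
)

# With a reachable broker, this dict would be passed through unchanged:
#   from kafka import KafkaProducer
#   KafkaProducer(**producer_conf).send("test-topic", b"hello")
print(sorted(producer_conf))
```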

Client Throttling

  • producer_byte_rate (e.g. 1024): the number of bytes per second a producer may publish to a single broker.
  • consumer_byte_rate (e.g. 2048): the number of bytes per second a consumer may fetch from a single broker.
  • request_percentage (e.g. 200): a request-time quota scoped to a single thread. With request_percentage = n, n applies to one thread, so the maximum meaningful value is (num.network.threads + num.io.threads) * 100, which is 1100 with the default settings. As clients keep sending requests, the broker continuously compares the current request-processing-time percentage against this quota; once it exceeds the limit, throttling is triggered: the broker waits for a while before returning the response to the client.
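The request_percentage ceiling, and the effect of the byte-rate quota on the perf test used later in this section, can be checked with simple arithmetic (a sketch; the thread counts are the broker defaults):

```python
# request_percentage is a per-thread quota, so its effective ceiling is
# (num.network.threads + num.io.threads) * 100.
num_network_threads = 3  # broker default
num_io_threads = 8       # broker default
ceiling = (num_network_threads + num_io_threads) * 100
print(ceiling)  # 1100

# With producer_byte_rate=1024, the perf test below (100 records of
# 1024 bytes against a single broker) cannot finish faster than roughly:
min_seconds = 100 * 1024 / 1024
print(min_seconds)  # 100.0
```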

Examples

  • View a user's configuration
[root@CentOS kafka_2.11-2.2.0]# ./bin/kafka-configs.sh --zookeeper CentOS:2181 --describe --entity-type users --entity-name producer
  • View client throttling configuration
[root@CentOS kafka_2.11-2.2.0]# ./bin/kafka-configs.sh --zookeeper CentOS:2181 --describe --entity-type clients --entity-name producer
  • Throttle the default user
[root@CentOS kafka_2.11-2.2.0]# ./bin/kafka-configs.sh  --zookeeper CentOS:2181 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-default
  • Throttle the default client
[root@CentOS kafka_2.11-2.2.0]#  ./bin/kafka-configs.sh  --zookeeper CentOS:2181 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type clients --entity-default
  • Throttle a specific user
[root@CentOS kafka_2.11-2.2.0]# ./bin/kafka-configs.sh --zookeeper CentOS:2181 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-name producer
  • Throttle a specific client
[root@CentOS kafka_2.11-2.2.0]# ./bin/kafka-configs.sh --zookeeper CentOS:2181 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type clients --entity-name console-producer
[root@CentOS kafka_2.11-2.2.0]# ./bin/kafka-producer-perf-test.sh --topic test-topic --num-records 100 --record-size 1024 --throughput -1 --producer.config config/producer.properties

  • Remove throttling
[root@CentOS kafka_2.11-2.2.0]# ./bin/kafka-configs.sh --zookeeper CentOS:2181 --alter --delete-config 'producer_byte_rate,consumer_byte_rate,request_percentage' --entity-name console-producer  --entity-type clients

Kerberos Authentication

① Create kafka.keytab and copy the file into the config directory under the Kafka installation directory

[root@CentOS kafka_2.11-2.4.0]# kadmin.local -q 'addprinc -randkey kafka/centos'
Authenticating as principal root/[email protected] with password.
WARNING: no policy specified for kafka/[email protected]; defaulting to no policy
Principal "kafka/[email protected]" created.

[root@CentOS kafka_2.11-2.4.0]# kadmin.local -q 'addprinc -randkey zookeeper/centos'
Authenticating as principal root/[email protected] with password.
WARNING: no policy specified for zookeeper/[email protected]; defaulting to no policy
Principal "zookeeper/[email protected]" created.

[root@CentOS kafka_2.11-2.4.0]# kadmin.local -q "ktadd -k /root/kafka.keytab kafka/centos zookeeper/centos"
Authenticating as principal root/[email protected] with password.
Entry for principal kafka/centos with kvno 3, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:/root/kafka.keytab.
Entry for principal kafka/centos with kvno 3, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/root/kafka.keytab.
Entry for principal kafka/centos with kvno 3, encryption type des3-cbc-sha1 added to keytab WRFILE:/root/kafka.keytab.
...
Entry for principal zookeeper/centos with kvno 3, encryption type des-hmac-sha1 added to keytab WRFILE:/root/kafka.keytab.
Entry for principal zookeeper/centos with kvno 3, encryption type des-cbc-md5 added to keytab WRFILE:/root/kafka.keytab.

② Edit server.properties

listeners=SASL_PLAINTEXT://centos:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka

③ Create a kafka_server_jaas.conf file in the Kafka installation directory

KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/usr/kafka_2.11-2.4.0/config/kafka.keytab"
    principal="kafka/[email protected]";
};
// Zookeeper client authentication
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/usr/kafka_2.11-2.4.0/config/kafka.keytab"
    storeKey=true
    useTicketCache=false
    principal="zookeeper/centos";
};
// Kafka client authentication
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/usr/kafka_2.11-2.4.0/config/kafka.keytab"
    principal="kafka/[email protected]";
};
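For command-line clients against this broker, a client properties file analogous to the SCRAM one in the previous section would look roughly as follows (a sketch: GSSAPI replaces SCRAM, and <REALM> is a placeholder for the actual Kerberos realm):

```properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
    useKeyTab=true \
    storeKey=true \
    keyTab="/usr/kafka_2.11-2.4.0/config/kafka.keytab" \
    principal="kafka/centos@<REALM>";
```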

④ Edit kafka-run-class.sh in the Kafka installation directory, adding the following to the # JVM performance options section

  • Original configuration
...
if [ -z "$KAFKA_JVM_PERFORMANCE_OPTS" ]; then
  KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -Djava.awt.headless=true"
fi
...
  • After modification
...
if [ -z "$KAFKA_JVM_PERFORMANCE_OPTS" ]; then
  KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -Djava.awt.headless=true -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/usr/kafka_2.11-2.4.0/config/kafka_server_jaas.conf -Dsun.security.krb5.debug=false"
fi
...

⑤ Edit Zookeeper's zoo.cfg

authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000

⑥ Add a jaas.conf file in Zookeeper's conf directory

Server {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/usr/kafka_2.11-2.4.0/config/kafka.keytab"
    storeKey=true
    useTicketCache=false
    principal="zookeeper/centos";
};
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/usr/kafka_2.11-2.4.0/config/kafka.keytab"
    storeKey=true
    useTicketCache=false
    principal="zookeeper/centos";
};

Because Kafka and Zookeeper are deployed on the same server here, the keyTab path is the same as Kafka's.

⑦ Edit zkEnv.sh, appending the following as the last line

export JVMFLAGS="-Djava.security.auth.login.config=/usr/zookeeper-3.4.6/conf/jaas.conf"

⑧ Start Zookeeper and Kafka respectively; the Kerberos-enabled Kafka/Zookeeper setup is now complete.
