Deploying Filebeat on k8s to Collect Logs and Push Them to Kafka

        There are many approaches to this online; here we run a dedicated log-collection container alongside the application in the same Pod. Without further ado, the hands-on steps follow.

1. Pull the image from the official registry

docker pull docker.elastic.co/beats/filebeat:8.4.3

        Documentation: https://www.elastic.co/guide/en/beats/filebeat/current/running-on-docker.html

2. Write filebeat.yml

filebeat.inputs:
- type: log
  paths:
    - /logs/*.log
  # output.kafka.topic below resolves %{[fields.log_topic]}, so define it on the input
  fields:
    log_topic: xxx
output.kafka:
  # Enter your Kafka broker address(es)
  hosts: ["xxx.xxx.xxx.xxx:9092"]

  # Enter your own topic (resolved from fields.log_topic set on the input above)
  topic: '%{[fields.log_topic]}'
  partition.round_robin:
    reachable_only: false

  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000

        Documentation:

                https://www.elastic.co/guide/en/beats/filebeat/8.4/filebeat-input-log.html

                https://www.elastic.co/guide/en/beats/filebeat/8.4/kafka-output.html

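        The Deployment in the next step mounts this configuration from a ConfigMap named filebeat, so create that ConfigMap first. A minimal sketch, assuming the file above is saved locally as filebeat.yml and you run it in the namespace where the Deployment will live:

# create the ConfigMap that the Deployment's volume references
kubectl create configmap filebeat --from-file=filebeat.yml

# confirm the contents made it in
kubectl get configmap filebeat -o yaml
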
3. Write deployment.yaml

        "......"表示省略,只展示关键代码

……
spec:
  volumes:
    - name: filebeat
      configMap:
        name: filebeat
        items:
          - key: filebeat.yml
            path: filebeat.yml
        defaultMode: 420
    - name: nginx-log
      emptyDir: {}
  containers:
    - name: nginx
      image: nginx
……
      volumeMounts:
        - name: nginx-log
          mountPath: /var/log/nginx
    - name: filebeat
      image: docker.elastic.co/beats/filebeat:8.4.3
      args:
        - '-c'
        - '/etc/filebeat.yml'
        - '-e'
        - '--strict.perms=false'
      volumeMounts:
        - name: filebeat
          readOnly: true
          mountPath: /etc/filebeat.yml
          subPath: filebeat.yml
        - name: nginx-log
          readOnly: true
          mountPath: /logs
……
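
        After filling in the omitted parts (metadata, labels, selector, and so on), apply the manifest and check that the Filebeat sidecar starts cleanly. A quick sketch, where the Deployment name nginx is only illustrative:

kubectl apply -f deployment.yaml

# confirm both containers are running, then tail the sidecar's own output
kubectl get pods
kubectl logs deployment/nginx -c filebeat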

4. Verify in Kafka that records are arriving

./kafka-console-consumer.sh --bootstrap-server xxx.xxx.xxx.xxx:9092 --topic xxx --from-beginning
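
        If the consumer prints nothing, first make sure the topic actually exists; when the broker does not auto-create topics, create it yourself. A sketch, with the partition and replication values purely illustrative:

./kafka-topics.sh --bootstrap-server xxx.xxx.xxx.xxx:9092 --create --topic xxx --partitions 3 --replication-factor 1

# list topics to confirm
./kafka-topics.sh --bootstrap-server xxx.xxx.xxx.xxx:9092 --list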
