Logstash

Overview

Logstash collects logs centrally, transforms and filters them, and stores them at a destination.

It is a distributed log collection framework written in JRuby and therefore tied to the Java platform. It is frequently combined with Elasticsearch and Kibana to form the well-known ELK stack, which is very well suited to log analysis.

Simply put, Logstash is a pipeline with real-time data transport capability, responsible for moving data from the input end of the pipe to the output end;

along the way you can insert "screens" into the pipe according to your needs, and Logstash provides many powerful filters to cover all kinds of scenarios.

It can also be used on its own as a log collector, shipping logs to many storage or staging systems such as MySQL, Redis, Kafka, HDFS, Lucene or Solr; the destination is not necessarily Elasticsearch.

Architecture

Data source --> input --> filter --> output --> destination

Because data is usually scattered across many different sources (MySQL, Redis, Kafka, HDFS, and so on), Logstash supports a wide range of inputs and can capture events from many common sources at the same time. It can continuously collect data in streaming fashion from logs, metrics, web applications and data stores.

filter: applies processing to an event before it is sent on by the output.
grok: parses text data into a structured form.

grok is currently the tool of choice in Logstash for turning unstructured log data into structured, queryable data.

[root@node1 ~]# rpm -ql logstash | grep "patterns$"   # location of the predefined grok patterns
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-patterns-core-4.1.2/patterns/grok-patterns
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-patterns-core-4.1.2/patterns/mcollective-patterns

Output: writes the filtered data to a database or other storage.

In summary:

inputs: required; they generate events (Inputs generate events). Common inputs: file, syslog, redis, beats (e.g. Filebeat).


filters: optional; they process and transform the data (filters modify them). Common filters: grok, mutate, drop, clone, geoip.


outputs: required; they ship the data elsewhere (outputs ship them elsewhere). Common outputs: elasticsearch, file, graphite (graphing), statsd (statistics).

Installation

RPM install

Logstash is written in JRuby, so a Java runtime environment is required.
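
A minimal install sketch, assuming a JDK and the Logstash RPM used later in this document (logstash-7.9.1.rpm) have already been downloaded locally:

yum install java-1.8.0-openjdk -y        # Logstash runs on the JVM
java -version
yum install ./logstash-7.9.1.rpm -y      # install the Logstash RPM
echo 'export PATH=/usr/share/logstash/bin:$PATH' > /etc/profile.d/logstash.sh   # put the logstash binary on PATH (same trick as in the full example below)
source /etc/profile.d/logstash.sh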

Usage workflow

Roughly, the following parts need to be configured:

input {     # where to read input data from
   
}

filter {    # parse/structure the data, e.g. with grok patterns
   
}

output {    # where to store the processed data
  
}

Example 1: standard input

# Start Logstash directly with the stdin input and stdout output plugins

cd logstash-7.8.0 && bin/logstash -e 'input{stdin{}}output{stdout{codec=>rubydebug}}'

Once it is running, type some input in the terminal; this behaves the same as the config-file version below, which is the more common way to run it:
[root@node1 ~]# cd /etc/logstash/conf.d/   # Logstash pipeline config files live here by default
[root@node1 conf.d]# ls
[root@node1 conf.d]# vim shil.conf
input {
  stdin {   # standard input
  }
}

output {
  stdout {   # standard output
    codec => rubydebug   # rubydebug codec (pretty-printed output)
  }
}

[root@node1 conf.d]# logstash -f /etc/logstash/conf.d/shil.conf --config.debug   # --config.debug helps check the configuration for errors (--config.test_and_exit validates it without running the pipeline)

[INFO ] 2020-10-13 13:16:47.900 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
The stdin plugin is now waiting for input:
[INFO ] 2020-10-13 13:16:48.010 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2020-10-13 13:16:48.227 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
hello world   <-- type something here
{
          "host" => "node1",   the current host
       "message" => "hello world",  the message that was entered
      "@version" => "1",   version number
    "@timestamp" => 2020-10-13T06:08:07.476Z  
}

Example 2: analyzing log lines with grok, reading from stdin and printing to stdout.

Logs can be filtered with a custom grok pattern.
Syntax:
       %{SYNTAX:SEMANTIC}
               SYNTAX: the name of a predefined pattern;
               SEMANTIC: your own identifier for the matched text;
[root@node1 conf.d]# vim groksimple.conf
input {
    stdin {}
}

filter {
    grok {
    match => { "message" => "%{IP:clientip} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
 }
}

output {
  stdout {
  codec => rubydebug
  }
}
[root@node1 conf.d]# logstash -f /etc/logstash/conf.d/groksimple.conf

[INFO ] 2020-10-13 14:29:46.098 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
1.1.1.1 get /index.html 30 0.23  <-- type a sample log line on stdin
{
    "@timestamp" => 2020-10-13T06:30:11.973Z,
          "host" => "node1",
      "@version" => "1",
       "request" => "/index.html",
       "message" => "1.1.1.1 get /index.html 30 0.23",
      "duration" => "0.23",
      "clientip" => "1.1.1.1",
        "method" => "get",
         "bytes" => "30"
}

Example 3: filtering logs produced by a web server and printing them to stdout

The file input plugin does not read remote files; it can only read files on the local host.

To read a remote file, the options are:

A. Install Filebeat on the remote server (some.server) to tail the syslog file there and ship it to your Logstash, or directly to your ES server (a minimal sketch of this setup follows the http_poller example below).

B. Install Logstash directly on the remote server and use the file input plugin there.

C. Use the http_poller plugin to retrieve the file over HTTP.

input {
  http_poller {
    urls => {
      mysyslog => "http://some.server/dir/derp/syslog.log"
    }
    request_timeout => 60
    interval => 60
  }
}
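
For option A, a minimal sketch of both ends (assuming Filebeat 7.x on the remote host; the Logstash host name, port 5044 and the log path are placeholders):

# filebeat.yml on the remote server (some.server)
filebeat.inputs:
  - type: log
    paths:
      - /var/log/syslog
output.logstash:
  hosts: ["your-logstash-host:5044"]

# matching beats input on the Logstash side
input {
  beats {
    port => 5044
  }
}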

 

[root@node1 conf.d]# vim httpdsimple.conf
input {
   file {     # which file to read from
   path => ["/var/log/httpd/access_log"]   # path to the file
   type => "apachelog"    # event type
   start_position => "beginning"    # read the file from the beginning
   }
}

filter {
   grok {    # parse and structure the events
   match => {"message" => "%{COMBINEDAPACHELOG}"}  # pattern for the httpd combined log format
   }
}

output {
   stdout {
     codec => rubydebug
  }
}

[root@node4 conf.d]# logstash -f /etc/logstash/conf.d/httpd.conf --path.data=/tmp

[INFO ] 2020-10-13 17:58:00.852 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9603}

Access 10.5.100.183 directly in a browser. (Note that the resulting events below carry the _grokparsefailure tag: these access_log lines begin with "- - -" instead of a client IP, so they do not match %{COMBINEDAPACHELOG}.)

{
    "@timestamp" => 2020-10-13T10:01:02.347Z,
       "message" => "- - - [13/Oct/2020:18:01:01 +0800] \"GET / HTTP/1.1\" 304 - \"-\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.90 Safari/537.36\"",
      "@version" => "1",
          "tags" => [
        [0] "_grokparsefailure"
    ],
          "path" => "/var/log/httpd/access_log",
          "type" => "apachelog",
          "host" => "node4"
}
{
    "@timestamp" => 2020-10-13T10:01:02.407Z,
       "message" => "- - - [13/Oct/2020:18:01:01 +0800] \"GET /favicon.ico HTTP/1.1\" 404 209 \"http://10.5.100.183/\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.90 Safari/537.36\"",
      "@version" => "1",
          "tags" => [
        [0] "_grokparsefailure"
    ],
          "path" => "/var/log/httpd/access_log",
          "type" => "apachelog",
          "host" => "node4"
}

Example 4: reading data from a database with the JDBC input plugin

# Logstash input/output configuration

input{
        jdbc{
                jdbc_driver_library => "/usr/local/logstash-7.8.0/config/mysql-connector-java-8.0.16.jar"
                jdbc_driver_class => "com.mysql.jdbc.Driver"
                jdbc_connection_string => "jdbc:mysql://192.168.223.128/test"
                jdbc_user => "root"
                jdbc_password => "root"
                use_column_value => true
                tracking_column => "id" # column used to track where the last run left off
                schedule => "* * * * *" # polling schedule (cron-style); the smallest interval here is 1 minute, sub-minute polling is not supported
                jdbc_paging_enabled => "true"
                jdbc_page_size => "50000"
                statement => "SELECT * from tb_user where id > :sql_last_value" # fetch only new rows; the saved position can be reset by deleting /root/.logstash_jdbc_last_run (locate it with: find /root -name ".logstash_jdbc_last_run")
        }
}
output{
        stdout{
                codec=>rubydebug
        }
}


# Start the service

cd logstash-7.8.0 && bin/logstash -f config/jdbc.conf

# Insert some rows into the database and watch the Logstash output
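
To index the rows into Elasticsearch instead of printing them, the stdout output above can be swapped for an elasticsearch output. This is only a sketch: the host ydt1:9200 is reused from the other examples and the index name tb_user is made up; keying document_id on the tracked id column means a re-read row updates the existing document instead of creating a duplicate.

output{
        elasticsearch{
                hosts => ["ydt1:9200"]
                index => "tb_user"
                document_id => "%{id}"  # use the primary key from the query as the ES document id
        }
}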

Example 5: collecting syslog (kernel and system log messages)

# The syslog facility records messages produced by the kernel and by applications; administrators can inspect these logs to understand the state of the system.
# syslog is already installed by default, so it only needs to be pointed at Logstash.

cd logstash-7.8.0/config/
vim syslog.conf  # create a pipeline config file with the following content
------------------------------------------------------------------
input{
        syslog{
                type => "system-syslog"
                port => 514
        }
}
output{
        stdout{
                codec=> rubydebug
        }
}


# Start the service

cd logstash-7.8.0 && bin/logstash -f config/syslog.conf


# Send some data
# Edit the system log configuration

vim /etc/rsyslog.conf
# add the following line (@@ forwards over TCP; a single @ would forward over UDP)
*.* @@192.168.223.128:514
# restart the system log service for the change to take effect
systemctl restart rsyslog

# Check the Logstash output

Example 6: other plugins

Filter plugins

Filter plugins are the main reason Logstash is so powerful; by combining filters you can turn raw events into exactly the structured data you want.

        grok plugin: grok regular expressions are a very important part of Logstash; with grok you can conveniently split and index data. grok can match almost any data, although that flexibility comes at a real cost in performance and resources. Syntax: %{SYNTAX:SEMANTIC}. By default grok loads the patterns in /logstash-7.8.0/vendor/bundle/jruby/2.5.0/gems/logstash-patterns-core-4.1.2/patterns/grok-patterns, and of course you can also define your own regular expressions, as sketched below.
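
A sketch of a custom pattern (the pattern name MYAPPID and the log format are made up for illustration); a pattern can be defined inline with pattern_definitions, or stored in a file referenced via patterns_dir:

filter {
    grok {
        # ad-hoc pattern instead of editing the shipped grok-patterns file
        pattern_definitions => { "MYAPPID" => "[A-Z]{3}-[0-9]{4}" }
        match => { "message" => "%{MYAPPID:app_id} %{GREEDYDATA:msg_body}" }
    }
}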

        Date plugin: parses dates from fields so they can be used as the event's Logstash timestamp; a minimal sketch follows.
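
A minimal sketch, assuming grok has already extracted a timestamp field in the Apache access-log time format:

filter {
    date {
        # parse the extracted field and use it as the event's @timestamp
        match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
        target => "@timestamp"
    }
}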

        geoip plugin: geoip is a common, free IP geolocation database. Given an IP address it returns location information such as country, province/city, latitude and longitude, which is very useful for map visualizations and per-region statistics; a minimal sketch follows.
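
A minimal sketch, assuming the client IP was captured into a clientip field (as %{COMBINEDAPACHELOG} does):

filter {
    geoip {
        # look up clientip and add a geoip field with country, city, coordinates, etc.
        source => "clientip"
    }
}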

        mutate plugin: mutate is another very important Logstash plugin. It provides rich handling of basic field types, including renaming, removing, replacing and modifying fields of a log event. A commonly used mutate feature is gsub, regular-expression replacement within a field; note that gsub only works on string fields. A minimal sketch follows.
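
A minimal gsub sketch; the field name and the replacement are arbitrary examples:

filter {
    mutate {
        # gsub takes triples of [field, regex, replacement];
        # here every "/" in the request field is replaced with "_"
        gsub => ["request", "/", "_"]
    }
}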

Example 7: output plugins

As the name suggests, outputs write events to a destination, usually a file or Elasticsearch.

cd logstash-7.8.0/config/
vim output_file.conf  # create a pipeline config file with the following content
------------------------------------------------------------------
input {stdin{}}
output{
        file{
                path => "/usr/local/logstash-7.8.0/config/datas/%{+YYYY-MM-dd}-%{host}.txt"
                codec => line {
                        format => "%{message}"
                }
                flush_interval => 0
        }
}

# Start the service
cd logstash-7.8.0 && bin/logstash -f config/output_file.conf

# Type some characters on the console

# They are written to the output file; next, output to Elasticsearch
cd logstash-7.8.0/config/
vim output_es.conf  # create a pipeline config file with the following content
------------------------------------------------------------------
input {stdin{}}
output {
    elasticsearch {
        hosts => ["ydt1:9200"]
        index => "logstash-%{+YYYY.MM.dd}"
    }
}


# Start the service
cd logstash-7.8.0 && bin/logstash -f config/output_es.conf

# Type some data on the console, then check with elasticsearch-head that it was saved

# Verify the output in Elasticsearch
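
Besides elasticsearch-head, the result can also be checked from the command line (the same checks are used in the full example at the end of this document):

curl -XGET 'localhost:9200/_cat/indices?v'              # a logstash-YYYY.MM.dd index should appear
curl -XGET 'localhost:9200/logstash-*/_search?pretty'   # inspect the indexed documents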

Full end-to-end example:

# Install Elasticsearch and Logstash

[root@node4 ~]# yum install elasticsearch-7.9.0-x86_64.rpm -y
[root@node4 ~]# yum install logstash-7.9.1.rpm -y
[root@node4 ~]# java -version
openjdk version "1.8.0_262"
OpenJDK Runtime Environment (build 1.8.0_262-b10)
OpenJDK 64-Bit Server VM (build 25.262-b10, mixed mode)
[root@node4 ~]# 
[root@node4 ~]# vim /etc/profile.d/logstash.sh
export PATH=/usr/share/logstash/bin:$PATH
[root@node4 ~]# source /etc/profile.d/logstash.sh 

# Stop the firewall

[root@node4 ~]# systemctl stop firewalld

# Add hosts entries on both machines

[root@node4 ~]# vim /etc/hosts
10.5.100.183 node4.magedu.com node4
10.5.100.146 node5.magedu.com node5
Do exactly the same on node5.


# Pipeline: httpd access logs -> Logstash -> Redis -> Elasticsearch -> Kibana
# Kibana is the log visualization component

# Install httpd

[root@node4 ~]# yum install httpd -y
[root@node4 ~]# ss -tnl | grep "80"
LISTEN     0      100        ::1:8009                    :::*                  
LISTEN     0      128         :::80                      :::*                  
LISTEN     0      100         :::8080                    :::*                  
LISTEN     0      1         ::ffff:127.0.0.1:8005                    :::*                  
[root@node4 ~]# 

# Logstash configuration for the httpd logs

[root@node4 ~]# vim /etc/logstash/conf.d/httpd.conf
input {
   file {
   path => ["/var/log/httpd/access_log"]
   type => "apachelog"
   start_position => "beginning"
   }
}

# Filtering is done with grok

filter {
   grok {
   match => {"message" => "%{COMBINEDAPACHELOG}"}
   }
}

# Output to Redis

output {
   redis {
     host => '10.5.100.146'
     data_type => 'list'
     key => 'logstash:redis'
     }
   }

On node5:
[root@node5 ~]# yum install redis -y   # requires the EPEL repository
[root@node5 ~]# vim /etc/redis.conf  # change the listen address
#bind 127.0.0.1
bind 10.5.100.146
[root@node5 ~]# ss -tnlp | grep '6379'
LISTEN     0      128    10.5.100.146:6379      *:*     users:(("redis-server",pid=11036,fd=4))
[root@node5 ~]

# Start Logstash on node4

[root@node4 ~]# logstash -f /etc/logstash/conf.d/httpd.conf --path.data=/root/ES/httpd/
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2020-10-14 13:15:22.654 [main] runner - Starting Logstash {"logstash.version"=>"7.9.1", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 25.262-b10 on 1.8.0_262-b10 +indy +jit [linux-x86_64]"}
[INFO ] 2020-10-14 13:15:22.716 [main] writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/root/ES/httpd/queue"}
[INFO ] 2020-10-14 13:15:22.734 [main] writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/root/ES/httpd/dead_letter_queue"}
[WARN ] 2020-10-14 13:15:23.225 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2020-10-14 13:15:23.261 [LogStash::Runner] agent - No persistent UUID file found. Generating new UUID {:uuid=>"47e6c4d2-f008-4155-bf09-00f2844aca95", :path=>"/root/ES/httpd/uuid"}
[INFO ] 2020-10-14 13:15:25.551 [Converge PipelineAction::Create] Reflections - Reflections took 57 ms to scan 1 urls, producing 22 keys and 45 values
[INFO ] 2020-10-14 13:15:26.639 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["/etc/logstash/conf.d/httpd.conf"], :thread=>"#"}
[INFO ] 2020-10-14 13:15:27.713 [[main]-pipeline-manager] javapipeline - Pipeline Java execution initialization time {"seconds"=>1.07}
[INFO ] 2020-10-14 13:15:28.041 [[main]-pipeline-manager] file - No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/root/ES/httpd/plugins/inputs/file/.sincedb_15940cad53dd1d99808eeaecd6f6ad3f", :path=>["/var/log/httpd/access_log"]}
[INFO ] 2020-10-14 13:15:28.073 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2020-10-14 13:15:28.245 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2020-10-14 13:15:28.250 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9604}

On node5, check that the events are arriving in Redis:

[root@node5 ~]# redis-cli -h 10.5.100.146 -p 6379
10.5.100.146:6379> LINDEX
(error) ERR wrong number of arguments for 'lindex' command
10.5.100.146:6379> KEYS *
1) "logstash:redis"            the key already exists
10.5.100.146:6379> LLEN logstash:redis
(integer) 12
10.5.100.146:6379>

On node4, access the httpd service from a browser. Back on node5:

10.5.100.146:6379> LLEN logstash:redis
(integer) 30                   the count grows immediately
10.5.100.146:6379>
10.5.100.146:6379> LINDEX logstash:redis 10     look at the element at index 10
"{\"type\":\"apachelog\",\"message\":\"- - - [13/Oct/2020:18:01:01 +0800] \\\"GET / HTTP/1.1\\\" 304 - \\\"-\\\" \\\"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.90 Safari/537.36\\\"\",\"path\":\"/var/log/httpd/access_log\",\"host\":\"node4\",\"@timestamp\":\"2020-10-14T05:15:29.299Z\",\"@version\":\"1\",\"tags\":[\"_grokparsefailure\"]}"
10.5.100.146:6379>

Next we pull the data out of Redis and store it in Elasticsearch. Official documentation: https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html

On node1:
[root@node1 ~]# cd /etc/logstash/conf.d/
[root@node1 conf.d]# vim elastic.conf
input {
  redis {
    host => '10.5.100.146'
    port => 6379
    data_type => 'list'
    key => 'logstash:redis'
  }
}
output {
  elasticsearch {
    hosts => ['127.0.0.1:9200']
    index => 'logstash-%{+YYYY.MM.dd}'
  }
}

[root@node1 conf.d]# logstash -f /etc/logstash/conf.d/elastic.conf
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2020-10-19 17:36:07.830 [main] runner - Starting Logstash {"logstash.version"=>"7.9.1", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 25.262-b10 on 1.8.0_262-b10 +indy +jit [linux-x86_64]"}
[WARN ] 2020-10-19 17:36:08.354 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2020-10-19 17:36:10.256 [Converge PipelineAction::Create] Reflections - Reflections took 41 ms to scan 1 urls, producing 22 keys and 45 values
[INFO ] 2020-10-19 17:36:11.292 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://127.0.0.1:9200/]}}
[WARN ] 2020-10-19 17:36:12.286 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://127.0.0.1:9200/"}
[INFO ] 2020-10-19 17:36:12.555 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>7}
[WARN ] 2020-10-19 17:36:12.557 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[INFO ] 2020-10-19 17:36:12.887 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//127.0.0.1:9200"]}
[INFO ] 2020-10-19 17:36:12.915 [Ruby-0-Thread-5: :1] elasticsearch - Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[INFO ] 2020-10-19 17:36:13.086 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["/etc/logstash/conf.d/elastic.conf"], :thread=>"#"}
[INFO ] 2020-10-19 17:36:13.182 [Ruby-0-Thread-5: :1] elasticsearch - Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[INFO ] 2020-10-19 17:36:13.285 [Ruby-0-Thread-5: :1] elasticsearch - Installing elasticsearch template to _template/logstash
[INFO ] 2020-10-19 17:36:14.954 [[main]-pipeline-manager] javapipeline - Pipeline Java execution initialization time {"seconds"=>1.86}
[INFO ] 2020-10-19 17:36:15.024 [[main]-pipeline-manager] redis - Registering Redis {:identity=>"redis://@10.5.100.146:6379/0 list:logstash:redis"}
[INFO ] 2020-10-19 17:36:15.044 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2020-10-19 17:36:15.279 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2020-10-19 17:36:15.778 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}

Verification: make a request to the web server on node4 to generate a log line; it travels through Redis and is stored in Elasticsearch on node1.

[root@node4 ~]# curl 10.5.100.183
yan
[root@node4 ~]#

[root@node1 ~]# curl -XGET 'localhost:9200/_cat/indices?v'
yellow open logstash-2020.10.19 0QGXtm6HRXOZuhNvv-mHzA 1 1  0 0  208b  208b
yellow open logstash-2020.10.09 HaFsjp0QTfywqB7NUtuNkw 1 1  6 0 7.6kb 7.6kb
[root@node1 conf.d]# curl -XGET 'localhost:9200/logstash-2020.10.19/_search?pretty'
{
  "took" : 100,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 24,
      "relation" : "eq"
    },
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "logstash-2020.10.19",
        "_type" : "_doc",
        "_id" : "pxk7QHUBLHpFwAZyUPDS",
        "_score" : 1.0,
        "_source" : {
          "message" : "- - - [19/Oct/2020:17:41:38 +0800] \"GET / HTTP/1.1\" 304 - \"-\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.90 Safari/537.36\"",
          "type" : "apachelog",
          "path" : "/var/log/httpd/access_log",
          "@timestamp" : "2020-10-19T09:41:38.393Z",
          "tags" : [ "_grokparsefailure" ],
          "host" : "node4",
          "@version" : "1"
        }
      },
      {
        "_index" : "logstash-2020.10.19",
        "_type" : "_doc",
        "_id" : "qBk7QHUBLHpFwAZyUPDS",
        "_score" : 1.0,
        "_source" : {
          "message" : "- - - [19/Oct/2020:17:41:37 +0800] \"GET / HTTP/1.1\" 304 - \"-\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.90 Safari/537.36\"",
          "type" : "apachelog",
          "path" : "/var/log/httpd/access_log",
          "@timestamp" : "2020-10-19T09:41:38.393Z",
          "tags" : [ "_grokparsefailure" ],
          "host" : "node4",
          "@version" : "1"
        }
      }

# The events have been written to ES under the index logstash-2020.10.19.
# Next, connect Kibana to browse the logs visually.

[root@node1 ELK]# ll
total 755776
-rw-r--r--  1 root root 319581641 Aug 18 20:04 elasticsearch-7.9.0-x86_64.rpm
-rw-r--r--  1 root root 295714192 Oct 20 11:43 kibana-7.9.2-linux-x86_64.tar.gz
-rw-r--r--  1 root root 158616924 Sep 16 12:41 logstash-7.9.1.rpm
[root@node1 ELK]#
[root@node1 ELK]# tar -xzf kibana-7.9.2-linux-x86_64.tar.gz
[root@node1 ELK]# ll
total 755776
-rw-r--r--  1 root root 319581641 Aug 18 20:04 elasticsearch-7.9.0-x86_64.rpm
drwxr-xr-x 13 root root       266 Oct 20 12:46 kibana-7.9.2-linux-x86_64
-rw-r--r--  1 root root 295714192 Oct 20 11:43 kibana-7.9.2-linux-x86_64.tar.gz
-rw-r--r--  1 root root 158616924 Sep 16 12:41 logstash-7.9.1.rpm
[root@node1 ELK]# cd kibana-7.9.2-linux-x86_64
[root@node1 ELK]# ln -sv kibana-7.9.2-linux-x86_64 kibana
‘kibana’ -> ‘kibana-7.9.2-linux-x86_64’
[root@node1 ELK]# cd kibana
[root@node1 kibana]# ls
bin  config  LICENSE.txt  node_modules  optimize  plugins  src  x-pack  built_assets  data  node  NOTICE.txt  package.json  README.txt  webpackShims
[root@node1 kibana]# cd config/
[root@node1 config]# ll
total 12
-rw-r--r-- 1 root root 5259 Sep 23 09:42 kibana.yml
-rw-r--r-- 1 root root  216 Sep 23 09:42 node.options
[root@node1 config]# vim kibana.yml
server.port: 5601                                   # listen port
server.host: "localhost"                            # listen address
elasticsearch.hosts: ["http://localhost:9200"]      # Elasticsearch address (or cluster addresses); 9200 is the HTTP port
[root@node1 kibana]# bin/kibana --allow-root        # run Kibana
  log [09:24:59.498] [info][plugins][taskManager][taskManager] TaskManager is identified by the Kibana UUID: ef534e8e-49a1-4025-9795-d6bc2f3cb9da
  log [09:24:59.557] [info][plugins][watcher] Your basic license does not support watcher. Please upgrade your license.
  log [09:24:59.558] [info][crossClusterReplication][plugins] Your basic license does not support crossClusterReplication. Please upgrade your license.
  log [09:24:59.578] [info][kibana-monitoring][monitoring][monitoring][plugins] Starting monitoring stats collection
  log [09:25:01.537] [warning] You're running Kibana 7.9.2 with some different versions of Elasticsearch. Update Kibana or Elasticsearch to the same version to prevent compatibility issues: v7.9.0 @ 127.0.0.1:9200 (127.0.0.1)
  ...
  log [09:25:01.630] [info][listening] Server running at http://localhost:5601
  log [09:25:02.414] [info][server][Kibana][http] http server running at http://localhost:5601
  log [09:25:04.195] [warning][plugins][reporting] Enabling the Chromium sandbox provides an additional layer of protection.

Kibana is now running, but it cannot be reached from a browser yet because it only listens on the local address. To expose it, set server.host in kibana.yml to the host's IP (or 0.0.0.0); the corresponding setting for Elasticsearch is network.host in elasticsearch.yml.


References:

Original article: https://blog.csdn.net/yurun_house/article/details/109069554

Original article: https://blog.csdn.net/huxiang19851114/article/details/113397796


 
