How To Analyze Managed Redis Database Statistics Using the Elastic Stack on Ubuntu 18.04

Introduction

Database monitoring is the continuous process of systematically tracking various metrics that show how the database is performing. By observing performance data, you can gain valuable insights and identify possible bottlenecks, as well as find additional ways of improving database performance. Such systems often implement alerting that notifies administrators when things go wrong. Gathered statistics can be used to not only improve the configuration and workflow of the database, but also those of client applications.

The benefit of using the Elastic Stack (ELK stack) for monitoring your managed database is its excellent support for searching and the ability to ingest new data very quickly. It does not excel at updating the data, but this trade-off is acceptable for monitoring and logging purposes, where past data is almost never changed. Elasticsearch offers a powerful means of querying the data, which you can use through Kibana to get a better understanding of how the database fares through different time periods. This will allow you to correlate database load with real-life events to gain insight into how the database is being used.

In this tutorial, you’ll import database metrics, generated by the Redis INFO command, into Elasticsearch via Logstash. This entails configuring Logstash to periodically run the command, parse its output and send it to Elasticsearch for indexing immediately afterward. The imported data can later be analyzed and visualized in Kibana. By the end of the tutorial, you’ll have an automated system pulling in Redis statistics for later analysis.

Prerequisites

  • An Ubuntu 18.04 server with at least 4 GB RAM, root privileges, and a secondary, non-root account. You can set this up by following this initial server setup guide. For this tutorial the non-root user is sammy.

  • Java 8 installed on your server. For installation instructions, visit How To Install Java with apt on Ubuntu 18.04.

  • Nginx installed on your server. For a guide on how to do that, see How To Install Nginx on Ubuntu 18.04.

  • Elasticsearch and Kibana installed on your server. Complete the first two steps of the How To Install Elasticsearch, Logstash, and Kibana (Elastic Stack) on Ubuntu 18.04 tutorial.

  • A Redis managed database provisioned from DigitalOcean with connection information available. Make sure that your server’s IP address is on the whitelist. To learn more about DigitalOcean Managed Databases, visit the product docs.

  • Redli installed on your server according to the How To Connect to a Managed Database on Ubuntu 18.04 tutorial.

Step 1 — Installing and Configuring Logstash

In this section, you will install Logstash and configure it to pull statistics from your Redis database cluster, then parse them to send to Elasticsearch for indexing.

Start off by installing Logstash with the following command:

  • sudo apt install logstash -y

Once Logstash is installed, enable the service to automatically start on boot:

  • sudo systemctl enable logstash

Before configuring Logstash to pull the statistics, let’s see what the data itself looks like. To connect to your Redis database, head over to your Managed Database Control Panel, and under the Connection details panel, select Flags from the dropdown:

You’ll be shown a preconfigured command for the Redli client, which you’ll use to connect to your database. Click Copy and run the following command on your server, replacing redli_flags_command with the command you have just copied:

  • redli_flags_command info

Since the output from this command is long, we’ll break it down and explain its different sections:

In the output of the Redis info command, sections are marked with #, which signifies a comment. The values are populated in the form of key:value, which makes them relatively easy to parse.


Output
# Server
redis_version:5.0.4
redis_git_sha1:ab60b2b1
redis_git_dirty:1
redis_build_id:7909f4de3561dc50
redis_mode:standalone
os:Linux 5.2.14-200.fc30.x86_64 x86_64
arch_bits:64
multiplexing_api:epoll
atomicvar_api:atomic-builtin
gcc_version:9.1.1
process_id:72
run_id:ddb7b96c93bbd0c369c6d06ce1c02c78902e13cc
tcp_port:25060
uptime_in_seconds:1733
uptime_in_days:0
hz:10
configured_hz:10
lru_clock:8687593
executable:/usr/bin/redis-server
config_file:/etc/redis.conf

# Clients
connected_clients:3
client_recent_max_input_buffer:2
client_recent_max_output_buffer:0
blocked_clients:0
. . .

The Server section contains technical information about the Redis build, such as its version and the Git commit it’s based on, while the Clients section provides the number of currently open connections.


Output
. . .
# Memory
used_memory:941560
used_memory_human:919.49K
used_memory_rss:4931584
used_memory_rss_human:4.70M
used_memory_peak:941560
used_memory_peak_human:919.49K
used_memory_peak_perc:100.00%
used_memory_overhead:912190
used_memory_startup:795880
used_memory_dataset:29370
used_memory_dataset_perc:20.16%
allocator_allocated:949568
allocator_active:1269760
allocator_resident:3592192
total_system_memory:1030356992
total_system_memory_human:982.62M
used_memory_lua:37888
used_memory_lua_human:37.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:463470592
maxmemory_human:442.00M
maxmemory_policy:allkeys-lru
allocator_frag_ratio:1.34
allocator_frag_bytes:320192
allocator_rss_ratio:2.83
allocator_rss_bytes:2322432
rss_overhead_ratio:1.37
rss_overhead_bytes:1339392
mem_fragmentation_ratio:5.89
mem_fragmentation_bytes:4093872
mem_not_counted_for_evict:0
mem_replication_backlog:0
mem_clients_slaves:0
mem_clients_normal:116310
mem_aof_buffer:0
mem_allocator:jemalloc-5.1.0
active_defrag_running:0
lazyfree_pending_objects:0
. . .

Here Memory confirms how much RAM Redis has allocated for itself, as well as the maximum amount of memory it can possibly use. If it starts running out of memory, it will free up keys using the strategy you specified in the Control Panel (shown in the maxmemory_policy field in this output).
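
If you only want a quick look at the memory figures without scrolling through the full report, you can filter the output on the command line. This is just a convenience sketch: it reuses the Redli command you copied earlier (shown here as the redli_flags_command placeholder) and strips the carriage returns Redis appends to each line:

  • redli_flags_command info | tr -d '\r' | grep -E '^(used_memory_human|maxmemory_human|maxmemory_policy):'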


Output
. . .
# Persistence
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1568966978
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:0
rdb_current_bgsave_time_sec:-1
rdb_last_cow_size:217088
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_last_cow_size:0

# Stats
total_connections_received:213
total_commands_processed:2340
instantaneous_ops_per_sec:1
total_net_input_bytes:39205
total_net_output_bytes:776988
instantaneous_input_kbps:0.02
instantaneous_output_kbps:2.01
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
expired_stale_perc:0.00
expired_time_cap_reached_count:0
evicted_keys:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:353
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0
. . .

In the Persistence section, you can see the last time Redis saved the keys it stores to disk, and whether it was successful. The Stats section provides numbers related to client and in-cluster connections, the number of times the requested key was (or wasn’t) found, and so on.
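
The keyspace_hits and keyspace_misses counters are handy for a rough cache hit ratio, computed as hits / (hits + misses). Below is a small, optional shell sketch that derives it from the same INFO output; it again assumes the redli_flags_command placeholder from earlier:

redli_flags_command info | tr -d '\r' | awk -F: '
    /^keyspace_hits:/   { hits = $2 }
    /^keyspace_misses:/ { misses = $2 }
    END {
        total = hits + misses
        if (total > 0) printf "cache hit ratio: %.2f%%\n", 100 * hits / total
        else print "no keyspace lookups recorded yet"
    }'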


Output
. . .
# Replication
role:master
connected_slaves:0
master_replid:9c1d345a46d29d08537981c4fc44e312a21a160b
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:46137344
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
. . .

Note: The Redis project uses the terms “master” and “slave” in its documentation and in various commands. DigitalOcean generally prefers the alternative terms “primary” and “replica.” This guide will default to the terms “primary” and “replica” whenever possible, but note that there are a few instances where the terms “master” and “slave” unavoidably come up.

By looking at the role under Replication, you’ll know if you’re connected to a primary or replica node. The rest of the section provides the number of currently connected replicas and the amount of data that the replica is missing relative to the primary. There may be additional fields if the instance you are connected to is a replica.


Output
. . .
# CPU
used_cpu_sys:1.972003
used_cpu_user:1.765318
used_cpu_sys_children:0.000000
used_cpu_user_children:0.001707

# Cluster
cluster_enabled:0

# Keyspace

Under CPU, you’ll see the amount of system (used_cpu_sys) and user (used_cpu_user) CPU Redis is consuming at the moment. The Cluster section contains only one unique field, cluster_enabled, which indicates whether Redis cluster mode is enabled.
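
You do not have to pull the full report every time: the Redis INFO command accepts an optional section name, so you can request just the block you are interested in. For example, to fetch only the replication details (again substituting the Redli command you copied earlier):

  • redli_flags_command info replication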

Logstash will be tasked to periodically run the info command on your Redis database (similar to how you just did), parse the results, and send them to Elasticsearch. You’ll then be able to access them later from Kibana.

You’ll store the configuration for indexing Redis statistics in Elasticsearch in a file named redis.conf under the /etc/logstash/conf.d directory, where Logstash stores configuration files. When started as a service, it will automatically run them in the background.

Create redis.conf using your favorite editor (for example, nano):

  • sudo nano /etc/logstash/conf.d/redis.conf

Add the following lines:

/etc/logstash/conf.d/redis.conf
input {
    # Run the Redli command every 10 seconds and emit its output as an event
    exec {
        command => "redis_flags_command info"
        interval => 10
        type => "redis_info"
    }
}

filter {
    # Parse the key:value lines of the INFO output into event fields
    kv {
        value_split => ":"
        field_split => "\r\n"
        remove_field => [ "command", "message" ]
    }

    # Convert number-only strings into integers or floats so they can be aggregated
    ruby {
        code =>
        "
        event.to_hash.keys.each { |k|
            if event.get(k).to_i.to_s == event.get(k) # is integer?
                event.set(k, event.get(k).to_i) # convert to integer
            end
            if event.get(k).to_f.to_s == event.get(k) # is float?
                event.set(k, event.get(k).to_f) # convert to float
            end
        }
        puts 'Ruby filter finished'
        "
    }
}

output {
    # Index the processed events in Elasticsearch under the redis_info index
    elasticsearch {
        hosts => "http://localhost:9200"
        index => "%{type}"
    }
}

Remember to replace redis_flags_command with the command shown in the control panel that you used earlier in the step.

You define an input, a set of filters that will run on the collected data, and an output that will send the filtered data to Elasticsearch. The input uses the exec plugin, which periodically runs a command on the server after a set time interval (expressed in seconds). It also specifies a type parameter that defines the document type when indexed in Elasticsearch. The exec block passes down an event containing two string fields, command and message. The command field will contain the command that was run, and message will contain its output.
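
If you would like to see what the exec input produces before any filtering, you can run a disposable inline pipeline with Logstash’s -e flag that prints every event to the console. The sketch below is only for experimentation and is not part of the monitoring setup; it uses date as a stand-in command and the rubydebug codec to pretty-print events. Press CTRL + C to stop it once you have seen a few events:

sudo /usr/share/logstash/bin/logstash -e '
input { exec { command => "date" interval => 5 } }
output { stdout { codec => rubydebug } }
'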

There are two filters that will run sequentially on the data collected from the input. The kv filter stands for key-value filter and is built into Logstash. It is used for parsing data in the general form of key value_separator value and provides parameters for specifying the value and field separators. The field separator refers to the strings that separate the individual key-value pairs from each other. In the case of the output of the Redis INFO command, the field separator (field_split) is a new line, and the value separator (value_split) is :. Lines that do not follow the defined form will be discarded, including comments.

To configure the kv filter, you pass : to the value_split parameter, and \r\n (signifying a new line) to the field_split parameter. You also order it to remove the command and message fields from the current data object by passing them to remove_field as elements of an array, because they contain data that is no longer useful.

By design, the kv filter represents the values it parses as the string (text) type. This raises an issue because Kibana can’t easily process string types, even when the value is actually a number. To solve this, you’ll use custom Ruby code to convert the number-only strings to numbers, where possible. The second filter is a ruby block that provides a code parameter accepting a string containing the code to be run.

event is a variable that Logstash provides to your code, and contains the current data in the filter pipeline. As was noted before, filters run one after another, meaning that the Ruby filter will receive the parsed data from the kv filter. The Ruby code itself converts the event to a Hash and traverses through the keys, then checks if the value associated with the key could be represented as an integer or as a float (a number with decimals). If it can, the string value is replaced with the parsed number. When the loop finishes, it prints out a message (Ruby filter finished) to report progress.
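
To watch both filters act on a single sample line, you can feed one key:value pair into a temporary pipeline over standard input. This is a rough, throwaway sketch that mirrors a trimmed version of the filter block above; if it behaves as expected, used_memory will appear in the printed event as a number rather than a string:

echo "used_memory:941560" | sudo /usr/share/logstash/bin/logstash -e '
input { stdin { } }
filter {
    kv { value_split => ":" field_split => "\r\n" }
    ruby {
        code => "
        event.to_hash.keys.each { |k|
            if event.get(k).to_i.to_s == event.get(k)
                event.set(k, event.get(k).to_i)
            end
        }
        "
    }
}
output { stdout { codec => rubydebug } }
'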

The output sends the processed data to Elasticsearch for indexing. The resulting document will be stored in the redis_info index, defined in the input and passed in as a parameter to the output block.

Save and close the file.

You’ve installed Logstash using apt and configured it to periodically request statistics from Redis, process them, and send them to your Elasticsearch instance.

Step 2 — Testing the Logstash Configuration

Now you’ll test the configuration by running Logstash to verify it will properly pull the data.
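
Before starting the pipeline for real, you can optionally ask Logstash to only validate the configuration syntax and then exit, using its --config.test_and_exit flag. This checks that the file parses correctly, but does not test the Redis or Elasticsearch connections:

  • sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis.conf --config.test_and_exit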

Logstash supports running a specific configuration by passing its file path to the -f parameter. Run the following command to test your new configuration from the last step:

  • sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis.conf

It may take some time to show the output, but you’ll soon see something similar to the following:


Output
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2019-09-20 11:59:53.440 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2019-09-20 11:59:53.459 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.8.3"}
[INFO ] 2019-09-20 12:00:02.543 [Converge PipelineAction::Create] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[INFO ] 2019-09-20 12:00:03.331 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[WARN ] 2019-09-20 12:00:03.727 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://localhost:9200/"}
[INFO ] 2019-09-20 12:00:04.015 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>6}
[WARN ] 2019-09-20 12:00:04.020 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[INFO ] 2019-09-20 12:00:04.071 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[INFO ] 2019-09-20 12:00:04.100 [Ruby-0-Thread-5: :1] elasticsearch - Using default mapping template
[INFO ] 2019-09-20 12:00:04.146 [Ruby-0-Thread-5: :1] elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[INFO ] 2019-09-20 12:00:04.295 [[main]-pipeline-manager] exec - Registering Exec Input {:type=>"redis_info", :command=>"...", :interval=>10, :schedule=>nil}
[INFO ] 2019-09-20 12:00:04.315 [Converge PipelineAction::Create] pipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#"}
[INFO ] 2019-09-20 12:00:04.483 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2019-09-20 12:00:05.318 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
Ruby filter finished
Ruby filter finished
Ruby filter finished
...

You’ll see the Ruby filter finished message being printed at regular intervals (set to 10 seconds in the previous step), which means that the statistics are being shipped to Elasticsearch.
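
To confirm that documents are actually reaching Elasticsearch, you can query the index from a second terminal while Logstash is running. A document count that grows roughly every ten seconds suggests the statistics are being indexed as expected:

  • curl -X GET 'http://localhost:9200/redis_info/_count?pretty'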

You can exit Logstash by pressing CTRL + C on your keyboard. As previously mentioned, Logstash will automatically run all config files found under /etc/logstash/conf.d in the background when started as a service. Run the following command to start it:

  • sudo systemctl start logstash
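
If you want to confirm the service came up cleanly, or follow its log output, the usual systemd tools work here as well:

  • sudo systemctl status logstash
  • sudo journalctl -u logstash -f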

You’ve run Logstash to check if it can connect to your Redis cluster and gather data. Next, you’ll explore some of the statistical data in Kibana.

Step 3 — Exploring Imported Data in Kibana

In this section, you’ll explore and visualize the statistical data describing your database’s performance in Kibana.

In your web browser, navigate to your domain where you exposed Kibana as a part of the prerequisite. You’ll see the default welcome page:

Before exploring the data Logstash is sending to Elasticsearch, you’ll first need to add the redis_info index to Kibana. To do so, click on Management from the left-hand vertical sidebar, and then on Index Patterns under the Kibana section.

You’ll see a form for creating a new Index Pattern. Index Patterns in Kibana provide a way to pull in data from multiple Elasticsearch indexes at once, and can also be used to explore just a single index.

Beneath the Index pattern text field, you’ll see the redis_info index listed. Type it in the text field and then click on the Next step button.

You’ll then be asked to choose a timestamp field, so you’ll later be able to narrow your searches by a time range. Logstash automatically adds one, called @timestamp. Select it from the dropdown and click on Create index pattern to finish adding the index to Kibana.
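
If you are curious what an indexed document looks like, including the @timestamp field Logstash adds, you can fetch a single one directly from Elasticsearch. The exact response layout may vary slightly between Elasticsearch versions:

  • curl -X GET 'http://localhost:9200/redis_info/_search?size=1&pretty'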

To create and see existing visualizations, click on the Visualize item in the left-hand vertical menu. You’ll see the following page:

To create a new visualization, click on the Create a visualization button, then select Line from the list of types that will pop up. Then, select the redis_info* index pattern you have just created as the data source. You’ll see an empty visualization:

The left-side panel provides a form for editing parameters that Kibana will use to draw the visualization, which will be shown on the central part of the screen. On the upper-right hand side of the screen is the date range picker. If the @timestamp field is being used in the visualization, Kibana will only show the data belonging to the time interval specified in the range picker.

You’ll now visualize the average Redis memory usage during a specified time interval. Click on Y-Axis under Metrics in the panel on the left to unfold it, then select Average as the Aggregation and select used_memory as the Field. This will populate the Y axis of the plot with the average values.

Next, click on X-Axis under Buckets. For the Aggregation, choose Date Histogram. @timestamp should be automatically selected as the Field. Then, show the visualization by clicking on the blue play button on the top of the panel. If your database is brand new and not yet used, you won’t see a very long line. In all cases, however, you will see an accurate portrayal of average memory usage. Here is how the resulting visualization may look after little to no usage:

In this step, you have visualized memory usage of your managed Redis database, using Kibana. You can also use other plot types Kibana offers, such as the Visual Builder, to create more complicated graphs that portray more than one field at the same time. This will allow you to gain a better understanding of how your database is being used, which will help you optimize client applications, as well as your database itself.

Conclusion

You now have the Elastic stack installed on your server and configured to pull statistics data from your managed Redis database on a regular basis. You can analyze and visualize the data using Kibana, or some other suitable software, which will help you gather valuable insights and real-world correlations into how your database is performing.

For more information about what you can do with your Redis Managed Database, visit the product docs. If you’d like to present the database statistics using another visualization type, check out the Kibana docs for further instructions.

Translated from: https://www.digitalocean.com/community/tutorials/how-to-analyze-managed-redis-database-statistics-using-the-elastic-stack-on-ubuntu-18-04
