Elasticsearch, Logstash, and Kibana – The ELK stack
Logstash
Logstash [2] processes and normalizes logfiles. The application retrieves its information from various data sources, which you need to define as input modules. Sources can be, for example, data streams from Syslog or logfiles. In the second step, filter plugins process the data based on user specifications; you can also make this phase simply forward the material without any processing. The output modules finally pass on the results; in our lab, everything goes to the Elasticsearch service. Figure 2 shows how the components interact.
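To illustrate how the three stages fit together, the following minimal pipeline is a sketch only – the stdin input and the local Elasticsearch address are assumptions for demonstration purposes, not the configuration used in our lab:

input {
  stdin { }                                         # read events from standard input
}
filter {
  mutate { add_tag => [ "example" ] }               # a trivial filter stage; real setups do the heavy lifting here
}
output {
  elasticsearch { hosts => [ "localhost:9200" ] }   # assumed local Elasticsearch instance
}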
Like Elasticsearch, Logstash is free and available under the Apache license. The project page offers the source code, Debian and RPM packages, and notes about the online repository.
We installed version 2.1.0 from November 25, 2015. Logstash also needs a Java Runtime Environment. It does not include a systemd service unit; instead, the vendor provides a legacy init script – unfortunately, without a reload parameter – because Logstash is not currently capable of dynamically reloading its configuration. A bug report had already been submitted when this issue went to press.
Building Blocks
You can configure Logstash in the /etc/logstash/conf.d directory, which is empty by default. The vendor does not deliver a simple default configuration and thus does not give users any quick guidance on how to achieve initial results and understand the interaction of the Logstash pipeline. That said, the reference documentation [8] is a good place to look for exhaustive explanations of all the plugins, and searching the web reveals numerous examples by other users that you can use as templates.
Because Logstash parses all the setup files from /etc/logstash/conf.d in alphanumeric order and connects them to create an overall configuration, it is a good idea to think about the structure up front. The test computer uses the following schema: The input files start with 0, the filters with 5, and the output modules with 9. The filenames are prefixed with a four-digit number, leaving plenty of scope for experimentation.
The simplest example is 0005-file-input.conf (Listing 2), which reads local logfiles. It uses the file input module and defines the Nginx access logfiles from the local machine as the sources (line 12); exclude rules out files with the .gz suffix, and type is an optional descriptor that the filter plugins can reference (see the "Extracted" section).
Listing 2: 0005-file-input.conf
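A file input along these lines might look like the following sketch; the log path and the type name are assumptions rather than the exact values from Listing 2:

input {
  file {
    path    => [ "/var/log/nginx/access.log" ]   # assumed location of the Nginx access log
    exclude => "*.gz"                            # skip rotated, compressed logfiles
    type    => "nginx-access"                    # assumed descriptor for the filter stage
  }
}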
If you want to collect and process logs from remote servers in addition to local logs, you can draw on Syslog itself for some help (Listings 3 and 4 and online [6]). The Logstash forwarder and Filebeat offer you an alternative (see the "Woodcutter" box).
Listing 3: 0001-syslog-input.conf
Listing 4: 0002-socketsyslog-input.conf
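As a sketch of the idea behind these two listings, a syslog input that accepts messages forwarded by remote machines could look like this; the port number and the type are assumptions:

input {
  syslog {
    port => 5514          # assumed non-privileged port for incoming Syslog messages
    type => "syslog"
  }
}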
Woodcutter
The Logstash server can receive data from remote computers. Earlier versions relied on a Logstash forwarder [9] to do this. A small tool that ran on each server collected the logs locally and then sent them to the central Logstash server using the Lumberjack protocol. The 0201-lumberjack-input.conf file (Listing 5) shows an example that makes the Logstash server available for existing legacy installations using Logstash forwarders.
In Logstash 2, the developers introduced Filebeat [4], a universal service that relies on the Beats protocol [10] to send data streams to a specific port on the Logstash server. Filebeat will replace Lumberjack in the long term. The Beats protocol, which has only been around since Logstash 2, is also used by other data shippers. We installed Filebeat version 1.0.0 dated November 24, 2015, from the project website. The /etc/filebeat/filebeat.yml file contains meaningful defaults, which you can easily modify to suit your needs. The example file [6] shows the setup for the test machine.
One of Filebeat's advantages is that the tool can use SSL to send the collected logs to the Logstash server on request. Additionally, administrators can equip the Filebeat clients with certificates themselves and thus define the computers from which they receive logs. Another benefit is the registry file, which Filebeat uses to remember which files it has already read and sent. In other words, if the Logstash server is not available, Filebeat can resume at a later time from the point where the transmission was interrupted.
Filebeat can theoretically deliver directly to Elasticsearch, and this is the default setting in the configuration (Output section). Because this does not include processing with filters, but simply sends the data streams to the index as is, we commented out this option on our test machine. Instead, Filebeat sends its data to Logstash.
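On the Logstash side, a matching beats input might be declared as in the following sketch; the port and the certificate paths are assumptions and need to match your Filebeat setup:

input {
  beats {
    port            => 5044                                  # assumed port for incoming Filebeat connections
    ssl             => true
    ssl_certificate => "/etc/pki/tls/certs/logstash.crt"     # assumed certificate path
    ssl_key         => "/etc/pki/tls/private/logstash.key"   # assumed key path
  }
}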
Listing 5: 0201-lumberjack-input.conf
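A lumberjack input along the lines of Listing 5 might be sketched as follows; the port and certificate paths are assumptions, and the protocol insists on SSL:

input {
  lumberjack {
    port            => 5043                                  # assumed port used by the legacy forwarders
    ssl_certificate => "/etc/pki/tls/certs/logstash.crt"     # assumed certificate path
    ssl_key         => "/etc/pki/tls/private/logstash.key"   # assumed key path
    type            => "lumberjack"
  }
}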
The filter modules process everything that has reached Logstash from the sources. They analyze the data streams and break them down into individual snippets of information and data fields. In addition to the modules for parsing and breaking down, other modules add more detail to enrich the raw data, including, for example, dns (DNS name resolution) and geoip (IP geolocation from the MaxMind database).
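As a sketch of such enrichment, a geoip filter that looks up an assumed clientip field – for example, one previously extracted from a web server log – could be as simple as this:

filter {
  geoip {
    source => "clientip"      # assumed field holding the client IP address
  }
}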
Extracted
All Logstash plugins, including the filters, are Ruby Gems. You can use the /opt/logstash/bin/plugin script to manage these extensions on your own system [11]: list existing modules (Figure 3), install new ones from the web or from your local disk, and update existing modules. In addition to the included filters, you have access to a number of community plugins not written or updated by Elastic.
To define which filters are allowed to access what data, you can use if … else statements that use standard fields, such as the previously mentioned type descriptor set by many input modules. Tags, which Filebeat can define, can serve as differentiating criteria for filters. Basically, all of the fields discovered in the upstream process are available, including fields that have just been extracted from a line in a logfile.
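A sketch of such a conditional might look like this; the type value and the tag are assumptions that would have to match your input and Filebeat configurations:

filter {
  if [type] == "nginx-access" {
    mutate { add_tag => [ "webserver" ] }                  # only events from the assumed nginx-access input
  } else if "filebeat" in [tags] {
    mutate { add_field => { "shipper" => "filebeat" } }    # events carrying an assumed Filebeat-defined tag
  }
}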
The 5003-postfix-filter.conf file [6] provides an example:

[...]
if [postfix_keyvalue_data] {
  kv {
    source       => "postfix_keyvalue_data"
    trim         => "<>,"
    prefix       => "postfix_"
    remove_field => [ "postfix_keyvalue_data" ]
  }
[...]
In this case, the kv filter (extraction of key-value pairs) is only used if the postfix_keyvalue_data field is defined.
The frequently used grok module can parse certain log formats, breaking down the body text into individual data fields; it bases its work on popular regular expressions and supports references to previously defined templates. Logstash itself contains a number of Grok patterns in the logstash-patterns-core plugin.
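For example, a grok filter that breaks down access log lines in the common combined format – restricted here to the assumed nginx-access type – might be sketched as follows:

filter {
  if [type] == "nginx-access" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }   # pattern shipped in logstash-patterns-core
    }
  }
}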
Developing your own patterns is not a trivial task. You will find a number of half-baked attempts on the web, but also some good examples under free licenses that you can add to your own configuration. You will find a very good pattern for Postfix [12], examples for Dovecot [13], and Nginx patterns [14]. The GitHub repository [15] collects templates for services such as Bacula, Nagios, PostgreSQL, and more.
The trial-and-error method of creating meaningful Grok patterns – requiring continuous Logstash restarts – is not a good idea and takes far too long. Two online tools [16] [17] solve this problem: Administrators can develop and test their Grok patterns on these sites before adding them to their Logstash configurations and restarting the service. You will also want to keep an eye on the configtest parameter of the Logstash init script and check your setup files for syntax errors using the /etc/init.d/logstash configtest command.