Centralized log management with Graylog

Watching the Logs

Article from Issue 188/2016

System logs offer clues for tracking intruders and troubleshooting problems. If you're in charge of a whole network, wouldn't you rather monitor all your logs from a single central point? Graylog and its companion components let you manage all your logs from a single interface.

Logfiles chronicle the state of the system, and experienced admins know to check the logs for messages when a problem arises. If you only administer one computer and it is sitting on your desk, the task is easy. But if you're taking care of several systems on a diverse network, keeping up with all the logfiles can be a major chore.

Several commercial tools fill the role of managing and monitoring log messages across the network, but you don't have to spend big to get big-time log monitoring capabilities. This article describes how to configure network monitoring using a configuration centered around the Graylog log server.

Logging Server Architecture

Graylog is an open source log management tool, providing central storage, processing, and analysis of log messages from servers, clients, or network devices. The Graylog log server is based on Java and offers a means for combining several server nodes in a cluster for high availability and scalability.

The architecture of a typical Graylog implementation is shown in Figure 1. The central elements of a Graylog logging system are:

Figure 1: Simplified schematic diagram of a Graylog architecture.
  • Graylog server (graylog-server): Receives and processes log messages and alarms.
  • Web interface (graylog-web-interface): Provides browser-based access to the log management software for administration, configuration, and monitoring of Graylog.
  • MongoDB: Stores the configuration and metadata of the Graylog server.
  • Elasticsearch: An index and search server that offers a convenient interface for accessing log messages.

As you will learn later in this article, many users prefer to implement Graylog in a cluster, with a load balancing tool distributing the log messages across the Graylog servers (as shown in Figure 1). See the box titled "Clusters" for more on this setup.

Clusters

Operating the Graylog server as a cluster will help with scalability and high availability (HA). Adding more Graylog server nodes will let you handle a higher number of log messages per minute. Load distribution is carried out by an upstream load balancer, which distributes the log messages for processing to the individual servers. The extra nodes also increase resilience, because the logging server continues to work if one Graylog server fails.
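The article does not prescribe a particular load balancer. As an illustration only, an HAProxy TCP front end (a common choice, assumed here, as is the syslog input port 1514) could distribute incoming log traffic across the two Graylog nodes:

```
# /etc/haproxy/haproxy.cfg excerpt - hypothetical example, not from the article
frontend syslog_in
    bind *:1514
    mode tcp
    default_backend graylog_nodes

backend graylog_nodes
    mode tcp
    balance roundrobin
    server graylog-ms    graylog-ms:1514 check
    server graylog-node1 graylog-node1:1514 check
```

If one backend server fails its health check, HAProxy simply stops routing messages to it, which is what provides the resilience described above.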

The Elasticsearch server and MongoDB database can also operate with multiple instances (Figure 2). The configuration described in this article runs the MongoDB database as a replica set, with the first instance (MongoDB1) acting as the primary and the second instance (MongoDB2) as a secondary. The primary database is automatically replicated to the second instance. The replica set boosts data availability, and you can easily extend it to include more database instances.

Figure 2: The Graylog components operate in clusters for scalability and high availability.

The Graylog web interface (graylog-web-interface) lets you authenticate the user with a separate user account or LDAP. Communication between the web interface and the Graylog server (graylog-server) relies on the REST protocol (HTTP-based), which you can protect using HTTPS.

Computers on the network act as clients, transmitting their messages to the log server. Log messages are transmitted via TCP or UDP in GELF (Graylog extended log format) or syslog format, and you can use TLS to encrypt the communication between the Graylog server and its back end, a MongoDB database.
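To make the GELF side of this concrete, the sketch below builds a minimal GELF 1.1 message. The host name graylog-ms and the port 12201 (the conventional GELF UDP port) are assumptions, and the actual send is left commented out because it requires a reachable server with a configured GELF input:

```shell
# A GELF message is a JSON document; "version", "host", and
# "short_message" are the required fields in GELF 1.1.
PAYLOAD='{"version":"1.1","host":"client1","short_message":"disk full on /var","level":4}'
echo "$PAYLOAD"

# To deliver it to a Graylog GELF UDP input (assumed to listen on the
# conventional port 12201), you could pipe it through netcat:
#   printf '%s' "$PAYLOAD" | nc -u -w1 graylog-ms 12201
```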

You'll need to run the Graylog server on a Linux system with Java 7 or later.

Installing Graylog

Setting up Graylog starts with retrieving the component packages. I will configure the various components as virtual machines running the Debian 7 (wheezy) operating system.

Follow the installation steps in Listing 1 for graylog-server on the graylog-ms (master) and graylog-node1 (slave) instances; then, set up the MongoDB database by running the commands in Listing 2 on the graylog-ms and graylog-node1 VMs.

Listing 1

Installing Graylog
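(The listing content was not reproduced here. The following is a sketch of the usual steps for a Debian 7 system; the repository package name and the Graylog 1.x version are assumptions, so check the Graylog download page for the current file.)

```shell
# Sketch of a Graylog 1.x installation on Debian 7 (wheezy).
# The exact repository package name below is an assumption - adjust it
# to the current release. Run these steps on both graylog-ms and
# graylog-node1.
wget https://packages.graylog2.org/repo/packages/graylog-1.3-repository-debian7_latest.deb
dpkg -i graylog-1.3-repository-debian7_latest.deb
apt-get update
apt-get install -y openjdk-7-jre-headless   # Graylog needs Java 7 or later
apt-get install -y graylog-server
```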


Listing 2

Setting up MongoDB
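(The listing content was not reproduced here. A sketch of a typical replica set setup on Debian 7 follows; the replica set name graylog-rs is an assumption.)

```shell
# Run on both graylog-ms and graylog-node1: install MongoDB and enable
# the replica set (the wheezy package uses the ini-style config file).
apt-get install -y mongodb
echo "replSet = graylog-rs" >> /etc/mongodb.conf
# Also adjust bind_ip in /etc/mongodb.conf so the two instances can
# reach each other over the network.
service mongodb restart

# On graylog-ms only: initialize the replica set and add the second member.
mongo --eval 'rs.initiate()'
mongo --eval 'rs.add("graylog-node1:27017")'
```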


After you install Graylog, the next step is the configuration. For the graylog-ms master VM, perform the following changes to the /etc/graylog/server/server.conf configuration file:

  • is_master = true: The virtual machine graylog-ms is the master. If you use several graylog-server systems, only one system can be the master.
  • rest_listen_uri = http://graylog-ms:12900/: URI of the REST API, which must be accessible to the other components when using a cluster. Replace the name graylog-ms with the corresponding IP address of the system or define it in your /etc/hosts configuration file.
  • password_secret = complex password: Password required for encrypting other passwords and for generating random strings (salts). The value of password_secret must be identical on all graylog-server instances. You can create a complex password, for example, with the command:
pwgen -N 1 -s 96
  • root_password_sha2 = SHA2 hash value: Hash value of the password for logging in via the web interface with the user name admin. Create the hash value with:
echo -n MyComplexPassword | shasum -a 256
  • elasticsearch_max_docs_per_index = 20000000: Number of log messages to keep per index. This value is the default and relates to the Elasticsearch component.
  • elasticsearch_max_number_of_indices = 20: Total number of indexes. The value of 20 is the default and relates to the Elasticsearch component. The total number of log messages that can be stored is the product of elasticsearch_max_docs_per_index and elasticsearch_max_number_of_indices (with the defaults, 20,000,000 × 20 = 400 million messages).
  • elasticsearch_shards = 2: The total number of shards (i.e., allocation of indexes to systems with the Elasticsearch component). The value depends on the number of Elasticsearch components.
  • elasticsearch_replicas = 1: The number of replica copies of each index. The value depends on the number of Elasticsearch nodes. A value of 1 keeps a copy of all log messages on the master VM, as well as on es-node1.
  • mongodb_*: Change the following configuration parameters for integrating the MongoDB database:
mongodb_host = *IP_address of "graylog-ms"*
mongodb_database = graylog2
mongodb_port = 27017

You can leave the remaining MongoDB configuration parameters as the defaults.

For the graylog-node1 VM, modify the /etc/graylog/server/server.conf configuration file with all the same settings you configured for graylog-ms, except for the following parameters:

  • is_master = false
  • rest_listen_uri = http://graylog-node1:12900/: URI of the REST API, which must be accessible when using a cluster. Replace the name graylog-node1 with the IP address of the system or define it in your /etc/hosts configuration file.
  • mongodb_*: Change the following configuration parameters for integrating the MongoDB database:
mongodb_host = *IP_address of "graylog-node1"*
mongodb_database = graylog2
mongodb_port = 27017

For a detailed description of each MongoDB configuration parameter, see the Graylog documentation [1].
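Pulling these settings together, the server.conf for the master might contain the following lines. The IP address (from the 192.0.2.0/24 documentation range), the secret, and the hash are placeholders:

```
# /etc/graylog/server/server.conf on graylog-ms (placeholder values)
is_master = true
rest_listen_uri = http://192.0.2.10:12900/
password_secret = <96-character secret, identical on all graylog-server nodes>
root_password_sha2 = <SHA-256 hash of the admin password>
elasticsearch_max_docs_per_index = 20000000
elasticsearch_max_number_of_indices = 20
elasticsearch_shards = 2
elasticsearch_replicas = 1
mongodb_host = 192.0.2.10
mongodb_database = graylog2
mongodb_port = 27017
```

The graylog-node1 file is identical except for is_master, rest_listen_uri, and mongodb_host, as described above.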

Graylog Web Interface

For the graylog-web-interface VM, you need to make some changes to the /etc/graylog/web/web.conf Graylog configuration file:

graylog2-server.uris="http://graylog-ms:12900/, http://graylog-node1:12900/"

Replace the names graylog-ms and graylog-node1 with the IP addresses of the systems or define them in the /etc/hosts configuration file.

Use the following setting

application.secret=<complex password>

to define the password required to encrypt other passwords and generate random strings (salts).
