# systemd journal plugin

KEY FEATURES | JOURNAL SOURCES | JOURNAL FIELDS | PLAY MODE | FULL TEXT SEARCH | PERFORMANCE | CONFIGURATION | FAQ
The systemd journal plugin by Netdata makes viewing, exploring and analyzing `systemd` journal logs simple and efficient.

It automatically discovers available journal sources, allows advanced filtering, offers interactive visual representations and supports exploring the logs of both individual servers and infrastructure-wide journal centralization servers.
## Key features

- Supports `persistent` and `volatile` journals.
- Supports `system`, `user`, `namespaces` and `remote` journals.
- Allows full text search (`grep`) on all journal fields, for any time-frame.
- Supports coloring log entries, the same way `journalctl` does.
- In PLAY mode provides the same experience as `journalctl -f`, showing new log entries immediately after they are received.

`systemd-journal.plugin` is a Netdata Function Plugin.

To protect your privacy, as with all Netdata Functions, a free Netdata Cloud user account is required to access it. For more information check this discussion.
The following are limitations related to the availability of the plugin:
- `libsystemd` is not available in Alpine Linux (there is a `libsystemd`, but it is a dummy that returns failure on all calls). We plan to change this, by shipping Netdata containers based on Debian.
- For the same reason (lack of `systemd` support in Alpine Linux), the plugin is not available on static builds of Netdata (which are based on `muslc`, not `glibc`).
- On older versions of `systemd`, the plugin always runs in "full data query" mode, because the API calls needed to use the field indexes of the `systemd` journal are not available. However, when running in this mode, the plugin also offers negative matches on the data (like filtering for all logs that do not have some field set), and this is the reason "full data query" mode is also offered as an option even on newer versions of `systemd`.

To use the plugin, install one of our native distribution packages, or install it from source.
### `systemd` journal features

The following are limitations related to the features of `systemd` journal:

- `systemd` journal has the ability to assign fields with binary data. This plugin assumes all fields contain text values (text in this context includes numbers).
- `systemd` journal has the ability to accept the same field key, multiple times, with multiple values on a single log entry. This plugin will present the last value and ignore the others for this log entry.
- The plugin searches for journal files in `/var/log/journal` or `/run/log/journal`. `systemd-journal-remote` has the ability to store journal files anywhere (user configured). If journal files are not located in `/var/log/journal` or `/run/log/journal` (and any of their subdirectories), the plugin will not find them.

Other than the above, this plugin supports all features of `systemd` journals.
## Journal sources

The plugin automatically detects the available journal sources, based on the journal files available in `/var/log/journal` (persistent logs) and `/run/log/journal` (volatile logs).

The plugin, by default, merges all journal sources together, to provide a unified view of all log messages available.

To improve query performance, we recommend selecting the relevant journal source before doing more analysis on the logs.
### `system` journals

`system` journals are the default journals available on all `systemd`-based systems.

`system` journals contain:

- kernel log messages (via `kmsg`),
- messages received by `systemd-journald` via `syslog`,
- and the other message sources collected by `systemd-journald` (see `man systemd-journald`).
### `user` journals

Unlike `journalctl`, the Netdata plugin allows viewing, exploring and querying the journal files of all users.

By default, each user with a UID outside the range of system users (0 - 999), dynamic service users, and the nobody user (65534), will get their own set of `user` journal files. For more information about this policy check Users, Groups, UIDs and GIDs on systemd Systems.

Keep in mind that `user` journals are merged with the `system` journals when they are propagated to a journal centralization server. So, at the centralization server, the `remote` journals contain both the `system` and `user` journals of the sender.
### `namespaces` journals

The plugin auto-detects the available namespaces and provides a list of all namespaces in the "sources" list on the UI.

Journal namespaces are both a mechanism for logically isolating the log stream of projects consisting of one or more services from the rest of the system, and a mechanism for improving performance.

`systemd` service units may be assigned to a specific journal namespace through the `LogNamespace=` unit file setting.
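For example, a drop-in like the following sends a service's logs to its own namespace. This is a minimal sketch; the service name `myservice` and the namespace name `myapp` are only illustrations:

```bash
# create a drop-in for the service (opens an editor)
sudo systemctl edit myservice.service
```

Add these lines, then restart the service:

```
[Service]
LogNamespace=myapp
```

You can then read that namespace's journal locally with `journalctl --namespace=myapp`.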
Keep in mind that namespaces require special configuration to be propagated to a journal centralization server. This makes them a little more difficult to handle, from the administration perspective.
### `remote` journals

`remote` journals are created by `systemd-journal-remote`. This `systemd` feature allows creating log centralization points within your infrastructure, based exclusively on `systemd`.

Usually `remote` journals are named by the IP of the server sending these logs. The Netdata plugin automatically extracts these IPs and performs a reverse DNS lookup to find their hostnames. When this is successful, `remote` journals are named by the hostnames of the origin servers.

For information about configuring a journals' centralization server, check this FAQ item.
## Journal fields

`systemd` journals are designed to support multiple fields per log entry. The power of `systemd` journals is that, unlike other log management systems, they support dynamic and variable fields for each log message, while all fields and their values are indexed for fast querying.

This means that each application can log messages annotated with its own unique fields and values, and `systemd` journals will automatically index all of them, without any configuration or manual action.
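For instance, an application can attach arbitrary fields to a log entry. A minimal sketch follows; the field names and the use of util-linux `logger --journald` are illustrative assumptions, not part of this plugin:

```bash
# send a journal entry with custom, application-specific fields
# (requires util-linux logger with journald support)
logger --journald <<EOF
MESSAGE=Payment request processed
PRIORITY=6
MYAPP_CUSTOMER=12345
MYAPP_AMOUNT_EUR=99.90
EOF

# the custom fields are indexed automatically and can be matched directly
journalctl MYAPP_CUSTOMER=12345 -o verbose
```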
For a description of the most frequent fields found in `systemd` journals, check `man systemd.journal-fields`.
Fields found in the journal files are automatically added to the UI in multiple places to help you explore and filter the data.
The plugin automatically enriches certain fields to make them more user-friendly:
- `_BOOT_ID`: the hex value is annotated with the timestamp of the first message encountered for this boot id.
- `PRIORITY`: the numeric value is replaced with the human-readable name of each priority.
- `SYSLOG_FACILITY`: the encoded value is replaced with the human-readable name of each facility.
- `ERRNO`: the numeric value is annotated with the short name of each value.
- `_UID`, `_AUDIT_LOGINUID`, `_SYSTEMD_OWNER_UID`, `OBJECT_UID`, `OBJECT_SYSTEMD_OWNER_UID`, `OBJECT_AUDIT_LOGINUID`: the local user database is consulted to annotate them with usernames.
- `_GID`, `OBJECT_GID`: the local group database is consulted to annotate them with group names.
- `_CAP_EFFECTIVE`: the encoded value is annotated with a human-readable list of the Linux capabilities.
- `_SOURCE_REALTIME_TIMESTAMP`: the numeric value is annotated with a human-readable datetime in UTC.

The values of all other fields are presented as found in the journals.
IMPORTANT: The UID and GID annotations are added during presentation and are taken from the server running the plugin. For `remote` sources, the names presented may not reflect the actual user and group names on the origin server. The numeric values will still be visible though, as they are on the origin server.

The annotations are not searchable with full-text search. They are only added for the presentation of the fields.
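If you need the raw, un-annotated values of these fields (for example for scripting), you can always read them straight from the journal with plain `journalctl`, outside this plugin:

```bash
# show the raw field values of the last few entries, exactly as stored in the journal
journalctl -n 5 -o verbose

# or as JSON, one object per entry
journalctl -n 5 -o json-pretty
```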
All journal fields available in the journal files are offered as columns on the UI. Use the gear button above the table.

When you click a log line, the `info` sidebar will open on the right of the screen, to provide the full list of fields related to this log line. You can close this `info` sidebar by selecting the filter icon at its top.
The plugin presents a select list of fields as filters for the query, with counters for each of the possible values of the field. This list can be used to quickly check which fields and values are available for the entire time-frame of the query.
Internally the plugin has:
Keep in mind that the values presented in the filters, and their sorting, are affected by the "full data queries" setting:

- When "full data queries" is off, empty values are hidden and cannot be selected. This is due to a limitation of `libsystemd` that does not allow negative or empty matches. Also, values with zero counters may appear in the list.
- When "full data queries" is on, Netdata applies all filtering to the data itself (not `libsystemd`), but this means that all the data of the entire time-frame, without any filtering applied, have to be read by the plugin to prepare the required response. So, "full data queries" can be significantly slower over long time-frames.
The plugin presents a histogram of the number of log entries across time.
The data source of this histogram can be any of the fields that are available as filters. For each of the values this field has, across the entire time-frame of the query, the histogram will get corresponding dimensions, showing the number of log entries, per value, over time.
The granularity of the histogram is adjusted automatically to have about 150 columns visible on screen.
The histogram presented by the plugin is interactive.
## PLAY mode

The plugin supports PLAY mode, to continuously update the screen with new log entries found in the journal files. Just hit the "play" button at the top of the Netdata dashboard screen.

On centralized log servers, PLAY mode provides a unified view of all the new logs encountered across the entire infrastructure, from all hosts sending logs to the central logs server via `systemd-journal-remote`.
## Full text search

The plugin supports searching for any text on all fields of the log entries.

Full text search is combined with the selected filters.

The text box accepts asterisks `*` as wildcards. So, `a*b*c` means: match anything that contains `a`, then `b` and then `c`, with anything between them.
## Performance

Journal files are designed to be accessed by multiple readers and one writer, concurrently.

Readers (like this Netdata plugin) open the journal files and `libsystemd`, behind the scenes, maps regions of the files into memory to satisfy each query.

On logs aggregation servers, the performance of the queries depends on the following factors:

1. The number of journal files involved in each query. This is why we suggest selecting a source when possible.
2. The speed of the disks hosting the journal files. Journal files perform a lot of reading while querying, so the faster the disks, the faster the query will finish.
3. The memory available for caching parts of the journal files. Increased memory will help the kernel cache the most frequently used parts of the journal files, avoiding disk I/O and speeding up queries.
4. The number of filters applied. Queries are significantly faster when just a few filters are selected.

In general, for a faster experience, keep a low number of rows within the visible timeframe.

Even on long timeframes, selecting a couple of filters that will result in a few tens of thousands of log entries will provide fast responses, usually in less than a second. On the contrary, viewing timeframes with millions of entries may result in longer delays.
The plugin aborts journal queries when your browser cancels inflight requests. This allows you to work on the UI while there are background queries running.
At the time of this writing, this Netdata plugin is about 25-30 times faster than `journalctl` on queries that access multiple journal files, over long time-frames.

During the development of this plugin, we submitted to `systemd` a number of patches to improve `journalctl` performance by a factor of 14.
However, even after these patches are merged, `journalctl` will still be 2x slower than this Netdata plugin on multi-journal queries.

The problem lies in the way `libsystemd` handles multi-journal file queries. To overcome this, the Netdata plugin queries each file individually and then merges the results to be returned.

This is transparent, thanks to the `facets` library in `libnetdata` that handles on-the-fly indexing, filtering, and searching of any dataset, independently of its source.
## Configuration and maintenance

This Netdata plugin does not require any configuration or maintenance.
## FAQ

### Can I use this plugin on journal centralization servers?

Yes. You can centralize your logs using `systemd-journal-remote`, and then install Netdata on this logs centralization server to explore the logs of all your infrastructure.
This plugin will automatically provide multi-node views of your logs and also give you the ability to combine the logs of multiple servers, as you see fit.
Check configuring a logs centralization server.
### Can I use this plugin via a Netdata parent?

Yes. When your nodes are connected to a Netdata parent, all their functions are available via the parent's UI. So, from the parent UI, you can access the functions of all your nodes.
Keep in mind that to protect your privacy, in order to access Netdata functions, you need a free Netdata Cloud account.
### Is any of my data exposed to Netdata Cloud from this plugin?

No. When you access the agent directly, none of your data passes through Netdata Cloud. You need a free Netdata Cloud account only to verify your identity and enable the use of Netdata Functions. Once this is done, all the data flows directly from your Netdata agent to your web browser.
Also check this discussion.
When you access Netdata via `https://app.netdata.cloud`, your data travels via Netdata Cloud, but it is not stored in Netdata Cloud. This is to allow you to access your Netdata agents from anywhere. All communication from/to Netdata Cloud is encrypted.
### What is the difference between `volatile` and `persistent` journals?

`systemd` journald allows creating both `volatile` journals in a `tmpfs` RAM drive, and `persistent` journals stored on disk.

`volatile` journals are particularly useful when the system monitored is sensitive to disk I/O, or does not have any writable disks at all.

For more information check `man systemd-journald`.
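To illustrate, switching a system from volatile to persistent journals comes down to the `Storage=` setting in `journald.conf`. A minimal sketch; the `sed` edit is just one way to flip the setting:

```bash
# switch journald to persistent storage (see man journald.conf)
sudo mkdir -p /var/log/journal
sudo sed -i 's/^#\?Storage=.*/Storage=persistent/' /etc/systemd/journald.conf
sudo systemctl restart systemd-journald
```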
### I centralize my logs with Loki. Why should I use Netdata for my `systemd` journals?

`systemd` journals have almost infinite cardinality at their labels and all of them are indexed, even if every single message has unique fields and values.

When you send `systemd` journal logs to Loki, even if you use the `relabel_rules` argument to `loki.source.journal` with a JSON format, you need to specify which of the fields from journald you want inherited by Loki. This means you need to know the most important fields beforehand. At the same time you lose all the flexibility `systemd` journal provides: indexing on all fields and all their values.
Loki generally assumes that all logs are like a table. All entries in a stream share the same fields. But journald does exactly the opposite. Each log entry is unique and may have its own unique fields.
So, Loki and systemd-journal
are good for different use cases.
`systemd-journal` already runs on your systems. You use it today. It is there, inside all your systems, collecting the system and application logs. And for its use case, it has advantages over other centralization solutions. So, why not use it?
### Is it worth building a `systemd` logs centralization server?

Yes. It is simple, fast and the software to do it is already in your systems.

For application and system logs, `systemd` journal is ideal, and the visibility you can get by centralizing your system logs and using this Netdata plugin is unparalleled.
### How do I configure a journals' centralization server?

A short summary to get a journal server running can be found below. There are two strategies you can apply when it comes down to a centralized server for `systemd` journal logs.

For more options and reference documentation, check `man systemd-journal-remote` and `man systemd-journal-upload`.
#### Passive journal centralization without encryption

ℹ️ A passive journal server waits for clients to push their logs to it.

⚠️ IMPORTANT: These instructions will copy your logs to a central server, without any encryption or authorization. DO NOT USE THIS ON NON-TRUSTED NETWORKS.
On the centralization server, install `systemd-journal-remote`:

```bash
# change this according to your distro
sudo apt-get install systemd-journal-remote
```
Make sure the journal transfer protocol is `http`:

```bash
sudo cp /lib/systemd/system/systemd-journal-remote.service /etc/systemd/system/

# edit it to make sure it says:
# --listen-http=-3
# not:
# --listen-https=-3
sudo nano /etc/systemd/system/systemd-journal-remote.service

# reload systemd
sudo systemctl daemon-reload
```
Optionally, if you want to change the port (the default is `19532`), edit `systemd-journal-remote.socket`:

```bash
# edit the socket file
sudo systemctl edit systemd-journal-remote.socket
```

and add the following lines into the instructed place, choosing your desired port; save and exit:

```
[Socket]
ListenStream=<DESIRED_PORT>
```
Finally, enable it, so that it will start automatically upon receiving a connection:

```bash
# enable systemd-journal-remote
sudo systemctl enable --now systemd-journal-remote.socket
sudo systemctl enable systemd-journal-remote.service
```

`systemd-journal-remote` is now listening for incoming journals from remote hosts.
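If you want to double-check that the socket is actually listening, one optional way is to look for the port with `ss`; the port number below assumes the default `19532`:

```bash
# confirm systemd-journal-remote.socket is listening (default port 19532)
sudo ss -tlnp | grep 19532
```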
On the clients, install `systemd-journal-remote`:

```bash
# change this according to your distro
sudo apt-get install systemd-journal-remote
```

Edit `/etc/systemd/journal-upload.conf` and set the IP address and the port of the server, like so:

```
[Upload]
URL=http://centralization.server.ip:19532
```
Edit `systemd-journal-upload`, and add `Restart=always` to make sure the client will keep trying to push logs, even if the server is temporarily not there, like this:

```bash
sudo systemctl edit systemd-journal-upload
```

At the top, add:

```
[Service]
Restart=always
```
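To verify that the drop-in has been picked up, you can optionally print the merged unit definition; the override should appear at the end:

```bash
# show the unit together with its drop-in overrides
systemctl cat systemd-journal-upload
```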
Enable and start `systemd-journal-upload`, like this:

```bash
sudo systemctl enable systemd-journal-upload
sudo systemctl start systemd-journal-upload
```
To verify the central server is receiving logs, run this on the central server:

```bash
sudo ls -l /var/log/journal/remote/
```

You should see new files from the client's IP.
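You can also query the received journal files directly with `journalctl`; the directory below is the default output location used by `systemd-journal-remote`:

```bash
# read the last entries received from all remote hosts
sudo journalctl -D /var/log/journal/remote/ -n 20
```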
Also, `systemctl status systemd-journal-remote` should show something like this:

```
systemd-journal-remote.service - Journal Remote Sink Service
     Loaded: loaded (/etc/systemd/system/systemd-journal-remote.service; indirect; preset: disabled)
     Active: active (running) since Sun 2023-10-15 14:29:46 EEST; 2h 24min ago
TriggeredBy: ● systemd-journal-remote.socket
       Docs: man:systemd-journal-remote(8)
             man:journal-remote.conf(5)
   Main PID: 2118153 (systemd-journal)
     Status: "Processing requests..."
      Tasks: 1 (limit: 154152)
     Memory: 2.2M
        CPU: 71ms
     CGroup: /system.slice/systemd-journal-remote.service
             └─2118153 /usr/lib/systemd/systemd-journal-remote --listen-http=-3 --output=/var/log/journal/remote/
```

Note the `Status: "Processing requests..."` and the PID under `CGroup`.
On the client, `systemctl status systemd-journal-upload` should show something like this:

```
● systemd-journal-upload.service - Journal Remote Upload Service
     Loaded: loaded (/lib/systemd/system/systemd-journal-upload.service; enabled; vendor preset: disabled)
    Drop-In: /etc/systemd/system/systemd-journal-upload.service.d
             └─override.conf
     Active: active (running) since Sun 2023-10-15 10:39:04 UTC; 3h 17min ago
       Docs: man:systemd-journal-upload(8)
   Main PID: 4169 (systemd-journal)
     Status: "Processing input..."
      Tasks: 1 (limit: 13868)
     Memory: 3.5M
        CPU: 1.081s
     CGroup: /system.slice/systemd-journal-upload.service
             └─4169 /lib/systemd/systemd-journal-upload --save-state
```

Note the `Status: "Processing input..."` and the PID under `CGroup`.
#### Passive journal centralization with encryption using self-signed certificates

ℹ️ A passive journal server waits for clients to push their logs to it.
On the centralization server, install `systemd-journal-remote` and `openssl`:

```bash
# change this according to your distro
sudo apt-get install systemd-journal-remote openssl
```
Make sure the journal transfer protocol is `https`:

```bash
sudo cp /lib/systemd/system/systemd-journal-remote.service /etc/systemd/system/

# edit it to make sure it says:
# --listen-https=-3
# not:
# --listen-http=-3
sudo nano /etc/systemd/system/systemd-journal-remote.service

# reload systemd
sudo systemctl daemon-reload
```
Optionally, if you want to change the port (the default is `19532`), edit `systemd-journal-remote.socket`:

```bash
# edit the socket file
sudo systemctl edit systemd-journal-remote.socket
```

and add the following lines into the instructed place, choosing your desired port; save and exit:

```
[Socket]
ListenStream=<DESIRED_PORT>
```
Finally, enable it, so that it will start automatically upon receiving a connection:

```bash
# enable systemd-journal-remote
sudo systemctl enable --now systemd-journal-remote.socket
sudo systemctl enable systemd-journal-remote.service
```

`systemd-journal-remote` is now listening for incoming journals from remote hosts.
Use this script to create a self-signed certificates authority and certificates for all your servers:

```bash
wget -O systemd-journal-self-signed-certs.sh "https://gist.githubusercontent.com/ktsaou/d62b8a6501cf9a0da94f03cbbb71c5c7/raw/c346e61e0a66f45dc4095d254bd23917f0a01bd0/systemd-journal-self-signed-certs.sh"
chmod 755 systemd-journal-self-signed-certs.sh
```
Edit the script and, at its top, set your settings:

```bash
# The directory to save the generated certificates (and everything about this certificate authority).
# This is only used on the node generating the certificates (usually on the journals server).
DIR="/etc/ssl/systemd-journal-remote"

# The journals centralization server name (the CN of the server certificate).
SERVER="server-hostname"

# All the DNS names or IPs this server is reachable at (the certificate will include them).
# Journal clients can use any of them to connect to this server.
# systemd-journal-upload validates its URL= hostname, against this list.
SERVER_ALIASES=("DNS:server-hostname1" "DNS:server-hostname2" "IP:1.2.3.4" "IP:10.1.1.1" "IP:172.16.1.1")

# All the names of the journal clients that will be sending logs to the server (the CNs of their certificates).
# These names are used by systemd-journal-remote to name the journal files in /var/log/journal/remote/.
# Also the remote hosts will be presented using these names on Netdata dashboards.
CLIENTS=("vm1" "vm2" "vm3" "add_as_many_as_needed")
```
Then run the script:

```bash
sudo ./systemd-journal-self-signed-certs.sh
```

The script will create the directory `/etc/ssl/systemd-journal-remote` and in it you will find all the certificates needed.
There will also be files named `runme-on-XXX.sh`: one script for the server and one script for each of the clients. You can copy (or `scp`) these scripts to your server and each of your clients and run them as root:

```bash
scp /etc/ssl/systemd-journal-remote/runme-on-XXX.sh XXX:/tmp/
```
Once the above is done, `ssh` to each server/client and do:

```bash
sudo bash /tmp/runme-on-XXX.sh
```
The scripts install the needed certificates, fix their file permissions to be accessible by `systemd-journal-remote`/`systemd-journal-upload`, change `/etc/systemd/journal-remote.conf` (on the server) or `/etc/systemd/journal-upload.conf` (on the clients), and restart the relevant services.
On the clients, install `systemd-journal-remote`:

```bash
# change this according to your distro
sudo apt-get install systemd-journal-remote
```

Edit `/etc/systemd/journal-upload.conf` and set the IP address and the port of the server, like so:

```
[Upload]
URL=https://centralization.server.ip:19532
```
Make sure that `centralization.server.ip` is one of the `SERVER_ALIASES` you set when you created the certificates.
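If in doubt, you can inspect the names included in the server certificate with `openssl`; the certificate path below is an assumption based on the `DIR=` setting used above, so adjust it to the file the script actually produced:

```bash
# list the DNS names / IPs embedded in the server certificate
sudo openssl x509 -in /etc/ssl/systemd-journal-remote/server-hostname.pem -noout -text | grep -A1 "Subject Alternative Name"
```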
Edit `systemd-journal-upload`, and add `Restart=always` to make sure the client will keep trying to push logs, even if the server is temporarily not there, like this:

```bash
sudo systemctl edit systemd-journal-upload
```

At the top, add:

```
[Service]
Restart=always
```
Enable `systemd-journal-upload`, like this:

```bash
sudo systemctl enable systemd-journal-upload
```
Copy the relevant `runme-on-XXX.sh` script as described in the server setup and run it:

```bash
sudo bash /tmp/runme-on-XXX.sh
```
As of this writing, `namespaces` support by `systemd` is limited:

- `systemd-journal-upload` automatically uploads `system` and `user` journals, but not `namespaces` journals. For this, you need to spawn a `systemd-journal-upload` per namespace.
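On the sending side you can at least confirm that a namespace's journal exists and has data, using plain `journalctl`; the namespace name `myapp` below is only an example:

```bash
# inspect the journal of a specific namespace locally
journalctl --namespace=myapp -n 10
```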