We live in a containerized world, and traditional monitoring and logging are being forever changed. The dynamic and ephemeral nature of containers creates new logging challenges.
Docker addresses these in some ways. Docker Engine provides various logging drivers that determine where logs are sent or written. The default driver for Docker logs is “json-file,” which writes the logs to local files on the Docker host in JSON format. With the default driver, the logs are removed when the container is removed. Other logging drivers can forward Docker logs to a remote destination; examples include syslog, gelf, Fluentd, and journald.
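As a quick illustration of the default driver, a sketch of where json-file logs live on a typical Linux host (the path shown is the default; `<container-id>` is a placeholder for the full container ID):

```shell
# Inspect a container's raw json-file log on the Docker host
# (default location on Linux; requires root).
sudo cat /var/lib/docker/containers/<container-id>/<container-id>-json.log

# Each line is a JSON object carrying the message, the stream it came
# from, and a timestamp. An illustrative line might look like:
# {"log":"Hello from my app\n","stream":"stdout","time":"2021-05-01T12:00:00.000000000Z"}
```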
But Docker alone doesn’t address all container logging needs. Once all the logs have been collected and we have access to them, how do we make use of them and interpret the results?
That is the question that this article answers. It provides an overview of Docker logs, Docker logs structure, and what exactly Docker logs can tell us. (To be clear, we’ll focus in particular on logs from Dockerized applications, rather than Docker daemon logs, which are a separate topic.)
What are Docker Logs?
In a nutshell, Docker logs are the console output of running containers. They provide the stdout and stderr streams of processes that run inside a container.
How Do I View Docker Logs?
Viewing an individual container’s logs is simple. You need only the container ID or the container name, then type:
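The basic command, plus a few commonly useful flags (`<container>` stands in for your container's ID or name):

```shell
docker logs <container>

# Useful variations:
docker logs -f <container>          # follow: stream new output as it arrives
docker logs --tail 100 <container>  # show only the last 100 lines
docker logs -t <container>          # prefix each line with a timestamp
```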
Structure of Docker Logs and What is in Them
Anything written by a container on stdout and stderr is collected by the Docker engine and sent to the logging driver. The logs contain the actual message output plus a bunch of metadata (which becomes very important at large scale).
Each line of the log consists of a timestamp (the full date and time, down to sub-second precision, shown when the `-t` flag is used), the message itself, and, depending on the driver, the stream it originated from (stdout or stderr).
An example output:
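As a purely illustrative sketch (the actual content depends entirely on the application running in the container), a web server container might produce output like this:

```shell
docker logs <container>
# 172.17.0.1 - - [01/May/2021:12:00:00 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.68.0"
```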
Another example output:
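The same logs viewed with the `-t` flag gain a timestamp prefix added by Docker itself (again, the message content here is illustrative):

```shell
docker logs -t <container>
# 2021-05-01T12:00:00.000000000Z 172.17.0.1 - - [01/May/2021:12:00:00 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.68.0"
```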
The log output may differ depending on which logging driver is in use. For example, a priority (e.g. debug, warning, error, info) and the PID (process name or process ID) exist for syslog, but not for plain container output. If you use an advanced logging driver like gelf, a lot of container metadata is added, and every log message becomes a structured JSON object (a GELF message).
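Switching drivers is done per container (or globally in the daemon config). A sketch of running a container with the gelf driver, assuming a GELF endpoint is listening at the placeholder address `udp://graylog.example.com:12201`:

```shell
# Ship this container's logs as GELF messages to a remote endpoint.
docker run \
  --log-driver gelf \
  --log-opt gelf-address=udp://graylog.example.com:12201 \
  alpine echo "hello gelf"
```

Note that with non-local drivers like gelf, `docker logs` may no longer be able to show the container's output, since the logs are sent away rather than stored on the host.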
The --details flag will add on extra attributes, such as environment variables and labels, provided to --log-opt when creating the container.
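A sketch of this in practice, using a hypothetical environment variable `APP_ENV` and label `com.example.team` (the json-file driver's `env` and `labels` options control which ones are attached):

```shell
# Attach an environment variable and a label to the container's log context.
docker run -d \
  --log-opt env=APP_ENV \
  --log-opt labels=com.example.team \
  -e APP_ENV=staging \
  --label com.example.team=payments \
  --name demo alpine echo "hi"

# View the logs with the extra attributes prepended to each line.
docker logs --details demo
```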
Monitoring your containerized applications is extremely important, and logs play a crucial role in this endeavor. Thankfully, the Docker engine has a good CLI tool to allow for log viewing, plus support for various logging drivers that allow you to funnel your Docker logs elsewhere.
However, to get full value from your logs and your system, it makes sense to use a comprehensive log analysis platform that can give you critical insight into the ins and outs of your containers. While it is possible to monitor the logs of a few Docker containers manually, staying on top of Docker log data at scale is feasible only with the help of such a tool.