How to Monitor Docker Container Logs

Docker automatically aggregates container standard output and error streams (stdout/stderr) into log feeds which are retained by the Docker daemon. You can easily monitor logs to understand what’s going on in your containers.

The logs contain the output you’d see in your terminal when attached to a container in interactive mode (-it). Logs will only be available if the foreground process in your container actually emits some output. You should make sure your containers log errors to stderr so Docker commands are able to expose them.
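To see this in action, here's a quick sketch (it assumes a local Docker daemon and the public alpine image; any image with a shell works):

```shell
# Run a short-lived container that writes to both output streams.
docker run --name log-demo alpine sh -c 'echo "normal output"; echo "an error" >&2'

# Both lines are captured and shown by docker logs:
docker logs log-demo

# Remove the demo container when done.
docker rm log-demo
```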

You can view container logs at any time during a container’s lifecycle. When a container’s running, you can stream its logs in real time. For stopped containers, you can access all the logs captured prior to termination.

Viewing Container Logs

To view container logs, use the docker logs command:

docker logs my-container

Replace my-container with the name or ID of the container you want to inspect. You can use docker ps -a to get the IDs and names of your containers.

The logs command prints the container’s entire log output to your terminal and then exits; it won’t keep streaming. If you’d like to follow new logs as they arrive, add the --follow flag to the command. This is equivalent to using tail -f with regular log files on your machine.
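For example, to keep a live stream running (my-container is a placeholder for your own container's name or ID):

```shell
# Print existing logs, then keep streaming new lines (Ctrl+C to stop).
docker logs --follow my-container

# The short form -f is equivalent.
docker logs -f my-container
```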

Customising What’s Displayed

The docker logs command supports several flags that let you adjust its output:

  • --timestamps – Display complete timestamps at the start of each log line.
  • --since and --until – These flags let you fetch lines logged during a particular time period. Either pass a complete timestamp (2021-04-30T20:00:00Z) or a friendly relative time (e.g. 1h = 1 hour ago).
  • --tail – Fetch a given number of lines from the log. --tail 10 will display the last ten lines logged by the container.
  • --details – This is a special flag which adds extra information to the log output, based on the options passed to the logging driver. We’ll look at logging drivers in the next section. Typical values displayed with --details include container labels and environment variables.

You can combine these flags to get logs in the format you require. They also work alongside --follow: docker logs --follow --tail 10 prints the last ten lines and then keeps streaming new output.
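As an illustration, the following combinations use a placeholder container name and arbitrary time ranges:

```shell
# Show the last 100 lines from the past hour, each prefixed with a full timestamp.
docker logs --timestamps --since 1h --tail 100 my-container

# Fetch lines logged within a specific window by passing complete timestamps.
docker logs --since 2021-04-30T20:00:00Z --until 2021-04-30T21:00:00Z my-container
```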

Docker Logging Drivers

Docker collects and stores container logs using one of several logging drivers. You can set the active logging driver on a per-container basis. When no logging driver is specified, Docker uses the json-file driver.

This driver stores container logs in a JSON file. This format is fairly human-readable and can be readily consumed by third-party tools. If you’re not going to access log files directly, switching to the local driver will save you some storage space. It uses a custom log storage format.
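If you do want to look at the raw file the json-file driver produces, docker inspect can tell you where it lives (on a typical Linux install it sits under /var/lib/docker/containers, so reading it usually requires root):

```shell
# Print the path to the container's JSON log file (json-file driver only).
docker inspect --format '{{.LogPath}}' my-container

# Each line of the file is a JSON object with "log", "stream" and "time" keys.
sudo cat "$(docker inspect --format '{{.LogPath}}' my-container)"
```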

Other built-in log drivers include syslog (write to the syslog daemon running on your machine), journald (use a running journald instance) and fluentd (to use a fluentd daemon). Drivers are also available for Amazon CloudWatch, Google Cloud Platform, Event Tracing for Windows and other log monitoring solutions.

Docker supports third-party logging drivers via plugins. You can find drivers on Docker Hub. To install a plugin driver, run docker plugin install plugin-name. You’ll then be able to reference it as a logging driver as plugin-name.
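For example, Grafana’s Loki driver is distributed this way (shown here as an illustration; the loki-url value would point at your own Loki instance):

```shell
# Install the plugin from Docker Hub and give it a short alias.
docker plugin install grafana/loki-docker-driver:latest --alias loki --grant-all-permissions

# Reference the alias like any built-in logging driver.
docker run --log-driver loki \
    --log-opt loki-url=http://localhost:3100/loki/api/v1/push \
    my-image:latest
```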

Specifying a Logging Driver

You can specify the logging driver for a container by passing the --log-driver flag to docker run:

docker run --log-driver journald my-image:latest

You can change the default logging driver globally by updating your Docker daemon configuration. Edit (or create) /etc/docker/daemon.json. Set the log-driver key to the name of a logging driver. Docker will use this driver for all containers created without a --log-driver flag.

{
    "log-driver": "journald"
}

Many logging drivers come with their own configuration options. These are set using the --log-opt container flag, or the log-opts key in daemon.json. Here’s an example relevant to the default json-file driver. It instructs Docker to rotate log files once they’re larger than 8MB. Only five files will be retained at any time.


docker run --log-driver json-file --log-opt max-size=8M --log-opt max-file=5 my-image:latest


{
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "8M",
        "max-file": "5"
    }
}

Driver Delivery Modes

Logs can be delivered in either blocking or non-blocking modes. Docker defaults to blocking delivery. Logs from the container will be sent to the driver immediately. This guarantees log delivery but could impact performance. The application will wait until the log write is complete. This can cause a perceptible delay if the logging driver is busy.

When in non-blocking mode, Docker writes logs to an in-memory buffer. The container doesn’t need to wait for the logging driver to complete its write. This can significantly improve performance on active machines with slow storage.

The tradeoff with non-blocking mode is the possibility of lost logs. This can occur when logs are emitted more quickly than the driver can process them. The in-memory buffer could be filled, causing cached logs to be cleared before they’ve been handed to the driver.


You can enable non-blocking delivery by setting the mode logging option, either with --log-opt or in daemon.json. You can set the size of the in-memory log buffer with the max-buffer-size option. Setting a high value reduces the risk of lost logs, provided you’ve got sufficient RAM available.

docker run --log-opt mode=non-blocking --log-opt max-buffer-size=8M my-image:latest

Logging Best Practices

Your containers should work with Docker’s logging system wherever possible. Emitting logs to stdout and stderr allows Docker and other tools to aggregate them in a standardised way.

Log output doesn’t need to include timestamps. Docker’s logging drivers will automatically record the time at which an event occurred.

Sometimes you might have complex logging requirements that docker logs alone can’t satisfy. If that’s the case, you might need to implement your own logging solution within your container. You can store logs directly on the filesystem, using a Docker volume, or call an external API service.

Some stacks call for a dedicated logging container that sits alongside your application containers. The logging container, often called a “sidecar”, reads temporary log files which your application containers create in a shared Docker volume. The sidecar handles the aggregation of these logs into a format which can be uploaded to a log monitoring service.
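A minimal sketch of the pattern with plain docker commands (the image names are illustrative; real deployments usually wire this together with Compose or an orchestrator):

```shell
# Create a volume shared by the application and the sidecar.
docker volume create app-logs

# The application writes its log files into the shared volume.
docker run -d --name app -v app-logs:/var/log/app my-app-image:latest

# The sidecar mounts the same volume read-only, tails the files and
# forwards them to your log monitoring service.
docker run -d --name log-sidecar -v app-logs:/var/log/app:ro my-log-shipper:latest
```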

This approach can be useful for more complex deployments, although it’s trickier to set up and scale. It typically leaves you without the immediate convenience of Docker’s built-in log commands.


Docker has versatile log monitoring capabilities provided by a suite of logging drivers. Each container can use a unique logging driver, letting you store logs in a format appropriate to each app’s requirements.


Logs include anything emitted by a container’s standard output streams. You can use echo, print, console.log() or your programming language’s equivalent to add lines to the docker logs output. Logs are retained until your container is deleted with docker rm.

James Walker
James Walker is a contributor to CloudSavvy IT. He is the founder of Heron Web, a UK-based digital agency providing bespoke software development services to SMEs. He has experience managing complete end-to-end web development workflows, using technologies including Linux, GitLab, Docker, and Kubernetes.
