
March 7, 2023 | Chas Clawson and Colin Fernandes

Tips and best practices for Docker container management


The arrival of Docker container technology brought with it an amazing array of capabilities. By encapsulating an entire software package, including its dependencies and libraries, into a single, portable container, Docker has made deployment across platforms such as AWS, Google Cloud, and Microsoft Azure a simple and straightforward process.

When people talk about Docker, they usually mean Docker Engine, the runtime that builds and runs containers. But before a Docker container can run, it must be built, starting with a Dockerfile.

The Dockerfile defines everything needed to build the container image, including the base OS, network specifications, and file locations. With a Dockerfile in hand, you can build a Docker image, the portable, static component that runs on the Docker Engine; think of the image as a template from which Docker containers are created.
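A minimal sketch of that workflow, with a hypothetical app.sh script and image name:

    # Dockerfile: the recipe the image is built from
    FROM alpine:3.19                    # small parent image (version is illustrative)
    COPY app.sh /usr/local/bin/app.sh   # add the hypothetical application script
    RUN chmod +x /usr/local/bin/app.sh
    CMD ["/usr/local/bin/app.sh"]       # the process the container runs

    # Build an image from the Dockerfile, then run a container from it
    docker build -t myapp:1.0 .
    docker run --rm myapp:1.0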

To manage composition and clustering, Docker offers Docker Compose, which allows you to define and run containerized applications. Then you can use Docker Swarm – Docker’s homegrown container orchestration platform – to turn a pool of Docker hosts into a single, virtual Docker host. Swarm silently manages the scaling of your application to multiple hosts.
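As a rough sketch, the same Compose file can drive both a local run and a Swarm deployment; the service names and images below are illustrative:

    # docker-compose.yml
    version: "3.8"              # kept for Swarm stack compatibility
    services:
      web:
        image: nginx:1.25-alpine
        ports:
          - "8080:80"
      cache:
        image: redis:7-alpine

    # Bring the application up locally ...
    docker compose up -d
    # ... or turn this host into a Swarm manager and deploy the same file
    docker swarm init
    docker stack deploy -c docker-compose.yml myapp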

Emerging practices in containerization let you decompose processes even further into microservices, breaking an application into a collection of small, independently deployable services.

While microservices are more of an architectural paradigm, one common way to realize them is to containerize these discrete application services. Abstracting applications further using serverless functions is another way to implement microservice design, and these functions can then interact with and respond to triggers initiated by containers.

In a microservices architecture, each service can be made independent of the host environment by encapsulating it in a Docker container, which simplifies composing and packaging the application. Though microservice architecture is powerful and highly scalable, it brings new considerations around managing dependencies, security, and application resilience.

Below is a look at some common obstacles you’ll encounter as you manage Docker containers and how to overcome them.

Design considerations for Docker container environments

The key to a manageable environment is clean design. When building a new environment that uses Docker containers, keep these basic design approaches in mind.

Containers should be lightweight

The technology behind Docker isn't new. Containers resemble the virtual machine (VM) images used in traditional virtual computing in that they package an application with everything it needs to run, but they share the host's OS kernel instead of bundling a full guest operating system. That is where the value of a Docker container comes from: a footprint that can be a small fraction of a traditional VM's.

To keep your architecture simple, limit compute processes to one per container. Combining processes within containers often defeats the purpose of microservice design and quickly leads to complications when troubleshooting the application, reviewing and managing logs, and maintaining your continuous delivery and deployment pipeline.

Similarly, containers should not be used to retain data. Containers are made to start, stop, and even disappear. Docker volumes are designed for persistent, shared storage: they provide a specific, durable location for data accessed by multiple containers and can hold different data formats. While containers should remain as small and agile as possible, volumes can give databases, monitoring logs, and more a persistent home.
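For example, a named volume can give a database container a persistent home (the image, volume, and password here are illustrative):

    # Create a named volume, then mount it into a container;
    # data written under /var/lib/postgresql/data outlives the container
    docker volume create pgdata
    docker run -d --name db \
      -v pgdata:/var/lib/postgresql/data \
      -e POSTGRES_PASSWORD=example \
      postgres:16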

Containers should be fast

A great advantage of container technology is speed. Rather than waiting minutes for all the processes and commands of a VM to spin up, a containerized application can launch as quickly as a single process. Linking containers together to perform complex compute processes further accelerates delivery.

Containers should be disposable

Unlike VMs, containers were built to be disposable. Some containers exist for just milliseconds, executing a single task and then blinking out of existence as quickly as they appeared. Where traditional architecture focused on finding permanent homes for applications to live and run, container technology brings an on-demand capability that allows you to automatically create, deploy and destroy processes and avoid design bloat.
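The --rm flag captures this disposability: the container is destroyed (though the image is kept) the moment its single task finishes. A trivial illustration:

    docker run --rm alpine:3.19 sh -c "date; echo task complete"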

Docker containers and immutability

Containers are designed to be completely self-sufficient, holding all the code, configuration, and dependencies they need to operate in almost any environment. Because Docker images are built in layers, you can start with a small, proven "parent" image and add functionality through intermediate layers that the Docker daemon merges when it instantiates the container.
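You can inspect those layers for yourself with docker image history; each Dockerfile instruction that changed the filesystem shows up as a layer (the image tag here is just an example):

    docker image history nginx:1.25-alpine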

During continuous integration and testing, promote Docker images through each phase of the pipeline. It may seem tempting to rebuild the image at every stage, but each rebuild changes the working production recipe, so you can no longer be sure that the image that passed all the quality gates is the one that reached production.

By promoting known-good images as immutable, stable binaries all the way through the quality gates to production, updated containers can easily be rolled back if problems are encountered. If none are, the older container being replaced can be destroyed, lowering overhead.
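In practice, promotion usually means retagging and pushing the exact image that passed testing rather than rebuilding it. A sketch, with a hypothetical registry and tags:

    # Promote the release candidate that cleared the quality gates
    docker pull registry.example.com/myapp:1.4.2-rc1
    docker tag registry.example.com/myapp:1.4.2-rc1 registry.example.com/myapp:1.4.2
    docker push registry.example.com/myapp:1.4.2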

Another key tip: beware the docker commit command, which snapshots a running container into a new image. This can be useful for troubleshooting and analysis, but it is not reproducible and does not capture data stored in the container's volumes. Dockerfiles, which are completely reproducible, are a much preferred approach.

Similarly, the latest tag can be problematic. Because latest is a mutable tag, you can't tell which build of an image is actually running, which makes rollbacks unreliable. It also creates dependency issues when a parent layer tagged latest is replaced by a new version that is not backward compatible. Avoid the latest tag when deploying containers to production; use explicit version tags instead.

These guidelines for immutable containers will speed up container deployment, reduce operational overhead, and make troubleshooting and rollbacks a straightforward process.

Securing your Docker containers

Modern DevOps integrates security at every level of development and deployment, so containers should be treated no differently. Here are tips for securing your Docker containers.

  • Don’t run Docker containers with root-level access. Use the --user (-u) flag when starting every container so processes run as an unprivileged user rather than as root (see the combined example after this list).

  • Don’t store credentials within the container image. Instead, use Docker environment variables to inject credentials at runtime so a breach of one container won’t snowball.

  • Check and manage runtime privileges and Linux capabilities. New containers are unprivileged by default, which means they cannot access host devices. For containers that genuinely require device access, apply the --privileged flag, keeping in mind that this grants access to all host devices.

  • Use security tools. Docker tools that scan a container image for known vulnerabilities can greatly improve security, and third-party applications can add visibility into your containers and manage security through an easy-to-use GUI.

  • Consider private registries. Docker Hub offers a vast array of free, shared registry options, but many companies aren’t comfortable with the security trade-offs and choose to host their own registries or turn to on-premises artifact repository services such as JFrog Artifactory. Evaluate your security needs with your team and decide whether public registries will work for you.
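Several of these tips can be combined on a single docker run. A minimal hardening sketch, where the user ID, registry, image, and variable names are all hypothetical:

    #   --user      run as an unprivileged user instead of root
    #   --cap-drop  drop all Linux capabilities, then add back only what's needed
    #   --read-only make the container's root filesystem immutable
    #   -e          inject the credential at runtime rather than baking it in
    docker run -d \
      --user 1000:1000 \
      --cap-drop ALL --cap-add NET_BIND_SERVICE \
      --read-only \
      -e DB_PASSWORD="$DB_PASSWORD" \
      registry.example.com/myapp:1.4.2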

Security is a constant in a forward-facing DevOps environment, so make certain your containers are as safe as the rest of your infrastructure. Consider employing additional security measures for your Docker environment to ensure it is fully protected.

Optimizing your Docker environment

As mentioned above, containers should be fast. If not managed properly, they bloat, bogging down your environment and reducing the capabilities they were designed to deliver. Here are a few ways to constrain CPU, block IO, and memory allocations and optimize your environment (a combined example follows the list).

  • CPU share constraint. The --cpu-shares flag sets the relative share of processor time a container is granted, while --cpus fixes how much of the available CPUs the container may use. The Docker documentation on runtime resource constraints offers a good walk-through of the various CPU options.

  • Block IO bandwidth (blkio) constraint. Upon creation, every container is assigned the same blkio weight (500). By modifying the --blkio-weight flag on individual containers, you can change a container’s block IO weight relative to all other running containers.

  • Constrain memory usage. Kernel memory works differently from user memory: while user memory can be swapped out to direct performance where needed, kernel memory is a fixed allocation, so container memory commitments can quickly swell and slow your environment. Correct this by constraining a container’s memory allocation, for example with the --memory flag, to whatever share of available memory you choose; the Docker documentation covers the details.
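Combined on one command line, the constraints above look something like this (the values and image name are illustrative, not recommendations):

    docker run -d \
      --cpus 1.5 \
      --cpu-shares 512 \
      --blkio-weight 300 \
      --memory 512m \
      registry.example.com/myapp:1.4.2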

Docker makes it easy to optimize these and other container variables so you can fully realize the fast, lightweight potential of container technology.

About Docker environment variables

Docker lets you easily set and use environment variables, either on the command line or via an external file. Here are a few examples of Docker environment variables.

  • COMPOSE_API_VERSION

  • CLASSPATH

  • COMPOSE_FILE

  • DOCKER_CERT_PATH

  • DOCKER_DRIVER

  • HOSTNAME

  • JAVA_HOME

  • NAME

How to pass environment variables in Docker

You can pass Docker environment variables with very little effort, using one of three methods (illustrated after the list):

  • Using the --env or -e flag

  • Using the --env-file option

  • Using the ENV Dockerfile instruction
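A quick sketch of all three, with hypothetical variable names and an assumed staging.env file of KEY=value lines:

    # 1. Pass a single variable with --env / -e
    docker run --rm -e APP_MODE=staging alpine:3.19 env

    # 2. Pass a whole file of variables with --env-file
    docker run --rm --env-file ./staging.env alpine:3.19 env

    # 3. Bake a default into the image with ENV in the Dockerfile:
    #    ENV APP_MODE=production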

Setting Docker environment variables: ARG vs ENV

  • ARG values are available while the image is being built, but not in the finished image or in running containers.

  • ENV values are available inside running containers, and can also be used by RUN instructions during the build.

ENV variables can be set directly in your Dockerfile and overridden when you start a container. As mentioned above, you can use an ARG to seed an ENV value, but at build time only ARG values are available.
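A short sketch of the difference, with illustrative names and values:

    # Dockerfile
    FROM alpine:3.19
    ARG APP_VERSION=1.0.0           # build-time only; absent from running containers
    ENV APP_MODE=production         # baked into the image; visible in containers
    ENV APP_VERSION=${APP_VERSION}  # persist the build arg by copying it into ENV

    # Override the ARG at build time ...
    docker build --build-arg APP_VERSION=2.0.0 -t myapp:2.0.0 .
    # ... and override the ENV at run time
    docker run --rm -e APP_MODE=staging myapp:2.0.0 env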

Docker container networking management

Take full control over networking within and between containers by tweaking default settings to better operate in your environment.

By default, Docker containers use IP addresses to communicate, and each new container is assigned its own address. This can present a problem because containers are ephemeral: they appear, start, stop, and disappear all the time, and the IP addresses assigned to them come and go just as quickly. One way to address this is with the environment variables Docker sets for linked containers, which describe exactly which containers and ports are exposed to internal and external networking (a short demonstration follows the list):

  • <name>_PORT gives the full URL of the container’s first exposed port; the variables below refine it

  • <name>_PORT_<num>_<protocol> names a specific port and protocol, usually TCP

  • <name>_PORT_<num>_<protocol>_ADDR gives the container’s IP address

  • <name>_PORT_<num>_<protocol>_PORT gives the exposed port number
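These variables come from Docker’s legacy container links feature (now deprecated in favor of user-defined networks, which resolve containers by name). A quick way to see them, with illustrative container names; the addresses in the sample output will vary:

    # Start a service container, then link a second container to it
    docker run -d --name cache redis:7-alpine
    docker run --rm --link cache:cache alpine:3.19 env | grep CACHE_PORT
    # Typical output:
    #   CACHE_PORT=tcp://172.17.0.2:6379
    #   CACHE_PORT_6379_TCP=tcp://172.17.0.2:6379
    #   CACHE_PORT_6379_TCP_ADDR=172.17.0.2
    #   CACHE_PORT_6379_TCP_PORT=6379
    #   CACHE_PORT_6379_TCP_PROTO=tcp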

Docker docs offer resources for working with the correct command and naming structure to achieve the container environment that meets your unique needs.

Centralized, container-aware log management

Traditional Linux-based tools are designed to run on a single host machine and rely on analyzing log files on disk, so they don’t scale well to multi-container clustered applications. They’re a poor fit even for single-container applications, because disk contents are not persisted when containers shut down unless they’re written to a data volume.

Sumo Logic provides a unified, centralized approach to log management using container-aware monitoring tools and lets you continuously monitor your Docker infrastructures. You can correlate container events, configuration information, and Docker host and Docker daemon logs to get a complete view of your Docker environment. There is no need to parse different log formats or manage logging dependencies between containers.

Properly managing containers in Docker takes effort

The true power of container technology lies in its ability to perform complex tasks with minimal resources, which translates to a lowered cost of ownership. But properly leveraging all the capabilities of containers requires immersion into the structure and philosophy of the technology behind them.

Following the guidelines above while designing a containerization architecture will lay the groundwork for success. However, you’ll need to follow through and continually apply container management best practices to truly optimize your Docker environment.

Learn more about Sumo Logic App for Docker Community Edition.

Learn more about Sumo Logic App for Docker Enterprise Edition.

Learn more about Sumo Logic App for Docker ULM.


Chas Clawson and Colin Fernandes

Cloud SIEM Engineer | Senior Director of Product Marketing

More posts by Chas Clawson and Colin Fernandes.

