Docker by Example

If you’re the kind of developer who learns best by doing, then this guide is for you. Rather than learning Docker by focusing on CLI commands, this guide focuses on learning how to author Dockerfiles to run programs in containers.

Starting with simple examples, you’ll become familiar with important Dockerfile instructions and learn relevant CLI commands as you go.

The pace is quick and explanations are succinct, but this guide will get you up and running with a solid foundation in professional Docker best practices.

See Getting Started in the next section to prepare for running the examples on your own.

How-To Geek

Docker for Beginners: Everything You Need to Know


Docker creates packaged applications called containers. Each container provides an isolated environment similar to a virtual machine (VM). Unlike VMs, Docker containers don't run a full operating system . They share your host's kernel and virtualize at a software level.

Docker has become a standard tool for software developers and system administrators. It's a neat way to quickly launch applications without impacting the rest of your system. You can spin up a new service with a single docker run command.

Containers encapsulate everything needed to run an application, from OS package dependencies to your own source code. You define a container's creation steps as instructions in a Dockerfile. Docker uses the Dockerfile to construct an image.

Images define the software available in containers. This is loosely equivalent to starting a VM with an operating system ISO. If you create an image, any Docker user will be able to launch your app with docker run .

Containers utilize operating system kernel features to provide partially virtualized environments. It's possible to create containers from scratch with commands like chroot. This starts a process with a specified root directory instead of the system root. But using kernel features directly is fiddly, insecure, and error-prone.

Docker is a complete solution for the production, distribution, and use of containers. Modern Docker releases are composed of several independent components. First, there's the Docker CLI, which is what you interact with in your terminal. The CLI sends commands to a Docker daemon, which can run locally or on a remote host. The daemon is responsible for managing containers and the images they're created from.

The final component is called the container runtime. The runtime invokes kernel features to actually launch containers. Docker is compatible with runtimes that adhere to the OCI specification.  This open standard allows for interoperability between different containerization tools.

You don't need to worry too much about Docker's inner workings when you're first getting started. Installing Docker on your system will give you everything you need to build and run containers.

Containers have become so popular because they solve many common challenges in software development. The ability to containerize once and run everywhere reduces the gap between your development environment and your production servers.

Using containers gives you confidence that every environment is identical. If you have a new team member, they only need to docker run to set up their own development instance. When you launch your service, you can use your Docker image to deploy to production. The live environment will exactly match your local instance, avoiding "it works on my machine" scenarios.

Docker is more convenient than a full-blown virtual machine. VMs are general-purpose tools designed to support every possible workload. By contrast, containers are lightweight, self-sufficient, and better suited to throwaway use cases. As Docker shares the host's kernel, containers have a negligible impact on system performance. Container launch time is almost instantaneous, as you're only starting processes, not an entire operating system.

Docker is available on all popular Linux distributions. It also runs on Windows and macOS. Follow the  Docker setup instructions for your platform to get it up and running.

You can check that your installation is working by starting a simple container:

docker run hello-world

This will start a new container with the basic hello-world image. The image emits some output explaining how to use Docker. The container then exits, dropping you back to your terminal.

Screenshot of Docker "hello world" image output

Once you've run hello-world , you're ready to create your own Docker images. A Dockerfile describes how to run your service by installing required software and copying in files. Here's a simple example using the Apache web server:

FROM httpd:latest

RUN echo "LoadModule headers_module modules/mod_headers.so" >> /usr/local/apache2/conf/httpd.conf

COPY .htaccess /usr/local/apache2/htdocs/.htaccess

COPY index.html /usr/local/apache2/htdocs/index.html

COPY css/ /usr/local/apache2/htdocs/css

The FROM line defines the base image. In this case, we're starting from the official Apache image. Docker applies the remaining instructions in your Dockerfile on top of the base image.

The RUN instruction runs a command within the container. This can be any command available in the container's environment. We're enabling the headers Apache module, which could be used by the .htaccess file to set up routing rules.

The final lines copy the HTML and CSS files in your working directory into the container image. Your image now contains everything you need to run your website.

Now, you can build the image:

docker build -t my-website:v1 .

Docker will use your Dockerfile to construct the image. You'll see output in your terminal as Docker runs each of your instructions.

Screenshot of Docker Build output

The -t in the command tags your image with a given name ( my-website:v1 ). This makes it easier to refer to in the future. Tags have two components, separated by a colon. The first part sets the image name, while the second usually denotes its version. If you omit the colon, Docker will default to using latest as the tag version.

The . at the end of the command tells Docker to use the Dockerfile in your local working directory. This also sets the build context , allowing you to use files and folders in your working directory with COPY instructions in your Dockerfile.

Once you've created your image, you can start a container using docker run :

docker run -d -p 8080:80 my-website:v1

We're using a few extra flags with docker run here. The -d flag makes the Docker CLI detach from the container, allowing it to run in the background. A port mapping is defined with -p , so port 8080 on your host maps to port 80 in the container. You should see your web page if you visit localhost:8080 in your browser.

Docker images are formed from layers. Each instruction in your Dockerfile creates a new layer. You can use advanced building features to reference multiple base images , discarding intermediary layers from earlier images.
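
A multi-stage build is one way to use this: generate your content in one stage, then copy only the finished output into the final image. Here's a minimal sketch; the node:18 stage, npm scripts, and paths are illustrative assumptions rather than part of the example above:

# build stage: generate the static files (illustrative tooling)
FROM node:18 AS assets
WORKDIR /site
COPY . .
RUN npm install && npm run build

# final stage: only the built output ships in the image
FROM httpd:latest
COPY --from=assets /site/dist/ /usr/local/apache2/htdocs/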

Once you have an image, you can push it to a registry. Registries provide centralized storage so that you can share containers with others. The default registry is Docker Hub .

When you run a command that references an image, Docker first checks whether it's available locally. If it isn't, it will try to pull it from Docker Hub. You can manually pull images with the docker pull command:

docker pull httpd:latest

If you want to publish an image, create a Docker Hub account. Run docker login and enter your username and password.

Next, tag your image using your Docker Hub username:

docker tag my-image:latest docker-hub-username/my-image:latest

Now, you can push your image:

docker push docker-hub-username/my-image:latest

Other users will be able to pull your image and start containers with it.

You can run your own registry if you need private image storage. Several third-party services also  offer Docker registries as alternatives to Docker Hub.
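
For example, Docker publishes a registry image that you can run yourself. A quick sketch of using it for local, private storage (the port and image names are just examples):

docker run -d -p 5000:5000 --name registry registry:2
docker tag my-website:v1 localhost:5000/my-website:v1
docker push localhost:5000/my-website:v1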

The Docker CLI has several commands to let you manage your running containers. Here are some of the most useful ones to know:

Listing Containers

docker ps shows you all your running containers. Adding the -a flag will show stopped containers, too.

Screenshot of listing Docker containers

Stopping and Starting Containers

To stop a container, run docker stop my-container . Replace my-container with the container's name or ID. You can get this information from the ps command. A stopped container is restarted with docker start my-container .

Containers usually run for as long as their main process stays alive. Restart policies control what happens when a container stops or your host restarts. Pass --restart always to docker run to make a container restart immediately after it stops.
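
For example, to keep the web server from the earlier example running across failures and daemon restarts, you could start it with a restart policy:

docker run -d --restart always -p 8080:80 my-website:v1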

Getting a Shell

You can run a command in a container using docker exec my-container my-command . This is useful when you want to manually invoke an executable that's separate from the container's main process.

Add the -it flag if you need interactive access. This lets you drop into a shell by running docker exec -it my-container sh .

Monitoring Logs

Docker automatically collects output emitted to a container's standard output and error streams. The docker logs my-container command will show a container's logs inside your terminal. The --follow flag sets up a continuous stream so that you can view logs in real time.

Screenshot of Docker logs from an Apache web server container

Cleaning Up Resources

Old containers and images can quickly pile up on your system. Use docker rm my-container to delete a container by its ID or name.

The command for images is docker rmi my-image:latest . Pass the image's ID or full tag name. If you specify a tag, only that tag is removed; the image itself is deleted once no tags reference it, and its other tags remain usable in the meantime.

Screenshot of docker system prune command

Bulk clean-ups are possible using the docker system prune command. This gives you an easy way to remove all stopped containers and redundant images.
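
A couple of typical invocations (run docker system prune --help first if you want to confirm exactly what will be removed):

docker system prune        # stopped containers, unused networks, dangling images
docker system prune -a     # also removes all images not used by any container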

Graphical Management

If the terminal's not your thing, you can use third-party tools to  set up a graphical interface for Docker . Web dashboards let you quickly monitor and manage your installation. They also help you take remote control of your containers.

Illustration of Portainer on a laptop

Docker containers are ephemeral by default. Changes made to a container's filesystem won't persist after the container stops. It's not safe to run any form of file storage system in a container started with a basic docker run command.

There are a few different approaches to managing persistent data . The most common is to use a Docker Volume. Volumes are storage units that are mounted into container filesystems. Any data in a volume will remain intact after its linked container stops, letting you connect another container in the future.
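
For example, you might create a named volume and mount it into a container; the volume name, mount path, and image here are illustrative:

docker volume create my-data
docker run -d -v my-data:/data my-image:latest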

Dockerized workloads can be more secure than their bare metal counterparts, as Docker provides some separation between the operating system and your services. Nonetheless, Docker is a potential security issue, as it normally runs as root and could be exploited to run malicious software.

If you're only running Docker as a development tool, the default installation is generally safe to use. Production servers and machines with a network-exposed daemon socket should be hardened before you go live.

Screenshot of a Docker Bench for Security report

Audit your Docker installation to identify potential security issues. There are automated tools available that can help you find weaknesses and suggest resolutions. You can also scan individual container images for issues that could be exploited from within.
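
As one example, the open-source Trivy scanner can itself be run as a container and scan a locally built image over the Docker socket. This is only a sketch, assuming the aquasec/trivy image and the image built earlier:

docker run --rm -v /var/run/docker.sock:/var/run/docker.sock aquasec/trivy image my-website:v1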

The docker command only works with one container at a time. You'll often want to use containers in aggregate. Docker Compose is a tool that lets you define your containers declaratively in a YAML file. You can start them all up with a single command.

This is helpful when your project depends on other services, such as a web backend that relies on a database server. You can define both containers in your docker-compose.yml and benefit from streamlined management with automatic networking .

Here's a simple docker-compose.yml file:

version: "3"

services:
  app:
    image: app-server:latest
    ports:
      - "8000:80"
  database:
    image: database-server:latest
    volumes:
      - database-data:/data

volumes:
  database-data:

This defines two containers ( app and database ). A volume is created for the database. This gets mounted to /data in the container. The app server's port 80 is exposed as 8000 on the host. Run docker-compose up -d to spin up both services, including the network and volume.

The use of Docker Compose lets you write reusable container definitions that you can share with others. You could commit a docker-compose.yml into your version control instead of having developers memorize docker run commands.
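
Day-to-day management then goes through the same tool. A few common commands, assuming the service names from the file above:

docker-compose up -d          # create and start everything in the background
docker-compose logs -f app    # follow the app service's logs
docker-compose down           # stop and remove the containers and network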

There are other approaches to running multiple containers, too. Docker App is an emerging solution that provides another level of abstraction. Elsewhere in the ecosystem, Podman is a Docker alternative that lets you create "pods" of containers within your terminal.

Docker isn't normally run as-is in production. It's now more common to use an orchestration platform such as Kubernetes or Docker Swarm mode. These tools are designed to handle multiple container replicas, which improves scalability and reliability.

Illustration showing the Docker and Kubernetes logos

Docker is only one component in the broader containerization movement. Orchestrators utilize the same container runtime technologies to provide an environment that's a better fit for production. Using multiple container instances allows for rolling updates as well as distribution across machines, making your deployment more resilient to change and outage. The regular docker CLI targets one host and works with individual containers.

Docker gives you everything you need to work with containers. It has become a key tool for software development and system administration. The principal benefits are increased isolation and portability for individual services.

Getting acquainted with Docker requires an understanding of the basic container and image concepts. You can apply these to create your specialized images and environments that containerize your workloads.

Docker Simplified: A Hands-On Guide for Absolute Beginners


Whether you are planning to start your career in DevOps or you are already in it, if you do not have Docker listed on your resume, it's undoubtedly time to think about adding it, as Docker is one of the critical skills for anyone in the DevOps arena.

In this post, I will try my best to explain Docker in the simplest way I can.

Before we take a deep dive and start exploring Docker, let’s take a look at what topics we will be covering as part of this beginner’s guide.

  • What is Docker?
  • The problem Docker solves
  • Advantages and disadvantages of using Docker
  • Core components of Docker

  • Docker Terminology
  • What is Docker Hub?
  • Docker Editions
  • Installing Docker
  • Some essential Docker commands to get you started

Let’s begin by understanding, What is Docker?

In simple terms, Docker is a software platform that simplifies the process of building, running, managing and distributing applications. It does this by virtualizing the operating system of the computer on which it is installed and running.

The first edition of Docker was released in 2013.

Docker is developed using the Go programming language.


Looking at the rich set of functionality Docker has to offer, it has been widely adopted by some of the world's leading organizations and universities, such as Visa, PayPal, Cornell University, and Indiana University (just to name a few), to run and manage their applications.

Now, let's try to understand the problem and the solution Docker has to offer.

The Problem

Let’s say you have three different Python-based applications that you plan to host on a single server (which could either be a physical or a virtual machine).

Each of these applications makes use of a different version of Python, and the associated libraries and dependencies differ from one application to another.

Since we cannot have different versions of Python installed on the same machine, we cannot host all three applications on the same computer.

The Solution

Let’s look at how we could solve this problem without making use of Docker. In such a scenario, we could solve this problem either by having three physical machines, or a single physical machine, which is powerful enough to host and run three virtual machines on it.

Both the options would allow us to install different versions of Python on each of these machines, along with their associated dependencies.

Irrespective of which solution we choose, the costs associated with procuring and maintaining the hardware are quite expensive.

Now, let’s check out how Docker could be an efficient and cost-effective solution to this problem.

To understand this, we need to take a look at how exactly Docker functions.


The machine on which Docker is installed and running is usually referred to as a Docker Host or Host in simple terms.

So, whenever you plan to deploy an application on the host, Docker creates a logical entity on it to host that application. In Docker terminology, we call this logical entity a Container, or Docker Container to be more precise.

A Docker Container doesn’t have any operating system installed and running on it. But it would have a virtual copy of the process table, network interface(s), and the file system mount point(s). These have been inherited from the operating system of the host on which the container is hosted and running.

The kernel of the host's operating system, however, is shared across all the containers that are running on it.

This allows each container to be isolated from the others present on the same host. Thus, multiple containers with different application requirements and dependencies can run on the same host, as long as they have the same operating system requirements.

To understand how Docker has been beneficial in solving this problem, you need to refer to the next section, which discusses the advantages and disadvantages of using Docker.

In short, Docker virtualizes the operating system of the host on which it is installed and running, rather than virtualizing the hardware components.

The Advantages and Disadvantages of using Docker

Advantages of Using Docker

Some of the key benefits of using Docker are listed below:

  • Docker supports multiple applications with different application requirements and dependencies, to be hosted together on the same host, as long as they have the same operating system requirements.
  • Storage Optimized. A large number of applications can be hosted on the same host, as containers are usually only a few megabytes in size and consume very little disk space.
  • Robustness. A container does not have an operating system installed on it. Thus, it consumes very little memory in comparison to a virtual machine (which would have a complete operating system installed and running on it). This also reduces the bootup time to just a few seconds, as compared to a couple of minutes required to boot up a virtual machine.
  • Reduces costs. Docker is less demanding when it comes to the hardware required to run it.

Disadvantages of using Docker

  • Applications with different operating system requirements cannot be hosted together on the same Docker Host. For example, let’s say we have 4 different applications, out of which 3 applications require a Linux-based operating system and the other application requires a Windows-based operating system. In such a scenario, the 3 applications that require Linux-based operating system can be hosted on a single Docker Host, whereas the application that requires a Windows-based operating system needs to be hosted on a different Docker Host.

Core Components of Docker

Docker Engine is one of the core components of Docker. It is responsible for the overall functioning of the Docker platform.

Docker Engine is a client-server based application and consists of 3 main components.


The Server runs a daemon known as dockerd (Docker Daemon) , which is nothing but a process. It is responsible for creating and managing Docker Images, Containers, Networks and Volumes on the Docker platform.

The REST API specifies how the applications can interact with the Server, and instruct it to get their job done.

The Client is nothing but a command line interface that allows users to interact with Docker using commands.
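
On a Linux host you can see the REST API in action by querying the daemon's Unix socket directly. A quick sketch, assuming curl is installed and the default socket path:

curl --unix-socket /var/run/docker.sock http://localhost/version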

Let us take a quick look at some of the terminology associated with Docker.

Docker Images and Docker Containers are the two essential things that you will come across daily while working with Docker .

In simple terms, a Docker Image is a template that contains the application, and all the dependencies required to run that application on Docker.

On the other hand, as stated earlier, a Docker Container is a logical entity. In more precise terms, it is a running instance of the Docker Image.

Docker Hub is the official online repository where you could find all the Docker Images that are available for us to use.

Docker Hub also allows us to store and distribute our custom images as well if we wish to do so. We could also make them either public or private, based on our requirements.

Please Note: Free users are only allowed to keep one Docker Image as private. If we wish to keep more than one Docker Image as private, we need to subscribe to a paid subscription plan.

Docker is available in 2 different editions, as listed below:

  • Community Edition (CE)
  • Enterprise Edition (EE)

The Community Edition is suitable for individual developers and small teams. It offers limited functionality, in comparison to the Enterprise Edition.

The Enterprise Edition, on the other hand, is suitable for large teams and for using Docker in production environments.

The Enterprise Edition is further categorized into three different editions, as listed below:

  • Basic Edition
  • Standard Edition
  • Advanced Edition

One last thing that we need to know before we go ahead and get our hands dirty with Docker is actually to have Docker installed.

Below are the links to the official Docker CE installation guides. You can follow these guides to install Docker on your machine, as they are simple and straightforward.

  • CentOS Linux
  • Debian Linux
  • Fedora Linux
  • Ubuntu Linux
  • Microsoft Windows

Want to skip installation and head off straight to practicing Docker?

Just in case you are feeling too lazy to install Docker, or you don’t have enough resources available on your computer, you need not have to worry — here’s the solution to your problem.

You can head over to Play with Docker , which is an online playground for Docker. It allows users to practice Docker commands immediately, without having to install anything on your machine. The best part is it’s simple to use and available free of cost.

Docker Commands

Now it's time to get our hands dirty with the Docker commands we have all been waiting for.

docker create

The first command which we will be looking at is the docker create command.

This command allows us to create a new container.

The syntax for this command is as shown below:
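
docker create [OPTIONS] IMAGE [COMMAND] [ARG...]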

Please Note: Anything enclosed within square brackets is optional. This applies to all the commands that you will see in this guide.

Some of the examples of using this command are shown below:
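
docker create fedora

02576e880a2ccbb4ce5c51032ea3b3bb8316e5b626861fc87d28627c810af03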

In the above example, the docker create command would create a new container using the latest Fedora image.

Before creating the container, it will check whether the latest official Fedora image is available on the Docker Host. If the latest image isn't available, it will download the Fedora image from Docker Hub before creating the container. If the Fedora image is already present on the Docker Host, it will use that image to create the container.

If the container was created successfully, Docker will return the container ID. For instance, in the above example 02576e880a2ccbb4ce5c51032ea3b3bb8316e5b626861fc87d28627c810af03 is the container ID returned by Docker.

Each container has a unique container ID. We refer to the container using its container ID for performing various operations on the container, such as starting, stopping, restarting, and so on.

Now, let us refer to another example of docker create command, which has options and commands being passed to it.
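
docker create -it ubuntu bash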

In the above example, the docker create command creates a container using the Ubuntu image (As stated earlier, if the image isn’t available on the Docker Host, it will go ahead and download the latest image from the Docker Hub before creating the container).

The options -t and -i instruct Docker to allocate a terminal to the container so that the user can interact with the container. It also instructs Docker to execute the bash command whenever the container is started.

The next command we will look at is the docker ps command.

The docker ps command allows us to view all the containers that are running on the Docker Host.

It only displays the containers that are presently running on the Docker Host.

If you want to view all the containers that were created on this Docker Host, irrespective of their current status (running or exited), include the -a option, which displays all the containers created on this Docker Host.
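
For example:

docker ps        # running containers only
docker ps -a     # all containers, including stopped ones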

Before we proceed further, let’s try to decode and understand the output of the docker ps command.

CONTAINER ID: A unique string consisting of alpha-numeric characters, associated with each container.

IMAGE: Name of the Docker Image used to create this container.

COMMAND: Any application specific command(s) that needs to be executed when the container is started.

CREATED: This shows the time elapsed since this container has been created.

STATUS: This shows the current status of the container, along with the time elapsed, in its present state.

If the container is running, it will display as Up along with the time period elapsed (for example, Up About an hour or Up 3 minutes).

If the container is stopped, then it will display as Exited followed by the exit status code within round brackets, along with the time period elapsed (for example, Exited (0) 3 weeks ago or Exited (137) 15 seconds ago, where 0 and 137 are the exit codes).

PORTS: This displays any port mappings defined for the container.

NAMES: Apart from the CONTAINER ID, each container is also assigned a unique name. We can refer to a container either using its container ID or its unique name. Docker automatically assigns a unique silly name to each container it creates. But if you want to specify your own name for the container, you can do so by passing the --name option to the docker create or the docker run command (we will look at the docker run command later).

I hope this gives you a better understanding of the output of the docker ps command.

docker start

The next command we will look at is the docker start command.

This command starts any stopped container(s).

We can start a container either by specifying the first few unique characters of its container ID or by specifying its name.
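
For example:

docker start 30986
docker start elated_franklin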

In the first example, Docker starts the container whose ID begins with 30986.

In the second example, Docker starts the container named elated_franklin.

docker stop

The next command on the list is the docker stop command.

This command stops any running container(s).

It is similar to the docker start command.

We can stop the container either by specifying the first few unique characters of its container ID or by specifying its name.
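
For example:

docker stop 30986
docker stop elated_franklin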

In the first example, Docker will stop the container whose ID begins with 30986.

In the second example, Docker will stop the container named elated_franklin.

docker restart

The next command we will look at is the docker restart command.

This command restarts any running container(s).

We can restart the container either by specifying the first few unique characters of its container ID or by specifying its name.
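
For example:

docker restart 30986
docker restart elated_franklin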

In the first example, Docker will restart the container whose ID begins with 30986.

In the second example, Docker will restart the container named elated_franklin.

The next command we will be looking at is the docker run command.

This command first creates the container, and then it starts the container. In short, this command is a combination of the docker create and the docker start command.

It has a syntax similar to that of the docker create command.
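
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

For example:

docker run ubuntu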

In the above example, Docker will create the container using the latest Ubuntu image and then immediately start the container.

If we execute the above command, it would start the container and immediately stop it — we wouldn’t get any chance to interact with the container at all.

If we want to interact with the container, then we need to pass the -it option (hyphen followed by i and t) to the docker run command. This presents us with a terminal, which we can use to interact with the container by typing in appropriate commands. Below is an example of the same.
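
docker run -it ubuntu bash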

In order to come out of the container, you need to type exit in the terminal.

Moving on to the next command — if we want to delete a container, we use the docker rm command.
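
For example:

docker rm 30986 elated_franklin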

In the above example, we are instructing Docker to delete 2 containers within a single command. The first container to be deleted is specified using its container ID, and the second container to be deleted is specified using its name.

Please Note: The containers need to be in a stopped state in order to be deleted.

docker images

docker images is the next command on the list.

This command lists out all the Docker Images that are present on your Docker Host.
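
For example:

docker images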

Let us decode the output of the docker images command.

REPOSITORY: This represents the unique name of the Docker Image.

TAG: Each image is associated with a unique tag. A tag basically represents a version of the image.

A tag is usually represented either using a word or set of numbers or a combination of alphanumeric characters.

IMAGE ID: A unique string consisting of alpha-numeric characters, associated with each image.

CREATED: This shows the time elapsed since this image has been created.

SIZE: This shows the size of the image.

The next command on the list is the docker rmi command.

The docker rmi command allows us to remove an image(s) from the Docker Host.
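
Here are a few examples:

docker rmi mysql
docker rmi httpd fedora
docker rmi 94e81
docker rmi ubuntu:trusty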

The first command removes the image named mysql from the Docker Host.

The second command removes the images named httpd and fedora from the Docker Host.

The third command removes the image starting with the image ID 94e81 from the Docker Host.

The fourth command removes the image named ubuntu, with the tag trusty, from the Docker Host.

These were some of the basic Docker commands you will see. There are many more Docker commands to explore.

Containerization has recently gotten the attention it deserves, although it has been around for a long time. Some of the top tech companies like Google, Amazon Web Services (AWS), Intel, Tesla, and Juniper Networks have their own custom version of container engines. They heavily rely on them to build, run, manage, and distribute their applications.

Docker is an extremely powerful containerization engine, and it has a lot to offer when it comes to building, running, managing and distributing your applications efficiently.

You have just seen Docker at a very high level. There is a lot more to learn about Docker, such as:

  • Docker commands (More powerful commands)
  • Docker Images (Build your own custom images)
  • Docker Networking (Setup and configure networking)
  • Docker Services (Grouping containers that use the same image)
  • Docker Stack (Grouping services required by an application)
  • Docker Compose (Tool for managing and running multiple containers)
  • Docker Swarm (Grouping and managing one or more machines on which docker is running)
  • And much more…

If you have found Docker to be fascinating, and are interested in learning more about it, then I would recommend that you enroll in the courses which are listed below. I found them to be very informative and straight to the point.

If you are an absolute beginner, then I would suggest you enroll in this course , which has been designed for beginners.

If you have some good knowledge of Docker, are confident with the basics, and want to expand your knowledge, then I would suggest you enroll in this course , which is aimed more towards advanced topics related to Docker.

Docker is a future-proofed skill and is just picking up momentum.

Investing your time and money into learning Docker isn't something you will regret.

I hope you found this post informative. Feel free to share it; it really means a lot to me.


Docker for Beginners

Learn to build and deploy your distributed applications easily to the cloud with Docker

Written and developed by Prakhar Srivastav


Introduction

What is Docker?

Wikipedia defines Docker as

an open-source project that automates the deployment of software applications inside containers by providing an additional layer of abstraction and automation of OS-level virtualization on Linux.

Wow! That's a mouthful. In simpler words, Docker is a tool that allows developers, sys-admins, and others to easily deploy their applications in a sandbox (called a container) to run on the host operating system, i.e. Linux. The key benefit of Docker is that it allows users to package an application with all of its dependencies into a standardized unit for software development. Unlike virtual machines, containers do not have high overhead and hence enable more efficient usage of the underlying system and resources.

What are containers?

The industry standard today is to use Virtual Machines (VMs) to run software applications. VMs run applications inside a guest Operating System, which runs on virtual hardware powered by the server’s host OS.

VMs are great at providing full process isolation for applications: there are very few ways a problem in the host operating system can affect the software running in the guest operating system, and vice-versa. But this isolation comes at great cost — the computational overhead spent virtualizing hardware for a guest OS to use is substantial.

Containers take a different approach: by leveraging the low-level mechanics of the host operating system, containers provide most of the isolation of virtual machines at a fraction of the computing power.

Why use containers?

Containers offer a logical packaging mechanism in which applications can be abstracted from the environment in which they actually run. This decoupling allows container-based applications to be deployed easily and consistently, regardless of whether the target environment is a private data center, the public cloud, or even a developer’s personal laptop. This gives developers the ability to create predictable environments that are isolated from the rest of the applications and can be run anywhere.

From an operations standpoint, apart from portability, containers also give more granular control over resources, which improves the efficiency of your infrastructure and can result in better utilization of your compute resources.

Docker interest over time

Google Trends for Docker

Due to these benefits, containers (& Docker) have seen widespread adoption. Companies like Google, Facebook, Netflix and Salesforce leverage containers to make large engineering teams more productive and to improve utilization of compute resources. In fact, Google credited containers for eliminating the need for an entire data center.

What will this tutorial teach me?

This tutorial aims to be the one-stop shop for getting your hands dirty with Docker. Apart from demystifying the Docker landscape, it'll give you hands-on experience with building and deploying your own webapps on the Cloud. We'll be using Amazon Web Services to deploy a static website, and two dynamic webapps on EC2 using Elastic Beanstalk and Elastic Container Service . Even if you have no prior experience with deployments, this tutorial should be all you need to get started.

Getting Started

This document contains a series of several sections, each of which explains a particular aspect of Docker. In each section, we will be typing commands (or writing code). All the code used in the tutorial is available in the Github repo .

Note: This tutorial uses version 18.05.0-ce of Docker. If you find any part of the tutorial incompatible with a future version, please raise an issue . Thanks!

Prerequisites

There are no specific skills needed for this tutorial beyond a basic comfort with the command line and using a text editor. This tutorial uses git clone to clone the repository locally. If you don't have Git installed on your system, either install it or remember to manually download the zip files from Github. Prior experience in developing web applications will be helpful but is not required. As we proceed further along the tutorial, we'll make use of a few cloud services. If you're interested in following along, please create an account on each of these websites:

  • Amazon Web Services

Setting up your computer

Getting all the tooling set up on your computer can be a daunting task, but thankfully, as Docker has become stable, getting it up and running on your favorite OS has become very easy.

Until a few releases ago, running Docker on OSX and Windows was quite a hassle. Lately however, Docker has invested significantly into improving the on-boarding experience for its users on these OSes, thus running Docker now is a cakewalk. The getting started guide on Docker has detailed instructions for setting up Docker on Mac , Linux and Windows .

Once you are done installing Docker, test your Docker installation by running the following:
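
docker run hello-world

If the installation is working, you should see a short greeting from Docker before the container exits.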

Hello World

Playing with Busybox

Now that we have everything setup, it's time to get our hands dirty. In this section, we are going to run a Busybox container on our system and get a taste of the docker run command.

To get started, let's run the following in our terminal:
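
docker pull busybox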

Note: Depending on how you've installed docker on your system, you might see a permission denied error after running the above command. If you're on a Mac, make sure the Docker engine is running. If you're on Linux, then prefix your docker commands with sudo . Alternatively, you can create a docker group to get rid of this issue.

The pull command fetches the busybox image from the Docker registry and saves it to our system. You can use the docker images command to see a list of all images on your system.

Great! Let's now run a Docker container based on this image. To do that we are going to use the almighty docker run command.
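
Let's try it with no arguments first:

docker run busybox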

Wait, nothing happened! Is that a bug? Well, no. Behind the scenes, a lot of stuff happened. When you call run , the Docker client finds the image (busybox in this case), loads up the container and then runs a command in that container. When we run docker run busybox , we didn't provide a command, so the container booted up, ran an empty command and then exited. Well, yeah - kind of a bummer. Let's try something more exciting.
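
This time, let's give the container a command to run:

docker run busybox echo "hello from busybox"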

Nice - finally we see some output. In this case, the Docker client dutifully ran the echo command in our busybox container and then exited it. If you've noticed, all of that happened pretty quickly. Imagine booting up a virtual machine, running a command and then killing it. Now you know why they say containers are fast! Ok, now it's time to see the docker ps command. The docker ps command shows you all containers that are currently running.
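
docker ps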

Since no containers are running, we see a blank line. Let's try a more useful variant: docker ps -a

So what we see above is a list of all containers that we ran. Do notice that the STATUS column shows that these containers exited a few minutes ago.

You're probably wondering if there is a way to run more than just one command in a container. Let's try that now:
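
docker run -it busybox sh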

Running the run command with the -it flags attaches us to an interactive tty in the container. Now we can run as many commands in the container as we want. Take some time to run your favorite commands.

Danger Zone : If you're feeling particularly adventurous you can try rm -rf bin in the container. Make sure you run this command in the container and not in your laptop/desktop. Doing this will make any other commands like ls , uptime not work. Once everything stops working, you can exit the container (type exit and press Enter) and then start it up again with the docker run -it busybox sh command. Since Docker creates a new container every time, everything should start working again.

That concludes a whirlwind tour of the mighty docker run command, which would most likely be the command you'll use most often. It makes sense to spend some time getting comfortable with it. To find out more about run , use docker run --help to see a list of all flags it supports. As we proceed further, we'll see a few more variants of docker run .

Before we move ahead though, let's quickly talk about deleting containers. We saw above that we can still see remnants of the container even after we've exited by running docker ps -a . Throughout this tutorial, you'll run docker run multiple times and leaving stray containers will eat up disk space. Hence, as a rule of thumb, I clean up containers once I'm done with them. To do that, you can run the docker rm command. Just copy the container IDs from above and paste them alongside the command.

On deletion, you should see the IDs echoed back to you. If you have a bunch of containers to delete in one go, copy-pasting IDs can be tedious. In that case, you can simply run -
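
docker rm $(docker ps -a -q -f status=exited)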

This command deletes all containers that have a status of exited . In case you're wondering, the -q flag only returns the numeric IDs, and -f filters the output based on the conditions provided. One last thing that'll be useful is the --rm flag, which can be passed to docker run and automatically deletes the container once it exits. For one-off docker runs, the --rm flag is very useful.

In later versions of Docker, the docker container prune command can be used to achieve the same effect.

Lastly, you can also delete images that you no longer need by running docker rmi .

Terminology

In the last section, we used a lot of Docker-specific jargon which might be confusing to some. So before we go further, let me clarify some terminology that is used frequently in the Docker ecosystem.

  • Images - The blueprints of our application which form the basis of containers. In the demo above, we used the docker pull command to download the busybox image.
  • Containers - Created from Docker images and run the actual application. We create a container using docker run which we did using the busybox image that we downloaded. A list of running containers can be seen using the docker ps command.
  • Docker Daemon - The background service running on the host that manages building, running and distributing Docker containers. The daemon is the process that runs in the operating system which clients talk to.
  • Docker Client - The command line tool that allows the user to interact with the daemon. More generally, there can be other forms of clients too - such as Kitematic which provide a GUI to the users.
  • Docker Hub - A registry of Docker images. You can think of the registry as a directory of all available Docker images. If required, one can host their own Docker registries and can use them for pulling images.

Webapps with Docker

Great! So we have now looked at docker run , played with a Docker container and also got a hang of some terminology. Armed with all this knowledge, we are now ready to get to the real-stuff, i.e. deploying web applications with Docker!

Static Sites

Let's start by taking baby-steps. The first thing we're going to look at is how we can run a dead-simple static website. We're going to pull a Docker image from Docker Hub, run the container and see how easy it is to run a webserver.

Let's begin. The image that we are going to use is a single-page website that I've already created for the purpose of this demo and hosted on the registry - prakhar1989/static-site . We can download and run the image directly in one go using docker run . As noted above, the --rm flag automatically removes the container when it exits and the -it flag specifies an interactive terminal which makes it easier to kill the container with Ctrl+C (on Windows).
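
Putting that together:

docker run --rm -it prakhar1989/static-site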

Since the image doesn't exist locally, the client will first fetch the image from the registry and then run the image. If all goes well, you should see a Nginx is running... message in your terminal. Okay now that the server is running, how to see the website? What port is it running on? And more importantly, how do we access the container directly from our host machine? Hit Ctrl+C to stop the container.

Well, in this case, the client is not exposing any ports so we need to re-run the docker run command to publish ports. While we're at it, we should also find a way so that our terminal is not attached to the running container. This way, you can happily close your terminal and keep the container running. This is called detached mode.
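
So let's re-run the container detached, with all exposed ports published:

docker run -d -P --name static-site prakhar1989/static-site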

In the above command, -d will detach our terminal, -P will publish all exposed ports to random ports and finally --name corresponds to a name we want to give. Now we can see the ports by running the docker port [CONTAINER] command
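
docker port static-site
80/tcp -> 0.0.0.0:32769

The randomly assigned host port on your machine will likely differ; use whichever port the command reports.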

You can open http://localhost:32769 in your browser.

Note: If you're using docker-toolbox, then you might need to use docker-machine ip default to get the IP.

You can also specify a custom port to which the client will forward connections to the container.
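
For example, to forward the host's port 8888 to the container's port 80 (any free host port works here):

docker run -p 8888:80 prakhar1989/static-site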

static site

To stop a detached container, run docker stop by giving the container ID. In this case, we can use the name static-site we used to start the container.
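
docker stop static-site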

I'm sure you agree that was super simple. To deploy this on a real server you would just need to install Docker, and run the above Docker command. Now that you've seen how to run a webserver inside a Docker image, you must be wondering - how do I create my own Docker image? This is the question we'll be exploring in the next section.

Docker Images

We've looked at images before, but in this section we'll dive deeper into what Docker images are and build our own image! Lastly, we'll also use that image to run our application locally and finally deploy on AWS to share it with our friends! Excited? Great! Let's get started.

Docker images are the basis of containers. In the previous example, we pulled the Busybox image from the registry and asked the Docker client to run a container based on that image. To see the list of images that are available locally, use the docker images command.

The above gives a list of images that I've pulled from the registry, along with ones that I've created myself (we'll shortly see how). The TAG refers to a particular snapshot of the image and the IMAGE ID is the corresponding unique identifier for that image.

For simplicity, you can think of an image akin to a git repository - images can be committed with changes and have multiple versions. If you don't provide a specific version number, the client defaults to latest . For example, you can pull a specific version of the ubuntu image:
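
docker pull ubuntu:18.04   # any published tag works here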

To get a new Docker image you can either get it from a registry (such as the Docker Hub) or create your own. There are tens of thousands of images available on Docker Hub . You can also search for images directly from the command line using docker search .

An important distinction to be aware of when it comes to images is the difference between base and child images.

Base images are images that have no parent image, usually images with an OS like ubuntu, busybox or debian.

Child images are images that build on base images and add additional functionality.

Then there are official and user images, which can be both base and child images.

Official images are images that are officially maintained and supported by the folks at Docker. These are typically one word long. In the list of images above, the python , ubuntu , busybox and hello-world images are official images.

User images are images created and shared by users like you and me. They build on base images and add additional functionality. Typically, these are formatted as user/image-name .

Our First Image

Now that we have a better understanding of images, it's time to create our own. Our goal in this section will be to create an image that sandboxes a simple Flask application. For the purposes of this workshop, I've already created a fun little Flask app that displays a random cat .gif every time it is loaded - because you know, who doesn't like cats? If you haven't already, please go ahead and clone the repository locally like so -
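
# the tutorial's companion repo; the Flask app lives in the flask-app folder
git clone https://github.com/prakhar1989/docker-curriculum.git
cd docker-curriculum/flask-app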

This should be cloned on the machine where you are running the docker commands and not inside a docker container.

The next step now is to create an image with this web app. As mentioned above, all user images are based on a base image. Since our application is written in Python, the base image we're going to use will be Python 3 .

A Dockerfile is a simple text file that contains a list of commands that the Docker client calls while creating an image. It's a simple way to automate the image creation process. The best part is that the commands you write in a Dockerfile are almost identical to their equivalent Linux commands. This means you don't really have to learn new syntax to create your own dockerfiles.

The application directory does contain a Dockerfile but since we're doing this for the first time, we'll create one from scratch. To start, create a new blank file in our favorite text-editor and save it in the same folder as the flask app by the name of Dockerfile .

We start with specifying our base image. Use the FROM keyword to do that -
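
FROM python:3.8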

The next step usually is to write the commands of copying the files and installing the dependencies. First, we set a working directory and then copy all the files for our app.
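
# set a working directory for the app (the path is just a convention)
WORKDIR /usr/src/app

# copy all the files into the image
COPY . .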

Now, that we have the files, we can install the dependencies.
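
# install dependencies (assuming the app lists them in requirements.txt)
RUN pip install --no-cache-dir -r requirements.txt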

The next thing we need to specify is the port number that needs to be exposed. Since our flask app is running on port 5000 , that's what we'll indicate.
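
EXPOSE 5000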

The last step is to write the command for running the application, which is simply - python ./app.py . We use the CMD command to do that -
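
CMD ["python", "./app.py"]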

The primary purpose of CMD is to tell the container which command it should run when it is started. With that, our Dockerfile is now ready. This is how it looks -
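
FROM python:3.8

# set a working directory for the app
WORKDIR /usr/src/app

# copy all the files into the image
COPY . .

# install dependencies
RUN pip install --no-cache-dir -r requirements.txt

# define the port number the container should expose
EXPOSE 5000

# run the command
CMD ["python", "./app.py"]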

Now that we have our Dockerfile , we can build our image. The docker build command does the heavy-lifting of creating a Docker image from a Dockerfile .

The section below shows you the output of running the same. Before you run the command yourself (don't forget the period), make sure to replace my username with yours. This username should be the same one you created when you registered on Docker hub . If you haven't done that yet, please go ahead and create an account. The docker build command is quite simple - it takes an optional tag name with -t and a location of the directory containing the Dockerfile .
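
With catnip used here as a placeholder name for the cat-gif app (any name works) and your Docker Hub username in place of yourusername, the build command looks like this:

docker build -t yourusername/catnip .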

If you don't have the python:3.8 image, the client will first pull the image and then create your image. Hence, your output from running the command will look different from mine. If everything went well, your image should be ready! Run docker images and see if your image shows.

The last step in this section is to run the image and see if it actually works (replacing my username with yours).
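
docker run -p 8888:5000 yourusername/catnip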

The command we just ran used port 5000 for the server inside the container and exposed this externally on port 8888. Head over to the URL with port 8888, where your app should be live.

cat gif website

Congratulations! You have successfully created your first docker image.

Docker on AWS

What good is an application that can't be shared with friends, right? So in this section we are going to see how we can deploy our awesome application to the cloud so that we can share it with our friends! We're going to use AWS Elastic Beanstalk to get our application up and running in a few clicks. We'll also see how easy it is to make our application scalable and manageable with Beanstalk!

Docker push

The first thing that we need to do before we deploy our app to AWS is to publish our image on a registry which can be accessed by AWS. There are many different Docker registries you can use (you can even host your own ). For now, let's use Docker Hub to publish the image.

If this is the first time you are pushing an image, the client will ask you to login. Provide the same credentials that you used for logging into Docker Hub.

To publish, just type the below command remembering to replace the name of the image tag above with yours. It is important to have the format of yourusername/image_name so that the client knows where to publish.
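
docker push yourusername/catnip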

Once that is done, you can view your image on Docker Hub. For example, here's the web page for my image.

Note: One thing that I'd like to clarify before we go ahead is that it is not imperative to host your image on a public registry (or any registry) in order to deploy to AWS. In case you're writing code for the next million-dollar unicorn startup you can totally skip this step. The reason why we're pushing our images publicly is that it makes deployment super simple by skipping a few intermediate configuration steps.

Now that your image is online, anyone who has docker installed can play with your app by typing just a single command.
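
docker run -p 8888:5000 yourusername/catnip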

If you've pulled your hair out in setting up local dev environments / sharing application configuration in the past, you very well know how awesome this sounds. That's why Docker is so cool!

AWS Elastic Beanstalk (EB) is a PaaS (Platform as a Service) offered by AWS. If you've used Heroku, Google App Engine etc. you'll feel right at home. As a developer, you just tell EB how to run your app and it takes care of the rest - including scaling, monitoring and even updates. In April 2014, EB added support for running single-container Docker deployments which is what we'll use to deploy our app. Although EB has a very intuitive CLI , it does require some setup, and to keep things simple we'll use the web UI to launch our application.

To follow along, you need a functioning AWS account. If you haven't already, please go ahead and do that now - you will need to enter your credit card information. But don't worry, it's free and anything we do in this tutorial will also be free! Let's get started.

Here are the steps:

  • Login to your AWS console .
  • Click on Elastic Beanstalk. It will be in the compute section on the top left. Alternatively, you can access the Elastic Beanstalk console .

Elastic Beanstalk start

  • Click on "Create New Application" in the top right
  • Give your app a memorable (but unique) name and provide an (optional) description
  • In the New Environment screen, create a new environment and choose the Web Server Environment .
  • Fill in the environment information by choosing a domain. This URL is what you'll share with your friends so make sure it's easy to remember.
  • Under the Base configuration section, choose Docker from the Predefined platform dropdown.

Elastic Beanstalk Environment Type

  • Now we need to upload our application code. But since our application is packaged in a Docker container, we just need to tell EB about our container. Open the Dockerrun.aws.json file located in the flask-app folder and edit the Name of the image to your image's name. Don't worry, I'll explain the contents of the file shortly. When you are done, click on the radio button for "Upload your Code", choose this file, and click on "Upload".
  • Now click on "Create environment". The final screen that you see will have a few spinners indicating that your environment is being set up. It typically takes around 5 minutes for the first-time setup.

While we wait, let's quickly see what the Dockerrun.aws.json file contains. This file is basically an AWS specific file that tells EB details about our application and docker configuration.

The file should be pretty self-explanatory, but you can always reference the official documentation for more information. We provide the name of the image that EB should use along with a port that the container should open.
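For reference, a minimal single-container Dockerrun.aws.json along these lines (the image name is a placeholder; the container port matches the Flask app's port 5000):

    {
      "AWSEBDockerrunVersion": "1",
      "Image": {
        "Name": "yourusername/image_name",
        "Update": "true"
      },
      "Ports": [
        {
          "ContainerPort": 5000
        }
      ]
    }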

Hopefully by now, our instance should be ready. Head over to the EB page and you should see a green tick indicating that your app is alive and kicking.

EB deploy

Go ahead and open the URL in your browser and you should see the application in all its glory. Feel free to email / IM / snapchat this link to your friends and family so that they can enjoy a few cat gifs, too.

Once you're done basking in the glory of your app, remember to terminate the environment so that you don't end up getting charged for extra resources.

EB deploy

Congratulations! You have deployed your first Docker application! That might seem like a lot of steps, but with the command-line tool for EB you can almost mimic the functionality of Heroku in a few keystrokes! Hopefully, you agree that Docker takes away a lot of the pains of building and deploying applications in the cloud. I would encourage you to read the AWS documentation on single-container Docker environments to get an idea of what features exist.

In the next (and final) part of the tutorial, we'll up the ante a bit and deploy an application that mimics the real-world more closely; an app with a persistent back-end storage tier. Let's get straight to it!

Multi-container Environments

In the last section, we saw how easy and fun it is to run applications with Docker. We started with a simple static website and then tried a Flask app. Both of which we could run locally and in the cloud with just a few commands. One thing both these apps had in common was that they were running in a single container .

Those of you who have experience running services in production know that apps these days are usually not that simple. There's almost always a database (or some other kind of persistent storage) involved. Systems such as Redis and Memcached have become de rigueur in most web application architectures. Hence, in this section we are going to spend some time learning how to Dockerize applications that rely on different services to run.

In particular, we are going to see how we can run and manage multi-container docker environments. Why multi-container you might ask? Well, one of the key points of Docker is the way it provides isolation. The idea of bundling a process with its dependencies in a sandbox (called containers) is what makes this so powerful.

Just like it's a good strategy to decouple your application tiers, it is wise to keep containers for each of the services separate. Each tier is likely to have different resource needs and those needs might grow at different rates. By separating the tiers into different containers, we can compose each tier using the most appropriate instance type based on different resource needs. This also plays in very well with the whole microservices movement which is one of the main reasons why Docker (or any other container technology) is at the forefront of modern microservices architectures.

SF Food Trucks

The app that we're going to Dockerize is called SF Food Trucks. My goal in building this app was to have something that is useful (in that it resembles a real-world application), relies on at least one service, but is not too complex for the purpose of this tutorial. This is what I came up with.

SF Food Trucks

The app's backend is written in Python (Flask) and for search it uses Elasticsearch . Like everything else in this tutorial, the entire source is available on Github . We'll use this as our candidate application for learning how to build, run and deploy a multi-container environment.

First up, let's clone the repository locally.

The flask-app folder contains the Python application, while the utils folder has some utilities to load the data into Elasticsearch. The directory also contains some YAML files and a Dockerfile, all of which we'll see in greater detail as we progress through this tutorial. If you are curious, feel free to take a look at the files.

Now that you're excited (hopefully), let's think of how we can Dockerize the app. We can see that the application consists of a Flask backend server and an Elasticsearch service. A natural way to split this app would be to have two containers - one running the Flask process and another running the Elasticsearch (ES) process. That way if our app becomes popular, we can scale it by adding more containers depending on where the bottleneck lies.

Great, so we need two containers. That shouldn't be hard right? We've already built our own Flask container in the previous section. And for Elasticsearch, let's see if we can find something on the hub.

Quite unsurprisingly, there exists an officially supported image for Elasticsearch. To get ES running, we can simply use docker run and have a single-node ES container running locally within no time.

Note: Elastic, the company behind Elasticsearch, maintains its own registry for Elastic products. It's recommended to use the images from that registry if you plan to use Elasticsearch.

Let's first pull the image

and then run it in development mode by specifying ports and setting an environment variable that configures the Elasticsearch cluster to run as a single-node.
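Roughly, that looks like this (the exact version tag is up to you; --name es and the single-node setting are the important parts):

    $ docker pull docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    $ docker run -d --name es -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.17.0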

Note: If your container runs into memory issues, you might need to tweak some JVM flags to limit its memory consumption.

As seen above, we use --name es to give our container a name, which makes it easy to refer to in subsequent commands. Once the container is started, we can inspect the logs by running docker container logs with the container name (or ID). You should see logs similar to the below if Elasticsearch started successfully.

Note: Elasticsearch takes a few seconds to start so you might need to wait before you see initialized in the logs.

Now, let's see if we can send a request to the Elasticsearch container. We use port 9200 to send a cURL request to the container.
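For example, the following should return a small JSON document with cluster information:

    $ curl 0.0.0.0:9200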

Sweet! It's looking good! While we are at it, let's get our Flask container running too. But before we get to that, we need a Dockerfile . In the last section, we used python:3.8 image as our base image. This time, however, apart from installing Python dependencies via pip , we want our application to also generate our minified Javascript file for production. For this, we'll require Nodejs. Since we need a custom build step, we'll start from the ubuntu base image to build our Dockerfile from scratch.

Note: if you find that an existing image doesn't cater to your needs, feel free to start from another base image and tweak it yourself. For most of the images on Docker Hub, you should be able to find the corresponding Dockerfile on Github. Reading through existing Dockerfiles is one of the best ways to learn how to roll your own.

Our Dockerfile for the flask app looks like below -
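A sketch of what such a Dockerfile might look like - the exact package names and Node setup are assumptions, but the structure follows the description below:

    FROM ubuntu:18.04

    # install system-wide dependencies: Python and Node
    RUN apt-get -yqq update && \
        apt-get -yqq install python3-pip python3-dev curl gnupg && \
        curl -sL https://deb.nodesource.com/setup_10.x | bash - && \
        apt-get -yqq install nodejs

    # copy the application code into the container
    ADD flask-app /opt/flask-app
    WORKDIR /opt/flask-app

    # install app-specific dependencies: npm packages and the JS build, then Python packages
    RUN npm install && npm run build
    RUN pip3 install -r requirements.txt

    EXPOSE 5000
    CMD [ "python3", "./app.py" ]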

Quite a few new things here, so let's quickly go over this file. We start off with the Ubuntu LTS base image and use the package manager apt-get to install the dependencies, namely Python and Node. The -yqq flags suppress output and assume "Yes" to all prompts.

We then use the ADD command to copy our application into a new directory in the container - /opt/flask-app . This is where our code will reside. We also set this as our working directory, so that the following commands run in the context of this location. Now that our system-wide dependencies are installed, we get around to installing the app-specific ones. First off we tackle Node by installing the packages from npm and running the build command as defined in our package.json file. We finish the file off by installing the Python packages, exposing the port and defining the CMD to run, as we did in the last section.

Finally, we can go ahead, build the image and run the container (replace yourusername with your username below).
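For example (foodtrucks-web is the image name we'll keep using for the rest of this tutorial):

    $ docker build -t yourusername/foodtrucks-web .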

On the first run, this will take some time as the Docker client downloads the ubuntu image, runs all the commands and prepares your image. Re-running docker build after any subsequent changes you make to the application code will be almost instantaneous. Now let's try running our app.
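Presumably something like:

    $ docker run -P --rm yourusername/foodtrucks-web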

Oops! Our flask app was unable to run since it was unable to connect to Elasticsearch. How do we tell one container about the other container and get them to talk to each other? The answer lies in the next section.

Docker Network

Before we talk about the features Docker provides especially to deal with such scenarios, let's see if we can figure out a way to get around the problem. Hopefully, this should give you an appreciation for the specific feature that we are going to study.

Okay, so let's run docker container ls (which is the same as docker ps ) and see what we have.

So we have one ES container running on 0.0.0.0:9200 port which we can directly access. If we can tell our Flask app to connect to this URL, it should be able to connect and talk to ES, right? Let's dig into our Python code and see how the connection details are defined.

To make this work, we need to tell the Flask container that the ES container is running on the 0.0.0.0 host (the port is 9200 by default), and that should make it work, right? Unfortunately, that is not correct since the IP 0.0.0.0 is the IP used to access the ES container from the host machine, i.e. from my Mac. Another container will not be able to reach ES on that address. Okay, if not that IP, then which IP address should the ES container be accessible by? I'm glad you asked.

Now is a good time to start our exploration of networking in Docker. When docker is installed, it creates three networks automatically.

The bridge network is the network in which containers are run by default. So that means that when I ran the ES container, it was running in this bridge network. To validate this, let's inspect the network.
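That is:

    $ docker network inspect bridge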

You can see that our container 277451c15ec1 is listed under the Containers section in the output. What we also see is the IP address this container has been allotted - 172.17.0.2 . Is this the IP address that we're looking for? Let's find out by running our flask container and trying to access this IP.

This should be fairly straightforward to you by now. We start the container in interactive mode with the bash process. The --rm flag is convenient for running one-off commands since the container gets cleaned up when its work is done. We try a curl but we need to install it first. Once we do that, we see that we can indeed talk to ES on 172.17.0.2:9200 . Awesome!
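Roughly what that looks like (using the image we built above):

    $ docker run -it --rm yourusername/foodtrucks-web bash

and then, inside the container:

    $ apt-get -yqq update && apt-get -yqq install curl
    $ curl 172.17.0.2:9200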

Although we have figured out a way to make the containers talk to each other, there are still two problems with this approach -

How do we tell the Flask container that es hostname stands for 172.17.0.2 or some other IP since the IP can change?

Since the bridge network is shared by every container by default, this method is not secure . How do we isolate our network?

The good news is that Docker has a great answer to our questions. It allows us to define our own networks while keeping them isolated using the docker network command.

Let's first go ahead and create our own network.
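The network name foodtrucks-net is the one we'll use for the rest of this section:

    $ docker network create foodtrucks-net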

The network create command creates a new bridge network, which is what we need at the moment. In terms of Docker, a bridge network uses a software bridge which allows containers connected to the same bridge network to communicate, while providing isolation from containers which are not connected to that bridge network. The Docker bridge driver automatically installs rules in the host machine so that containers on different bridge networks cannot communicate directly with each other. There are other kinds of networks that you can create, and you are encouraged to read about them in the official docs .

Now that we have a network, we can launch our containers inside this network using the --net flag. Let's do that - but first, in order to launch a new container with the same name, we will stop and remove our ES container that is running in the bridge (default) network.
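In rough terms (the Elasticsearch image and tag are the ones we pulled earlier):

    $ docker container stop es
    $ docker container rm es
    $ docker run -d --name es --net foodtrucks-net -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.17.0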

As you can see, our es container is now running inside the foodtrucks-net bridge network. Now let's see what happens when we launch a container in our foodtrucks-net network.
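One way to check is to start a throwaway container on the same network and resolve the es hostname from inside it, roughly like so:

    $ docker run -it --rm --net foodtrucks-net yourusername/foodtrucks-web bash

and then, inside the container (remember that curl has to be installed first):

    $ apt-get -yqq update && apt-get -yqq install curl
    $ curl es:9200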

Wohoo! That works! On user-defined networks like foodtrucks-net, containers can not only communicate by IP address, but can also resolve a container name to an IP address. This capability is called automatic service discovery . Great! Let's launch our Flask container for real now -
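Presumably something like this, joining the foodtrucks-net network and publishing port 5000 (the container name is an assumption):

    $ docker run -d --net foodtrucks-net -p 5000:5000 --name foodtrucks-web yourusername/foodtrucks-web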

Head over to http://0.0.0.0:5000 and see your glorious app live! Although that might have seemed like a lot of work, we actually just typed 4 commands to go from zero to running. I've collated the commands in a bash script .

Now imagine you are distributing your app to a friend, or running on a server that has docker installed. You can get a whole app running with just one command!

And that's it! If you ask me, I find this to be an extremely awesome, and a powerful way of sharing and running your applications!

Docker Compose

Till now we've spent all our time exploring the Docker client. In the Docker ecosystem, however, there are a bunch of other open-source tools which play very nicely with Docker. A few of them are -

  • Docker Machine - Create Docker hosts on your computer, on cloud providers, and inside your own data center
  • Docker Compose - A tool for defining and running multi-container Docker applications.
  • Docker Swarm - A native clustering solution for Docker
  • Kubernetes - Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

In this section, we are going to look at one of these tools, Docker Compose, and see how it can make dealing with multi-container apps easier.

The background story of Docker Compose is quite interesting. Roughly around January 2014, a company called OrchardUp launched a tool called Fig. The idea behind Fig was to make isolated development environments work with Docker. The project was very well received on Hacker News - I vaguely remember reading about it but didn't quite get the hang of it.

The first comment on the forum actually does a good job of explaining what Fig is all about.

So really at this point, that's what Docker is about: running processes. Now Docker offers a quite rich API to run the processes: shared volumes (directories) between containers (i.e. running images), forward port from the host to the container, display logs, and so on. But that's it: Docker as of now, remains at the process level.
While it provides options to orchestrate multiple containers to create a single "app", it doesn't address the management of such group of containers as a single entity. And that's where tools such as Fig come in: talking about a group of containers as a single entity. Think "run an app" (i.e. "run an orchestrated cluster of containers") instead of "run a container".

It turns out that a lot of people using docker agree with this sentiment. Slowly and steadily as Fig became popular, Docker Inc. took notice, acquired the company and re-branded Fig as Docker Compose.

So what is Compose used for? Compose is a tool that is used for defining and running multi-container Docker apps in an easy way. It provides a configuration file called docker-compose.yml that can be used to bring up an application and the suite of services it depends on with just one command. Compose works in all environments: production, staging, development, testing, as well as CI workflows, although Compose is ideal for development and testing environments.

Let's see if we can create a docker-compose.yml file for our SF-Foodtrucks app and evaluate whether Docker Compose lives up to its promise.

The first step, however, is to install Docker Compose. If you're running Windows or Mac, Docker Compose already comes bundled with Docker Desktop. Linux users can easily get their hands on Docker Compose by following the instructions in the docs. Since Compose is written in Python, you can also simply do pip install docker-compose . Test your installation with -
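For example:

    $ docker-compose --version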

Now that we have it installed, we can jump on the next step i.e. the Docker Compose file docker-compose.yml . The syntax for YAML is quite simple and the repo already contains the docker-compose file that we'll be using.
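A sketch of what that file looks like, based on the description below (version tags, the volume name and paths are assumptions):

    version: "3"
    services:
      es:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
        container_name: es
        environment:
          - discovery.type=single-node
        ports:
          - 9200:9200
        volumes:
          - esdata1:/usr/share/elasticsearch/data
      web:
        image: yourusername/foodtrucks-web
        command: python3 app.py
        depends_on:
          - es
        ports:
          - 5000:5000
        volumes:
          - ./flask-app:/opt/flask-app
    volumes:
      esdata1:
        driver: local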

Let me break down what the file above means. Under services, we define the names of our two services - es and web . The image parameter is always required, and for each service that we want Docker to run, we can add additional parameters. For es , we just refer to the elasticsearch image available on the Elastic registry. For our Flask app, we refer to the image that we built at the beginning of this section.

Other parameters such as command and ports provide more information about the container. The volumes parameter specifies a mount point in our web container where the code will reside. This is purely optional and is useful if you need access to logs, etc. We'll later see how this can be useful during development. Refer to the online reference to learn more about the parameters this file supports. We also add volumes for the es container so that the data we load persists between restarts. We also specify depends_on , which tells docker to start the es container before web . You can read more about it on docker compose docs .

Note: You must be inside the directory with the docker-compose.yml file in order to execute most Compose commands.

Great! Now that the file is ready, let's see docker-compose in action. But before we start, we need to make sure the ports and names are free. So if you have the Flask and ES containers running, let's turn them off.
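Assuming the container names we used above:

    $ docker stop es foodtrucks-web
    $ docker rm es foodtrucks-web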

Now we can run docker-compose . Navigate to the food trucks directory and run docker-compose up .

Head over to the IP to see your app live. That was amazing, wasn't it? Just a few lines of configuration and we have two Docker containers running successfully in unison. Let's stop the services and re-run them in detached mode.
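That is, stop the running stack (Ctrl+C if it's running in the foreground), bring it back with the -d flag, and check on it:

    $ docker-compose up -d
    $ docker-compose ps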

Unsurprisingly, we can see both the containers running successfully. Where do the names come from? Those were created automatically by Compose. But does Compose also create the network automatically? Good question! Let's find out.

First off, let us stop the services from running. We can always bring them back up in just one command. Data volumes will persist, so it’s possible to start the cluster again with the same data using docker-compose up. To destroy the cluster and the data volumes, just type docker-compose down -v .

While we're at it, we'll also remove the foodtrucks network that we created last time.
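Roughly:

    $ docker-compose down -v
    $ docker network rm foodtrucks-net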

Great! Now that we have a clean slate, let's re-run our services and see if Compose does its magic.

So far, so good. Time to see if any networks were created.
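That is:

    $ docker-compose up -d
    $ docker network ls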

You can see that Compose went ahead and created a new network called foodtrucks_default and attached both the new services to that network so that each is discoverable by the other. Each container for a service joins the default network and is both reachable by other containers on that network and discoverable by them at a hostname identical to the container name.

Development Workflow

Before we jump to the next section, there's one last thing I wanted to cover about docker-compose. As stated earlier, docker-compose is really great for development and testing. So let's see how we can configure compose to make our lives easier during development.

Throughout this tutorial, we've worked with ready-made Docker images. While we've built images from scratch, we haven't touched any application code yet and have mostly restricted ourselves to editing Dockerfiles and YAML configurations. One thing you must be wondering is what the workflow looks like during development. Is one supposed to keep creating Docker images for every change, then publish and run them to see if the changes work as expected? I'm sure that sounds super tedious. There has to be a better way. In this section, that's what we're going to explore.

Let's see how we can make a change in the Foodtrucks app we just ran. Make sure you have the app running.

Now let's see if we can change this app to display a Hello world! message when a request is made to /hello route. Currently, the app responds with a 404.
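You can confirm the 404 with a quick request:

    $ curl -I 0.0.0.0:5000/hello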

Why does this happen? Since ours is a Flask app, we can see app.py ( link ) for answers. In Flask, routes are defined with @app.route syntax. In the file, you'll see that we only have three routes defined - / , /debug and /search . The / route renders the main app, the debug route is used to return some debug information and finally search is used by the app to query elasticsearch.

Given that context, how would we add a new route for hello ? You guessed it! Let's open flask-app/app.py in our favorite editor and make the following change
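Something along these lines - the route comes from the description above, while the handler name is an assumption:

    @app.route('/hello')
    def hello():
        return 'Hello world!'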

Now let's try making a request again

Oh no! That didn't work! What did we do wrong? While we did make the change in app.py , the file resides on our machine (i.e. the host machine), but since Docker is running our containers based off the yourusername/foodtrucks-web image, it doesn't know about this change. To validate this, let's try the following -
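Roughly:

    $ docker-compose run web bash

and then, inside the container:

    $ grep hello app.py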

What we're trying to do here is to validate that our changes are not in the app.py that's running in the container. We do this by running the command docker-compose run , which is similar to its cousin docker run but takes an additional argument for the service (which is web in our case). As soon as we run bash , the shell opens in /opt/flask-app as specified in our Dockerfile . From the grep command we can see that our changes are not in the file.

Let's see how we can fix it. First off, we need to tell Docker Compose not to use the image and instead use the files locally. We'll also set debug mode to true so that Flask knows to reload the server when app.py changes. Replace the web portion of the docker-compose.yml file like so:
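A sketch of the updated web service - the build context and the DEBUG variable name are assumptions consistent with the description above:

    web:
      build: . # build from the local Dockerfile instead of pulling the image
      command: python3 app.py
      environment:
        - DEBUG=True # tells Flask to reload the server when app.py changes
      depends_on:
        - es
      ports:
        - 5000:5000
      volumes:
        - ./flask-app:/opt/flask-app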

With that change ( diff ), let's stop and start the containers.
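For example:

    $ docker-compose down
    $ docker-compose up -d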

As a final step, let's make the change in app.py by adding the new route. Now let's try the curl again.
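For example:

    $ curl 0.0.0.0:5000/hello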

Wohoo! We get a valid response! Try playing around by making more changes in the app.

That concludes our tour of Docker Compose. With Docker Compose, you can also pause your services, run a one-off command on a container and even scale the number of containers. I also recommend you check out a few other use cases of Docker Compose. Hopefully, I was able to show you how easy it is to manage multi-container environments with Compose. In the final section, we are going to deploy our app to AWS!

AWS Elastic Container Service

In the last section we used docker-compose to run our app locally with a single command: docker-compose up . Now that we have a functioning app we want to share it with the world, get some users, make tons of money and buy a big house in Miami. The last three are beyond the scope of this tutorial, so we'll spend our time instead on figuring out how we can deploy our multi-container apps on the cloud with AWS.

If you've read this far you are pretty much convinced that Docker is a pretty cool technology. And you are not alone. Seeing the meteoric rise of Docker, almost all Cloud vendors started working on adding support for deploying Docker apps on their platform. As of today, you can deploy containers on Google Cloud Platform , AWS , Azure and many others. We already got a primer on deploying single container apps with Elastic Beanstalk and in this section we are going to look at Elastic Container Service (or ECS) by AWS.

AWS ECS is a scalable and super flexible container management service that supports Docker containers. It allows you to operate a Docker cluster on top of EC2 instances via an easy-to-use API. Where Beanstalk came with reasonable defaults, ECS allows you to completely tune your environment as per your needs. This makes ECS, in my opinion, quite complex to get started with.

Luckily for us, ECS has a friendly CLI tool that understands Docker Compose files and automatically provisions the cluster on ECS! Since we already have a functioning docker-compose.yml, it shouldn't take much effort to get up and running on AWS. So let's get started!

The first step is to install the CLI. Instructions to install the CLI on both Mac and Linux are explained very clearly in the official docs . Go ahead, install the CLI and when you are done, verify the install by running
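For example:

    $ ecs-cli --version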

Next, we'll be working on configuring the CLI so that we can talk to ECS. We'll be following the steps as detailed in the official guide on AWS ECS docs. In case of any confusion, please feel free to refer to that guide.

The first step will involve creating a profile that we'll use for the rest of the tutorial. To continue, you'll need your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY . To obtain these, follow the steps as detailed under the section titled Access Key and Secret Access Key on this page .
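Creating the profile looks roughly like this (the profile name is an example):

    $ ecs-cli configure profile --profile-name foodtrucks-profile --access-key $AWS_ACCESS_KEY_ID --secret-key $AWS_SECRET_ACCESS_KEY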

Next, we need to get a keypair which we'll be using to log into the instances. Head over to your EC2 Console and create a new keypair. Download the keypair and store it in a safe location. Another thing to note before you move away from this screen is the region name. In my case, I have named my key - ecs and set my region as us-east-1 . This is what I'll assume for the rest of this walkthrough.

EC2 Keypair

The next step is to configure the CLI.
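A sketch of that step, using the region and cluster name assumed in this walkthrough:

    $ ecs-cli configure --region us-east-1 --cluster foodtrucks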

We provide the configure command with the region name we want our cluster to reside in and a cluster name. Make sure you provide the same region name that you used when creating the keypair. If you've not configured the AWS CLI on your computer before, you can use the official guide , which explains everything in great detail on how to get everything going.

The next step enables the CLI to create a CloudFormation template.
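Something along these lines (keypair name, size and instance type as described below):

    $ ecs-cli up --keypair ecs --capability-iam --size 1 --instance-type t2.medium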

Here we provide the name of the keypair we downloaded initially ( ecs in my case), the number of instances that we want to use ( --size ) and the type of instances that we want the containers to run on. The --capability-iam flag tells the CLI that we acknowledge that this command may create IAM resources.

The last and final step is where we'll use our docker-compose.yml file. We'll need to make a few minor changes, so instead of modifying the original, let's make a copy of it. The contents of this file (after making the changes) look like (below) -
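A sketch of that copy - call it aws-compose.yml; the memory limits, CPU shares and log configuration are illustrative values matching the description below:

    es:
      image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
      environment:
        - discovery.type=single-node
      cpu_shares: 100
      mem_limit: 3621440000
      logging:
        driver: awslogs
        options:
          awslogs-group: foodtrucks
          awslogs-region: us-east-1
          awslogs-stream-prefix: es
    web:
      image: yourusername/foodtrucks-web
      cpu_shares: 100
      mem_limit: 262144000
      ports:
        - "80:5000"
      logging:
        driver: awslogs
        options:
          awslogs-group: foodtrucks
          awslogs-region: us-east-1
          awslogs-stream-prefix: web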

The only changes we made from the original docker-compose.yml are providing the mem_limit (in bytes) and cpu_shares values for each container and adding some logging configuration. This allows us to view logs generated by our containers in AWS CloudWatch . Head over to CloudWatch to create a log group called foodtrucks . Note that since Elasticsearch typically ends up taking more memory, we've given it a memory limit of around 3.4 GB. Another thing we need to do before we move on to the next step is to publish our image on Docker Hub.
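That is:

    $ docker push yourusername/foodtrucks-web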

Great! Now let's run the final command that will deploy our app on ECS!
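Assuming the copy of the compose file is named aws-compose.yml:

    $ ecs-cli compose --file aws-compose.yml up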

It's not a coincidence that the invocation above looks similar to the one we used with Docker Compose . If everything went well, you should see a desiredStatus=RUNNING lastStatus=RUNNING as the last line.

Awesome! Our app is live, but how can we access it?
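The ecs-cli ps command lists the running tasks along with the ports and the public address they are reachable on:

    $ ecs-cli ps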

Go ahead and open http://54.86.14.14 in your browser and you should see the Food Trucks in all its black-yellow glory! Since we're on the topic, let's see how our AWS ECS console looks.

Cluster

We can see above that our ECS cluster called 'foodtrucks' was created and is now running 1 task with 2 container instances. Spend some time browsing this console to get the hang of all the options that are here.

Once you've played around with the deployed app, remember to turn down the cluster -
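For example:

    $ ecs-cli down --force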

So there you have it. With just a few commands we were able to deploy our awesome app on the AWS cloud!

And that's a wrap! After a long, exhaustive but fun tutorial you are now ready to take the container world by storm! If you followed along till the very end then you should definitely be proud of yourself. You learned how to set up Docker, run your own containers, play with static and dynamic websites and, most importantly, got hands-on experience with deploying your applications to the cloud!

I hope that finishing this tutorial makes you more confident in your abilities to deal with servers. When you have an idea of building your next app, you can be sure that you'll be able to get it in front of people with minimal effort.

Your journey into the container world has just started! My goal with this tutorial was to whet your appetite and show you the power of Docker. In the sea of new technology, it can be hard to navigate the waters alone and tutorials such as this one can provide a helping hand. This is the Docker tutorial I wish I had when I was starting out. Hopefully, it served its purpose of getting you excited about containers so that you no longer have to watch the action from the sides.

Below are a few additional resources that will be beneficial. For your next project, I strongly encourage you to use Docker. Keep in mind - practice makes perfect!

Additional Resources

  • Awesome Docker
  • Docker Weekly and archives
  • Codeship Blog

Off you go, young padawan!

Give Feedback

Now that the tutorial is over, it's my turn to ask questions. How did you like the tutorial? Did you find the tutorial to be a complete mess or did you have fun and learn something?

Send in your thoughts directly to me or just create an issue . I'm on Twitter , too, so if that's your deal, feel free to holler there!

I would totally love to hear about your experience with this tutorial. Give suggestions on how to make this better or let me know about my mistakes. I want this tutorial to be one of the best introductory tutorials on the web and I can't do it without your help.


How to Use Docker to Containerize Java Web Applications: Tutorial for Beginners

Containers are no longer a thing of the future – they are all around us. Companies use them to run everything – from the simplest scripts to large applications. You create a container and run the same thing locally, in the test environment, in QA, and finally in production. A stateless box built with minimal requirements that, unlike a virtual machine, doesn't need to virtualize the whole operating system. No issues with libraries, no issues with interoperability – a dream come true, at least in the majority of cases. Those are only some of the benefits of running containerized applications.

And when it comes to a lot of us, developers, when we say containers we think Docker. That’s because of Docker’s simplicity, ecosystem, and availability on various platforms. In this blog post we will learn how to create a web application in Java, so that it can run in a container, how to build the container, and finally, how to run it and make it observable .

Benefits of Docker Containers

You can imagine the container image as a package that packs up your application code along with everything that is needed to run it, but nothing more. It is lightweight, as it contains only the application, its dependencies, and the runtime environment. It is standalone, as no external dependencies are needed. It is secure because of the isolation that is turned on by default.

A container image becomes a container at runtime, when it is run inside an environment capable of running container images – in our case the Docker Engine . Docker Engine is available for major operating systems. From a user perspective, as long as you are using images that can be run on your OS, you don't have to worry about any kind of compatibility issues.

Docker itself is more than just the Docker Engine. It is the whole environment that helps you run, build, and manage container images. It comes with a variety of tools that speed up working with containers and help you achieve your goal faster.

Should You Run Java in Docker Containers?

But can I run my Java applications on Docker, inside a container? The answer is yes, you can. Do you need Docker to run Java applications? You probably know the answer by now: no, you don't. You will be perfectly fine running them in a dedicated environment, exactly as you have been running them until now. But if you would like to move forward, use exactly the same package in all environments, and be sure that everything will work in the same way – just use containers, and soon you won't be able to live without them.

What Is a Java Docker Container?

Before we get into the details of how to work with containers, let’s answer one of the questions that appear – how does Java fit into all of that?

The answer is really straightforward – a Java Docker container is a standard container image packed with the Java runtime environment and all the dependencies needed to run your application. Such a container can be downloaded into any environment where Docker Engine is installed and simply started. No additional installation is needed – no Java Runtime Environment, nothing. Just Docker Engine and the container image containing your Java application. It is that simple.

How Do You Dockerize a Java Web Application: A Step-by-Step Tutorial for Developers

Let’s now have a look at how to work with Docker itself, how to create an image, start it and work with it.

Installing Docker

Installing the Docker Engine depends on the operating system on which you want to run your Java container images. You can install it by downloading the installation package for your operating system from Docker's website or by using a package manager suitable for your target platform. If you don't have Docker installed and would like to learn more about it, I encourage you to take the time now to look at the official Docker installation guide page, and after that continue with the rest of the article.

The Dockerfile

To build a new Java Docker image we need a file called Dockerfile . It is a text file that contains the instructions Docker executes during the container image build. That is all that is needed.

Here is an example Dockerfile for a hypothetical Java application:
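A minimal sketch consistent with the walkthrough below (the exact OpenJDK tag is an assumption):

    FROM openjdk:17
    WORKDIR /application
    COPY awesome-app-1.0.jar .
    CMD ["java", "-jar", "awesome-app-1.0.jar"]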

And that’s all. As you can see, this is pretty straightforward, but let’s discuss that step by step.

The first thing in the Dockerfile is the line that tells Docker which image we would like to use as the base container image in our application:

Docker images can be inherited from other images. For example, the above Java Docker image is the official image of the OpenJDK that comes with all the needed packages and tools required to run Java applications. You can find all the OpenJDK container images in the Docker Hub . We could of course list all the commands that are required to install Java Development Kit manually, but this makes it simpler.

The next step is calling the WORKDIR command. We use it to set the working directory to /application so that it is easier to run the rest of the commands.

The COPY command copies the awesome-app-1.0.jar file, which contains our application, into the working directory - in our case, /application.

And finally, the CMD command executes a command. In our example, it just runs java -jar awesome-app-1.0.jar, launching the application. And that is all that we need.

Of course, there are other commands. Just to mention some:

  • RUN  allows us to run any UNIX command during the container image build
  • EXPOSE  exposes the ports from the container itself
  • ENV  sets the environmental variable
  • ADD  which copies new files to the container image file system

You can learn more about all of them from the Docker file reference  available in the official Docker documentation.

You may have noticed that we’ve used an already built application from the local file system. This is a viable solution, but you may also want to build the application during Java container image build. We will look into how to achieve that just after we will learn how to work with the image.

Working with the Java Docker Image

Once we have the Dockerfile created, we can build our Java container image and then run it, turning it into a running container.

To build the image you would run a command like this:
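For example, using the name and tag we'll discuss in a moment:

    $ docker build -t sematext/docker-example-demo:0.0.1-SNAPSHOT .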

The above command tells Docker to build the image, give it a name and a tag (we will get back to that a bit later), and use the Dockerfile  located in the current working directory (the .  character).

Once the build is finished, we can start the container by running:
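For example:

    $ docker run sematext/docker-example-demo:0.0.1-SNAPSHOT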

Those are the basics, but we know that Java projects are not built manually and don’t come prepackaged already. Instead, we use one of the build tools.

Let’s now look into Maven  and Gradle , two popular Java dependency and build management tools, and how to use them with Docker.

Building Java Docker Images with Maven

To demo how to create a Java Docker image with Maven I used the Spring Initializr  as the simplest way to quickly build a web application using Maven. I just generated the simplest Maven project with Spring Web  as the only dependency.

I downloaded the created archive and unpacked it which resulted in the following directory structure:
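Roughly the standard Spring Initializr layout (the details below are an assumption):

    docker-example-demo/
    ├── mvnw
    ├── mvnw.cmd
    ├── pom.xml
    └── src/
        ├── main/
        │   ├── java/
        │   └── resources/
        └── test/
            └── java/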

In addition to that, I created a simple Java class that works as the Spring RestController:
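Something along these lines - the class name, package and response text are illustrative:

    package com.example.demo; // hypothetical package name

    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class HelloController {

        // respond to GET / with a simple text message
        @GetMapping("/")
        public String hello() {
            return "Hello from a Docker container!";
        }
    }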

The Dockerfile  that would use Maven and build the container for us could look as follows:
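A sketch matching that description (the exact Maven image tag and jar name are assumptions):

    FROM maven:3.8-openjdk-11
    RUN mkdir /project
    COPY . /project
    WORKDIR /project
    RUN mvn clean package
    CMD ["java", "-jar", "target/docker-example-demo-0.0.1-SNAPSHOT.jar"]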

The Dockerfile  here is pretty straightforward. We use the Maven Java container image that includes Maven 3.8 and JDK 11. Next, we create the /project  directory, copy the contents of the local directory to it, set the working directory to the directory that we created, and run the Maven build command. The last step is responsible for running our Docker Java container.

Now let’s try building the image by running:

Once the build is done, we can try running the container by running the following command:
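For example:

    $ docker run -d -p 8080:8080 sematext/docker-example-demo:0.0.1-SNAPSHOT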

The command tells the Docker Engine to run the given Docker container in the background (the -d  option) and expose the 8080  port from inside the container to the outside world. If we didn’t do that, this port would not be reachable from the outside world. We could also use the EXPOSE  command in the Dockerfile  if we want to have some ports opened by default.

To see what is running inside the Docker Engine, we can just run the following command:
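That command is simply:

    $ docker ps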

The output lists the running containers along with details such as uptime and name.

It tells us that our Java Docker container has been running for 2 minutes and is called infallible_driscoll – yes, Docker will give random names to our containers, though we can control that as well. Random names may be fun and entertaining, but in production environments you may want to give your containers a meaningful name – for example, one that includes the name of the application running in it. That way, you'll be able to easily identify what is running in the container without looking at the container image name or inspecting it.

And test if it really runs by running a simple curl  command:
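For example:

    $ curl localhost:8080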

The result of that command is the response returned by our controller.

So our Spring RestController works.

Building Java Container Images with Gradle

Building the Java Docker image for Gradle is no different from what we just discussed when it comes to Maven. The only difference would be that we have to use Gradle as the build tool of choice, and the Dockerfile  would look a bit different.

I won’t repeat all the steps here as they would be the same, but I wanted to show you how the Dockerfile  looks when working with Gradle:
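A sketch along the same lines (the Gradle image tag and the jar location are assumptions):

    # pin an explicit Gradle version tag in practice
    FROM gradle:jdk11
    RUN mkdir /project
    COPY . /project
    WORKDIR /project
    RUN gradle build
    CMD ["java", "-jar", "build/libs/docker-example-demo-0.0.1-SNAPSHOT.jar"]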

You can see it is very similar. The only difference is the base Java container image and the command that we run to build the project. Also, I adjusted the last command, the one responsible for running the application, just because of the final location of the jar file. Everything else stays the same – not only when it comes to the configuration but also when it comes to the commands that are run.

Tagging Docker Container Image

I would like to mention one more thing when building Java Docker images – tags . Each container image has a tag – you can think of it as the version associated with the image, one that can be used to fetch and use that specific version. By default, Docker will use a tag called latest , which points to the most recent version. But using the latest tag is not the best option. You may want to stick to a given major version so as not to run into compatibility issues.

The same goes with your container images – they should have tags, so you know what you are running. In our examples, we used the 0.0.1-SNAPSHOT  as the tag. During the build process, we used the following command:
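That is:

    $ docker build -t sematext/docker-example-demo:0.0.1-SNAPSHOT .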

The sematext in the above command is the organization, and docker-example-demo is the name of the container image. Together they create the full Java container image name. The 0.0.1-SNAPSHOT is the tag, and you provide it after the : character. You should pay attention to proper versioning of your containers so that the tags are meaningful and do not confuse your users.

Starting and Stopping the Docker Container

The first time you want to start the container you use the docker run  command to create a writable container layer over the container image that you would like to run, for example:
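For example:

    $ docker run -d --name my_cnt sematext/docker-example-demo:0.0.1-SNAPSHOT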

This is only needed the first time you run the container image. In the above example we run our built container image and we assigned a name my_cnt  to that container. We can now try stopping it by running:
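For example:

    $ docker stop my_cnt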

That will stop the container and along with it the application running inside the container itself. We can then start the container again, but this time not using the run  command, but start . We do it like this:
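That is:

    $ docker start my_cnt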

Finally, when the container is stopped and we no longer need it we can remove it by using the rm command:
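For example:

    $ docker rm my_cnt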

This is a simple control over the lifecycle of the container.

Publishing the Docker Container Image

Finally, there is one last thing that you should be aware of – publishing of Java Docker container images. By default, when you build your container image it will be stored on your local disk and it won’t be available to any other system. That’s why we need to publish the image to a repository, such as the Docker Hub.

After logging in with your Docker account (you can create one at https://hub.docker.com ) just select the tagged  container image and run the following command:
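That is:

    $ docker push sematext/docker-example-demo:0.0.1-SNAPSHOT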

After a while, depending on your internet connection and the size of the Java container image, your image will be pushed to Docker Hub. From now on the image is available to be pulled from Docker Hub, which means other machines can now access it. To do that you just use the pull  command:
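For example:

    $ docker pull sematext/docker-example-demo:0.0.1-SNAPSHOT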

Docker Hub is not the only option for pushing containers to remote repositories; you can also run your own container registry. For some organizations this is a necessity. We won't be talking about such options, because that is out of the scope of this post, but I wanted to mention that such possibilities exist.

Best Practices to Build a Java Container with Docker

When creating and running Java web applications inside Docker containers, there are best practices that you should follow if you want your environment to run for a long time without any kind of issues.

1. Use Explicit Versions

When working with dependencies in your software, you usually set a given library that your application uses to a certain version. If that is a direct dependency, it won’t be upgraded until you manually change the version. And you should do the same with Docker container images.

By default, Docker will try to pull a container image with the tag called latest , which means that you may expect the image version to change over time. You shouldn’t rely on that mechanism. Instead, you should specify the exact version of the container image you want to use. It doesn’t matter if you use that image inside the Dockerfile or you are just running the container. Try sticking with the version, or you may encounter unexpected results when the version changes.

2. Use Multi-Stage Builds

When discussing the creation of the Java Docker container images, we said it is sometimes beneficial to build the application and the container image during a single build. While this is true, we don’t necessarily want to build them in the same Dockerfile. Such a Dockerfile can soon become large, complicated, and thus hard to maintain.

Luckily Docker comes with a solution for that. Instead of having a single, large Dockerfile, we can divide the creation into multiple stages – one that builds the application and one that builds the Java container image.

For example, we could introduce two steps in our build process, and this example Dockerfile illustrates that:
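A sketch of such a two-stage Dockerfile (the image tags and jar name are assumptions):

    # stage 1: build the application with Maven
    FROM maven:3.8-openjdk-11 AS build_step
    RUN mkdir /project
    COPY . /project
    WORKDIR /project
    RUN mvn clean package

    # stage 2: copy only the built jar into a slim runtime image
    FROM openjdk:11-jre-slim
    RUN mkdir /application
    COPY --from=build_step /project/target/docker-example-demo-0.0.1-SNAPSHOT.jar /application/app.jar
    WORKDIR /application
    CMD ["java", "-jar", "app.jar"]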

In the first step, which we refer to as build_step , we build the project using Maven and put the build output in the directory called /project . The second step of the build uses a different image, creates a new folder called /application , and copies the build result into that folder. The key difference is that the COPY command specifies an additional flag, --from , which tells Docker the step from which the artifact should be copied.

3. Automate Any Needed Manual Steps

This is very simple and very important. You should remember that every manual step needed when your Java application starts should be automated. Such automation should be a part of the Dockerfile file or be present in a script run during container start. There may be various necessary things such as downloading external data, preparing some data, and virtually anything that is required for the application to start. If something requires a manual step, it should be automated when run in containers.

4. Separate Responsibilities

Unlike traditional application servers, containers are designed to split the responsibilities and have a single responsibility per container. The container should be a single unit of deployment and have a single responsibility, a single concern. Ideally, they should have a single process running.

Imagine a deployment of OpenSearch – for a larger deployment, you will usually have 3 master nodes, two client nodes, and multiple data nodes – each running in a separate container.

Some of the benefits of such an approach are isolation, scalability, and ease of management. Isolation because each process runs on its own and will not interfere with others. Scaling as we can scale each type of container independently from the others. Finally, ease of management because of how Docker monitors the container’s lifecycle and can react to it going down.

5. Limit Privileges

One of the good practices to ensure the security of your Java containers is limiting privileges. The rule of thumb – don't run your applications inside the container as root . You want to minimize the potential attack vectors that a malicious user can access.

Imagine that your container or application has a bug that allows executing code with the privileges of the user running the application inside the container. Such a command, run as root, has all the access rights, can read and write to every location, and much much more. With limited access and a dedicated user, the possibilities of a potential attack are limited.

I'm afraid Docker will run our commands as root by default, but we can easily change that. Let's alter the initial Dockerfile that we created and add a new user there. After the changes, our new Dockerfile would look as follows:
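Building on the earlier example, the adjusted Dockerfile might look roughly like this:

    FROM openjdk:17
    # create a dedicated group and user
    RUN groupadd -r juser && useradd -r -g juser juser
    WORKDIR /application
    COPY awesome-app-1.0.jar .
    # give ownership of the application folder to the new user and switch to it
    RUN chown -R juser:juser /application
    USER juser
    CMD ["java", "-jar", "awesome-app-1.0.jar"]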

What we just did is create a new group called juser and a user called juser . Next, we gave ownership of the /application folder inside the Java Docker image to the newly created user and finally switched to that user. After that, we just continue and start the application, simple as that.

6. Make Sure Java Is Container-Aware

To put it straight – don't use older Java versions. Older Java versions are unaware of containerized environments and can thus cause problems when running, such as not respecting the limits applied to the container. Because of that, the minimal version of Java you should be using is Java 1.8.0_191, but you would be far better off using one of the more recent versions.

If you want to learn about some of the issues that can happen when running earlier versions of Java in Docker containers, look at our DockerCon lightning talk OOps, OOMs, Oh My! Containerizing JVM Applications .

7. Build One Java Docker Image for All Environments

The benefit of having Dockerized Java applications is that you can and should build a single image in your CI/CD pipeline and then re-use that container image across all environments where it should be run. Regardless of whether you will be deploying to production, test, QA, or any other environment, you should use the same Java Docker image. The difference should only be in the configuration. And this is where the next best practice comes in.

8. Separate Configuration and Code

All the variables needed to run the Java container should be provided at runtime. Depending on your system architecture, configuration variables should be provided to the container or injected. Don’t hard code configuration variables – this will make it impossible to re-use the container images across multiple environments.

There are various ways you can achieve that. You can pass the configuration via environment variables which is very common in the container world. You can use network-based configuration services – for example, the Spring Cloud Config .  Lastly, you can just mount a volume with a dedicated properties file containing all the needed configurations. The key is not to hard code the configuration in the Java Docker image.

9. Tune Your Java Virtual Machine Parameters

Running your Java code inside the container doesn’t mean you don’t have to take care of proper configuration. That includes properly tuning the Java Virtual Machine parameters, so it can flawlessly work inside your containerized environment. That includes both Java Virtual Machine performance tuning  as well as garbage collection tuning .

10. Monitor Your Java Docker Containers

You need to know what is happening in your environment. However, manually looking at each Java Docker container is impossible or very difficult since Docker comes with many new management challenges . That’s why you need a monitoring tool that will gather and present metrics from your containerized environment; allow you to look into the Java logs  from the applications running inside the containers; show you Docker host utilization metrics; and at least have basic alerting functionality, so that you don’t have to be constantly looking into the presented data.

It may not be obvious at the beginning of your journey with containers. Still, monitoring becomes a necessity as soon as you go to production and want to be sure that the environment is healthy.

Monitor & Debug Containerized Java Applications with Sematext


Sematext Monitoring

Observability is not easy when it comes to Docker containers. You can have lots of containers. They are dynamic and can be scaled automatically. Looking at Docker logs , metrics and health is just not possible manually. You need a good Docker monitoring tool  that can help you with all those tasks providing you with all the necessary information. Sematext Monitoring  with its Container Monitoring capabilities is just such a tool. To learn more about Sematext Infrastructure Monitoring, check out the short video below.

With a lightweight agent running as another container in your environment, you get access to all the host and container metrics  – there are no blind spots. You can easily see how your Docker host resources are utilized, how the containers’ resources are utilized and, more importantly, monitor the Java applications running inside Docker containers with the automatic discovery. Ship the logs from your applications and view them together with metrics, alert on them, all inside a single monitoring solution.

There’s a 14-day free trial for you to try out Sematext Monitoring. Sign up and learn how Sematext can help you make your containers observable!

Start Free Trial


Dockerizing a Java Application

1. Overview

In this article, we’ll show how to Dockerize a Java runnable jar-based application. Do read about the benefits of using Docker.

2. Building the Runnable Jar

We’ll be using Maven to build a runnable jar.

So, our application has a simple class, HelloWorld.java , with a main method:

And we’re using the maven-jar-plugin to generate a runnable jar:

3. Writing the Dockerfile

Let’s write the steps to Dockerize our runnable jar in a Dockerfile. The Dockerfile resides in the root directory of the build context:
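Based on the line-by-line walkthrough that follows, the Dockerfile is along these lines (the exact OpenJDK base tag is an assumption):

    FROM openjdk:17-jdk-alpine
    MAINTAINER baeldung.com
    COPY target/docker-java-jar-0.0.1-SNAPSHOT.jar app.jar
    ENTRYPOINT ["java", "-jar", "/app.jar"]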

Here, in the first line, we’re importing the OpenJDK Java version 17 image as our base image from the official repository. Subsequent lines will create additional layers over this base image as we go.

In the second line, we specify the maintainer for our image, which is baeldung.com in this case. This step doesn’t create any additional layers.

In the third line, we create a new layer by copying the generated jar, docker-java-jar-0.0.1-SNAPSHOT.jar, from the target folder of the build context into the root folder of our container with the name app.jar.

And in the final line, we specify the entry point, the command that gets executed when a container starts from this image. In this case, we tell the container to run app.jar using the java -jar command. This line does not introduce any additional layer either.

4. Building and Testing the Image

Now that we have our Dockerfile , let’s use Maven to build and package our runnable jar:
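With the standard Maven lifecycle, that is:

    mvn clean package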

After that, let’s build our Docker image:
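Using the image name and tag described below, the build command looks like this:

    docker build -t docker-java-jar:latest .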

Here, we use the -t flag to specify a name and tag in <name>:<tag> format. In this case, docker-java-jar is our image name, and the tag is latest. The “.” signifies the path where our Dockerfile resides; in this example, it’s simply the current directory.

Note: We can build different Docker images with the same name and different tags.

Finally, let’s run our Docker image from the command line:
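Referring to the image by the name and tag we built above, the run command would be:

    docker run docker-java-jar:latest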

The above command runs our Docker image, identifying it by the name and the tag in the <name>:<tag> format.

5. Conclusion

In this article, we’ve seen the steps involved in Dockerizing a runnable Java jar. The code sample used in this article is available over on GitHub.


6 use cases for Docker containers -- and when to pass

From app testing to reducing infrastructure costs and beyond, Docker has many great use cases. But developers should remember that, like any technology, Docker has limitations.

Chris Tozzi


By now, you've probably heard all about Docker containers -- the latest, greatest way to deploy applications.

But which use cases does Docker support? When should or shouldn't you use Docker as an alternative to VMs or other application deployment techniques?

Let's answer these questions.

What are Docker containers?

Docker containers are lightweight application hosting environments. Like VMs, they are designed to be easily portable between different computers and isolate workloads.

However, one of the main differences between Docker and VMs is that Docker containers share OS resources with the server that hosts the Docker containers. VMs use a virtualized guest OS instead.

Because sharing an OS consumes fewer resources than running standalone guest OSes on top of a host OS, Docker containers are more efficient, and admins can run more containers on a single host server than VMs. Docker containers also typically start faster than VMs because they don't boot a complete OS.

Docker vs. other container runtimes

Docker is only one of several container engines available, but there is some ambiguity surrounding the term Docker containers .

Technically speaking, the most important aspect of Docker is its runtime, which is the software that executes containers. In addition to Docker's runtime, which is the basis for containerd, modern containers can also be executed by runtimes like CRI-O and Linux Containers .

Most modern container runtimes can run any modern container, if the container conforms with the Open Container Initiative standards . But Docker was the first major container runtime to gain widespread adoption, and people still use Docker as a shorthand for referring to containers in general -- like how Xerox can be used to refer to any type of photocopier. Thus, when people talk about Docker containers, they are sometimes referring to any type of container, not necessarily containers designed to work with Docker alone.

That said, the nuances and semantics in this regard are not important for understanding Docker use cases. Almost any use case that Docker supports can also be supported by other mainstream container runtimes . We call them Docker use cases throughout this article, but we're not strictly speaking about Docker alone here.

Docker use case examples

Docker containers can deploy virtually any type of application. But they lend themselves particularly well to certain use cases and application formats.

Microservices-based apps

Applications designed using a microservices architecture are a natural fit for Docker containers. This is because developers can deploy each microservice in a separate container and then integrate the containers to build out a complete application using orchestration tools, like Docker Swarm and Kubernetes , and a service mesh , like Istio or VMware Tanzu.

Technically speaking, you could deploy microservices inside VMs or bare-metal servers as well. But containers' low resource consumption and fast start times make them better suited to microservices apps, where each microservice can be deployed -- and updated -- separately.

Pre-deployment application testing

The ability to test applications inside Docker containers and then deploy them into production using the same containers is another major Docker use case.

When developers test applications in the same environment where the applications will run in production, they don't need to worry as much that configuration differences between the test environment and the production environment will lead to unanticipated problems.

Early application development

Docker comes in handy for developers who are in the early stages of creating an app and want a simple way to build and run it for testing purposes. By creating Docker container images for the app and executing them with Docker or another runtime, developers can test the app from a local development PC without execution on the host OS. They can also apply configuration settings for applications that are different from those on the host OS.

This is advantageous because application testing would otherwise require setting up a dedicated testing environment. Developers might do that when applications mature and they need to start testing them systematically. But, if you're just starting out with a new code base, spinning up a Docker container is a convenient way to test things without the work of creating a special dev/test environment.

Multi-cloud or hybrid cloud applications

Docker containers are portable, which means they can move easily from one server or cloud environment to another with minimal configuration changes required.

Teams working with multi-cloud or hybrid cloud architectures can package their application once using containers and then deploy it to the cloud or hybrid cloud environment of their choice. They can also rapidly move applications between clouds, or from on-premises environments into the cloud and back.

Deploying OS-agnostic applications

The same Docker container can typically run on any version of Linux without the need to apply special configurations based on the Linux distribution or version. Because of this, Docker containers have been used by projects like Subuser as the basis for creating an OS-agnostic application deployment solution for Linux.

That's important because there is not generally a lot of consistency between Linux distributions when it comes to installing applications. Each distribution or family of distributions has its own package management system, and an application packaged for one distribution -- Ubuntu or Debian, for example -- cannot typically be installed on a different distribution, like RHEL, without special effort. Docker solves this problem because the same Docker image can run on all of these systems.

That said, there are limitations to this Docker use case. Docker containers created for Linux can't run on Windows and vice versa, so Docker is not completely OS-agnostic.

Cost control

The efficiency of Docker containers relative to VMs makes Docker a handy option for teams that want to reduce how much they spend on infrastructure. By taking applications running in VMs and redeploying them with Docker, organizations will likely reduce their total resource consumption.

In the cloud, that translates to lower IaaS costs and a lower cloud computing bill. On premises, teams can host the same workloads with fewer servers, which also translates to lower costs.

When not to use Docker

While Docker comes in handy for many use cases, it's not the best choice for every application deployment scenario.

Common reasons not to use Docker include the following:

  • Security considerations. Applications that require strict isolation are better deployed in VMs than in containers, which don't isolate applications fully from each other or the host OS.
  • GUI applications. Although it's technically possible to run GUI apps in Docker containers, this is harder than running command-line interface apps. This is why video games are not deployed using Docker, for instance.
  • Small-scale deployments. Teams that deploy a small number of apps or don't update their apps frequently might be better off sticking with VMs, which are simpler to manage. The complexity of orchestrating Docker containers and managing storage for them outweighs the benefits of Docker for small-scale deployments.



Docker Examples Using Python and REST API

Docker has become the standard for containers because it lets developers easily ship and deploy applications in an isolated environment that works on every host. This is because, when you create a Docker image, you can include whatever dependencies you need. In this article I will show a couple of Docker examples that deploy Python-based REST API applications.


Running the First Docker Example: Hello World

I will run all Docker examples in this post on Ubuntu 20.04. If you haven’t set up Docker on your Ubuntu machine, you can follow these instructions to install Docker. After you install Docker, let’s start by running the hello-world image:
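The command itself is a one-liner (the message it prints is not reproduced here):

    # pull (if needed) and run the hello-world image
    sudo docker run hello-world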

Here, I ran docker with sudo because, by default, Docker requires privileged access to run. The hello-world argument is a Docker image name; Docker will automatically download this image if you have never downloaded it before.

The rest of the output is printed when the hello-world image runs. After you run this image, Docker creates a container and runs the application inside it. In this hello-world container, the application returns immediately, so the container stops.

A container is a running instance of a Docker image.

To list all the Docker containers regardless of status, run docker ps -a; without the -a argument, you will only see containers that are still running. In the output, the STATUS column for the hello-world container shows “Exited”, which means the container has stopped.

It is a good idea to remove stopped containers. The command below removes a stopped container by its container ID. You can also replace the container ID with the container name; in this case, the container name is gifted_elion.
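For example (the container ID below is a placeholder for whatever docker ps -a showed on your machine):

    # remove a stopped container by ID (or by name, e.g. gifted_elion)
    sudo docker rm <container-id>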

If you don’t give a container a name when running it, Docker will assign a random name.

You can see your downloaded images with the docker images command:
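The command is:

    # list locally stored images
    sudo docker images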

Python REST API Docker Example

In these Docker examples, we will build a Python REST API server. Then, we’ll package it into a Docker image and run that image.

Let’s write our Python app example and name it rest-api.py.

We have built our Python app. Now, let’s make the Docker image. First, we need to write a Dockerfile. The Dockerfile specifies our base operating system image, installs dependencies, and runs some scripts:
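A Dockerfile along these lines matches the explanation that follows (the exact base image tag and the Flask dependency are assumptions):

    FROM python:3.8-alpine

    RUN pip install flask

    WORKDIR /app
    COPY rest-api.py /app/rest-api.py
    RUN chmod 644 rest-api.py
    RUN chown xfs:xfs rest-api.py

    USER xfs
    CMD ["python", "rest-api.py"]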

  • Line 1: The base image. Check out this link for the complete list of Python Docker images. It’s a good habit to always include the version number tag. If you don’t use a tag, or you just use latest, you might get a surprise when you update your image in the future, because it may break your application.
  • Line 3: Install dependencies.
  • Line 5: Change the working directory. This is like running the cd command in a Linux terminal.
  • Line 6: Copy your file from the host into the Docker image.
  • Line 7: Change the file permissions.
  • Line 8: Change the file ownership to xfs.
  • Line 10: Set the user that runs the Python script to xfs. Note that xfs is the user in the Python base image that is equivalent to www-data in Ubuntu, because they have a similar user ID. This is not strictly necessary in a container: you can run your Python command as root and the container will still run in an isolated environment, because root in the container is different from root on the host.
  • Line 11: Run the Python script.

Now let’s build the image using the docker build command:
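Using the image name referenced later in this post:

    # build the image and tag it rest-api-image (defaults to the latest tag)
    sudo docker build -t rest-api-image .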

In the docker build command, -t stands for tag, which is our image name. You can also use the name:tag format (for example, rest-api-image:1.0). By default, the image will get the latest tag.

The last argument is the path where the Dockerfile and the files to be copied are located. In this case, the dot means Docker looks for the files in the same directory where you run the docker build command.

Docker Examples To Run Docker Image

We have built our own image. Now let’s run it:
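A first attempt, without any port mapping:

    sudo docker run rest-api-image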

If you call curl http://localhost:5100, you won’t be able to connect. This is because the container is listening on its own private IP address. We need to forward the port to the host, so let’s run docker again:
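This time with the port published to the host:

    # map host port 5100 to container port 5100
    sudo docker run -p 5100:5100 rest-api-image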

This time, I added the parameter -p 5100:5100 to the command. Its syntax is -p <host>:<container>. It means we bind TCP port 5100 on the host IP address and forward it to port 5100 in the container.

Now you will be able to connect to the REST API in the container.

Docker Examples To Run Docker In Background Mode

Next, we need to run our container in the background. It should automatically restart if the Python app crashes.
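Combining the options explained below (the container is named rest-api, which later commands refer to):

    sudo docker run -d --restart always --name rest-api -p 5100:5100 rest-api-image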

The output of the command above is just the container ID. You can verify that the container is running by typing this command: docker ps.

The parameters explanations are listed below:

  • -d: Run as a daemon in the background
  • --restart always: Restart the container if the Python app crashes
  • --name: The name of the container
  • -p 5100:5100: Assign ports as <host>:<container>

List Running Docker

Docker examples to see output logs.

Our container is now running in the background, but we can still see its output logs with the docker logs command:
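Using the container name we assigned earlier:

    sudo docker logs rest-api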

Add -f to continuously stream the logs:
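For example:

    sudo docker logs -f rest-api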

Now let’s try to call our API with the curl command.
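For instance, assuming the API exposes a /name/<name> endpoint that only accepts POST (as described below), a GET request fails while a POST succeeds; the value "docker" is just a placeholder:

    # GET is not defined for this route, so it returns 405 Method Not Allowed
    curl http://localhost:5100/name/docker

    # calling the same route with POST works
    curl -X POST http://localhost:5100/name/docker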

The first call returns error 405 Method Not Allowed because we don’t define the /name/<name> API for the GET method. Recalling the API with the POST method works, and if you look at the docker logs screen, you will see the output logs.

Docker Examples To Stop And Delete Container

In production, you will want this container to always run, but there may be cases where you want to stop that container or even delete it.
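Stopping the container by name:

    sudo docker stop rest-api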

Let’s verify the container status with docker ps -a.

From the output above, you can see the container status is now “Exited”. Now let’s delete the container.
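Again by name:

    sudo docker rm rest-api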

Let’s verify our container again with docker ps -a.

You should not get any output, because the container is deleted.

The difference between docker stop and docker rm is that docker stop simply halts your container. Meanwhile, docker rm deletes the container from local storage (e.g., /var/lib/docker/containers/) along with its container-scoped resources, such as its network endpoints; anonymous volumes are removed only if you add the -v flag.
Running docker rm will only remove the container. It doesn’t remove the image.

Docker Examples To Delete Docker Image

Let’s say you want to delete a Docker image; it could be an outdated version, for example. First, let’s confirm with docker images that rest-api-image is in the local repository on your system.

Now let’s delete our rest-api-image.
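Image removal uses docker rmi:

    sudo docker rmi rest-api-image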

Running Command Inside A Container

You can get inside a running container and run commands by using docker exec.
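Matching the parameters explained below:

    # open an interactive shell inside the running rest-api container
    sudo docker exec -it rest-api sh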

Parameters explanation:

  • -i : Interactive mode
  • -t : Allocate a pseudo-TTY
  • rest-api : Container name
  • sh : The command inside container that you want to run

The -t or TTY is useful so you can write any input from terminal, just like when you SSH to a remote server.

I hope this article was helpful for you. If you have any questions or comments, please feel free to put them in the comments section.



Example distributed app composed of multiple containers for Docker, Compose, Swarm, and Kubernetes

dockersamples/example-voting-app


Example voting app.

A simple distributed application running across multiple Docker containers.

Getting started

Download Docker Desktop for Mac or Windows. Docker Compose will be automatically installed. On Linux, make sure you have the latest version of Compose .

This solution uses Python, Node.js, .NET, with Redis for messaging and Postgres for storage.

Run in this directory to build and run the app:
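With Docker Desktop (or Compose v2) installed, that is:

    docker compose up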

The vote app will be running at http://localhost:5000 , and the results will be at http://localhost:5001 .

Alternately, if you want to run it on a Docker Swarm , first make sure you have a swarm. If you don't, run:
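Initializing a single-node swarm:

    docker swarm init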

Once you have your swarm, in this directory run:
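The stack file name below is the one conventionally shipped with this repository; check the repo if yours differs:

    docker stack deploy --compose-file docker-stack.yml vote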

Run the app in Kubernetes

The folder k8s-specifications contains the YAML specifications of the Voting App's services.

Run the following command to create the deployments and services. Note it will create these resources in your current namespace ( default if you haven't changed it.)
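Pointing kubectl at the folder mentioned above:

    kubectl create -f k8s-specifications/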

The vote web app is then available on port 31000 on each host of the cluster, the result web app is available on port 31001.

To remove them, run:
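The matching cleanup command is:

    kubectl delete -f k8s-specifications/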

Architecture

Architecture diagram

  • A front-end web app in Python which lets you vote between two options
  • A Redis which collects new votes
  • A .NET worker which consumes votes and stores them in…
  • A Postgres database backed by a Docker volume
  • A Node.js web app which shows the results of the voting in real time

The voting application only accepts one vote per client browser. It does not register additional votes if a vote has already been submitted from a client.

This isn't an example of a properly architected perfectly designed distributed app... it's just a simple example of the various types of pieces and languages you might see (queues, persistent data, etc), and how to deal with them in Docker at a basic level.



Docker Use Cases: A Demonstrative Guide with Real-world Examples

Be a part of the container revolution. Know the Docker use cases to improve software development, application portability & deployment, and agility.


Table of Contents

  • 3 common myths around Docker
  • Use cases of Docker containers: Fundamentals you should know
  • Docker use cases for businesses
  • Docker use cases for various industry verticals
  • When not to use Docker

Concerns of IT operations and software development communities over a fatal level of dysfunction in the industry gave momentum to the DevOps movement. Simultaneously, the rapid evolution of containers changed the dynamic of modern IT infrastructure. And now, what’s become an even hotter topic is Docker – a container platform that can run apps on your existing servers and operating systems. Most importantly, there are many Docker use cases that your business can benefit from.

For instance, being an early leader in technology, GE Appliances faced the unavoidable drawback of accumulating legacy practices and legacy data processing systems. Accordingly, the development-to-delivery time for their applications was around 6 weeks on average, with a largely manual delivery chain. Besides, an initial migration to the cloud failed to eliminate many existing practices.

Finally, GE appliances switched to Docker, which allowed them to build key services around the open platform with a greater density of applications. This was impossible earlier with virtual machines. Moreover, they could support legacy applications and accelerate the migration from ancient (mid-20th century) legacy data centers.

That being said, you could be one of the next organizations to leverage Docker in DevOps to create ready-to-run containerized applications. We will walk you through everything you need to know about Docker in this blog.


Clearing the confusion: 3 common myths around Docker

Feeling like you are in uncharted waters? Fret not! Before we dive into what Docker is, let us clear up some common misunderstandings about Docker to make sure we’re all on the same page.

#1 Docker is all-or-nothing

Many small and medium enterprises tend to believe that using Docker means containerizing their app’s entire architecture. However, that is not the case: Docker is compatible even with an app built on a LAMP stack, provided you isolate the APIs and necessary web services.

#2 Without the cloud, Docker cannot be implemented

Cloud services are the talk of the day in terms of automation, storage, development, and whatnot. But there is no mandatory requirement that indicates that your app should be on the cloud to use Docker. Sure, Docker offers various benefits with cloud services, but the requirement of your app comes first. Just because it provides a virtualized OS, it does not mean you have to migrate towards the cloud.

#3 Security concerns with Docker are high

Though DevOps and containers have been practiced and used for years within the IT industry, Docker is still comparatively new. In light of the security concerns that come with virtual machines (VMs), many businesses falsely believe that Docker is prone to security breaches or is inherently insecure. Though that may have been true in the past, Docker now relies on kernel-level isolation features. Therefore, Docker is as secure as any other platform can be!


Containers can be considered a platform, an app layer abstraction, or a virtualized OS that helps package everything coded for an app in one machine in the form of a bundled image or a file. This bundle, for instance, will consist of all libraries, resources, codes, etc., required to run the app under development on another machine without any extravagant prerequisites.

Imagine the considerable amount of development time and cost you can save, especially while evolving your software architecture or migrating towards the cloud or something new. It’s a game-changer!

So, what is all this hype about Docker? You might wonder, what is Docker?

Docker is a containerization platform that functions as a form of Platform-as-a-Service (PaaS) and relies on OS-level virtualization.

Docker Infrastructure

What is Docker used for?

For starters, when your development pipeline involves the usage of multiple computing environments, Docker would help create an environment with standardized instructions. In other words, Docker would help to reduce the inconsistency between different systems and avoid repeating any development task.

Moreover, everyone involved in the project works in the same computing environment. For instance, Docker Compose is used to create a configuration file that is committed to a particular code repository. Any team member can access this file, spin up their own development environment on their system, and maintain code consistency. Being able to configure faster is just the icing on the cake!

But the list of examples of where and how Docker can be used does not stop there. Why are companies embracing containers? What’s the reason behind Docker being so popular besides its development advantages? Let us draw a clearer picture by looking at the benefits of Docker.

  • Portability: Easily transfer and publish the same changes to all development machines, irrespective of which team member makes a new addition or a change.
  • Reusability: Reuse components created on one machine on other machines as many times as needed, forking them for different purposes as desired.
  • Computing environment isolation: Regardless of the platform on which the application is deployed, everything, including the code, dependencies, and more, stays consistent. This improves productivity considerably!
  • Mobility: Docker is compatible with the most commonly used host platforms, like Linux, macOS, and Windows. It can run anywhere, provided there is a supported target OS.
  • Testing: Docker builds produce images. These images are versioned and can easily be rolled back for any iteration. This supports Continuous Integration (CI) and Continuous Deployment (CD) practices, with continuous testing at any given time.
  • Scaling: Without any massive architecture overhaul, apps with small processes can be built, including internal API collaborations, at scale. In simple words, you can create apps with room to scale as you desire.

This is just a bird’s eye view of the benefits you can reap by using Docker. The list keeps increasing!

1) Adoption of DevOps

Companies that are evolving their apps’ software architecture, as well as startups, may struggle to keep up with changing technology and market expectations. It is well known that, in such cases, DevOps implementation helps build and deploy apps and features faster. But Docker adds more icing to the cake on that front as an irreplaceable tool.

At any given time, in any computing environment, Docker guarantees that a feature or app created in the development environment will work in staging and production. You can also roll back to older versions, or forward to newer ones, at your convenience. Given such capabilities, Docker in DevOps provides seamless, flexible control over changes made to an app. The possibilities that Docker use cases offer are almost limitless, similar to what you can obtain from cloud services.

2) App infrastructure isolation

If you are a business already practicing DevOps or into software evolution, you might be aware of the issues that arise in the name of dependency hell. When you install a software package created on one machine on another, there are issues that the DevOps team may face in running apps with specific versions, libraries, dependencies, and so much more.

With the help of Docker, you can run multiple apps or the same app on different machines without letting something as versions or other such factors affect the development process. This is made possible as Docker uses the kernel of a host system and yet runs like an isolated application, unlike VMs. This helps build apps where infrastructure can be isolated from the computing environment requirements during development. It also reduces a considerable amount of hassle and confusion in DevOps collaboration between teams.

3) Multi-tenancy support

When building an application that requires multi-tenancy, the development process may get complicated given the numerous dependencies and independent functions involved. Managing the development operations may become challenging to handle. Moreover, rearchitecting the app may also be required in some cases, which is a headache and time-consuming. Not to mention the cost your business would have to incur.

With the help of Docker, you can quickly isolate different computing environments, as mentioned a couple of times in this article. Development teams will have the power to run multiple instances of the application tiers on different tenants, respectively. The increased speed of Docker to spin up operations also serves as an easy way to view or manage the containers provisioned on any system.

Multi-tenancy Cluster Created With Docker

4) Improvement in software testing

Application development involves testing, a necessary process that cannot be simply ignored. The amount of effort and procedures that go into the types of testing conducted and test cases created for every development and deployment process is tedious. Multiply the headache when the test has to be conducted on different machines. What if you can run all your tests automatically? Moreover, what if you can simultaneously isolate your development tests from deployment tests?

Docker helps your business accomplish just that. It reduces the number of attempts required to rerun tests: if tests fail on the client’s end, they will also fail on the local machine. In other words, test results will be the same in all computing environments.

5) Smart Disaster Recovery (DR)

Committing the development pipeline to one OS or a couple of assigned machines is always an excellent plan to keep discrepancies under control. Yet, not all challenges can be foreseen. What if the data, transferred to a new machine, is corrupted? Though Docker can handle dependency issues, can it manage such data storage corruptions?

The answer is yes. With Docker, you can instantly create and destroy containers as you please. Furthermore, Docker lets you commit your data within the container before the image file is transferred to another machine. This way, even if something goes wrong, you can use the Docker images to restore your data. Moreover, if your business is already using the cloud or considering migrating to it, you can set up a DR plan in a cloud zone with ease.

6) Continuous rapid deployment

Gone are the days when deployment began only after development processes were complete. DevOps and Docker together manage to reduce deployment to seconds. Before Docker, an application had to be run on the parent server and environment before it could go live in any other environment. Not to mention the configuration management and the maintenance of version consistency that increase deployment time.

With Docker, you can easily run the app on any server environment, and evidently, it also provides version-controlled container images. Furthermore, stage environments can be set up via the Docker engine, which enables CD abilities.

7) Creation of microservices architecture

Organizations, be they small or large, are actively adopting microservices to replace monolithic apps for various reasons, such as scalability and performance. However, it is fundamental to be aware of the dependency issues that a microservices architecture may pose, since the architecture is broken down into individual, smaller components.

Containerization with Docker provides the ability to isolate these individual components into different workload environments. Thus, teams can independently work on their assigned components and yet deploy and scale them simultaneously by collaborating with other component-related teams. Specifically, Docker Hub and Docker Desktop focus on running microservice-based applications if you are interested.

Note : Find out 5 best practices for implementing a microservices architecture and also other insights on pros, cons, benefits, and more.

8) Migration of legacy apps to containers

Time to remind you again that Docker is not only for advanced application development. Even if your business has a legacy app, it can be migrated to containers. So if you are initially considering not building or evolving towards microservices, you can simply use a lift and shift approach to move your existing app to the container platform.

You may not reap immediate gains since the leap from legacy to containers is vast. However, it will help you containerize your app with DevOps practices, leading to numerous benefits over time. Furthermore, even if you use the LAMP stack, Docker will still be compatible with your requirements. Food for thought!


9) Simplification of code configuration

The most significant advantage of a VM is its ability to run on any given platform with its own configuration on top of the infrastructure your app follows. While Docker seems to fulfill the same, it avoids the overhead you may incur using a VM. Unlike a virtual machine, with Docker, you can create your own configuration and set it up on a supporting computing environment that you created.

In addition to that, you can accomplish all this in the form of simple code and deploy it as required. Imagine the possibilities you can obtain from such a use case. You could also describe this as decoupling the infrastructure from the application environment. Without any extra hardware or complicated coding, you gain the freedom to run your app across any PaaS or IaaS.

10) Management of development pipeline

As stated above, Docker supports different computing environments. Given that, when developed codes are transferred between different systems to production, there would be many environments it passes through. The code would be influenced by minor changes or differences at each development phase in each environment. For instance, without Docker, the modifications incurred along the way could be irreparable, resulting in inconsistent codes and infrastructure.

However, with Docker, the development and deployment pipeline can strictly maintain consistency in coding, infrastructure creation, and even in the usage of resources. Furthermore, Docker’s resulting image file helps achieve a zero-change state in the written code even when it passes across countless development environments and production phases.

11) Increased developer productivity

Any good software project will focus on writing code that is as close to production as possible and on creating development environments that are fast and interactive. With Docker, a development environment can be built with low memory capacity, without adding an unnecessary memory footprint on top of what the host already holds. A dozen services can run on low memory with Docker.

Since Docker also supports sharing data volumes, it helps make the application code available on any host OS container even if the environment is running on a VM. Furthermore, given that the same VM is present in all development systems, the data volumes, despite being stored in different folders, can be synced across all systems since it all runs on a host OS. All this proves to be a beneficial aspect for improving the productivity of development teams.

12) Consolidation of server requirements

Docker provides a powerful consolidation of multiple servers even when the memory footprint is available on various OSs. Not to mention, you can also share the unused memory across numerous instances. With such a possibility, you can even use Docker to create, deploy and monitor a multi-tier app with any number of containers. The ability to isolate the app and its development environments is a boon offered by Docker. While it may involve different servers in some cases, even those can be consolidated to save cost.

One may wonder if such a server consolidation may result in a decline in performance. However, with Docker, since the development environment is virtualized, performance decline is negligible, and that’s one of the upsides of using it.

13) Porting across cloud providers

The age of containers and Docker is already on the rise. Almost all the cloud service providers have started supporting Docker-based operations today. With that, there is a greater likelihood of migrating your app to the cloud, or of creating a cloud-ready app from scratch, to implement DevOps methodologies for your development and deployment processes. So why set up your app to run only locally when it can run on the cloud? Simplify your workflow as best as possible!

1) Healthcare

National Institutes of Health (NIH) have been using containerization technology, especially Docker, for its mobility and flexibility in delivering imaging software services to more than 40 hospitals in the US. It also uses Docker to curate data from different websites and integrates it with  Artificial Intelligence (AI) to help doctors make informed decisions.

NIH leverages Docker to use containers to test new imaging technologies in the hospital sector. It also allows it to implement continuous integration and testing practices with all hospital collaborators and ensure that the latest technologies are used.

Docker Use Case For Healthcare

2) Media and entertainment

With the rise of containers, Netflix decided to use Docker and create its own infrastructure to execute deep functions with its integrated Amazon EC2 services. This project was called Titus.

At Netflix, Docker was used in Titus specifically to help with deployment and to serve as a job scheduling system. Users are created in batches on this platform, and Titus helps create sophisticated infrastructure at a fast rate, assisting developers in specifying the exact code and dependencies required. With the combination of Linux and Docker, Netflix also implemented its own multi-tenant isolation. This helps to deploy new updates faster and to cater to users with detailed preferences.


3) Travel and hospitality

Carnival Corporation is a British-American cruise operator controlling more than 100 vessels. It wanted to digitize various operations and transform the guest experience by letting guests interact and participate more through personalization. It runs a program known as MedallionClass, through which more than 300 services are deployed across a cruise ship; the program is built on a microservices architecture.

This MedallionClass program uses a hybrid cloud infrastructure, and with the help of Docker, Carnival Corp. can deploy services to any ship and run them to improve the guest experience at any given time. In simple words, Docker helps to connect various elements such as food, beverages, accommodation, transportation, etc., on its cruise ships. Ultimately, this makes each cruise ship function like a mobile city.

4) Banking

Knowis is a digital banking solution provider. It undertook steps to migrate away from monolithic architectures toward modernized ones that support container technologies. Creating a scalable application infrastructure to satisfy expanding banking markets is often the primary goal for the banking sector.

With the help of containers such as Docker, Knowis is able to react flexibly to the market requirements. It also leverages container orchestration via Kubernetes to balance the workload as well. This way, the IT performance of banks remains stable even under multiple access and higher loads.

5) eCommerce

Alibaba Cloud provides cost-efficient ways for startups and Small and Medium Businesses (SMBs) to create eCommerce websites. It may be a competitor of other cloud platforms, but this platform’s USP is that it is catered toward eCommerce.

It allows developers to use Docker to run as many containers as needed instantly and simultaneously process various operations in a given website in a short time. Moreover, it can be integrated with Magento CMS to manage content in the website as per the business vision. Furthermore, all this can be accomplished in a containerized environment.

6) Education

Wiley Education Services (WES) provides online educational services to more than 60 higher educational institutions. It aims to connect higher education partners, and in this process, the development team leverages multiple technologies. One of them is Docker, which has helped it innovate and experiment with creating architecture with room for scaling.

The foremost benefit Docker brings to the WES table is the ability to get one of its websites up and running in the marketplace in no time. This helps connect students with partners faster. Docker further closes that gap by allowing teams to quickly create, test, and publish containers at will, at a budget-friendly price.

If your business is interested in using Docker, all of the above would be good places to begin. However, there are also occasions when not to use Docker. In our experience, do not use Docker if your:

Product is a desktop app

Docker is best suited to current web apps, since it requires server-based software. Building a desktop app, on the other hand, requires a rich GUI, and Docker, at present, may not meet all of its requirements in terms of interface or computing environment. However, this may change in the future.

Project is simple and small

Docker is helpful for keeping track of dependencies and multiple functions when the architecture is vast. In the case of an app with a small architecture or simple infrastructure, Docker could be overkill. Avoid spending unnecessary time and tooling at this point; use Docker when your app grows.

Development pipeline involves multiple OSs

Docker can run natively on Linux and Windows by default today, and it also supports running on macOS. Yet containers use the host system’s kernel, and there is no way to create a container that uses a different kernel on another system. You may overlook this drawback if your app doesn’t use any kernel-based functions.

On the other hand, if you need to use kernel capacities, you would require a VM to run containers on different machines. This may create more complications unless you have the right experience and understanding of managing all the above.

Requirement is to only boost app performance

Docker eases development processes and increases productivity. The ultimate result is often an increase in the app’s performance as well. However, such an increase is a byproduct of other improvements in operations. So, if you are wondering whether you can leverage Docker just to increase your app’s performance, our suggestion is to refrain from using it for the time being. Instead, pick other tools and practices that directly help improve performance to save cost and avoid unnecessary complications at your project’s early stages.

Project needs easily managed technology

Docker is still a new technology. Still, many enterprises and developers struggle to understand it and focus on how it is well connected with DevOps, Cloud services, and Microservices. The documentation of Docker is falling behind the pace at which the technology is advancing too. Implementation of Docker also comes at a cost and requires efficient communication between various containers and teams involved. Hence, if you are looking for an easy technology that can be set up quickly, do not choose Docker unless your project is vast and you have the resources to expand.

What’s next? – The conclusion

According to a study, the application containers market was predicted to be worth more than $4.3 billion in 2022. And when big companies like The New York Times, PayPal, Spotify, Uber, and many more overcome various challenges by using Docker, other businesses also get the motivation to employ it. In fact, Docker has evolved at such an astonishing pace that a mere mention of containerization makes us think of Docker.

And now, it is clear that the list of Docker use cases does not stop at best practices. They all but revolutionize the way development and deployment processes can be practiced, from running web apps and APIs to desktop apps, on the cloud or on local servers. Interested in knowing how you can implement DevOps and Docker?


Hiren Dhaduk

Hiren is CTO at Simform with an extensive experience in helping enterprises and startups streamline their business performance through data-driven innovation.


Containerize a Python application

Prerequisites

  • You have installed the latest version of Docker Desktop .
  • You have a git client . The examples in this section use a command-line based git client, but you can use any client.

This section walks you through containerizing and running a Python application.

Get the sample application

The sample application uses the popular Flask framework.

Clone the sample application to use with this guide. Open a terminal, change directory to a directory that you want to work in, and run the following command to clone the repository:
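The clone command looks like this (check Docker's guide for the canonical sample repository URL):

    git clone https://github.com/docker/python-docker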

Initialize Docker assets

Now that you have an application, you can create the necessary Docker assets to containerize your application. You can use Docker Desktop's built-in Docker Init feature to help streamline the process, or you can manually create the assets.

Inside the python-docker directory, run the docker init command. docker init provides some default configuration, but you'll need to answer a few questions about your application. For example, this application uses Flask to run. Refer to the following example to answer the prompts from docker init and use the same answers for your prompts.

If you don't have Docker Desktop installed or prefer creating the assets manually, you can create the following files in your project directory.

Create a file named Dockerfile with the following contents.
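The exact Dockerfile ships with the guide; a minimal sketch for a Flask app served on port 8000 looks roughly like this (the base image tag, the requirements file, and the server command are assumptions):

    # syntax=docker/dockerfile:1
    FROM python:3.11-slim

    WORKDIR /app

    # install Python dependencies first to take advantage of layer caching
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt

    # copy the application source
    COPY . .

    EXPOSE 8000

    # run the Flask development server, listening on all interfaces
    CMD ["python", "-m", "flask", "run", "--host=0.0.0.0", "--port=8000"]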

Create a file named compose.yaml with the following contents.
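A minimal compose.yaml consistent with the port used later in this section might be (the service name and build context are assumptions):

    services:
      server:
        build: .
        ports:
          - "8000:8000"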

Create a file named .dockerignore with the following contents.
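A small .dockerignore keeps local clutter out of the build context; typical entries (illustrative, not the guide's exact file) are:

    .git
    __pycache__/
    *.pyc
    .venv/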

You should now have the following contents in your python-docker directory.

To learn more about the files, see the following:

  • .dockerignore
  • compose.yaml

Run the application

Inside the python-docker directory, run the following command in a terminal.
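That command is:

    docker compose up --build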

Open a browser and view the application at http://localhost:8000 . You should see a simple Flask application.

In the terminal, press ctrl + c to stop the application.

Run the application in the background

You can run the application detached from the terminal by adding the -d option. Inside the python-docker directory, run the following command in a terminal.
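The same command with the detached flag:

    docker compose up --build -d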

Open a browser and view the application at http://localhost:8000 .

You should see a simple Flask application.

In the terminal, run the following command to stop the application.
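That is:

    docker compose down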

For more information about Compose commands, see the Compose CLI reference .

In this section, you learned how you can containerize and run your Python application using Docker.

Related information:

  • Build with Docker guide
  • Docker Compose overview

In the next section, you'll learn how you can develop your application using containers.

Understanding the Docker USER Instruction


Jay Schmidt

In the world of containerization, security and proper user management are crucial aspects that can significantly affect the stability and security of your applications. The USER instruction in a Dockerfile is a fundamental tool that determines which user will execute commands both during the image build process and when running the container. By default, if no USER is specified, Docker will run commands as the root user, which can pose significant security risks. 

In this blog post, we will delve into the best practices and common pitfalls associated with the USER instruction. Additionally, we will provide a hands-on demo to illustrate the importance of these practices. Understanding and correctly implementing the USER instruction is vital for maintaining secure and efficient Docker environments. Let’s explore how to manage user permissions effectively, ensuring that your Docker containers run securely and as intended.


Docker Desktop 

The commands and examples provided are intended for use with Docker Desktop, which includes Docker Engine as an integrated component. Running these commands on Docker Community Edition (standalone Docker Engine) is possible, but your output may not match that shown in this post. The blog post How to Check Your Docker Installation: Docker Desktop vs. Docker Engine explains the differences and how to determine what you are using.

UID/GID: A refresher

Before we discuss best practices, let's review UID/GID concepts and why they matter when using Docker. The relationship between users, groups, and their numeric identifiers factors heavily into the security aspects of these best practices.

Linux and other Unix-like operating systems identify each user by a numeric identifier called a UID (user ID). Groups are identified by another numeric identifier, the GID (group ID). These numeric identifiers map to the text strings used for the username and group name, but it is the numbers that the system uses internally.

The operating system uses these identifiers to manage permissions and access to system resources, files, and directories. A file or directory has ownership settings including a UID and a GID, which determine which user and group have access rights to it. Users can be members of multiple groups, which can complicate permissions management but offers flexible access control.

In Docker, these concepts of UID and GID are preserved within containers. When a Docker container is run, it can be configured to run as a specific user with a designated UID and GID. Additionally, when mounting volumes, Docker respects the UID and GID of the files and directories on the host machine, which can affect how files are accessed or modified from within the container. This adherence to Unix-like UID/GID management helps maintain consistent security and access controls across both the host and containerized environments.

Unlike USER, there is no GROUP instruction in the Dockerfile reference. To set a group, you specify the group (by name or GID) after the user (by name or UID), separated by a colon. For example, to run a command as the automation user in the ci group, you would write USER automation:ci in your Dockerfile.

If you do not specify a group, the groups that the user account is configured as a member of are used. If you do specify one, only that group is applied.

Current user

Because Docker Desktop uses a virtual machine (VM), the UID/GID of your user account on the host (Linux, Mac, Windows HyperV/WSL2) will almost certainly not have a match inside the Docker VM.

You can always check your UID/GID by using the id command. For example, on my desktop, I am UID 503 with a primary GID of 20:
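
For illustration, on a macOS host the output might look roughly like this (the username and the group list are placeholders and will differ on your machine):

    $ id
    uid=503(yourname) gid=20(staff) groups=20(staff),12(everyone),61(localaccounts)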

Best practices

Use a non-root user to limit root access.

As noted above, by default Docker containers run as UID 0, or root. This means that if the container is compromised, the attacker gains root-level access to all the resources allocated to it. If the container runs as a non-root user instead, an attacker who manages to break out of the application is limited to that user's permissions.

Remember, if you don’t set a USER in your Dockerfile, the user will default to root. Always explicitly set a user, even if it’s just to make it clear who the container will run as.

Specify user by UID and GID

Usernames and groupnames can easily be changed, and different Linux distributions can assign different default values to system users and groups. By using a UID/GID you can ensure that the user is consistently identified, even if the container’s /etc/passwd file changes or is different across distributions. For example:
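
A minimal sketch; the 1001 values are arbitrary example IDs:

    FROM ubuntu:24.04
    # Refer to the user and group numerically so the image does not depend on
    # entries in /etc/passwd or /etc/group
    USER 1001:1001
    CMD ["id"]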

Create a specific user for the application

If your application requires specific permissions, consider creating a dedicated user for your application in the Dockerfile. This can be done using the RUN command to add the user. 

Note that when we are creating a user and then switching to that user within our Dockerfile, we do not need to use the UID/GID because they are being set within the context of the image via the useradd command. Similarly, you can add a user to a group (and create a group if necessary) via the RUN command.

Ensure that the user you set has the necessary privileges to run the commands in the container. For instance, a non-root user might not have the necessary permissions to bind to ports below 1024. For example:
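
A sketch of a Dockerfile that creates a dedicated user and group as described above and switches to them; the names and IDs are illustrative:

    FROM ubuntu:24.04
    # Create a dedicated group and user for the application
    RUN groupadd --gid 1001 appgroup && \
        useradd --uid 1001 --gid appgroup --create-home appuser
    # Switch by name; the UID/GID are already defined inside the image
    # by the commands above
    USER appuser:appgroup
    WORKDIR /home/appuser
    # Any server started here should listen on a port above 1024, because an
    # unprivileged user cannot bind lower ports by default
    CMD ["id"]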

Switch back to root for privileged operations

If you need to perform privileged operations in the Dockerfile after setting a non-root user, you can switch to the root user and then switch back to the non-root user once those operations are complete. This approach adheres to the principle of least privilege; only tasks that require administrator privileges are run as an administrator. Note that it is not recommended to use sudo for privilege elevation in a Dockerfile. For example:
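
A sketch; the package being installed is an arbitrary example:

    FROM ubuntu:24.04
    # Create and switch to a non-root user early
    RUN useradd --create-home appuser
    USER appuser
    WORKDIR /home/appuser

    # Temporarily switch back to root for a privileged step
    USER root
    RUN apt-get update && \
        apt-get install -y --no-install-recommends curl && \
        rm -rf /var/lib/apt/lists/*

    # Drop privileges again for everything that follows
    USER appuser
    CMD ["id"]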

Combine USER with WORKDIR

As noted above, the UID/GID used within a container applies both within the container and on the host system. This leads to two common problems (a sketch of how to avoid both follows the list):

  • Switching to a non-root user who does not have permission to read or write to the directories you want to use (for example, trying to create a directory under / or trying to write in /root).
  • Mounting a directory from the host system and switching to a user who does not have permission to read or write to the directory or files in the mount.
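
One way to avoid both problems is to pair USER with a WORKDIR (and any mount points) that the non-root user owns. A minimal sketch with illustrative names and paths:

    FROM ubuntu:24.04
    RUN useradd --uid 1001 --create-home appuser
    # Create the working directory and hand ownership to the non-root user
    # while we are still root
    RUN mkdir -p /app/data && chown -R appuser:appuser /app
    USER appuser
    WORKDIR /app
    CMD ["id"]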

The following example shows you how the UID and GID behave in different scenarios depending on how you write your Dockerfile. Both examples provide output that shows the UID/GID of the running Docker container. If you are following along, you need to have a running Docker Desktop installation and a basic familiarity with the docker command.

Standard Dockerfile

Most people take this approach when they first begin using Docker; they go with the defaults and do not specify a USER.
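
A minimal sketch of such a Dockerfile; the base image is an assumption, and id is used as the command so the container prints the UID/GID it runs with:

    FROM ubuntu:24.04
    # No USER instruction, so the container runs as root (UID 0 / GID 0)
    CMD ["id"]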

Dockerfile with USER

This example shows how to create a user with a RUN command inside a Dockerfile and then switch to that USER.
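
A sketch; the appuser/appgroup names and the 1001 IDs are illustrative:

    FROM ubuntu:24.04
    # Create a group and user with explicit, non-zero IDs
    RUN groupadd --gid 1001 appgroup && \
        useradd --uid 1001 --gid appgroup --create-home appuser
    # Run everything that follows as the non-root user
    USER appuser
    CMD ["id"]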

Build the two images with:
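
Assuming the two Dockerfiles above are saved as Dockerfile.default and Dockerfile.user (the file names and image tags here are arbitrary):

    $ docker build -f Dockerfile.default -t user-demo:default .
    $ docker build -f Dockerfile.user -t user-demo:user .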

Default Docker image

Let's run our first image, the one that does not provide a USER instruction. As the output below shows, the UID and GID are 0/0, so the container runs as the superuser, root. Two things are at work here. First, we do not define a UID/GID in the Dockerfile, so Docker defaults to the superuser. Second, how does the container become a superuser when my own account is not a superuser account? Because the Docker Engine runs with root permissions, containers built to run as root inherit those permissions from the Engine.
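
A sketch of that run, using the tag from the build example above (exact output formatting may vary):

    $ docker run --rm user-demo:default
    uid=0(root) gid=0(root) groups=0(root)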

Custom Docker image

Let’s try to fix this — we really don’t want Docker containers running as root. So, in this version, we explicitly set the UID and GID for the user and group. Running this container, we see that our user is set appropriately.
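
Again using the tag from the build example above:

    $ docker run --rm user-demo:user
    uid=1001(appuser) gid=1001(appgroup) groups=1001(appgroup)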

Enforcing best practices

Enforcing best practices in any environment can be challenging, and the best practices outlined in this post are no exception. Docker understands that organizations are continually balancing security and compliance against innovation and agility, and it keeps working on ways to help with that effort. Our Enhanced Container Isolation (ECI) offering, part of our Hardened Docker Desktop, was designed to address the problematic aspects of having containers run as root.

Enhanced Container Isolation mechanisms, such as user namespaces, help segregate and manage privileges more effectively. User namespaces isolate security-related identifiers and attributes, such as user IDs and group IDs, so that a root user inside a container does not map to the root user outside the container. This feature significantly reduces the risk of privileged escalations by ensuring that even if an attacker compromises the container, the potential damage and access scope remain confined to the containerized environment, dramatically enhancing overall security.

Additionally, Docker Scout can be leveraged on the user desktop to enforce policies not only around CVEs, but around best practices — for example, by ensuring that images run as a non-root user and contain mandatory LABELs.

Staying secure

Through this demonstration, we’ve seen the practical implications and benefits of configuring Docker containers to run as a non-root user, which is crucial for enhancing security by minimizing potential attack surfaces. As demonstrated, Docker inherently runs containers with root privileges unless specified otherwise. This default behavior can lead to significant security risks, particularly if a container becomes compromised, granting attackers potentially wide-ranging access to the host or Docker Engine.

Use custom user and group IDs

The use of custom user and group IDs showcases a more secure practice. By explicitly setting UID and GID, we limit the permissions and capabilities of the process running inside the Docker container, reducing the risks associated with privileged user access. The UID/GID defined inside the Docker container does not need to correspond to any actual user on the host system, which provides additional isolation and security.

User namespaces

Although this post extensively covers the USER instruction in Docker, another approach to secure Docker environments involves the use of namespaces, particularly user namespaces. User namespaces isolate security-related identifiers and attributes, such as user IDs and group IDs, between the host and the containers. 

With user namespaces enabled, Docker can map the user and group IDs inside a container to non-privileged IDs on the host system. This mapping ensures that even if a container’s processes break out and gain root privileges within the Docker container, they do not have root privileges on the host machine. This additional layer of security helps to prevent the escalation of privileges and mitigate potential damage, making it an essential consideration for those looking to bolster their Docker security framework further. Docker’s ECI offering leverages user namespaces as part of its security framework.

When deploying containers, especially in development environments or on Docker Desktop, consider the aspects of container configuration and isolation outlined in this post. Implementing the enhanced security features available in Docker Business, such as Hardened Docker Desktop with Enhanced Container Isolation, can further mitigate risks and ensure a secure, robust operational environment for your applications.

  • Read the Dockerfile reference guide.
  • Get the latest release of Docker Desktop.
  • Explore Docker Guides.
  • New to Docker? Get started.
  • Subscribe to the Docker Newsletter.
