Nathan Peck
Senior Developer Advocate at AWS
Jul 24, 2023

Why use containers for your application?

Containers are a popular open source standard for developing, packaging, and operating applications at scale. There are a few key benefits to using containers:

Packaging

Containers provide you with a reliable way to gather your application components and package them together into one build artifact. This is important because modern applications are usually composed of a variety of pieces that must work together in sync. These pieces include not only your code, but also dependencies, binaries, or system libraries.

To build a container image, you write a Dockerfile. The Dockerfile is a declarative recipe that describes how to build the application. For example:

# Build stage: install dependencies (including any native builds) using the full Node.js image
FROM public.ecr.aws/docker/library/node:18 AS build
WORKDIR /srv
ADD package.json .
RUN npm install

# Runtime stage: copy the installed dependencies plus the application code into a slim image
FROM public.ecr.aws/docker/library/node:18-slim
COPY --from=build /srv .
ADD . .
EXPOSE 3000
CMD ["node", "index.js"]

The example above is a multistage Dockerfile recipe that describes how to build and set up a Node.js application. It starts from a full Node.js development environment, then fetches dependencies from NPM and installs them (compiling any binary dependencies along the way). Finally it packages everything into a slim container image, which is the final build product, ready for delivery to any machine that needs to run the container.
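
To try it out, assuming the Dockerfile above sits in your project directory and the application listens on port 3000, a build and local run looks roughly like this (the my-node-app tag is just a placeholder name):

# Build the image from the Dockerfile in the current directory
docker build -t my-node-app .

# Run the image, mapping container port 3000 to port 3000 on the host
docker run -d -p 3000:3000 my-node-app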

You can find millions of prepackaged container images on public registries such as Docker Hub and Amazon Elastic Container Registry. You can deploy prepackaged container images as is, or use them as a base image for building your own customized image.
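
For example, assuming you have Docker installed locally, you could run a prepackaged web server image as is, without writing any Dockerfile (the nginx image below is just one example of an official image mirrored on Amazon ECR Public):

# Pull a prepackaged image and run it, mapping host port 8080 to container port 80
docker pull public.ecr.aws/docker/library/nginx:latest
docker run -d -p 8080:80 public.ecr.aws/docker/library/nginx:latest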

Reliability

You can download a container image onto another machine and run it there without having to rerun any of the build steps. The container image is a static snapshot of the final state assembled by the Dockerfile recipe. This makes software delivery to your compute much more resilient than independently running installation steps across many servers or VM instances: you do the build once, on one machine, and then run the successful build artifact many times.
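
A minimal sketch of that workflow, assuming a hypothetical registry at registry.example.com and an image tag of my-app:1.0 (substitute your own registry, such as Amazon ECR, and authenticate to it first):

# On the build machine: build once and push the artifact to a registry
docker build -t registry.example.com/my-app:1.0 .
docker push registry.example.com/my-app:1.0

# On any other machine: pull and run the exact same artifact, no rebuild needed
docker pull registry.example.com/my-app:1.0
docker run -d registry.example.com/my-app:1.0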

Additionally, container images are immutable. By default, if the running application writes to its local filesystem, the changes it makes are temporary. When you relaunch the container, any changes that were made are wiped away, and a fresh copy of the container is launched from the original container image. This increases reliability by avoiding irrecoverable drift over time from accumulated filesystem mutations.
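
You can see this behavior with a quick experiment (a sketch using the public Node.js image; any image would do):

# Start a container and write a scratch file inside it
docker run -d --name demo public.ecr.aws/docker/library/node:18-slim sleep 600
docker exec demo touch /tmp/scratch-file

# Replace the container with a fresh one from the same image
docker rm -f demo
docker run -d --name demo public.ecr.aws/docker/library/node:18-slim sleep 600

# The scratch file is gone; the new container started from the original image
docker exec demo ls /tmp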

Isolation

Because Docker containers allow an application to carry along its own filesystem and dependencies, the container can run reliably without conflicting with the state of the host running it. For example, with the Dockerfile above it does not matter whether the person running the container has Node.js version 16 installed globally on their development laptop. When the container image runs, it uses the Node.js version 18 that was installed inside the container image. Likewise, when the container is rebuilt from its Dockerfile, the build happens in an isolated context using Node.js version 18 rather than the local Node.js version 16.

In addition to isolation between container and host, there is also isolation between containers. You can run multiple containers side by side, for example one container with Node.js version 17 and one with Node.js version 18. The runtime isolation of containers also helps you avoid many common application collision issues. For example, multiple containers running on a host can each bind to the same port from their own perspective, while those ports get remapped to different host ports. You can also limit CPU and memory per container, and give each container its own environment variables.
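
As a sketch of that isolation, assuming two hypothetical image tags my-app:node17 and my-app:node18 that both listen on port 3000 inside the container:

# Both containers bind to port 3000 internally, but are remapped to different host ports,
# given their own CPU and memory limits, and passed their own environment variables
docker run -d -p 8080:3000 --cpus 0.5 --memory 256m -e APP_COLOR=blue my-app:node18
docker run -d -p 8081:3000 --cpus 0.5 --memory 256m -e APP_COLOR=green my-app:node17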

Efficiency

Efficiency is a side effect of the lightweight isolation model of containers. Unlike heavier virtual machines, many small Docker containers can run on a single machine. It is common to fill an EC2 instance with 10-20 small Docker containers. This helps you get more efficient usage of the cloud resources you are paying for. Rather than paying for a large EC2 instance and only getting 10-20% utilization out of the instance, you can pack many application containers onto the instance and get 70-80% utilization.
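
As a rough sketch of the packing math, assume an instance with 2 vCPUs and 8 GiB of memory and cap each small container at 0.125 vCPU and 512 MiB (the image tags below are placeholders). That leaves room for roughly 14-15 containers once you keep some headroom for the operating system and container agent:

# Cap each container's resources so many of them can share one host
docker run -d --cpus 0.125 --memory 512m -p 8080:3000 my-app:latest
docker run -d --cpus 0.125 --memory 512m -p 8081:3000 my-other-app:latest
# ...repeat for additional workloads, giving each its own host port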

Ready to get started with containers on AWS?

If you are ready to deploy your first container on AWS, consider starting from a prebuilt container pattern. The infrastructure as code patterns on this website are intended to help you get a sample scenario up and running, giving you a good starting point for your own container deployment.