What Are Containers, and How Can I Use Them?
Containers are an operating-system-level virtualization concept, with roots in the Unix world, that allows applications to run in isolated environments without the performance degradation that comes with running full virtual machines.
You can think of them like CDs that contain everything your app needs to run. You can send this CD to AWS, and they will handle creating copies of it and distributing it to multiple worker servers. These servers will run the app packaged in your CD, and can be quickly booted and terminated as part of an auto-scaling group to both match shifting load and optimize costs.
While they’re very useful in their own right, containers also serve another crucial purpose: they bring your infrastructure and operations onto the same version management workflow as your code, and they synchronize your development and production environments. Your code will run the same on your local development machine as it will on your server. And, because all of your server configuration is part of this container, it can be managed through git just like you would manage your source code.
There are some limitations to containers, though. They’re mostly read-only; orchestration tools like ECS and Kubernetes are designed around containers being stateless. You really only want to use them for compute, never storage. If you’re running containers on a reserved EC2 instance, it may be fine to run a database in them, but applications running on ECS are designed to have flexible start and stop times, and all data stored in a container is ephemeral, just like RAM.
If your application makes use of a database or local storage, you might want to consider moving that to a separate service, which may be cheaper than running it on EC2 anyway. AWS has many managed database services, and S3 is very cheap for storage compared to EBS or EFS.
Package Your App with Docker
This is the hard part, and the part that is most specific to your app—creating and configuring the container. Ironically, this is very similar to configuring servers, except you’ll only have to do it once, all of your configuration will be in a central place, and you’ll be able to synchronize your development and production environments. The benefits greatly outweigh the initial headache.
You’ll want to install Docker Desktop for your operating system, so you can run and manage containers on your local machine.
The main point of entry that defines all the configuration for Docker is called a Dockerfile. You’ll want to create a new project directory, then create a new file inside it, simply named Dockerfile with no extension:
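From a shell, that looks something like the following (the project name `my-app` here is just a placeholder):

```shell
# Create a project directory and an empty Dockerfile inside it
mkdir my-app
cd my-app
touch Dockerfile
```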
Within this file, you’ll use Docker instructions to tell Docker how to build your image. To start, you’ll likely want to base it on a preexisting image from Docker Hub, such as Ubuntu, with the FROM instruction:
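For example, to base your image on Ubuntu (the `22.04` tag is just an example; pin whichever version you need):

```dockerfile
# Start from the official Ubuntu base image on Docker Hub
FROM ubuntu:22.04
```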
There are many prebuilt Docker images to choose from, such as images with software like nginx preinstalled.
You can copy local folders in this project directory into the image with the COPY instruction. For example, if you had a project folder called nginx/ containing all of your nginx configuration, you could copy it over with:
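A sketch, assuming your configuration lives in an `nginx/` folder alongside the Dockerfile:

```dockerfile
# Copy the local nginx/ folder into the image's default nginx config location
COPY nginx/ /etc/nginx/
```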
This will make sure the eventual container has the proper configuration at the default nginx location.
Docker is far more complicated than this, and there are many commands for running initialization scripts, passing stateful information to containers, and running your app on startup. You can read our full guide on packaging your application with Docker to learn more.
Next, you’ll need to upload your image to an ECR repository. Head over to the ECR Management Console, and create a new repository with a unique name. Choose “View Push Commands,” and you’ll be presented with a dialog that walks you through linking your Docker client with the repository, building your image, and uploading it to ECR.
In short, you need to log in with your AWS credentials, build the image, tag it with your repository’s URI, and push it.
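As a sketch, those commands look roughly like this — the region `us-east-1`, the account ID `123456789012`, and the repository name `my-app` are placeholders for your own values, which the “View Push Commands” dialog will fill in for you:

```shell
# Authenticate your Docker client against your ECR registry
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Build the image from your Dockerfile and tag it with the repository URI
docker build -t my-app .
docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest

# Push the tagged image up to ECR
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
```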
Once you’re done with this, head over to the ECS Management Console and select “Get Started.” Choose “Custom” as the image type:
Enter the full URI for your image; it should look like:
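The URI follows this pattern (again, the account ID, region, and repository name here are placeholders):

```
123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
```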
There’s a ton of extra configuration in this dialog under “Advanced Configuration,” and you’ll want to read through it and fill out anything useful to you. One important option is “Auto-Configure CloudWatch Logs,” which will send your containerized app’s logs to CloudWatch.
By default, your app will run on AWS Fargate as the compute engine, not EC2. Fargate is AWS’s serverless compute engine for containers: it provisions and manages the underlying infrastructure that runs them. The main downside is that you won’t have login access to any actual instances, so you’ll need to make any changes through Docker image updates (which you should be doing anyway). If you don’t want to use Fargate, you can use standard EC2 instances, which will run the AWS ECS Container Agent.
In the next screen, you’ll define how many of your containers to run, and you’ll be given the option to create a load balancer between them:
You can also configure your service to use Auto Scaling, which will scale your application up and down depending on demand.
Once you’re done, click “Create” to launch your cluster. It will be viewable under the “Clusters” tab of the ECS Management Console once it’s created.