Docker is a way of packaging applications and all of their required dependencies and configuration into a single image. Essentially, this turns your server configuration into something that can be managed with git and synchronized across every machine.
What Is Docker, and Why Is It Useful?
Docker makes it much easier to manage your production server configuration. Rather than setting up servers manually, you can automate the whole process to run when you build your container image.
This image will be the same for everyone on your team, so you’ll be able to instantly run your app with all the required dependencies managed for you. This also fixes the classic “it doesn’t work on my machine” problem, because Docker images will run the same everywhere.
This image can also be easily distributed and run on a fleet of servers with very little performance impact. Because Docker isn’t a virtual machine, you don’t have to deal with the overhead of running a guest operating system for each application. This makes containers cheap and quite scalable.
For more information on what Docker does, and whether or not you should use it for your app, you can read our breakdown of whether or not it’s worth the headache. For now, we’ll assume you’re ready to get started, and dive into the technical details.
Create a Dockerfile
The entrypoint for the build of your container is called a Dockerfile. Create a new project directory to house your files, then create a new Dockerfile simply named Dockerfile with no extension:
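Assuming a hypothetical project directory named my-app, the setup might look like this:

```shell
mkdir my-app
cd my-app
touch Dockerfile
```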
Open this file up in your favorite text editor.
You probably don’t want to start everything from scratch, so you can fork an existing image from the Docker Hub, such as Ubuntu:
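For example, a minimal Dockerfile based on Ubuntu might start with the following (the version tag is an assumption; pick whichever release you want):

```dockerfile
FROM ubuntu:20.04
```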
Note that even if you’d rather start from nothing, your Dockerfile still needs a FROM instruction; in that case, you build the image FROM scratch, a special empty base image.
During the build process, Docker creates a modifiable “layer” that you can build on top of. You are allowed to copy files and run commands as if they were running on the machine, similarly to how you would go about setting up a server manually. You’ll do all of your server setup in this file, essentially automating the process you’d go through if you fired up a blank Linux box and were told to bring it into production. This can be a time-consuming process.
You can execute most of these commands from the command line, and set the image up manually. If you want to get a bash shell in a container, you can run:
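Assuming an Ubuntu-based image, that might look like:

```shell
docker run -it ubuntu bash
```

The -i and -t flags keep stdin open and allocate a pseudo-terminal, giving you an interactive shell inside the container.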
And save your changes with:
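The docker commit command saves a container’s filesystem changes as a new image. The container ID and image name here are placeholders; you can find the real container ID with docker ps:

```shell
docker commit container_id my-new-image
```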
However, you should only use this for testing, and do all of your actual configuration in a Dockerfile.
Dockerfile Commands
We’ll go through most of the common commands and explain their usage and the best practices for applying them. For a more extended reference, you can consult this cheatsheet, or the “Best Practices For Writing Dockerfiles” entry in the Docker docs.
COPY
The COPY instruction is fairly simple: it allows you to populate your Docker image with data and configuration.
For example, if you had a folder in your project directory called /config/nginx/ that contained your nginx.conf, sites-available/, and other directories, you could copy that to the default nginx config location in your container:
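A sketch of that instruction, assuming nginx’s default /etc/nginx/ config location:

```dockerfile
COPY config/nginx/ /etc/nginx/
```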
This way, you can keep all of your nginx config files in the same project directory as everything else, meaning they’ll be version controlled with git alongside the rest of your code.
RUN
The RUN instruction runs a command in your container, and saves the changes to the container’s filesystem.
Each RUN instruction creates a new “layer”, so you’ll likely want to do complicated setup inside an install script. You’ll have to copy this script over to the image, and then run it:
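For example, assuming a hypothetical install.sh in your project directory:

```dockerfile
COPY install.sh /tmp/install.sh
RUN chmod +x /tmp/install.sh && /tmp/install.sh
```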
Inside this script, you’re free to do any configuring you need, including installing programs from apt.
If you’d like to cut down on your container build times, you can create a base container with all the programs you need already installed, then build your main container FROM that container, though you’ll then need to manage dependencies and configuration separately.
CMD
CMD defines the executable used by your container on startup if nothing else is specified. This is how you will load your app once everything is completed.
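For an nginx container, for example, the CMD might keep nginx running in the foreground so the container doesn’t exit:

```dockerfile
CMD ["nginx", "-g", "daemon off;"]
```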
Only the last CMD command takes effect. You can override the CMD on startup with the following syntax:
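Anything placed after the image name on the command line is run in place of the CMD. The image name here is a placeholder:

```shell
docker run my-image /bin/bash
```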
ENTRYPOINT
ENTRYPOINT is a special version of CMD that allows the container to run as an executable. For example, if all the container does is run nginx, you can specify nginx as the ENTRYPOINT:
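In the Dockerfile, that looks like:

```dockerfile
ENTRYPOINT ["nginx"]
```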
And then run that container on the command line, passing in arguments as arguments to the entrypoint:
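Arguments after the image name are appended to the ENTRYPOINT, so this runs nginx -g "daemon off;" inside the container (the image name is a placeholder):

```shell
docker run my-nginx-image -g "daemon off;"
```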
EXPOSE
The EXPOSE instruction marks certain container ports as open to the running host. For example, if you’re running a web server, you’ll likely want to expose port 80.
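In the Dockerfile, that looks like:

```dockerfile
EXPOSE 80
```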
This doesn’t bind the port automatically, but it informs the Docker runtime that the port is available. To actually bind it, you’ll want to use the -P flag (uppercase) with no arguments to bind all exposed ports.
Running Your App
First, you’ll need to build your image:
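The -t flag tags the image with a name of your choosing (my-app here is a placeholder), and the trailing dot points the build at the current directory, where your Dockerfile lives:

```shell
docker build -t my-app .
```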
You can run your container with docker run:
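Using the hypothetical tag from the build step:

```shell
docker run my-app
```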
However, this isn’t all that useful on its own, because there’s no way to interact with it. To do that, you’ll have to bind ports using the -p flag, like so:
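For example, assuming an image tagged my-nginx-image serving HTTP on port 80:

```shell
docker run -p 3000:80 my-nginx-image
```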
This runs the nginx container and binds the container’s HTTP output to port 3000 on the local machine.
However, with a standard setup you’ll have to rebuild the Docker container every time you make changes to your app. This is obviously far from ideal, but luckily there are ways around it. One way is to mount a volume in your container to form a real-time link between your container and the host OS (where you’re doing your development). For example, if your HTML source is in the src/html/ folder of your Docker project directory, you can mount it to /usr/local/nginx/html/ with:
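A sketch of that command, with the image name as a placeholder:

```shell
docker run -p 3000:80 -v "$(pwd)/src/html:/usr/local/nginx/html:ro" my-nginx-image
```

Changes you make to files under src/html/ on the host now show up inside the running container immediately, with no rebuild.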
The “ro” flag ensures this connection is read-only, so the Docker container cannot make any changes to the host OS. This can also be used for allowing Docker containers to access persistent data stored on the host OS.