Why Is This Useful?

For example, you could use this to host the static content and code for your website, then run all of your worker nodes on ECS to handle the actual serving of that content. This works around the restriction against storing persistent data on a container's local disk, because the volume mount is backed by an external file system that persists across ECS deployments.

While you can run Docker containers on ECS with local volumes, a single shared volume that every container in the deployment can access is a very useful tool for many workloads.

Creating an EFS Volume

To set this up, you’ll need to create an EFS file system. This is fairly straightforward and can be done from the EFS Management Console.

Create a new file system: enter a name, then choose the VPC where your ECS cluster is located. Once the file system is created, make a note of its File System ID, as you’ll need it later.
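If you’d rather script this, the AWS CLI can do the same thing. Here’s a minimal sketch, where the creation token and Name tag are arbitrary placeholders, and the subnet and security group IDs stand in for your own; you’ll need a mount target in each subnet your cluster uses:

# Create the file system (the response includes the FileSystemId)
aws efs create-file-system \
    --creation-token ecs-shared-volume \
    --tags Key=Name,Value=ecs-shared-volume

# Expose it inside the VPC: one mount target per subnet
aws efs create-mount-target \
    --file-system-id fs-XXXXXX \
    --subnet-id subnet-XXXXXX \
    --security-groups sg-XXXXXX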

If you need to preload data on this filesystem, or just manually add or change files in your EFS volume, you can mount it to any EC2 instance. You’ll need to install amazon-efs-utils:
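On Amazon Linux, that’s a single package install (on other distributions you may have to build the package from source):

sudo yum install -y amazon-efs-utils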

And then mount it with the following command, using the ID:
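# /mnt/efs is just an example mount point; replace fs-XXXXXX with your File System ID
sudo mkdir -p /mnt/efs
sudo mount -t efs fs-XXXXXX:/ /mnt/efs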

This way, you can directly view and edit the contents of your EFS volume as if it were just another drive on your server. You’ll also want to make sure you have nfs-utils installed, since the EFS mount helper depends on it for this all to work properly.
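If you want the mount to come back after a reboot, you could also add an entry to /etc/fstab; this is a sketch assuming the same /mnt/efs mount point as above:

fs-XXXXXX:/ /mnt/efs efs defaults,_netdev 0 0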

Connecting To an ECS Deployment

Next, you’ll have to hook up ECS to this volume. Create a new task definition in the ECS Management Console. Scroll to the bottom, and select “Configure Via JSON.” Then, replace the empty “volumes” key with the following JSON, adding the “family” key at the end:
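Here’s a sketch of that configuration, which defines an NFS-backed Docker volume pointing at the EFS endpoint; the volume name and the family value are placeholders you can change:

"volumes": [
    {
        "name": "efs-volume",
        "dockerVolumeConfiguration": {
            "scope": "task",
            "driver": "local",
            "driverOpts": {
                "type": "nfs",
                "device": "fs-XXXXXX.efs.us-east-1.amazonaws.com:/",
                "o": "addr=fs-XXXXXX.efs.us-east-1.amazonaws.com,nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2"
            }
        }
    }
],
"family": "efs-task-definition"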

Of course, you will need to replace fs-XXXXXX.efs.us-east-1.amazonaws.com with your EFS volume’s actual address. Once you save the JSON, you should see the new volume appear in the task definition.

You can use this in your container definition as a mount point. Select “Add Container” (or edit an existing one), and under “Storage And Logging,” select the newly created volume and specify a container path.
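The equivalent in the container definition’s JSON looks something like this; the container path is just an example, and "efs-volume" matches the volume name defined above:

"mountPoints": [
    {
        "sourceVolume": "efs-volume",
        "containerPath": "/usr/share/nginx/html"
    }
]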

Save the task definition, and when you launch a service or task in your cluster with this new definition, all of the containers will be able to access your shared file system.