Essentially, ENIs are virtual network cards you can attach to your EC2 instances. They provide network connectivity for your instances, and attaching more than one to an instance allows it to communicate on two different subnets.
What Are Elastic Network Interfaces?
You’re already using them if you’re running on EC2: the default interface, eth0, is attached to an ENI that was created when you launched the instance, and it handles all traffic sent to and from the instance.
You’re not limited to just one network interface though—attaching a secondary network interface allows you to connect your EC2 instance to two networks at once, which can be very useful when designing your network architecture. You can use them to host load balancers, proxy servers, and NAT servers on an EC2 instance, routing traffic from one subnet to another.
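If you want to try this yourself, creating and attaching a secondary interface takes only a couple of API calls. Here's a minimal sketch using Python and boto3; the subnet, security group, and instance IDs are placeholders you'd swap for your own.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a new ENI in the target subnet (placeholder IDs throughout).
resp = ec2.create_network_interface(
    SubnetId="subnet-0123456789abcdef0",
    Groups=["sg-0123456789abcdef0"],
    Description="secondary interface for the management subnet",
)
eni_id = resp["NetworkInterface"]["NetworkInterfaceId"]

# Attach it to an instance as its second interface (eth0 is device index 0).
ec2.attach_network_interface(
    NetworkInterfaceId=eni_id,
    InstanceId="i-0123456789abcdef0",
    DeviceIndex=1,
)
```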
ENIs have security groups, just like EC2 instances, which act as a built-in firewall. You can use these, rather than a Linux firewall like iptables, to manage traffic between subnets.
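Changing the security groups on an interface is a single API call. A quick boto3 sketch, again with placeholder IDs (note that the list you pass fully replaces the existing groups):

```python
import boto3

ec2 = boto3.client("ec2")

# Overwrite the security groups attached to an existing ENI (placeholder IDs).
ec2.modify_network_interface_attribute(
    NetworkInterfaceId="eni-0123456789abcdef0",
    Groups=["sg-0aaaaaaaaaaaaaaaa", "sg-0bbbbbbbbbbbbbbbb"],
)
```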
A common use case for ENIs is the creation of management networks. This allows you to keep public-facing applications like web servers in a public subnet while locking SSH access down to a private subnet on a secondary network interface. In this scenario, you would connect over a VPN to the private management subnet, then administer your servers as usual.
In this diagram, the subnet on the left is the public subnet, which communicates with the internet over the VPC's Internet Gateway. The subnet on the right is the private management subnet, which in this example can only be reached through an AWS Direct Connect gateway; this lets the on-premises network handle authentication and simply extends that network into the cloud. You could also use AWS Client VPN, which runs a managed VPN endpoint you connect to using certificate credentials.
ENIs are also often used as the primary network interfaces for Docker containers launched on ECS using Fargate: each Fargate task gets its own ENI, which allows it to handle complex networking, apply firewall rules using security groups, and be launched into a private subnet.
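When launching a Fargate task, you specify which subnets and security groups its ENI should use. Here's a rough boto3 sketch; the cluster, task definition, subnet, and security group identifiers are all hypothetical.

```python
import boto3

ecs = boto3.client("ecs")

# Run a Fargate task whose ENI lands in a private subnet with a locked-down
# security group. All identifiers here are placeholders.
ecs.run_task(
    cluster="my-cluster",
    launchType="FARGATE",
    taskDefinition="my-task:1",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }
    },
)
```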
According to AWS, ENIs have the following attributes:
- A primary private IPv4 address from the IPv4 address range of your VPC
- One or more secondary private IPv4 addresses from the IPv4 address range of your VPC
- One Elastic IP address (IPv4) per private IPv4 address
- One public IPv4 address
- One or more IPv6 addresses
- One or more security groups
- A MAC address
- A source/destination check flag
- A description
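You can inspect these attributes for any interface with a describe call. A short boto3 sketch (the ENI ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# Fetch a single ENI (placeholder ID) and print a few of its attributes.
eni = ec2.describe_network_interfaces(
    NetworkInterfaceIds=["eni-0123456789abcdef0"]
)["NetworkInterfaces"][0]

print("Primary private IP:", eni["PrivateIpAddress"])
print("MAC address:", eni["MacAddress"])
print("Source/dest check:", eni["SourceDestCheck"])
print("Security groups:", [g["GroupId"] for g in eni["Groups"]])
print("Description:", eni["Description"])
```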
ENIs themselves are entirely free to use, though the traffic passing through them is still subject to the standard AWS data transfer charges.
Implementing Cheap Failover With ENIs
Because ENIs can be detached from one instance and attached to another on the fly, they're commonly used to implement failover in network designs. If you're running a service that needs high availability, you can run two servers: a primary and a standby. If the primary server fails for any reason, the service can be switched over to the standby server.
ENIs can fulfill this pattern quite easily: launch two servers, create a secondary ENI to use as the switch, and attach it to the primary server, optionally with an Elastic IP. Whenever you need to fail over to the standby instance, you simply move the ENI over, either manually or with a script like the one below.
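A minimal version of that swap might look like this in boto3, assuming you already know the ENI and standby instance IDs (both are placeholders here):

```python
import boto3

ec2 = boto3.client("ec2")

ENI_ID = "eni-0123456789abcdef0"             # the "floating" interface
STANDBY_INSTANCE_ID = "i-0fedcba9876543210"  # the standby server

# Look up the current attachment, if any, and detach it from the failed primary.
eni = ec2.describe_network_interfaces(
    NetworkInterfaceIds=[ENI_ID]
)["NetworkInterfaces"][0]

attachment = eni.get("Attachment")
if attachment:
    ec2.detach_network_interface(AttachmentId=attachment["AttachmentId"], Force=True)
    # Wait until the ENI is free before reattaching it.
    ec2.get_waiter("network_interface_available").wait(NetworkInterfaceIds=[ENI_ID])

# Attach it to the standby instance as a secondary interface.
ec2.attach_network_interface(
    NetworkInterfaceId=ENI_ID,
    InstanceId=STANDBY_INSTANCE_ID,
    DeviceIndex=1,
)
```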
However, ENIs aren't the best way to go about this in AWS's ecosystem. AWS supports auto scaling, which can achieve the same effect in a more cost-effective manner. Rather than paying extra for an idle standby, you would instead run several smaller servers in an auto-scaling fleet. If one of the instances goes down, it isn't a big deal; a new server can be spun up quickly to pick up the traffic.
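For reference, setting up such a fleet is roughly one call once you have a launch template; this boto3 sketch uses a hypothetical template name and placeholder subnet IDs.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Create a small fleet from an existing launch template (placeholder names/IDs).
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-fleet",
    LaunchTemplate={"LaunchTemplateName": "web-server", "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=3,
    VPCZoneIdentifier="subnet-0123456789abcdef0,subnet-0fedcba9876543210",
)
```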
While manually switching the ENI over to the standby instance is easy, automating the failover process is a lot more complicated. You'll have to set up a CloudWatch alarm on the primary instance that fires when the instance goes down (optionally sending you a notification in the process), publish that alarm to an SNS topic, and trigger a Lambda function subscribed to that topic to handle the detaching and reattaching using the AWS SDK, as sketched below. It's doable, but we highly recommend looking into Route 53 DNS failover rather than this.
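A sketch of what such a Lambda function could look like, using the same boto3 calls as the manual script above and hypothetical ENI and instance IDs:

```python
import json
import boto3

ec2 = boto3.client("ec2")

# Hypothetical identifiers; in practice you'd supply these via environment
# variables or derive them from the alarm message.
ENI_ID = "eni-0123456789abcdef0"
STANDBY_INSTANCE_ID = "i-0fedcba9876543210"

def handler(event, context):
    # The CloudWatch alarm's state change arrives as a JSON string inside the SNS record.
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    if message.get("NewStateValue") != "ALARM":
        return  # only fail over when the primary is actually unhealthy

    eni = ec2.describe_network_interfaces(
        NetworkInterfaceIds=[ENI_ID]
    )["NetworkInterfaces"][0]

    # Detach from the failed primary if the interface is still attached.
    attachment = eni.get("Attachment")
    if attachment:
        ec2.detach_network_interface(
            AttachmentId=attachment["AttachmentId"], Force=True
        )
        ec2.get_waiter("network_interface_available").wait(
            NetworkInterfaceIds=[ENI_ID]
        )

    # Reattach to the standby instance as a secondary interface.
    ec2.attach_network_interface(
        NetworkInterfaceId=ENI_ID,
        InstanceId=STANDBY_INSTANCE_ID,
        DeviceIndex=1,
    )
```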
If you do want to automate the process, you can follow this guide from AWS on how to link a CloudWatch alarm to Lambda.