Distributing Traffic with Elastic Load Balancing (ELB)

Now that your app is connected to RDS for accessing the database and using S3 for image storage, you're no longer tied to one machine. That means you can start running the app on more than one server—something that becomes important as soon as traffic picks up or you want better uptime.
In this part, you'll create a second EC2 instance that runs the exact same version of your app. Then, using an Application Load Balancer (ALB)—one of the load balancer types available in AWS—you'll place both servers behind it. Once that's set up, AWS will automatically handle traffic routing. If one server is busy or goes offline, the other one keeps the app running—your users won’t notice a thing.
A quick note on cost (as of April 2025):
Using an Application Load Balancer (ALB) incurs charges based on two components:
- Hourly Charge: $0.0225 per hour for each ALB.
- Load Balancer Capacity Units (LCUs): $0.008 per LCU per hour. An LCU measures dimensions like new connections, active connections, processed bytes, and rule evaluations.
For example, if your application uses 1 LCU per hour, the cost comes to roughly $0.732 per day, or about $21.96 over a 30-day month.
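If you want to sanity-check that estimate yourself, the arithmetic is just the hourly ALB charge plus one LCU charge, multiplied out. A quick shell calculation (assuming a steady 1 LCU per hour and the prices listed above):

echo "(0.0225 + 1 * 0.008) * 24" | bc -l        # ~0.732 USD per day
echo "(0.0225 + 1 * 0.008) * 24 * 30" | bc -l   # ~21.96 USD per 30 days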
To avoid unexpected charges, make sure to delete the ALB once your practice is complete.
What Is a Load Balancer?
A load balancer helps spread web traffic across more than one server.
Without it, all your users go to a single EC2 instance. If that server goes down, so does your app. With a load balancer in place, requests get passed to whichever server is available and healthy. You don’t have to change anything in your app code—the load balancer just sits in front of your instances and handles routing.
In this setup, we’ll use an Application Load Balancer (ALB), which is designed for routing HTTP traffic. It works well for web apps like the one you're building.
Implementing the Load Balancer
To use a load balancer, you need at least two EC2 instances running the same version of your app. Since your current server is already set up with RDS and S3, the easiest way to create a second one is to make an AMI (Amazon Machine Image) from the existing instance. That way, you can launch a copy without redoing all the setup. After that, you can set up the load balancer itself, starting with a target group.
Here’s what we’ll do:
- Create an AMI from the EC2 instance you already have.
- Launch a new EC2 instance using that image.
- Adjust the Django app on the second EC2 instance.
- Create a target group to hold both servers.
- Set up the Application Load Balancer and connect it to the target group.
- Test that everything is working as expected.
Let’s go step by step.
Step 1: Create an AMI from the Existing EC2 Instance
We’ll start by creating a reusable image of your current EC2 instance. This lets us spin up an identical server without having to repeat any setup work.
- Go to EC2 in the AWS Console.
- Find the EC2 instance that’s already running the app.
- Select it. Under Actions, go to Image and templates and click Create image.

- Call it something like image-sharing-app-RDS-AMI so it’s easy to find later.
- Leave the rest as-is and click Create image.

It’ll take a few minutes. You can check the status under AMIs.
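If you prefer working from the terminal, the same image can be created with the AWS CLI. This is just an optional sketch; the instance ID below is a placeholder for your own:

# Create an AMI from the running instance (replace the instance ID with yours)
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name image-sharing-app-RDS-AMI

# Check progress until the image state shows "available"
aws ec2 describe-images --owners self \
  --filters Name=name,Values=image-sharing-app-RDS-AMI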
What Happens If You Don’t Use an AMI?
You could manually create a second EC2 instance and repeat the setup. But it’s easy to miss a step—a config file, a Python package, or an environment variable.
That leads to inconsistent behavior between servers, which becomes a problem when the load balancer starts rotating traffic.
Creating the second instance from an AMI avoids all that. You get a clean copy of everything in one go.
Step 2: Launch a Second EC2 Instance Using the AMI
Once the image is ready, we’ll use it to launch a second server. This ensures the app setup, environment, and dependencies match exactly.
1. Go to the AMIs section and select image-sharing-app-RDS-AMI. Click Launch instance from image.

2. Set the name to something like image-sharing-app-custom-VPC-2nd.

3. Use the same key pair as the first instance.

4. Under Network settings:
- Choose the same VPC used by your first instance.
- Select a public subnet that’s in a different Availability Zone from the first one.
- Make sure Auto-assign public IP is enabled.
- Use the same security group as the first instance.

Click Launch instance. When it’s running, grab the public IP and test it in the browser. The app should load right away.
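For reference, launching the copy from the CLI looks roughly like this. It is a hedged sketch: the AMI, subnet, and security group IDs are placeholders, and the instance type is an assumption (use whatever type your first server runs on):

# Launch a second instance from the AMI into a different public subnet
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --key-name your-key \
  --subnet-id subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0 \
  --associate-public-ip-address \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=image-sharing-app-custom-VPC-2nd}]'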
Step 3: Adjust the Django app on the second EC2 instance
Now that the second EC2 instance is running, you’ll need to adjust its app settings. By default, the Django app may still be tied to the IP address of the first server. To make the app accessible from the new public IP, you’ll update the ALLOWED_HOSTS setting and restart the app.
Add a host entry to your SSH config file and connect to the instance
Open (or create) your SSH config file at ~/.ssh/config. Copy the entry you made for the first instance and update the host alias and HostName:
Host image-sharing-new-2nd
HostName SECOND_INSTANCE_PUBLIC_IP
User ec2-user
IdentityFile ~/.ssh/your-key.pem
Be sure to replace SECOND_INSTANCE_PUBLIC_IP with the public IP of the new instance.
Save and exit.
Now you can connect like this:
ssh image-sharing-new-2nd
Update the .env.prod file
Open the .env.prod file and update the IP address entries:
ALLOWED_HOSTS=localhost,SECOND_INSTANCE_PUBLIC_IP
# Server IP (for nginx)
SERVER_IP=SECOND_INSTANCE_PUBLIC_IP
Rerun the app
Once the file is updated, you need to rebuild and restart the containers so the new environment settings take effect. Run the following commands from your project directory:
docker-compose -f docker-compose.prod.yml --env-file .env.prod down
docker-compose -f docker-compose.prod.yml --env-file .env.prod up --build -d
After a moment, the containers will restart with the new settings in place. Now open both the original and the new instance’s IP addresses in your browser; you should see the app running at both addresses.
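If you’d rather check from the terminal than the browser, a quick curl against each public IP should return a successful response (2xx or 3xx) once both servers are up. The IPs below are placeholders:

curl -I http://1ST_INSTANCE_PUBLIC_IP/
curl -I http://2ND_INSTANCE_PUBLIC_IP/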

Step 4: Create a Target Group
Next, we’ll set up a group that tells the load balancer which EC2 instances should receive traffic.
1. Go to Target Groups from the EC2 sidebar. Click Create target group.

2. Set the following, then click Next.
- Target type: Instances
- Target group name: Name it something like image-sharing-app-group-TG
- Protocol: HTTP
- Port: 80
- IP address type: IPv4
- VPC: Your custom VPC (image-sharing-app-vpc)
- Protocol version: HTTP1

3. Now register both EC2 instances. Check both instances and click Include as pending.

4. Once you confirm that both instances are listed under Review targets, click Create target group.
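The same target group can also be created and populated from the CLI if you prefer. This is an optional sketch; the VPC ID, instance IDs, and ARN are placeholders for your own values:

# Create the target group in your custom VPC
aws elbv2 create-target-group \
  --name image-sharing-app-group-TG \
  --protocol HTTP \
  --port 80 \
  --target-type instance \
  --vpc-id vpc-0123456789abcdef0

# Register both EC2 instances as targets
aws elbv2 register-targets \
  --target-group-arn YOUR_TARGET_GROUP_ARN \
  --targets Id=i-0aaaaaaaaaaaaaaaa Id=i-0bbbbbbbbbbbbbbbb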

Step 5: Create an Application Load Balancer
With both servers ready and grouped, the next step is to create the load balancer that will sit in front of them.
1. Go to Load Balancers in EC2 and click Create Load Balancer.

2. Choose Application Load Balancer.

Fill out the basic settings:
- Load balancer name: Name it something like image-sharing-app-ALB
- Scheme: Internet-facing
- IP address type: IPv4

For network mapping:
- VPC: Your custom VPC (image-sharing-app-vpc)
- Select two public subnets in different AZs

Pick the security group that allows HTTP traffic on port 80.

In the Listeners section, leave the HTTP listener on port 80.
Set the default action to forward to the target group you just made.

Click Create load balancer. It’ll take a minute or two to spin up.
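If you want the CLI equivalent for this step too, it comes in two parts: create the load balancer, then attach an HTTP listener that forwards to the target group. The subnet, security group, and ARN values below are placeholders:

# Create an internet-facing ALB across two public subnets
aws elbv2 create-load-balancer \
  --name image-sharing-app-ALB \
  --type application \
  --scheme internet-facing \
  --subnets subnet-0aaaaaaaaaaaaaaaa subnet-0bbbbbbbbbbbbbbbb \
  --security-groups sg-0123456789abcdef0

# Add an HTTP:80 listener that forwards traffic to the target group
aws elbv2 create-listener \
  --load-balancer-arn YOUR_ALB_ARN \
  --protocol HTTP \
  --port 80 \
  --default-actions Type=forward,TargetGroupArn=YOUR_TARGET_GROUP_ARN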
Step 6: Adjust the Django app ALLOWED_HOSTS setting
Once the load balancer is live, the app is reachable through the DNS name generated for it.
Find it on the Load Balancers page and copy the DNS name (it should look like image-sharing-appALB-xyz.elb.amazonaws.com).

To actually access the app through it, you need to add that DNS name to Django’s ALLOWED_HOSTS on both EC2 instances.
Open the .env.prod file and edit the ALLOWED_HOSTS entry. To keep the file identical on both servers, include the public IPs of the first and second instances as well as the ALB DNS name:
ALLOWED_HOSTS=localhost,1ST_INSTANCE_PUBLIC_IP,2ND_INSTANCE_PUBLIC_IP,YOUR_ALB_DNS_NAME
Once the file is updated, you need to rerun the containers so the new environment settings take effect. Run the following commands from your project directory:
docker-compose -f docker-compose.prod.yml --env-file .env.prod up --build -d
Now, you can access the app via the DNS name.
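If you want to double-check from the terminal, you can hit the ALB’s DNS name with curl and confirm that both targets report as healthy. The DNS name and ARN are placeholders:

# The app should respond through the load balancer
curl -I http://YOUR_ALB_DNS_NAME/

# Both instances should show a "healthy" state once health checks pass
aws elbv2 describe-target-health --target-group-arn YOUR_TARGET_GROUP_ARN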

Wrapping up
You now have two EC2 servers running your app and a load balancer directing traffic between them. If one server stops, the app keeps running. If traffic increases, the setup is ready to scale.
In the next section, you’ll connect this setup to an Auto Scaling Group so AWS can add or remove servers automatically when traffic changes.
Cleanup Reminder:
If you're just practicing or testing this setup, make sure to delete the Application Load Balancer once you're done. The target group itself does not incur charges, so you don't need to delete it immediately, but removing the ALB will prevent unexpected charges on your AWS bill.
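For completeness, the cleanup can also be done from the CLI. The ARNs below are placeholders for your own resources:

# Delete the load balancer to stop the hourly and LCU charges
aws elbv2 delete-load-balancer --load-balancer-arn YOUR_ALB_ARN

# Optional: remove the target group once the ALB is gone
aws elbv2 delete-target-group --target-group-arn YOUR_TARGET_GROUP_ARN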