Preparing a Web App for Deployment

Before deploying a web application to the cloud, it helps to get to know the app you’re working with. In this section, we’ll set up a fully working app on your local machine first. That way, you can see how everything works before we move it to AWS.
You won’t need to write a single line of application code. To give you a quick start on your AWS journey, the Image Share app is already built and containerized with Docker, so you don’t need to worry about preparing your own app. This lets you focus on understanding the big picture: how the app runs and how its components connect. Most importantly, you’ll get a clear view of exactly what you’re deploying when we move to the cloud.
The practice app: an image-sharing app
Throughout this guide, we’ll use the Image Share Web App—a simple but representative image-sharing platform and a good example of the kind of app you can deploy on AWS. Users can upload images, browse a gallery, and interact with the interface in a smooth, responsive way, much like an app you might build for your own project.

The app is built using Django, a popular web framework for Python, with PostgreSQL as the database. Everything is containerized with Docker and configured through a docker-compose.yml file. This means each part of the application runs in its own isolated environment, and Docker handles the communication between them. With this setup, the app can run consistently on any machine—whether locally or in the cloud.
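Once you’ve cloned the project and created your local environment file (both covered in the steps below), you can ask Docker Compose to list the services the project defines. This is a quick, read-only way to see the components described above; the exact service names come from the project’s Compose file, so treat the ones in the comment as an illustration:
docker-compose -f docker-compose.local.yml --env-file .env.local config --services
# Typically prints one service per line, for example: web, db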
In this section, we’ll also use GitHub to access the app’s source code. If you’re not yet familiar with Docker or GitHub, we recommend checking out the companion guides: “Docker Basics” and “Git & GitHub Introduction” for a quick overview.
Why start with a local deployment?
Before we dive into AWS, it’s a good idea to get the app running on your own computer first.
Running the app locally gives you a better sense of how everything works, and it makes debugging easier if something goes wrong. You’ll be able to observe logs in real time, interact with the app through your browser, and confirm that all components are functioning correctly. If you hit a snag, it’s much easier to fix it in a local environment than to troubleshoot it inside AWS.
That said, if you’d prefer to jump straight into deploying the app on AWS, feel free to skip this section and move on to the next.
But if you’d like to understand the app’s architecture first, stick with us. Let’s get it running locally.
Choosing an IDE for setup
An Integrated Development Environment (IDE) with AI capabilities can make it much easier to navigate and understand the project. Here are two notable options:
Recommended (as of April 2025): Cursor

Cursor is an AI-powered IDE based on Visual Studio Code (VS Code). It offers features like code explanation, real-time troubleshooting, and guidance through unfamiliar files, making it particularly helpful for newcomers. Cursor provides a free tier with limited access, including 50 slow premium model uses per month and 2,000 completions. Extensive use requires a subscription; for example, the Pro plan includes 500 fast premium requests per month, with additional usage available through usage-based pricing.
Alternative: Visual Studio Code (VS Code)

VS Code is a widely used IDE known for its versatility and extensive extension library. It has introduced an AI-powered feature called Agent Mode, allowing developers to use natural language prompts to perform complex coding tasks. Accessing Agent Mode requires a GitHub Copilot subscription, which offers a free tier with limited usage and paid plans for more extensive access.
Integrating AI-powered IDEs like Cursor or VS Code's Agent Mode into your workflow streamlines development and accelerates learning through quick troubleshooting.
Setting up the app locally
Before we dive into AWS, let’s get this app running on your own machine. This approach lets you test everything in a controlled environment and gives you the confidence that the app is working as expected. Here's how to do it.
Step 1: Fork and clone the app
The app is available on GitHub, and the first step is to create your own copy so you can work with it freely.
Fork the repository
Visit the GitHub page for the Image Share Web App and click Fork in the top-right corner.

This will create a personal copy under your GitHub account.
Clone your forked copy
Copy your fork’s repository URL from GitHub, then open your terminal and run the following command from the folder where you want to store the project:
git clone https://github.com/your-username/image-sharing-app.git
Once the cloning is complete, switch into the project directory:
cd image-sharing-app
You now have a full copy of the application’s source code and configuration files ready to run on your machine.
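If you’d like to double-check the result, two quick commands confirm that the clone points at your fork and that the files used in the next steps are present:
git remote -v    # should list your fork’s URL as origin
ls               # look for docker-compose.local.yml and .env.example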
Step 2: Install Docker and Docker Compose (if you don’t have them)
To run the app, you’ll need Docker and Docker Compose installed.
- Docker allows you to package and run applications in isolated environments called containers.
- Docker Compose makes it easy to start up multiple containers with a single command.
If you don’t have Docker installed, download and install Docker Desktop:
https://www.docker.com/products/docker-desktop
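To verify the installation, you can print the versions from your terminal; recent versions of Docker Desktop ship both tools together:
docker --version
docker-compose --version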
Step 3: Set up your environment
Before running the app, you’ll need to prepare your environment file. From the project’s root folder, copy the example file:
cp .env.example .env.local
This creates a local environment configuration. You can edit .env.local to adjust settings if needed, but the defaults are ready for testing.
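For orientation, a local environment file for a Django and PostgreSQL project usually looks something like the sketch below. The variable names here are illustrative, not taken from this project’s .env.example, so rely on the actual file for the real names:
DEBUG=True                      # verbose error pages for development
SECRET_KEY=dev-only-secret-key  # never reuse a development key in production
POSTGRES_DB=imageshare          # illustrative database name
POSTGRES_USER=imageshare        # illustrative database user
POSTGRES_PASSWORD=localpassword # illustrative local-only password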
Step 4: Start the app with Docker Compose
Now it’s time to build and launch the app using Docker Compose.
1. Build and run containers
docker-compose -f docker-compose.local.yml --env-file .env.local up --build -d
This command builds all required containers—web, database, etc.—and starts them in the background.
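If you want to confirm that the containers actually came up, Docker Compose can list their status and stream the web service’s logs (press Ctrl+C to stop following):
docker-compose -f docker-compose.local.yml --env-file .env.local ps
docker-compose -f docker-compose.local.yml --env-file .env.local logs -f web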
2. Run migrations
After the containers are running, initialize the database with Django’s migration system:
docker-compose -f docker-compose.local.yml --env-file .env.local exec web python manage.py migrate
3. Create a superuser
To access the Django admin panel, create a superuser account:
docker-compose -f docker-compose.local.yml --env-file .env.local exec web python manage.py createsuperuser
You’ll be prompted to enter a username, email, and password.
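If you’d rather not answer the prompts interactively, for example when scripting the setup, Django 3.0 and later also support a non-interactive variant. The credentials below are placeholders, so substitute your own:
docker-compose -f docker-compose.local.yml --env-file .env.local exec \
  -e DJANGO_SUPERUSER_USERNAME=admin \
  -e DJANGO_SUPERUSER_EMAIL=admin@example.com \
  -e DJANGO_SUPERUSER_PASSWORD=change-me \
  web python manage.py createsuperuser --noinput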
You might notice we include two specific options:
-f docker-compose.local.yml
This flag tells Docker Compose which configuration file to use.
By default, Docker Compose looks for a file named docker-compose.yml. But in this project, we’ve separated our configurations into different files—one for local development and one for production. Using -f docker-compose.local.yml ensures you’re running the version designed for your local machine, with development-friendly settings like local volume mounts, debugging enabled, and no reliance on cloud services like S3.
--env-file .env.local
This flag specifies which environment variable file to use.
Environment files hold important app settings—like database credentials, secret keys, and debug mode. We’ve created separate files for local and production environments to avoid mixing settings across environments.
By pointing to .env.local, you make sure Docker Compose loads the right configuration for development. For example:
- DEBUG=True so errors are easier to track
- Local PostgreSQL credentials
- No need to configure S3 just yet
In contrast, production uses .env.prod, which disables debug mode and includes stricter settings like S3 access keys and allowed hosts.
Together, these options give you flexibility. You can switch between development and production simply by changing the Compose file and environment file. This pattern keeps your local setup safe and clean, while making your production setup secure and optimized for the cloud.
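To make the switch concrete, starting the app in production would look roughly like the command below. The production Compose file name (docker-compose.prod.yml) is an assumption for illustration, so check the repository for the actual file name:
docker-compose -f docker-compose.prod.yml --env-file .env.prod up --build -d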
Step 5: View the app in your browser
Once everything is up and running, you can access the app here:
Main site: http://127.0.0.1:8000
Try creating an account, uploading an image, and exploring the interface. This helps confirm that all the core services—web, database, static files—are working correctly.

Admin panel: http://127.0.0.1:8000/admin
To log into the admin panel, use the superuser credentials you created earlier with the createsuperuser command.
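If the page doesn’t load, a quick way to check whether the web container is responding at all is to request the site from the command line:
curl -I http://127.0.0.1:8000    # a 200 response (or a redirect) means the app is serving requests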

Optional: Import test data instead of creating it manually
If you’d rather not create test data one entry at a time, you can import pre-made data through the Django admin panel.
Download the test dataset
We’ve prepared a zip file containing two CSV files (for user and image data) along with sample image files.
Download the data via this link.
Access the admin panel
Go to http://127.0.0.1:8000/admin and log in with your superuser account.
Navigate to data management
At the top of the admin dashboard, click Data Management.

Then select Import Data.

In the import panel, import the user data first, then the image data.
Once the import is complete, you’ll see that 1 user and 8 sample images have been added to the app.

Now, return to the main site at http://127.0.0.1:8000, and you should see the sample images displayed in the gallery.
Next, you need to place the sample image files in the project’s media folder. Create a media folder inside the project folder and copy the image files into it, as shown in the command sketch and screenshot below:
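In the terminal, that copy step amounts to something like the sketch below. The download path is hypothetical, so adjust it to wherever you extracted the zip file, and match the destination layout to the screenshot:
mkdir -p media                                  # create the media folder in the project root
cp -r ~/Downloads/test-data/images/* media/     # hypothetical path to the extracted sample images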

Once you’ve uploaded the data and images, you’ll see the sample posts on the home page.

Step 6: Stopping and cleaning up
When you're done working with the app, you can stop the running containers by using the following command:
docker-compose -f docker-compose.local.yml --env-file .env.local down
If you want to completely reset your environment—including removing the database and all uploaded media—use the -v flag:
docker-compose -f docker-compose.local.yml --env-file .env.local down -v
This is useful when you want to start fresh. Just keep in mind that all stored data, including uploaded images and user accounts, will be permanently deleted.
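If you want to confirm the cleanup, you can list what Docker still has running and which volumes remain:
docker ps          # the app’s containers should no longer be listed
docker volume ls   # after down -v, the project’s volumes should be gone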
Wrapping up
You’ve just completed the first step toward deploying a real web application on AWS. By running the Django Image Sharing App locally, you’ve gained a clear understanding of how its components fit together—and how Docker Compose makes managing those components easier.
With the local setup complete, you’re now ready to shift your focus to the cloud. In the next section, you’ll launch your first EC2 instance and begin configuring a live environment where your app can run securely and reliably. Because you’ve already seen the app in action, moving to AWS will feel less abstract and far more manageable.