Dockerizing and Deploying Django Apps

Learn how to use Docker, Compose, and Ansible to build a simple yet robust deployment pipeline for Django applications.

We're going to build a simple Django app that I like to call the Number Cruncher.

The app itself consists of a UI for submitting Fibonacci and Nth prime requests, the API endpoints, and a Celery instance for the number crunching.

Needless to say, this app won't be the next Google, but that's not the point!

The goal is to learn how to use Docker to collaborate on a project, and to build a production deployment process.

Setting Up the Number Cruncher App

Let's tackle the app itself before digging into the infrastructure.

First things first, we'll create the app and the directory structure.

django-admin startproject number_cruncher
cd number_cruncher
python3 manage.py startapp core
rm core/views.py
mkdir core/templates
mkdir core/views
mkdir core/forms

Like with any Django app, remember to add it to the INSTALLED_APPS list in the settings file.

INSTALLED_APPS = [
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
    "core",
]

The schema for this app is very simple: just one model for storing calculation results. Open core/models.py and add the following.

from django.db import models


class CalculationResult(models.Model):
    id = models.AutoField(primary_key=True)
    result = models.IntegerField(null=True)

We'll use HTMX to make polling for results easy. Open core/views/fibonacci.py to create the view.

from django.shortcuts import render

from core.forms import FibonacciForm
from core.models import CalculationResult
from core.tasks import calculate_fibonacci


def fibonacci(request):
    if request.method == "POST":
        form = FibonacciForm(request.POST)
        if form.is_valid():
            number = form.cleaned_data["number"]
            result = CalculationResult()
            result.save()
            calculate_fibonacci.delay(result.id, number)
            return render(request, "core/calculation_result.html", {"result": result})
    else:
        form = FibonacciForm()
    return render(request, "core/fibonacci_form.html", {"form": form})
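The view imports FibonacciForm from core.forms, but the article doesn't show the form itself. Here's a minimal sketch of what it might look like, given that the view reads cleaned_data["number"] (the max_value cap is my own assumption, added to keep the slow recursive task bounded):

# core/forms/__init__.py (a sketch; the article doesn't include the form)
from django import forms


class FibonacciForm(forms.Form):
    # The view reads cleaned_data["number"], so a single integer field suffices.
    # max_value is an assumption, to keep the recursive Celery task bounded.
    number = forms.IntegerField(min_value=0, max_value=40)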

Making a GET request to this endpoint will render a form for submitting the Fibonacci request.

POST requests (i.e. submitting the form) will render only a small chunk of HTML that contains the result of the calculation, or a loading message if the calculation is not finished.

The magic of HTMX will take the POST response and swap out a specific part of the page.
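The article doesn't show the URL configuration that wires these endpoints up. Assuming the view modules used here and the polling endpoint we'll meet shortly in calculation_result.html, a sketch might look like this (module paths and view names are assumptions):

# number_cruncher/urls.py (a sketch)
from django.contrib import admin
from django.urls import path

from core.views.calculation import calculation_result
from core.views.fibonacci import fibonacci

urlpatterns = [
    path("admin/", admin.site.urls),
    path("fibonacci/", fibonacci, name="fibonacci"),
    # Polling endpoint hit by calculation_result.html (shown below).
    path("calculation/<int:result_id>/result/", calculation_result, name="calculation_result"),
]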

HTMX is a powerful new way of building user interfaces with Django. I’ve written other guides about HTMX and Django if you want to read more about that.

Let's take a look at the Django template for the Fibonacci form.

{% extends "core/base.html" %} {% load static %} {% block title %}Fibonacci Calculator{% endblock %}{% block content %}<form method="post" hx-post="/fibonacci/" hx-target="#result" hx-swap="outerHTML"> {% csrf_token %} <button type="submit">Calculate Fibonacci</button></form><div id="result"></div>{% endblock %}

For this guide, I'll assume you're familiar with the basics of Django templating.

Submitting the form triggers a POST to the previous endpoint and replaces the result div with the response content.

The response content comes from the calculation_result.html template.

{% if result.result %}
  <div id="result">
    {{ result.result }}
  </div>
{% else %}
  <div id="result"
       hx-get="/calculation/{{ result.id }}/result/"
       hx-trigger="load delay:2s"
       hx-swap="outerHTML">
    Loading...
  </div>
{% endif %}

Because the Fibonacci endpoint passes off calculation to Celery, we need to poll for the actual result. We receive the ID of the CalculationResult object on submitting the form, so we can use that to poll until the final result is ready.

When we poll and the result is not ready, we get back a chunk of HTML whose hx-trigger attribute causes a follow-up request. The polling stops once the answer is ready, because the rendered template then drops all HTMX attributes from the result div.
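The polling endpoint itself isn't shown in the article. A minimal sketch, assuming it lives in core/views/calculation.py and simply re-renders the same fragment:

# core/views/calculation.py (a sketch; name and location are assumptions)
from django.shortcuts import render

from core.models import CalculationResult


def calculation_result(request, result_id):
    # Re-render the fragment; the template decides whether to keep polling.
    result = CalculationResult.objects.get(pk=result_id)
    return render(request, "core/calculation_result.html", {"result": result})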

Background Tasks with Celery

So how does the calculation actually happen?

The call to calculate_fibonacci.delay(result.id, number) triggers a Celery task. Since the processing happens in the background, the final answer isn't part of the POST response, which is why the client polls for it.
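For .delay() to work, the project needs a Celery application that discovers shared tasks. The article doesn't show this setup, but the conventional Celery-with-Django configuration looks like this:

# number_cruncher/celery.py (a sketch of the standard setup)
import os

from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "number_cruncher.settings")

app = Celery("number_cruncher")
# Pick up CELERY_* settings (broker URL, result backend) from Django settings.
app.config_from_object("django.conf:settings", namespace="CELERY")
app.autodiscover_tasks()

The project's number_cruncher/__init__.py would then import this app (from .celery import app as celery_app) so it loads when Django starts.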

Let's take a look at the Celery task that calculates the Fibonacci numbers.

from celery import shared_task

from core.models import CalculationResult


@shared_task(queue="fibonacci")
def calculate_fibonacci(result_id, n):
    def fibonacci(num):
        if num <= 1:
            return num
        else:
            return fibonacci(num - 1) + fibonacci(num - 2)

    result = fibonacci(n)
    calculation_result = CalculationResult.objects.get(pk=result_id)
    calculation_result.result = result
    calculation_result.save()
    return result

There are far more efficient ways to calculate Fibonacci numbers, but since I wanted this task to actually take up some time, I went for the slower recursive solution.
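The compose file we'll see later also defines a prime queue, so the Nth-prime task presumably mirrors the Fibonacci one. A sketch, where the task name and the deliberately naive primality test are my assumptions:

# core/tasks.py (a sketch; the article only shows the Fibonacci task)
@shared_task(queue="prime")
def calculate_nth_prime(result_id, n):
    def nth_prime(k):
        # Naive trial division, deliberately slow like the recursive Fibonacci.
        count, candidate = 0, 1
        while count < k:
            candidate += 1
            if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
                count += 1
        return candidate

    calculation_result = CalculationResult.objects.get(pk=result_id)
    calculation_result.result = nth_prime(n)
    calculation_result.save()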

That's the heart of the Django app!

But how do we share this project with other developers?

How do we put the application online and deploy updates?

Dockerizing the Application

Docker can help other developers contribute to the project by providing a consistent environment, regardless of whether a developer is on Windows, Linux, or any other environment.

To read more about setting up Docker and Docker Compose, take a look at this FCD series on Django for Production.

The Dockerfile describes the exact steps for building the application environment.

FROM python:3.11-alpine

ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1

WORKDIR /usr/src/app

RUN apk update && apk add --no-cache postgresql-dev && \
    apk add --no-cache --virtual .build-deps gcc python3-dev musl-dev

COPY requirements.txt /usr/src/app
RUN pip install --upgrade pip && pip install -r requirements.txt

COPY . /usr/src/app

The FROM instruction sets the base image that the rest of the build extends. Each COPY command copies files from our local filesystem into the new image. Once the image is built, we can use it to run multiple containers, as we'll see with Docker Compose.
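The requirements.txt copied into the image is never listed in the article, but based on what the app uses, it presumably contains at least something like this (the version pins are assumptions):

Django>=4.2
celery>=5.3
redis>=4.5
psycopg2>=2.9
gunicorn>=21.2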

Up next is the docker-compose.yml file that defines how our containers should interact.

version: '3.8'

x-worker-opts: &worker-opts
  build:
    context: .
    dockerfile: Dockerfile
  volumes:
    - ${PWD}:/usr/src/app
  environment:
    - CELERY_BROKER_URL=redis://redis:6379/0
    - CELERY_RESULT_BACKEND=redis://redis:6379/0
    - POSTGRES_USER
    - POSTGRES_PASSWORD
    - POSTGRES_HOST
    - POSTGRES_DB
  depends_on:
    - redis

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: number_cruncher
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ${PWD}:/usr/src/app
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_HOST
      - POSTGRES_DB
    ports:
      - "8000:8000"
    depends_on:
      - redis

  fibonacci-worker:
    command: ./start_celery.sh -Q fibonacci --concurrency=1
    <<: *worker-opts

  prime-worker:
    command: ./start_celery.sh -Q prime --concurrency=1
    <<: *worker-opts

  redis:
    image: redis:latest
    container_name: redis
    ports:
      - "6379:6379"

  database:
    image: postgres:15
    restart: always
    volumes:
      - /var/lib/postgres:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_DB

The x-worker-opts block actually doesn't define a container! This is instead a reusable block that we can extend using a built-in YAML concept called anchors. Since the definitions are almost the same, the fibonacci-worker and prime-worker containers use anchors to avoid duplication.
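Note that the worker services invoke a ./start_celery.sh helper that the article doesn't include. Assuming it simply forwards its arguments to the Celery worker, a minimal version would be:

#!/bin/sh
# start_celery.sh (a sketch): forward queue and concurrency flags to Celery.
exec celery -A number_cruncher worker --loglevel=info "$@"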

The web and Celery containers use the build key to define what should be done when someone runs the docker-compose build command. This feature is nice because it makes it possible to build and run everything with docker-compose alone.

The volumes defined on the web and Celery containers map the local directory into the container.

Because of this mapping, the Django live reload feature will still function as expected.
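On the Django side, the settings file needs to read the environment variables passed in by Compose. The article doesn't show that part; here's a sketch consistent with the variables above:

# number_cruncher/settings.py (a sketch; the article doesn't show these settings)
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("POSTGRES_DB"),
        "USER": os.environ.get("POSTGRES_USER"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD"),
        # Default to the compose service name for the database host.
        "HOST": os.environ.get("POSTGRES_HOST", "database"),
        "PORT": "5432",
    }
}

CELERY_BROKER_URL = os.environ.get("CELERY_BROKER_URL")
CELERY_RESULT_BACKEND = os.environ.get("CELERY_RESULT_BACKEND")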

To bring up all of these containers:

docker-compose build
docker-compose up -d

The -d flag at the end of the up command tells Docker Compose to run the containers in the background (detached), rather than streaming all of their logs to the console.

Preparing for Production Deployment

The production configuration will build on top of the existing development setup.

The main difference is that in production, we need to switch to a suitable web server, instead of the Django dev server.

This means a web server like Nginx that is capable of efficiently serving HTTPS traffic.

In addition to Nginx, we'll use Gunicorn to provide concurrency, so requests can be executed in parallel.

Luckily, we can reuse most of the existing Docker configuration.

We'll create a new file named docker-compose.production.yml and add the following:

version: '3.8'

x-worker-opts: &worker-opts
  build:
    context: .
    dockerfile: Dockerfile
  volumes:
    - ${PWD}:/usr/src/app
  environment:
    - CELERY_BROKER_URL=redis://redis:6379/0
    - CELERY_RESULT_BACKEND=redis://redis:6379/0
    - POSTGRES_USER
    - POSTGRES_PASSWORD
    - POSTGRES_HOST
    - POSTGRES_DB
  depends_on:
    - redis

volumes:
  sock:

services:
  nginx:
    image: nginx
    restart: always
    network_mode: "host"
    volumes:
      - /etc/letsencrypt:/etc/letsencrypt
      - ./nginx.prod.conf:/etc/nginx/nginx.conf
      - ./static:/static
      - sock:/sock
    ports:
      - "443:443"
      - "80:80"

  web:
    image: number_cruncher
    container_name: number_cruncher
    command: gunicorn --preload --bind=unix:/sock/app.sock --workers=4 number_cruncher.wsgi
    volumes:
      - sock:/sock

  fibonacci-worker:
    command: celery -A number_cruncher worker -Q fibonacci --concurrency=1 --loglevel=info
    <<: *worker-opts

  prime-worker:
    command: celery -A number_cruncher worker -Q prime --concurrency=1 --loglevel=info
    <<: *worker-opts

You'll notice that I don't include any reference to the database or redis services here.

That's because we're going to layer the Compose files, meaning this production config will override settings, while the original docker-compose.yml acts as a base file.
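The nginx.prod.conf mounted into the nginx container isn't included in the article either. A minimal sketch that proxies to the Gunicorn socket and serves static files, where the domain and certificate paths are placeholders:

# nginx.prod.conf (a sketch; domain and certificate paths are placeholders)
events {}

http {
    include /etc/nginx/mime.types;

    upstream app {
        # Gunicorn listens on the unix socket shared via the sock volume.
        server unix:/sock/app.sock;
    }

    server {
        listen 80;
        # Redirect all plain HTTP traffic to HTTPS.
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        server_name example.com;

        # Certificate paths assume certbot's default layout.
        ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

        location /static/ {
            alias /static/;
        }

        location / {
            proxy_pass http://app;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}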

To run this layered config setup, execute the following:

docker-compose -f docker-compose.yml -f docker-compose.production.yml up -d

Deploying the Number Cruncher App

Now that we've got our Number Cruncher app running smoothly in our local Docker environment, it's time to streamline our deployment process using GitHub Actions and Ansible. This integration will automate the deployment of our app to our production environment whenever we push changes to our main branch. Let's dive into how to set this up.

In your project directory, create the .github/workflows directory.

Create a new file named deploy.yml. This YAML file will define our deployment workflow.

We want our deployment to run when changes are pushed to the main branch. So, we'll start our deploy.yml with:

name: Deploy Number Cruncher

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repository
        uses: actions/checkout@v2

      - name: Set up SSH key
        uses: webfactory/ssh-agent@v0.8.0
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}

      - name: Run Ansible playbook
        run: ansible-playbook -i inventory.ini playbook.yml

You’ll notice that we use a file named inventory.ini when invoking ansible-playbook. This file contains your “inventory” of servers that Ansible should interact with.

Here’s an example inventory.ini file:

webserver ansible_host=<your-vm-ip-address>

Each line corresponds to a server, along with options such as ansible_host. You can define other options here as well, such as which user to log in as, but we'll leave this as it is.

Now, with our GitHub Action configured for deployment, we need to focus on setting up secure SSH communication between our GitHub repository and the production server. Here’s how we go about it:

On your local machine, generate an SSH key pair that GitHub Actions will use to log in to the production instance.

ssh-keygen -f ansible-key

# Upload the public key to the remote server if running this command locally.
ssh-copy-id -i ansible-key.pub user@<your-vm-ip-address>

Replace the IP address with that of one of your own virtual machines.

To store the private key in GitHub, go to your GitHub repository's settings, find the "Secrets" section, and click on "New repository secret".

Name the secret SSH_PRIVATE_KEY and paste in the private key.

Next, make sure Ansible is available wherever the playbook runs. Ansible is agentless, so the production VM itself only needs SSH access and Python; nothing Ansible-specific has to be installed there. GitHub's ubuntu-latest runners already ship with Ansible, but if you want to run the playbook from your own Ubuntu machine, install it like so:

sudo apt install software-properties-common
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt install ansible

Creating the Ansible Playbook

With the GitHub Actions and SSH setup ready, we now turn our attention to creating the playbook.yml file.

This Ansible playbook will update the Number Cruncher app on the production server and restart the Docker containers to reflect those changes.

In your project directory, create a file named playbook.yml. This file will define the tasks that Ansible will perform on your production server.

The playbook will have several tasks: updating the app's code, rebuilding the Docker image, and restarting the containers.

---
- hosts: all
  become: yes
  tasks:
    - name: Pull latest code from repository
      git:
        repo: 'your-repository-url'
        dest: /usr/local/number_cruncher
        version: main

    - name: Build Docker images
      command: docker-compose -f docker-compose.yml -f docker-compose.production.yml build
      args:
        chdir: /usr/local/number_cruncher

    - name: Restart Docker containers
      command: docker-compose -f docker-compose.yml -f docker-compose.production.yml up -d
      args:
        chdir: /usr/local/number_cruncher
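Before wiring this into CI, you can test the playbook from your own machine, pointing it at the key generated earlier:

ansible-playbook -i inventory.ini playbook.yml --private-key ansible-key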

Conclusion

I hope this tutorial has shown that deploying an app doesn't have to be overly complex!

With a minimum of infrastructure setup, we now have an automated deployment pipeline.

Liked this guest article? Found something Zach could improve on? Let him know on Discord!
