Docker Compose, Django, PostgreSQL, and Redis & Celery Baseline Configuration

In the fast-paced world of software development, efficiency is key. When it comes to handling data operations, managing asynchronous tasks, and orchestrating containerized environments, Redis, Celery, Docker, Docker Compose, and PostgreSQL are indispensable tools. But what exactly are they, and how do they fit into your tech stack? Let's find out.

What is Redis?

Redis is an in-memory data structure store that acts as a high-performance message broker. In tech lingo, a message broker is a mediator between different software components, facilitating communication by storing and routing messages.

What makes Redis particularly powerful is its lightning-fast speed and versatility. It stores data primarily in RAM (random-access memory), which allows for extremely quick read and write operations.

In addition to its core messaging capabilities, Redis supports a variety of data structures beyond simple key-value pairs, including strings, lists, sets, hashes, and sorted sets. This versatility enables developers to implement a wide range of use cases efficiently.
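
As a quick illustration, here is a minimal sketch using the redis-py client (the redis package we'll list later in requirements.txt). It assumes a Redis server is reachable at localhost:6379, for example the redis service we'll define in Docker Compose; the key names are just examples.

import redis

# Connect to a local Redis server (adjust host/port to your setup)
r = redis.Redis(host="localhost", port=6379, db=0)

# Simple key-value pair
r.set("greeting", "hello")
print(r.get("greeting"))                   # b'hello'

# List used as a lightweight queue
r.rpush("task_queue", "task-1", "task-2")
print(r.lpop("task_queue"))                # b'task-1'

# Sorted set, e.g. a leaderboard
r.zadd("scores", {"alice": 10, "bob": 7})
print(r.zrange("scores", 0, -1, withscores=True))  # member/score pairs, lowest first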

Imagine you have a bunch of friends who want to send you messages, but they all need to go through a central mailbox. That's sort of what Redis is like, but for computers.

Redis is like a super-fast messenger for computer programs. It helps different programs or parts of a program talk to each other by storing and sending messages quickly. These messages could be anything from simple notes to more complex instructions.

So, in simple terms, Redis is like a speedy postman for computer messages, making sure they get where they need to go without getting lost or delayed.

What is Celery?

Let's say you have a lot of things to do on your computer, like sending emails, processing data, or doing calculations. Celery is like a helpful organizer for these tasks. It helps your computer manage and prioritize these tasks efficiently.

Celery is a distributed task queue system for handling asynchronous tasks in web applications.

Here's how it typically works (a short code sketch follows the list):

  1. Task Definition: You define tasks in your application code. These tasks can be anything from processing data, sending emails, generating reports, or any other computational task.

  2. Task Queues: When a task needs to be executed, instead of executing it immediately, your application places it in a message queue managed by Celery. This queue acts as a buffer, holding tasks until they can be processed.

  3. Workers: Celery employs worker processes that continuously monitor the task queue. When a worker is available, it fetches tasks from the queue and executes them. This allows tasks to be processed independently and concurrently, maximizing efficiency.

  4. Result Backend: After a task is executed, Celery can optionally store the result in a result backend such as a database or message broker. This enables your application to retrieve the results later if needed.

  5. Monitoring and Scalability: Celery provides tools for monitoring the status and performance of tasks and workers. Additionally, it supports scaling by allowing you to add or remove worker processes dynamically based on workload.
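
To make the flow above concrete, here is a minimal, self-contained sketch. The app name and the Redis URLs are illustrative and assume a broker running at localhost:6379; they are not part of the project we build later in this post.

from celery import Celery

# 1. Define the Celery app and a task (broker/backend URLs are illustrative)
app = Celery("demo",
             broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/0")

@app.task
def add(x, y):
    return x + y

# 2./3. Calling .delay() places the task on the queue; a running worker
#       (started with `celery -A demo worker`) picks it up and executes it.
result = add.delay(2, 3)

# 4. The result backend lets the caller fetch the outcome later.
print(result.get(timeout=10))   # 5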

In summary, Redis is a versatile data store and message broker optimized for low-latency data operations, while Celery is a specialized task queue system tailored for managing asynchronous tasks in distributed systems. While both can be used for similar purposes, they serve different roles within a software architecture. Let's try it out!

Create a GitHub repository with a README file and clone it to your local machine using SSH

Here are the step-by-step instructions to achieve this:

  1. Create a GitHub Repository:

    • Go to GitHub.

    • Log in to your account.

    • Click on the "+" sign in the top-right corner and select "New repository."

    • Enter a name for your repository, optionally add a description, choose whether it should be public or private, and click "Create repository."

  2. Add a README file:

    • Once your repository is created, you'll be taken to the repository's page.

    • Click on the "Add README" button or manually create a README.md file by clicking "Create new file" and entering "README.md" in the file name field.

    • You can add some initial content to the README file if you'd like.

  3. Set up SSH Key:

    • If you haven't already set up an SSH key, you need to generate one. Open a terminal window on your local machine.
  4. Generate SSH Key:

    • Run the following command to generate a new SSH key, replacing your_email@example.com with the email address connected to your GitHub account.

            ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
      
    • Press Enter to accept the default file location and optionally enter a passphrase (recommended).

  5. Add SSH Key to SSH Agent:

    • Start the SSH agent by running the command:

            eval "$(ssh-agent -s)"
      
    • Add your SSH private key to the SSH agent:

            ssh-add ~/.ssh/id_rsa
      
  6. Add SSH Key to GitHub:

    • Display your public key and copy it to your clipboard:

            cat ~/.ssh/id_rsa.pub
      
    • Go to your GitHub account settings.

    • Click on "SSH and GPG keys" in the left sidebar.

    • Click on "New SSH key" and paste your SSH key into the "Key" field.

    • Give your key a descriptive title and click "Add SSH key."

  7. Clone Repository:

    • Go to your repository on GitHub.

    • Click on the green "Code" button and make sure "SSH" is selected.

    • Copy the SSH URL provided.

    • Open a terminal window and navigate to the directory where you want to clone the repository.

    • Run the following command, replacing <repository_url> with the SSH URL you copied:

            git clone <repository_url>
      
    • This will clone the repository to your local machine.

Now you have successfully created a GitHub repository, added a README file, set up SSH keys, and cloned the repository to your local machine using SSH.

Create a requirements.txt list

Keep in mind the dependencies required for PostgreSQL (psycopg2), Redis, and Celery; add them to the list manually if they are not already installed in your environment.

Generate a list of dependencies using:

$ pip freeze > requirements.txt
$ cat requirements.txt
asgiref==3.8.1
certifi==2024.2.2
charset-normalizer==3.3.2
Django==5.0.4
docker==7.0.0
idna==3.7
packaging==24.0
requests==2.31.0
sqlparse==0.4.4
urllib3==2.2.1
psycopg2>=2.9
redis>=5.0
celery>=5.3

Create a Dockerfile

What exactly is a Dockerfile? Crafting one might seem daunting initially, but think of it simply as a recipe for creating personalized Docker images.

  • Set up Docker:

    It's essential to set up Docker and Docker Desktop on your local system. For educational purposes, opt for Docker Community Edition.

  • Make the docker app image:

    The next step is to include a Dockerfile in your project. A Dockerfile can be seen as a set of steps to construct your image and later your container.

    To begin, make a new file named Dockerfile in the main directory of your project. Then, follow each step carefully, just like in the example provided.

      # Use the official Python image on an Alpine Linux base
      FROM python:3.12-alpine

      # Ensure Python outputs everything immediately without buffering,
      # so error logs reach the terminal straight away
      # (any non-empty value enables this behaviour)
      ENV PYTHONUNBUFFERED=1

      # Update the package index and install the build dependencies needed to
      # compile and run PostgreSQL-related packages (like psycopg2) on Alpine Linux
      RUN apk update && apk add postgresql-dev gcc python3-dev musl-dev
    
      # Set working directory in the container
      WORKDIR /django
    
      # Copy the requirements file into the container
      COPY requirements.txt requirements.txt
    
      # Upgrade pip
      RUN pip install --upgrade pip
      # Install Python dependencies
      RUN pip install --no-cache-dir -r requirements.txt
    
  • FROM python:3.12-alpine: Start with an image that has Python version 3.12 and uses Alpine Linux as its operating system.

  • ENV PYTHONUNBUFFERED=1: Set an environment variable to ensure Python outputs everything immediately without buffering, so we see messages instantly.

  • RUN apk update && apk add postgresql-dev gcc python3-dev musl-dev: Update the package index and install necessary packages for developing with PostgreSQL. This includes libraries and tools needed for compiling and running PostgreSQL-related code on an Alpine Linux system.

  • WORKDIR /django: Set the working directory inside the container to /django, where we'll be working with our Django application.

  • COPY requirements.txt requirements.txt: Copy the requirements.txt file from our local machine into the container. This file lists all the Python packages our application depends on.

  • RUN pip install --upgrade pip: Upgrade the pip package manager to the latest version.

  • RUN pip install --no-cache-dir -r requirements.txt: Install Python dependencies listed in the requirements.txt file into the container. The --no-cache-dir flag ensures that pip doesn't cache the downloaded packages, saving space in the container.

Configure Docker Compose including the Django app, PostgreSQL, Redis and Celery

What is a Docker Compose file? With a Docker Compose file, you can list out all the services you need for your project, along with their settings and how they should talk to each other. It's like giving each container a set of instructions so they know how to work together harmoniously.

In practical terms, this means that developers can specify the services (containers) required for their application, along with their respective configurations and dependencies, within a Docker Compose file. This file serves as a blueprint or declarative representation of the application's architecture and runtime environment.

So, in simple terms, a Docker Compose file is like a manager for your services, making sure they all play nice and work together smoothly without you needing to babysit each one individually. Cool, right?

# Docker Compose file format version; adjust to the latest version you use
version: '3.8'
services:

  # Redis
  redis:
    image: redis:alpine
    container_name: redis

  # Database Postgres
  db:
    image: postgres
    # ./data/db: persists the database files on our computer (the host)
    # /var/lib/postgresql/data: where PostgreSQL stores its data inside the container
    volumes:
      - ./data/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    container_name: postgres_db_for

  # Django Application
  app:
    build: .
    volumes:
    # /django is the workdir from the Dockerfile
      - .:/django
    # Define ports so we can access the container
    ports:
    # Map port 8000 on your computer to port 8000 of the container
      - 8000:8000
    # name of the image
    image: app:imageName-django
    container_name: django_app_container
    # Bind to 0.0.0.0 (all interfaces) so the server is reachable from outside
    # the container; 127.0.0.1 would only accept connections from inside it
    command: python manage.py runserver 0.0.0.0:8000
    # Define the running order
    depends_on:
      - db

  # Celery
  celery:
    restart: always
    build:
      context: .
    # Run a Celery worker that logs task activity at INFO level (reduce the verbosity in prod)
    command: celery -A core worker -l INFO
    volumes:
    # /django is the workdir from the Dockerfile
      - .:/django
    container_name: celery
    depends_on:
    # all of the ones below should start before celery
      - db
      - redis
      - app

Build and start the container

To build the Docker image, use the following command:

docker-compose build
  • Note that we are only building the image. There will be no container running here.

Next, create the Django project by running a one-off container:

docker-compose run --rm <serviceName> django-admin startproject core .
  • docker-compose: This tells your computer to use Docker Compose, a tool for defining and running multi-container Docker applications.

  • run: This command tells Docker Compose to run a command inside a service.

  • --rm: This flag tells Docker Compose to remove the container after it exits. It helps keep things clean by getting rid of temporary containers.

  • <serviceName>: The service to run the command in; in our Compose file this is app. Its default command only starts the Django development server, so with django-admin startproject core . we first create the project itself.

💡
Now, only the containers for your app and database will be created. This is because, in your Docker Compose file, the app service depends on the db service.

To start all the services defined in your Docker Compose configuration:

docker-compose up

When you run docker-compose up, Docker Compose reads your docker-compose.yml file, builds any necessary images, creates containers for each service, and then starts those containers.

This command is handy because it automates the process of starting your multi-container Docker applications. It handles tasks like creating networks for your services to communicate, setting up volumes, and running containers with the specified configurations.

Additionally, docker-compose up will stream the logs of all the containers to your terminal, so you can see the output from each service in real-time. It's one of the fundamental commands used when working with Docker Compose.

💡
When you run this, the celery container will throw an error, because the command "celery -A core worker -l INFO" in the Docker Compose file cannot find the "core" module yet. If you open Docker Desktop, you'll also see the celery container restarting over and over, since we set "restart: always" for it in the Compose file. Let's solve that below.

Build another Django App inside the container

Go to your container's terminal and run:

docker exec -it <djangoAppContainerName> sh

Create a new Django App by running

/django # python manage.py startapp <newappName>

The new app will also appear in your local directory because that's the magic of volumes.

Configure a basic Celery task

In the Django project named core, I've created a new file named "celery.py" with this code:

import os
from celery import Celery

# Point Celery at the Django settings module
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'core.settings')
app = Celery('core')
# Read any CELERY_-prefixed settings from Django's settings.py
app.config_from_object('django.conf:settings', namespace='CELERY')
# Auto-discover tasks.py modules in all installed apps
app.autodiscover_tasks()

Next, in your "core" project's "__init__.py" file, paste:

from __future__ import absolute_import, unicode_literals

# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as celery_app

__all__ = ('celery_app',)

Still in your "core" project's "settings.py" file, add the following lines at the end:

CELERY_BROKER_URL = "redis://redis:6379"
CELERY_RESULT_BACKEND = "redis://redis:6379"
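
Because celery.py reads settings with namespace='CELERY', any other Celery option can also live in settings.py as long as it carries the CELERY_ prefix. A few optional, illustrative examples (not required for this tutorial):

# Optional Celery settings, all picked up via the CELERY_ namespace
CELERY_TASK_SERIALIZER = "json"
CELERY_RESULT_SERIALIZER = "json"
CELERY_TIMEZONE = "UTC"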

And add your new app to INSTALLED_APPS too:

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'newapp'
]

In your newly created Django app (mine is called "newapp"), create a file called "tasks.py" and paste:

from __future__ import absolute_import, unicode_literals

from celery import shared_task

@shared_task
def add(x, y):
    return x + y

From here open a new terminal window and run:

docker exec -it <django_app_containerName> sh

Test the celery task by running this in the command line:

python manage.py shell
from newapp.tasks import add
add.delay(2, 2)
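
add.delay() returns an AsyncResult immediately while the worker does the work in the background. If you want to confirm the outcome, you can optionally query the result backend from the same Django shell:

result = add.delay(2, 2)
result.ready()           # True once the worker has finished the task
result.get(timeout=10)   # 4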

Congratulations on handling your first Celery task! In summary, Redis speeds up how your data moves around, Celery keeps track of tasks so you don't have to, Docker and Docker Compose make sure everything runs smoothly in little containers, and PostgreSQL holds onto your data securely.


Find the GitHub repo here. I've included the errors I encountered and how I solved them.

I trust we all learned something from this blog. If you found it helpful, give back and show your support by clicking the heart or like button, share this article as a conversation starter, and join my newsletter so we can continue learning together and you won't miss any future posts.

Thanks for reading until the end! If you have any questions or feedback, feel free to leave a comment.
