How to add Async Tasks to Django with Celery
In this tutorial you’ll learn how to set up Celery on a Django project for executing async tasks.
First, I’ll explain what asynchronous tasks are and why you might use them.
If you already know, feel free to skip the faff and jump right to the steps below.
What are Asynchronous Tasks?
Asynchronous (or async for short) tasks are used to execute code in the background, outside the “main thread” of the application.
Web-based applications work by using a request and response system.
The browser submits a request (e.g. GET http://example.com) to a server, which then executes some code that returns a response (such as an HTML document).
In a standard web application, the bulk of requests complete extremely quickly (<20ms).
However, occasionally you need to execute code which takes longer than a user would reasonably expect to wait in a standard web request.
A good example is integrating with AI APIs such as ChatGPT, Claude.ai and so on.
When you make a request to an AI service (e.g. asking a question), you may notice that it can take more than a few seconds to respond.
Generally, the longer the prompt, the longer the response time.
This is a great example of where you may wish to execute some code in an async task.
When Should I use Async Tasks?
Generally, I use async tasks for any code that meets any of the following criteria:
- It will most likely take longer than 1-2 seconds to execute
- The request relies on an external service which may fail (e.g. sending an email)
- You need to run code in batches
Here are some specific examples:
- Making requests to an AI service
- Processing video or audio files
- Sending emails
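To give a taste of what this looks like in practice, here’s a rough sketch of the kind of task you could write for the email example above once Celery is set up (the send_welcome_email name and retry settings are purely illustrative, not part of this tutorial’s code):

from celery import shared_task
from django.core.mail import send_mail


@shared_task(bind=True, max_retries=3)
def send_welcome_email(self, address):
    # Illustrative only: send an email via an external service and
    # retry after 30 seconds if that service fails.
    try:
        send_mail(
            "Welcome!",
            "Thanks for signing up.",
            "noreply@example.com",
            [address],
        )
    except Exception as exc:
        raise self.retry(exc=exc, countdown=30)

Because the work happens in a background worker, the web request that triggers it can return immediately.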
Adding Asynchronous Tasks to Django
Before you start, you’ll need the following:
- Starting project code – This is the code from our Add Postgres Database to Dockerized Django Project tutorial
- Docker Desktop
The starting project contains a simple project using Django, Postgres and React.
It’s all Dockerized, so to test it, simply run:
docker compose up
And then navigate to http://localhost:8000/admin.
Add RabbitMQ Service
For Celery to work, we need to add a message queue server to our project.
A message queue is essentially a tool that allows us to queue messages which can be consumed by our application.
It works like this:

- Our app posts things to the queue
- The queue keeps a list of things to be processed
- The worker pulls things off the queue and processes them
This diagram shows only one worker, however you could add as many as you like.
The more workers running, the faster the items in the queue will be processed.
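If it helps to picture the pattern, here’s a toy sketch using Python’s built-in queue module (just an illustration of the idea; Celery and RabbitMQ handle all of this for us):

import queue
import threading

task_queue = queue.Queue()


def worker():
    # The worker pulls things off the queue and processes them.
    while True:
        item = task_queue.get()
        print(f"Processing {item}")
        task_queue.task_done()


threading.Thread(target=worker, daemon=True).start()

# The app posts things to the queue and carries on immediately.
task_queue.put("resize-video-42")
task_queue.put("send-email-17")

task_queue.join()  # wait for the toy worker to drain the queue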
To add RabbitMQ to the project, open docker-compose.yml and add the following lines to the services block:
  mq:
    image: rabbitmq:3.13.7-alpine
    environment:
      - RABBITMQ_DEFAULT_USER=mquser
      - RABBITMQ_DEFAULT_PASS=bunnypass123
This adds a new service called mq which uses the rabbitmq image from Docker Hub.
The environment variables set the credentials needed for our application to connect to RabbitMQ.
In production, these values should come from environment variables (e.g. a .env file) rather than being hard-coded in docker-compose.yml.
Install Celery
Next we need to install Celery, the Python library used for adding items to the queue and processing them.
Open requirements.txt and add the following to the end:
celery==5.4.0
Then run docker compose build to rebuild our Docker image and install our new dependency from requirements.txt.
Configure Celery in Django
Once Celery has been installed, we need to configure it for our project.
This happens in a few stages and is documented in the First steps with Django documentation on the Celery docs site.
Open backend/backend/__init__.py and add the following contents:
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as celery_app
__all__ = ('celery_app',)
This is taken directly from the official docs and, as the comment says, it ensures the Celery app is always imported when Django starts, so that the @shared_task decorator (which we’ll be using later) uses this app.
Next, create a new file at backend/backend/celery.py and populate it with the following contents:
import os
from celery import Celery
# Set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'backend.settings')
app = Celery('backend')
# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
# should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')
# Load task modules from all registered Django apps.
app.autodiscover_tasks()
@app.task(bind=True, ignore_result=True)
def debug_task(self):
    print(f'Request: {self.request!r}')
This is also taken from the official Celery docs.
Essentially it sets up Celery for our project and allows us to configure it using variables in settings.py that are prefixed with CELERY_.
It also adds a basic debug_task() which can be used for debugging issues.
Next, open backend/backend/settings.py and add the following line to the end:
CELERY_BROKER_URL = os.environ.get("CELERY_BROKER_URL")
This sets a new configuration variable called CELERY_BROKER_URL and sets it to the value of the environment variable by the same name.
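Because of the namespace='CELERY' option we set in celery.py, any other Celery setting can also be configured here by prefixing it with CELERY_. For example (optional, not needed for this tutorial):

CELERY_TASK_SERIALIZER = "json"  # serialize task arguments as JSON
CELERY_TASK_TIME_LIMIT = 60  # hard limit (in seconds) before a running task is killed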
Now open docker-compose.yml and add the following line to the environment block of the backend service:
- CELERY_BROKER_URL=amqp://mquser:bunnypass123@mq:5672//
This configures an environment variable called CELERY_BROKER_URL which contains the full URL (including credentials) for our RabbitMQ service.
The host mq matches the name of the service we added earlier, and the credentials match the RABBITMQ_DEFAULT_USER and RABBITMQ_DEFAULT_PASS values.
Note: The values are hard coded here, however if you wanted to deploy this, it’s best to make the variables configurable using a .env file.
Add Celery Worker Service
Next we will add a worker service that will be responsible for processing the jobs added to our RabbitMQ service.
Open docker-compose.yml and add the following to the end of the services block (above volumes):
  worker:
    build:
      context: .
    volumes:
      - ./backend:/backend
    command: >
      sh -c "celery -A backend worker -l INFO"
    environment:
      - DB_HOST=db
      - DB_NAME=db
      - DB_USER=user
      - DB_PASS=localdevpw
      - CELERY_BROKER_URL=amqp://mquser:bunnypass123@mq:5672//
    depends_on:
      - backend
      - mq
This does the following:
- Adds a new “worker” service to our project which runs using the same Docker image that our Django app runs in (determined by context: .)
- Sets the command that runs the Celery worker for our project (which is called backend) and sets the log level to “INFO”
- Sets the environment variable configuration values for the database and Celery broker
- Sets depends_on to backend and mq, to ensure that both the backend (Django) application and RabbitMQ services are running before starting the worker
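Note: a single worker is enough for this tutorial, but as mentioned earlier you can run more than one. One way to do this with Docker Compose is to scale the worker service when starting the project:

docker compose up --scale worker=3

Each replica runs its own Celery worker and pulls tasks from the same RabbitMQ queue.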
Test Celery Task
Now we should be set up to run asynchronous tasks in our project.
Let’s create a task we can use to test with.
Create a file at backend/core/tasks.py and add the following contents.
from celery import shared_task


@shared_task
def hello_task(name):
    print(f"Hello {name}. You have {len(name)} characters in your name.")
This code contains a one-line Python function called hello_task that accepts a parameter called name and prints a greeting that includes the length of the name.
The @shared_task decorator is used to configure this function as a task which can be called asynchronously.
If the project is already running in Docker, press CTRL + C in the terminal to stop the services.
Then run docker compose up to start it again (this time the worker should start).
We can test this by starting a new shell and calling our new task.
Open a separate Terminal (or PowerShell) window and run the following:
docker compose run --rm backend sh -c "python manage.py shell"
This will launch the Django shell for our project.
In the shell, type the following:
from core.tasks import hello_task
hello_task.delay("James")
Note that we don’t call the function directly (e.g. hello_task("James")).
Instead, we call hello_task.delay("James").
The delay function is added by the shared_task decorator, and causes the function execution to be queued rather than run directly.
Whenever you want to trigger a task to run in the background, you simply call it with .delay() instead of calling it directly.
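For example, here is a hypothetical Django view (not part of the starting project) that queues the task and responds without waiting for it to finish:

from django.http import JsonResponse

from core.tasks import hello_task


def greet(request):
    # Hypothetical view: queue the task and return immediately.
    name = request.GET.get("name", "James")
    result = hello_task.delay(name)
    return JsonResponse({"task_id": result.id})

The view responds in milliseconds while the worker prints the greeting in the background.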
The output of the shell commands above will look something like this.

If you open up the tab that’s running our docker compose services, you can see the output of our task:

That’s how you add async tasks to a Django project.
Please leave any comments or suggestions below, and remember to subscribe to our YouTube channel.
