Create AI Chatbot with OpenAI, Django and React

In this tutorial, you’ll learn how to create a chatbot using OpenAI, Django and React.

By the end of the tutorial, you will have a functional Chatbot that looks like this:


Prerequisites

This tutorial starts where the last tutorial left off: How to add Async Tasks to Django with Celery. Want to learn how to set up each component in the starting project? Follow that tutorial first. Otherwise, you can jump right in with the starting code.

You’ll need:

  • Docker and Docker Compose installed on your machine.
  • An OpenAI Platform account (used to create an API key).
  • Basic familiarity with Django and React.

Design Overview

Let’s start with a design overview.

If you want to skip this and go right to the code, click here.

To build our Chatbot, we need to:

  • Integrate with the OpenAI Chat Completions API.
  • Put in place an AiChatSession model to keep track of the chat log.
  • Implement an AiRequest model to track individual AI requests.
  • Handle requests via Async tasks because they can take a long time to run.
  • Implement polling on the frontend to retrieve new messages.

OpenAI Pricing

OpenAI charges users for making calls to its APIs.

The cost varies based on the model and the amount of “tokens” you send and receive.

Generally, you can assume that 1 token is approx. 4 characters. You can learn about tokens on the official Key concepts documentation. You can see the price per 1M tokens on the OpenAI Platform Pricing page.

At the time of writing this, here’s what we can expect to spend.

We’ll use the gpt-4o-mini model, which is a low-cost model that charges $0.15 per 1M input tokens, and $0.60 per 1M output tokens.

If we assume an average of 1 token = 4 characters, then you could send 4M characters and receive 4M characters (roughly 1M tokens each way) and pay less than $1 USD.
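
To make this concrete, here’s the arithmetic as a quick Python sketch (using the gpt-4o-mini rates quoted above; adjust the numbers if the pricing page has changed):

# Rough cost estimate, assuming 1 token ≈ 4 characters
input_chars = 4_000_000
output_chars = 4_000_000

input_cost = (input_chars / 4) / 1_000_000 * 0.15    # $0.15 per 1M input tokens
output_cost = (output_chars / 4) / 1_000_000 * 0.60  # $0.60 per 1M output tokens

print(f"${input_cost + output_cost:.2f}")  # $0.75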

When creating this tutorial and testing the Chatbot, I paid less than $0.01 for the usage.

You are responsible for all usage costs, so I implore you to take measures to protect your API key.

Authentication features are not covered in this tutorial. Please ensure your app is not public.

How the Chat Completion APIs Work

The Chat Completions API is well documented. But here is a quick overview of how it works:

  1. Make a request including a list of messages and the model you wish to use.
  2. The API returns the LLM’s (large language model’s) response to the request.
  3. You keep a history of all messages, and keep building on them every time you send a new one.

For example, the first request may look like this:

{
  "model": "gpt-4o-mini",
  "messages": [
    {
      "role": "system",
      "content": "You are an assistant who's had too much coffee."
    },
    {
      "role": "user",
      "content": "How are you?"
    }
  ]
}

The “model” tells OpenAI which model you wish to use.

Each model has a different cost. In the example above, we use gpt-4o-mini which is one of the smaller, low-cost models. You can learn more about different models on the Models – OpenAI API documentation page.

Then you have an array of “messages” which have role and content values.

The “role” tells the model who is stating the message. The common roles are:

  • system: This tells the model how to respond. These messages customise the model and provide context so it responds in the way you want. These messages are usually not shown to the user in the chat log.
  • user: Represents messages from the user of your Chatbot. This includes messages the user actually types and sends to the Chatbot. These show in the chat log as “sent” messages.
  • assistant: Includes the messages the AI model returns (an example follows). These show in the chat log as “received” messages.

There are other roles such as “function” that we won’t be using in this tutorial.

A response to the above message might look like this:

{
  "id": "chatcmpl-B49lcr2W6E9DCgsfqoatv4yIV9Yro",
  "model": "gpt-4o-mini-2024-07-18",
  "usage": {
    "total_tokens": 92,
    "prompt_tokens": 53,
    "completion_tokens": 39,
    "prompt_tokens_details": { "audio_tokens": 0, "cached_tokens": 0 },
    "completion_tokens_details": {
      "audio_tokens": 0,
      "reasoning_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    }
  },
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Buzzing with energy! You?",
        "refusal": null
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "created": 1740330556,
  "service_tier": "default",
  "system_fingerprint": "fp_709714d124"
}

This contains metadata about the request, such as the number of tokens used. This can help calculate the cost of the request.

The most important part of the response is under the “choices” key. Here we can see a “message” that contains the content: “Buzzing with energy! You?”

This is the value we capture and show to the user as a response from the Chatbot.
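
Once you have the response parsed into a Python dict (as we will later in this tutorial), pulling that value out is a couple of lines:

# `response` is the JSON response above, parsed into a Python dict
content = response["choices"][0]["message"]["content"]
print(content)  # Buzzing with energy! You?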

If the user wants to follow up with another message, then the next request might look like this:

{
  "model": "gpt-4o-mini",
  "messages": [
    {
      "role": "system",
      "content": "You are an assistant who's had too much coffee."
    },
    {
      "role": "user",
      "content": "How are you?"
    },
    {
      "role": "assistant",
      "content": "Buzzing with energy! You?"
    },
    {
      "role": "user",
      "content": "I'm great!"
    }
  ]
}

This request is the same structure as the first request. But it includes the full chat history as well as the new message. This provides context for the LLM to generate a response to our new message.

This is important because it’s how the model keeps track of everything that has been said previously, which adds valuable context needed for the response.

So as the user interacts with the Chatbot, the messages array will continue to grow.
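
Here’s a minimal sketch of that loop using the openai Python library (which we install later in this tutorial); it assumes the OPENAI_API_KEY environment variable is set:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [
    {"role": "system", "content": "You are an assistant who's had too much coffee."}
]


def ask(user_message):
    """Send the full history plus the new message, and record the reply."""
    history.append({"role": "user", "content": user_message})
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=history,
    )
    reply = completion.choices[0].message
    history.append({"role": reply.role, "content": reply.content})
    return reply.content


print(ask("How are you?"))
print(ask("I'm great!"))  # this call sends the whole conversation so far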

Application Architecture

To handle this application workflow, our application will look like this:

  • React will be the frontend – this will render the interface and poll the backend for new messages.
  • Django will run the backend with API requests from the frontend.
  • PostgreSQL will be the database that we’ll use to store the chat context.
  • Celery Worker will process the AI Requests as background tasks.
  • OpenAI will be the AI backend.

Now let’s start building…

Clone the Starting Project

We provide a starter project to build on to create our Chatbot.

The starting code includes the following:

  • Django backend with Django REST Framework.
  • React frontend.
  • Celery for background tasks.
  • All configured to run using Docker and Compose.

If you want to learn how we set this up, start with: How to Dockerize a React Project.

Otherwise, you can find the starting code here: github.com/LondonAppDeveloper/yt-django-celery.

So let’s clone this project to get started:

git clone https://github.com/LondonAppDeveloper/yt-django-celery.git chatbot

Test the project by running the following:

cd chatbot
docker compose up --build --watch

You can navigate to the project by visiting the following URLs:

  • Frontend React Project: http://127.0.0.1:5173
  • Django Admin: http://127.0.0.1:8000/admin/

If you want to create a superuser that you can use to login to the Django admin, run the following:

docker compose run --rm backend sh -c "python manage.py createsuperuser" 

Create Database Models

We are going to set up some database models which will be used to keep track of the chat flow and requests to the AI backend.

We’ll do the following:

  • Create two new database models so we can track our chat history and AI requests.
  • Enable these models for the Django Admin so we can view the data in the browser.
  • Create our migration file.
  • Apply our migrations to the database.

You can find the full code diff for adding the models here on GitHub.

Open backend/core/models.py and add the following contents:

from django.db import models


class AiChatSession(models.Model):
    """Tracks an AI chat session."""
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)


class AiRequest(models.Model):
    """Represents an AI request."""

    PENDING = 'pending'
    RUNNING = 'running'
    COMPLETE = 'complete'
    FAILED = 'failed'
    STATUS_OPTIONS = (
        (PENDING, 'Pending'),
        (RUNNING, 'Running'),
        (COMPLETE, 'Complete'),
        (FAILED, 'Failed')
    )

    status = models.CharField(
        max_length=20, choices=STATUS_OPTIONS, default=PENDING
    )
    session = models.ForeignKey(
        AiChatSession,
        on_delete=models.CASCADE,
        null=True,
        blank=True
    )
    messages = models.JSONField()
    response = models.JSONField(null=True, blank=True)
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)

This code includes the following:

  • AiChatSession model which will be used to track each session – this will be used to group all messages and responses from the AI model in one place.
  • AiRequest model which is used to track specific requests sent to the AI backend.
    • Includes a status which is used to track what’s happening in the request (e.g. PENDING > RUNNING > COMPLETE).
    • Linked to the session.
    • Includes a messages field which will be used to store the array of messages to be sent to the AI backend.
    • Includes a response field to store the response received from the AI backend.

Once the models are added, update backend/core/admin.py to register these models with the Django admin (add the imports at the top of the file if they aren’t already there):

from django.contrib import admin

from core import models

admin.site.register(models.AiChatSession)
admin.site.register(models.AiRequest)

Now create and apply the migrations by running the following:

docker compose run --rm backend sh -c "python manage.py makemigrations"
docker compose run --rm backend sh -c "python manage.py migrate"

This will create a new migration file inside backend/core/migrations/ and apply it to the running database.
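
To confirm the migration was applied, you can list the migrations for the core app; applied migrations are marked with an [X]:

docker compose run --rm backend sh -c "python manage.py showmigrations core"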

Configure OpenAI

Now we are going to configure our project to use OpenAI by doing the following:

  • Install the openai Python library.
  • Create a new API Key in the OpenAI Platform.
  • Add environment variables for configuring our project with the key.
  • Update our Django settings to include the key.

You can find the full diff for all the changes in this section on GitHub.

Add the following line to the environment variables list within the backend and worker services in docker-compose.yml:

- OPENAI_API_KEY=${OPENAI_API_KEY}

This tells Docker Compose to set the OPENAI_API_KEY environment variable to the value we set in .env. This helps avoid hard coding the key in the codebase.

Next, open backend/backend/settings.py and add the following to the end of the file (make sure import os is included at the top of the file, adding it if needed):

OPENAI_API_KEY = os.environ.get('OPENAI_API_KEY')

This pulls in the value of the OPENAI_API_KEY environment variable, and sets it as a configuration value with the same name.

If you’re using PyEnv to install dependencies locally for VSCode autocomplete, you can create a file called .python-version in the root of the project with the contents 3.13. This pins your Python version to the same version we use in this tutorial. (If you aren’t using PyEnv or installing dependencies on your local machine, you can skip this.)

Open up requirements.txt and add the following to the end of the file:

openai==1.61.1

Note: Pin your package to the exact version above so the steps are consistent with this tutorial.

Once that’s done, head to https://platform.openai.com/api-keys and create a new API key for this project:

Once created, add a file called .env to the root of your project, and add the following contents:

OPENAI_API_KEY=YOURKEYGOESHERE

Exclude the .env file from your Git project using .gitignore. This ensures the secret key is not stored with the project or pushed to GitHub. Never share the key with anyone: it allows people to make requests to the OpenAI APIs and run up charges on your account.
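
If .env isn’t already listed in your .gitignore, this one-line addition is all it takes:

.env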

Now, if your Docker environment is still running, stop it by pressing CTRL + C in the terminal.

Then run the following:

docker compose up --build --watch

This should rebuild your Docker image on your local machine, which in turn installs the contents of requirements.txt. It will then pull the value of the new OPENAI_API_KEY environment variable into the project.
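
If you want to verify that the key made it into the container, you can print the variable from inside the backend service (the output should be your key, not an empty line):

docker compose run --rm backend sh -c "printenv OPENAI_API_KEY"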

Handle AI Requests

Now we need to add logic to handle receiving messages, making requests to OpenAI, and processing responses.

We will achieve this by doing the following:

  • Add a handle() method to our AiRequest model. This will handle making a request using the OpenAI Chat Completions API.
  • Add a new background task that calls the handle() method on the model.
  • Add a _queue_job() method to queue the background job for handling the request. We do this because it can sometimes take a while for the chat completions API to process the request.

You can find the full diff for the changes in this section here on GitHub.

Add the following task to backend/core/tasks.py:

from celery import shared_task

from core import models


@shared_task
def handle_ai_request_job(ai_request_id):
    """Find the AiRequest by ID and handle it."""
    models.AiRequest.objects.get(id=ai_request_id).handle()

This one-line task, handle_ai_request_job, simply finds the AiRequest instance by its ID and calls the handle() method we’ll create shortly.

Now open backend/core/models.py and add the following imports to the top of the file:

from openai import OpenAI
from core.tasks import handle_ai_request_job

Then add the following to the AiRequest model:


    def _queue_job(self):
        """Add job to queue."""
        handle_ai_request_job.delay(self.id)

    def handle(self):
        """Handle request."""
        self.status = self.RUNNING
        self.save()
        client = OpenAI()
        try:
            completion = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=self.messages,
            )
            self.response = completion.to_dict()
            self.status = self.COMPLETE
        except Exception:
            self.status = self.FAILED

        self.save()

    def save(self, *args, **kwargs):
        is_new = self._state.adding
        super().save(*args, **kwargs)
        if is_new:
            self._queue_job()

Hopefully this code is pretty self-explanatory, but here is a breakdown of what it does:

  • Adds a _queue_job() method to queue the background job that will call handle(). It passes the id of the current model to this job, so it knows which model to call. We want to process the AiRequest as a background task because it can take a long time for OpenAI to respond to the request.
  • Then we add a handle() method which contains the logic for calling the backend. It does the following:
    • Sets the status to RUNNING.
    • Creates a new OpenAI client instance, then calls the client.chat.completions.create() method, passing in the model we want to use (gpt-4o-mini) and a list of all the messages we want to send.
    • Extracts the response as a dict using to_dict(), and sets it in the response field of our model so we can access it later.
    • Updates the status to COMPLETE.
    • Catches any exceptions and sets the status to FAILED. In reality, we should catch specific exceptions and record some details of the error, but for this demo we’ll just update the status.
    • Saves the changes to the model to commit them to the database.
  • We also override the save() method so that if this is a new instance, we will trigger the job. This allows us to test by creating new requests through the Django admin.

Once that’s done, we can open http://127.0.0.1:8000/admin/ and create a new AiRequest instance with the following in “messages”:

[
  {"role": "system", "content": "You are a snarky and unhelpful assistant."}, 
  {"role": "user", "content": "How are you today?"}
]

Hit save, and wait a few seconds before opening the instance and seeing the response:

Now we have a simple mechanism for making requests.
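
If you prefer the terminal to the admin, you can exercise the same flow from a Django shell while the stack is running (a quick sketch; open the shell with docker compose run --rm backend sh -c "python manage.py shell"):

from core import models

request = models.AiRequest.objects.create(messages=[
    {"role": "system", "content": "You are a snarky and unhelpful assistant."},
    {"role": "user", "content": "How are you today?"},
])

# The save() override queues the Celery job. Wait a few seconds, then:
request.refresh_from_db()
print(request.status, request.response)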

Handle Chat Prompts

Next we need to actually handle these requests in a way that supports a chat flow.

We’ll achieve this by:

  • Adding a method which handles sending messages in the session by creating a new AiRequest.
  • Extracting messages from all the AiRequests created, to get the latest message and chat history.

As with the other sections, the changes are available as a diff on GitHub.

Open backend/core/models.py and add the following to the AiChatSession model:

    def get_last_request(self):
        """Return the most recent AiRequest or None."""
        return self.airequest_set.all().order_by('-created_at').first()

    def _create_message(self, message, role="user"):
        """Create a message for the AI."""
        return {"role": role, "content": message}

    def create_first_message(self, message):
        """Create the first message in the session."""
        return [
            self._create_message(
                "You are a snarky and unhelpful assistant.",
                "system"
            ),
            self._create_message(message, "user")
        ]

    def messages(self):
        """Return messages in the conversation including the AI response."""
        all_messages = []
        request = self.get_last_request()

        if request:
            all_messages.extend(request.messages)
            try:
                all_messages.append(request.response["choices"][0]["message"])
            except (KeyError, TypeError, IndexError):
                pass

        return all_messages

    def send(self, message):
        """Send a message to the AI."""
        last_request = self.get_last_request()

        if not last_request:
            AiRequest.objects.create(
                session=self, messages=self.create_first_message(message))
        elif last_request.status in [AiRequest.COMPLETE, AiRequest.FAILED]:
            AiRequest.objects.create(
                session=self,
                messages=self.messages() + [
                    self._create_message(message, "user")
                ]
            )
        else:
            return

This change includes the following:

  • Adds a get_last_request() method to retrieve the most recent AiRequest for that session. This is to combine messages and also send new messages.
  • The _create_message() method creates a new dict object we can send to OpenAI. This includes the role and content values, which are the minimum required for each message.
  • Adds a create_first_message() method to construct the first message for the session. This includes the “system” prompt that tells OpenAI how we want the LLM to behave. In this case, we are telling it to be “a snarky and unhelpful” assistant. In a real-world application, this is where you would customise your Chatbot and add any context to provide relevant responses. This is the opportunity to distinguish your Chatbot from the official ChatGPT chat.
  • We added a messages() method to get a list (array) of all messages in the chat session. This takes the messages value from the last request sent, and appends the latest response from the AI.
  • Then there is the send() method which (as the name suggests), sends a message to the LLM. It does it by doing the following:
    • Gets the latest request in the session.
    • If no request exists, then this must be the first request. So we call create_first_message(), passing in the message content, and create a new AiRequest.
    • If a request does exist, then this must be a new message in an existing flow. Therefore we take the messages from the last AiRequest and append our new message to that flow.
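
You can sanity-check this flow from a Django shell too (a sketch; the replies will vary, and you need to give the worker a few seconds between calls):

from core import models

session = models.AiChatSession.objects.create()
session.send("How are you today?")  # first message: builds the system + user messages

# Wait for the worker to finish, then:
print(session.messages())  # now includes the assistant's reply

session.send("Tell me a joke.")  # follow-up: resends the history plus the new message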

Once that’s done, we can move on.

Create Chat API

Now that we have the foundations for making requests to the AI backend and managing the message flow, we can create an API endpoint to interact with these backend models.

We can do this by:

  • Creating a serializer for our chat session and the messages.
  • Creating a view for a “create session” endpoint in order to create a new session.
  • Creating a view for a “chat session” endpoint to retrieve all messages in an existing session and send new messages.
  • Mapping the URLs for these new endpoints.

A diff of all changes is available on GitHub.

Start by opening backend/core/serializers.py and adding the following:

from rest_framework import serializers
from core.models import AiChatSession


class AiChatSessionMessageSerializer(serializers.Serializer):
    role = serializers.CharField()
    content = serializers.CharField()


class AiChatSessionSerializer(serializers.ModelSerializer):
    messages = AiChatSessionMessageSerializer(many=True)

    def to_representation(self, instance):
        representation = super().to_representation(instance)
        representation['messages'] = [
            msg for msg in representation['messages']
            if msg['role'] != 'system'
        ]
        return representation

    class Meta:
        model = AiChatSession
        fields = ['id', 'messages']
        read_only_fields = ['messages']

This change includes the following:

  • Creates an AiChatSessionMessageSerializer to serialize messages in the OpenAI format (“role” and “content”), so the messages display in a human-readable format in the frontend.
  • Added AiChatSessionSerializer which has two fields:
    • id which shows the unique ID of the session, and
    • messages which lists all messages sent/received in the session to date.
  • Excludes the “system” messages from the message log by overriding to_representation(). This is so we don’t end up showing the setup prompt to the user (usually this stays behind the scenes).

These serializers allow us to convert the data in the models to JSON responses for the frontend to process.
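
For example, a session with one completed exchange would serialize to something like this (your ID and content will differ); note that the “system” message is filtered out:

{
  "id": 1,
  "messages": [
    { "role": "user", "content": "How are you today?" },
    { "role": "assistant", "content": "Oh, just thrilled to be here. What do you want?" }
  ]
}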

Then open backend/core/views.py and make it look like this:

from rest_framework import status
from rest_framework.decorators import api_view
from rest_framework.response import Response
from django.shortcuts import get_object_or_404

from core.models import AiChatSession
from core.serializers import AiChatSessionSerializer


@api_view(['POST'])
def create_chat_session(request):
    """Create a new chat session."""
    session = AiChatSession.objects.create()
    serializer = AiChatSessionSerializer(session)
    return Response(serializer.data, status=status.HTTP_201_CREATED)


@api_view(['GET', 'POST'])
def chat_session(request, sessionId):
    """Retrieve a chat session and its messages."""
    session = get_object_or_404(AiChatSession, id=sessionId)
    serializer = AiChatSessionSerializer(session)

    if request.method == 'POST':
        message = request.data.get('message')
        if not message:
            return Response(
                {'error': 'Message is required'},
                status=status.HTTP_400_BAD_REQUEST
            )
        session.send(message)

    return Response(serializer.data)

This change does the following:

  • Adds a create_chat_session function to accept an HTTP POST request. Our frontend will call this to create a new chat session when the user sends their first message.
  • Adds a chat_session function to accept the following:
    • HTTP GET: returns the session for the ID (provided in the URL) and all messages sent or received in that session.
    • HTTP POST: handles sending a new message in the session. We will use this in our frontend every time the user wants to send a message.

Finally let’s wire up the URLs by updating backend/backend/urls.py so it looks like this:

from django.contrib import admin
from django.urls import path

from backend.views import hello_world
from core.views import create_chat_session, chat_session

urlpatterns = [
    path('api/hello-world/', hello_world),
    path('api/chat/sessions/', create_chat_session),
    path('api/chat/sessions/<str:sessionId>/', chat_session),
    path('admin/', admin.site.urls),
]

This adds two new URLs to our project:

  • /api/chat/sessions/ that uses our create_chat_session function for creating new sessions.
  • /api/chat/sessions/<id>/ that uses our chat_session function for retrieving and sending messages in the session.
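
With the stack running, you can smoke-test both endpoints from the command line (the session ID in the last two URLs comes from the first response, so yours may differ):

# Create a session (the response includes its ID)
curl -X POST http://127.0.0.1:8000/api/chat/sessions/

# Send a message in session 1
curl -X POST -H "Content-Type: application/json" \
  -d '{"message": "How are you today?"}' \
  http://127.0.0.1:8000/api/chat/sessions/1/

# Poll for the reply
curl http://127.0.0.1:8000/api/chat/sessions/1/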

Create Chat Page

Now we’ll move onto the frontend side of things by modifying our React project to include a chat page.

This will include:

  • Adding a bit of CSS.
  • Creating a React component for showing a simple chat flow.

I’m not a designer, so the UI is not great, but it will give us the minimal chat interface needed for creating a Chatbot.

The full diff is available here.

Open frontend/src/index.css and locate the html, #root and body sections, and replace them with this:

html, #root {
  height: 100%;
  width: 100%;
}

body {
  margin: 0;
  place-items: center;
  width: 100%;
  height: 100%;
}

This sets up our page so we can use the full height/width of the page.

Then, open frontend/src/App.css and replace the full content with this:

* {
  box-sizing: border-box;
}

.wrapper {
  width: 100%;
  height: 100%;
  display: flex;
  justify-content: center;
  align-items: center;

  .chat-wrapper {
    width: 80%;
    height: 80%;
    display: flex;
    flex-direction: column;
    justify-content: space-between;
    box-shadow: 0 0 20px rgba(0, 0, 0, 0.15);
    border-radius: 12px;
    padding: 10px;

    .chat-history {
      height: 100%;
      margin: 10px;
      overflow: auto;
      display: flex;
      flex-direction: column-reverse;
      overflow-anchor: auto;

      .message {
        background-color: #f0f0f0;
        padding: 10px;
        margin: 10px;
        border-radius: 10px;
      }

      .message.user {
        background-color: #b4d8ff;
      }
    }

    input {
      width: 100%;
      padding: 8px;
      border: 1px solid #ccc;
      border-radius: 4px;
    }

  }
}

This provides a basic layout for our app by doing the following:

  • Sets box-sizing to border-box to prevent padding in child elements overlapping the parent.
  • Creates a .wrapper using flexbox to provide a simple layout for our Chatbot (full page and centred).
  • Styles a .chat-wrapper to take up 80% of the page and render items in a column layout. It also adds a box shadow and border radius to make it look nice.
  • Styles the .chat-history so it shows the latest chat messages at the bottom.
  • Styles messages so they look different for the user and AI to give an alternating colour effect.
  • Styles the input so the text box looks nice.

Now update frontend/src/App.jsx to look like this:

import { useState } from "react";

import "./App.css";

function App() {
  const [message, setMessage] = useState("");
  const [messages, setMessages] = useState([]);

  const sendMessage = (e) => {
    if (e.key === "Enter") {
      setMessage("");
      setMessages([...messages, { content: message, role: "user" }]);
    }
  };

  return (
    <div className="wrapper">
      <div className="chat-wrapper">
        <div className="chat-history">
          <div>
            {messages.map((message, index) => (
              <div
                key={index}
                className={`message${message.role === "user" ? " user" : ""}`}
              >
                {message.role === "user" ? "Me: " : "AI: "}
                {message.content}
              </div>
            ))}
          </div>
        </div>
        <input
          type="text"
          placeholder="Type a message..."
          value={message}
          onChange={(e) => setMessage(e.target.value)}
          onKeyUp={sendMessage}
        />
      </div>
    </div>
  );
}

export default App;

This component handles the following:

  • Sets up a basic layout for our chat page using the CSS classes we added (wrapper, chat-wrapper, chat-history, etc.)
  • Creates some state variables for:
    • message: the current message the user types into the chat message box.
    • messages: the array of messages in the current chat session.
  • Adds a sendMessage function that runs when the user types a message and hits enter.
    • Currently, this only adds the message the user writes to the state. Later on, we will make it call the backend API.
  • Renders the messages from the state variable as items in the message history.
    • Sets the user class on the message if it was a user message.

Once that’s done, you can start the service (if it’s not still running) using the following command:

docker compose up --watch --build

Then you should see the new interface if you open http://127.0.0.1:5173.

Note: We include the --build flag to ensure any frontend changes are incorporated into our image. If you don’t see the changes in your browser, stop the service and restart it using the command above.

Integrate Chat API

Next we need to configure our chat component to use our backend API.

We’ll do the following:

  • Handle making API requests to the backend API.
  • Add logic to poll the backend for changes to the message history once the first message is sent.
  • Create a new session if one doesn’t exist.

The full diff for these changes is on GitHub.

Update the frontend/src/App.jsx file to look like this:

import { useState, useEffect } from "react";

import "./App.css";

function App() {
  const [message, setMessage] = useState("");
  const [messages, setMessages] = useState([]);
  const [sessionId, setSessionId] = useState(null);

  useEffect(() => {
    if (!sessionId) return;

    const intervalId = setInterval(async () => {
      const response = await fetch(
        `http://localhost:8000/api/chat/sessions/${sessionId}/`,
        {
          method: "GET",
        }
      );
      const data = await response.json();
      setMessages(data.messages);
    }, 1000);

    return () => clearInterval(intervalId);
  }, [sessionId]);

  const postMessage = async (sessionId, message) => {
    await fetch(`http://localhost:8000/api/chat/sessions/${sessionId}/`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ message: message }),
    });
  };

  const sendMessage = async (e) => {
    if (e.key === "Enter") {
      if (!sessionId) {
        const response = await fetch(
          "http://localhost:8000/api/chat/sessions/",
          {
            method: "POST",
          }
        );
        const data = await response.json();
        setSessionId(data.id);
        postMessage(data.id, message);
      } else {
        postMessage(sessionId, message);
      }

      setMessage("");
    }
  };

  return (
    {/* ...EXISTING JSX... */}
  );
}

export default App;

Note: I excluded the returned JSX value in the sample above because we aren’t modifying that.

The changes above do the following:

  • Adds a postMessage() function to call our session API with an HTTP POST request. This is used to send a new message to the session.
  • Adds a new state value for the sessionId for the current chat session. This is so we can send messages and retrieve the latest messages for the current session.
  • Uses useEffect to set up polling on the session endpoint. Polling is not always the best option as it’s not as performant as WebSockets, but I like to use it for simplicity (we may cover WebSockets in a different tutorial).
  • Updates the sendMessage() function to:
    • Create a session if it doesn’t exist already (if this is the first message).
    • Call postMessage with the session ID to send the message that the user types.

This completes our Chatbot. You should be able to interact with the Chatbot in the browser like this:

Have fun playing around with it.

Summary

In this guide, you learned how to create a very basic Chatbot using the OpenAI API.

This is an example and doesn’t include things like user authentication. So you might not want to deploy this to a public server unless you’re happy for people to make requests via your API key.

Some ideas for things to do next:

  • Experiment with the system prompt. This is what’s used to customise your Chatbot and make it respond in the way you want it to. You can expand on it to provide more context and examples of how you want it to respond.
  • Set up user authentication.
  • Support more advanced response formatting. Right now it shows the raw content. But it is possible to ask the LLM to respond in HTML so you can render formatted responses. If you do this, it’s best to ensure you “clean” the response to prevent any XSS attacks, etc.

So that’s it! We hope you found this useful.

If you enjoyed this tutorial please subscribe to our YouTube channel for future content. It helps us grow our audience and reach more people so we can continue to create more educational content.
