
Building a Conversational Chatbot with LangChain

A chatbot is more than just a single question-and-answer model; it's a stateful application that remembers the context of the conversation. Frameworks like LangChain make it easy to build chatbots by handling the complexities of conversation memory for you.

This tutorial demonstrates how to create a simple, conversational chatbot using LangChain with Voidon as the backend LLM. The chatbot will remember previous parts of your conversation to provide contextually relevant answers.

Why use a framework like LangChain?

Language models are inherently stateless. Each API call is independent. LangChain provides "Memory" components that automatically load past conversation history and include it in the prompt, creating the illusion of a continuous conversation.
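To make this concrete, here is a minimal pure-Python sketch of what a memory component does. Everything here is illustrative: `fake_llm` is a stand-in for a real model call, and the dict-based message format merely mimics a chat API.

```python
# Minimal sketch of conversation memory, with a stand-in for a real LLM call.
def fake_llm(messages):
    # A real model sees ONLY what is in `messages` for this one call.
    if any("Alex" in m["content"] for m in messages):
        return "Your name is Alex."
    return "I don't know your name."

history = []  # what a "Memory" component stores between calls

def chat(user_input):
    history.append({"role": "user", "content": user_input})
    reply = fake_llm(history)  # past turns are re-sent on every call
    history.append({"role": "assistant", "content": reply})
    return reply

chat("My name is Alex.")
print(chat("What is my name?"))  # answered correctly only because of history
```

Calling `fake_llm` directly with just the second question would return "I don't know your name." — the "continuity" exists only because the full history is replayed into each request, which is exactly the bookkeeping LangChain's memory components automate.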


Prerequisites

You will need to install the langchain, langchain-openai, and langchain-community libraries (the last one provides the ChatMessageHistory class used below).

Bash
pip install langchain langchain-openai langchain-community

Implementation

The core components of a LangChain chatbot are:

1. The LLM: our connection to the Voidon API.
2. The Prompt Template: a template that structures the input to the model, including placeholders for the conversation history and the user's new question.
3. Memory: an object that stores and retrieves the conversation history.
4. The Chain: the object that links all the components together.

Step 1: Set Up the LLM

First, we configure LangChain's ChatOpenAI class to point to the Voidon API endpoint.

Python
import os
from langchain_openai import ChatOpenAI

# It's recommended to set your API key as an environment variable
# (e.g., exported in your shell) rather than hardcoding it like this
os.environ["VOIDON_API_KEY"] = "your-voidon-api-key"

# Configure the LLM to use the Voidon endpoint
llm = ChatOpenAI(
    model="auto",
    base_url="https://api.voidon.astramind.ai/v1",
    api_key=os.environ.get("VOIDON_API_KEY"),
    temperature=0.7
)

Step 2: Design the Prompt Template

We'll create a prompt that instructs the model on its role and includes placeholders for the chat history and new user input.

Python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

# Create a prompt template that includes a system message and placeholders
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful and friendly AI assistant. Answer the user's questions concisely."),
    MessagesPlaceholder(variable_name="chat_history"),
    ("human", "{input}"),
])
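Conceptually, invoking this template assembles the messages in a fixed order: the system message first, then every past turn from the history, then the new input. The plain-Python sketch below (tuples standing in for LangChain's message objects, with a made-up sample history) shows the final ordering:

```python
# Sketch of the message list the template produces when invoked.
system = ("system", "You are a helpful and friendly AI assistant. "
                    "Answer the user's questions concisely.")
chat_history = [  # substituted into the MessagesPlaceholder
    ("human", "My name is Alex."),
    ("ai", "Nice to meet you, Alex!"),
]
new_input = ("human", "What is my name?")  # fills the {input} slot

messages = [system, *chat_history, new_input]
for role, text in messages:
    print(f"{role}: {text}")
```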

Step 3: Create the Conversational Chain

Using the LangChain Expression Language (LCEL), we can easily chain our prompt and LLM.

Python

# Create a chain that combines the prompt and the LLM
conversational_chain = prompt | llm

Step 4: Add Memory with RunnableWithMessageHistory

This is the key component. RunnableWithMessageHistory is a wrapper that automatically manages the saving and loading of messages to and from a memory object.

Python
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_community.chat_message_histories import ChatMessageHistory

# In-memory store for the chat history
# For production, you would use a persistent store like Redis or a database
demo_ephemeral_chat_history = ChatMessageHistory()

# Wrap our chain with the history manager
conversational_chain_with_memory = RunnableWithMessageHistory(
    conversational_chain,
    # A function that returns the history object for a given session ID
    # (here, every session shares the same demo history object)
    lambda session_id: demo_ephemeral_chat_history,
    input_messages_key="input",
    history_messages_key="chat_history",
)
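Note that the lambda above returns the same history object no matter which session_id is passed, so every user would share one conversation. To keep sessions separate, the factory function needs to hand out one history per session ID. The sketch below uses plain lists as stand-ins for ChatMessageHistory objects to show the pattern:

```python
# Hypothetical per-session store: one history per session_id.
stores = {}

def get_session_history(session_id):
    # Create the history on first use, then reuse it for that session.
    if session_id not in stores:
        stores[session_id] = []  # stand-in for ChatMessageHistory()
    return stores[session_id]

get_session_history("alice").append("Hi, I'm Alice.")
get_session_history("bob").append("Hi, I'm Bob.")
# "alice" and "bob" now have fully independent histories.
```

In the real chain you would pass a function like get_session_history (returning ChatMessageHistory instances) to RunnableWithMessageHistory in place of the lambda.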

Step 5: Interact with the Chatbot

Now we can interact with our chatbot. For each call, we must provide a session_id so the chain knows which conversation history to use.

Python
def run_chatbot():
    print("Chatbot is ready! Type 'exit' to end the session.")
    session_id = "user123" # A unique identifier for the conversation

    while True:
        user_input = input("You: ")
        if user_input.lower() == 'exit':
            print("Chatbot: Goodbye!")
            break

        # The config dictionary is where we pass the session_id
        config = {"configurable": {"session_id": session_id}}

        # Invoke the chain with the user's input
        response = conversational_chain_with_memory.invoke(
            {"input": user_input},
            config=config
        )

        print(f"Chatbot: {response.content}")

# Start the chatbot
run_chatbot()

# Example Conversation:
# You: My name is Alex.
# Chatbot: It's nice to meet you, Alex! How can I help you today?
# You: What is my name?
# Chatbot: Your name is Alex.

What's Happening Under the Hood?

1.  You provide an input and a `session_id`.
2.  `RunnableWithMessageHistory` uses the `session_id` to retrieve the conversation history from `demo_ephemeral_chat_history`.
3.  The history and your new input are passed to the `ChatPromptTemplate`.
4.  The fully formatted prompt is sent to the Voidon LLM.
5.  The LLM generates a response.
6.  `RunnableWithMessageHistory` saves your input and the LLM's response back into `demo_ephemeral_chat_history` before returning the final answer.
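The whole load, format, call, save cycle can be condensed into a few lines of plain Python. This is an illustrative reimplementation, not LangChain's actual code, and `fake_llm` stands in for the Voidon call:

```python
# Condensed sketch of what RunnableWithMessageHistory does per invocation.
histories = {}  # session_id -> list of (role, text) tuples

def fake_llm(messages):
    # Stand-in for the real model call; echoes the prompt size.
    return f"(reply based on {len(messages)} messages)"

def invoke_with_history(session_id, user_input):
    history = histories.setdefault(session_id, [])      # step 2: load
    messages = [("system", "You are a helpful assistant.")]
    messages += history + [("human", user_input)]       # step 3: format
    reply = fake_llm(messages)                          # steps 4-5: call
    history.append(("human", user_input))               # step 6: save
    history.append(("ai", reply))
    return reply

invoke_with_history("user123", "My name is Alex.")
invoke_with_history("user123", "What is my name?")
print(len(histories["user123"]))  # 4 saved messages: two full turns
```

Each invocation sends a strictly growing prompt, which is why long conversations eventually need history truncation or summarization strategies.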