A chatbot is more than just a single question-and-answer model; it's a stateful application that remembers the context of the conversation. Frameworks like LangChain make it easy to build chatbots by handling the complexities of conversation memory for you.
This tutorial demonstrates how to create a simple, conversational chatbot using LangChain with Voidon as the backend LLM. The chatbot will remember previous parts of your conversation to provide contextually relevant answers.
## Why Use a Framework Like LangChain?
Language models are inherently stateless. Each API call is independent. LangChain provides "Memory" components that automatically load past conversation history and include it in the prompt, creating the illusion of a continuous conversation.
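To make that statelessness concrete, here is a minimal sketch of what a single chat turn looks like without a framework: the caller must rebuild and re-send every prior message on each call. The example messages are illustrative; this bookkeeping is exactly what LangChain's memory components automate.

```python
from langchain_core.messages import AIMessage, HumanMessage

# A stateless model only sees what is in this list. To ask a follow-up
# question, the caller must re-send the entire history on every call:
messages = [
    HumanMessage(content="My name is Alex."),
    AIMessage(content="Nice to meet you, Alex!"),
    # This follow-up is only answerable because the history above is included
    HumanMessage(content="What is my name?"),
]
```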
The core components of a LangChain chatbot are:

1. **The LLM**: Our connection to the Voidon API.
2. **The Prompt Template**: A template that structures the input to the model, including placeholders for the conversation history and the user's new question.
3. **Memory**: An object that stores and retrieves the conversation history.
4. **The Chain**: The object that links all the components together.
## Step 1: Configure the LLM

```python
import os

from langchain_openai import ChatOpenAI

# It's recommended to set your API key as an environment variable
os.environ["VOIDON_API_KEY"] = "your-voidon-api-key"

# Configure the LLM to use the Voidon endpoint
llm = ChatOpenAI(
    model="auto",
    base_url="https://api.voidon.astramind.ai/v1",
    api_key=os.environ.get("VOIDON_API_KEY"),
    temperature=0.7,
)
```
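If you want to confirm the connection works before wiring up the rest of the chain, a one-off call is enough. A quick sketch (the exact reply will of course vary):

```python
# One-off, memoryless call to verify the Voidon endpoint is reachable
reply = llm.invoke("Say hello in one short sentence.")
print(reply.content)
```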
## Step 2: Create the Prompt Template

```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

# Create a prompt template that includes a system message and placeholders
# for the conversation history and the user's new question
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful and friendly AI assistant. Answer the user's questions concisely."),
    MessagesPlaceholder(variable_name="chat_history"),
    ("human", "{input}"),
])
```
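It can help to render the template once with a hand-written history to see exactly which messages the model will receive. The history entries below are illustrative:

```python
from langchain_core.messages import AIMessage, HumanMessage

# Fill the placeholders with a fake history and a new question
messages = prompt.format_messages(
    chat_history=[
        HumanMessage(content="My name is Alex."),
        AIMessage(content="Nice to meet you, Alex!"),
    ],
    input="What is my name?",
)
for message in messages:
    print(f"{message.type}: {message.content}")
# system: You are a helpful and friendly AI assistant. ...
# human: My name is Alex.
# ai: Nice to meet you, Alex!
# human: What is my name?
```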
## Step 3: Create the Chain

```python
# Create a chain that pipes the prompt into the LLM (LCEL syntax)
conversational_chain = prompt | llm
```
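At this point the chain is fully functional but has no memory: every call must be given a `chat_history` explicitly. A minimal sketch:

```python
# Without memory, we have to pass the history in ourselves on every call
response = conversational_chain.invoke(
    {"chat_history": [], "input": "Hello! Who are you?"}
)
print(response.content)
```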
## Step 4: Add Memory with `RunnableWithMessageHistory`
This is the key component. `RunnableWithMessageHistory` is a wrapper that automatically manages the saving and loading of messages to and from a memory object.
```python
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_community.chat_message_histories import ChatMessageHistory

# In-memory store for the chat history.
# For production, you would use a persistent store like Redis or a database.
demo_ephemeral_chat_history = ChatMessageHistory()

# Wrap our chain with the history manager
conversational_chain_with_memory = RunnableWithMessageHistory(
    conversational_chain,
    # A function that returns the history object based on a session ID
    lambda session_id: demo_ephemeral_chat_history,
    input_messages_key="input",
    history_messages_key="chat_history",
)
```
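Note that the lambda above returns the same history object for every session ID, which is fine for a single-user demo. If you need a separate conversation per user, a common pattern is a dictionary keyed by session ID; the `session_store` and `get_session_history` names below are illustrative:

```python
# One ChatMessageHistory per session, created on first use
session_store = {}

def get_session_history(session_id: str) -> ChatMessageHistory:
    if session_id not in session_store:
        session_store[session_id] = ChatMessageHistory()
    return session_store[session_id]

conversational_chain_with_memory = RunnableWithMessageHistory(
    conversational_chain,
    get_session_history,
    input_messages_key="input",
    history_messages_key="chat_history",
)
```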
## Step 5: Run the Chatbot

```python
def run_chatbot():
    print("Chatbot is ready! Type 'exit' to end the session.")
    session_id = "user123"  # A unique identifier for the conversation

    while True:
        user_input = input("You: ")
        if user_input.lower() == "exit":
            print("Chatbot: Goodbye!")
            break

        # The config dictionary is where we pass the session_id
        config = {"configurable": {"session_id": session_id}}

        # Invoke the chain with the user's input
        response = conversational_chain_with_memory.invoke(
            {"input": user_input},
            config=config,
        )
        print(f"Chatbot: {response.content}")

# Start the chatbot
run_chatbot()

# Example Conversation:
# You: My name is Alex.
# Chatbot: It's nice to meet you, Alex! How can I help you today?
# You: What is my name?
# Chatbot: Your name is Alex.
```

### What's Happening Under the Hood?

1. You provide an input and a `session_id`.
2. `RunnableWithMessageHistory` uses the `session_id` to retrieve the conversation history from `demo_ephemeral_chat_history`.
3. The history and your new input are passed to the `ChatPromptTemplate`.
4. The fully formatted prompt is sent to the Voidon LLM.
5. The LLM generates a response.
6. `RunnableWithMessageHistory` saves your input and the LLM's response back into `demo_ephemeral_chat_history` before returning the final answer.
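You can verify steps 2 and 6 yourself by inspecting the history store after a few turns; each user message and model reply should appear in order. The printed transcript below assumes the example conversation above:

```python
# Inspect what RunnableWithMessageHistory has saved so far
for message in demo_ephemeral_chat_history.messages:
    print(f"{message.type}: {message.content}")
# human: My name is Alex.
# ai: It's nice to meet you, Alex! How can I help you today?
# human: What is my name?
# ai: Your name is Alex.
```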