💬 Part 5: Designing Conversational Logic
Now that we have our order data embedded and searchable with Pinecone, it’s time to design how the chatbot understands and responds to customer queries. This part focuses on crafting prompts, setting up retrieval-based QA, and handling multi-turn conversations using LangChain's Chains and Memory.
✅ What We'll Cover
- Creating prompt templates for order-related questions
- Setting up chains using LangChain
- Managing context and conversation history
- Using memory to support multi-turn conversations
🧾 1. Create Prompt Templates
LangChain allows you to create flexible prompt templates for guiding the LLM. Here’s an example:
```js
// backend/langchain/prompts.js
const { PromptTemplate } = require('@langchain/core/prompts');

const orderQueryPrompt = new PromptTemplate({
  template: `
You are a helpful support assistant for an e-commerce store. Use the context to answer the user's question.
If the answer isn't in the context, respond with "Sorry, I couldn't find the information."

Context:
{context}

Question:
{question}
`,
  inputVariables: ['context', 'question'],
});

module.exports = { orderQueryPrompt };
```
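Before wiring the template into a chain, you can render it directly to inspect the final prompt string. The order details below are invented sample values:

```js
// Render the template locally; the context and question are made-up sample data
(async () => {
  const formatted = await orderQueryPrompt.format({
    context: 'Order #1234: shipped 2024-05-01 via UPS.',
    question: 'Where is my order #1234?',
  });
  console.log(formatted);
})();
```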
🔄 2. Create a QA Chain
We’ll use a RetrievalQAChain to combine vector search and the LLM: the retriever pulls the most relevant order records from Pinecone, and the LLM answers from them using our prompt template.
```js
// backend/langchain/qaChain.js
const { RetrievalQAChain } = require('langchain/chains');
const { ChatOpenAI, OpenAIEmbeddings } = require('@langchain/openai');
const { PineconeStore } = require('@langchain/pinecone');
const { orderQueryPrompt } = require('./prompts');
const { initPinecone } = require('./config');

async function createQAChain() {
  // gpt-3.5-turbo is a chat model, so we use ChatOpenAI rather than the
  // completion-style OpenAI class
  const llm = new ChatOpenAI({
    openAIApiKey: process.env.OPENAI_API_KEY,
    temperature: 0.2,
    modelName: 'gpt-3.5-turbo',
  });

  const pinecone = await initPinecone();
  const index = pinecone.Index('ecommerce-orders');

  const vectorStore = await PineconeStore.fromExistingIndex(
    new OpenAIEmbeddings({ openAIApiKey: process.env.OPENAI_API_KEY }),
    { pineconeIndex: index }
  );

  return RetrievalQAChain.fromLLM(llm, vectorStore.asRetriever(), {
    prompt: orderQueryPrompt,
  });
}

module.exports = { createQAChain };
```
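As a quick sanity check before exposing the chain over HTTP, you can call it from a one-off script (the file path and order ID here are hypothetical):

```js
// backend/scripts/testChain.js — hypothetical smoke test; the order ID is made up
const { createQAChain } = require('../langchain/qaChain');

(async () => {
  const chain = await createQAChain();
  // RetrievalQAChain takes its input under the "query" key and returns { text }
  const result = await chain.call({ query: 'What is the status of order #1005?' });
  console.log(result.text);
})();
```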
🧠 3. Add Conversation Memory (Multi-Turn)
We’ll use BufferMemory to allow the model to remember the chat context.
```js
// backend/langchain/memory.js
const { BufferMemory } = require('langchain/memory');

const memory = new BufferMemory({
  returnMessages: true,
  memoryKey: 'chat_history',
});

module.exports = { memory };
```
You can pass this memory into a conversational chain if you want the bot to remember previous queries, as in the sketch below.
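Here’s a minimal sketch using LangChain’s ConversationalRetrievalQAChain. The file name and the vectorStore parameter are our choices; build the store the same way we did in qaChain.js:

```js
// backend/langchain/conversationalChain.js — a minimal sketch; pass in a
// PineconeStore built the same way as in qaChain.js
const { ConversationalRetrievalQAChain } = require('langchain/chains');
const { ChatOpenAI } = require('@langchain/openai');
const { memory } = require('./memory');

function createConversationalChain(vectorStore) {
  const llm = new ChatOpenAI({
    openAIApiKey: process.env.OPENAI_API_KEY,
    temperature: 0.2,
    modelName: 'gpt-3.5-turbo',
  });

  // The chain condenses each follow-up question using the chat_history kept
  // in memory, retrieves matching order documents, then answers
  return ConversationalRetrievalQAChain.fromLLM(llm, vectorStore.asRetriever(), {
    memory,
  });
}

module.exports = { createConversationalChain };
```

Call it with `chain.call({ question: '...' })`; each turn is recorded under `chat_history` automatically.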
🚀 4. Handle Incoming Queries
Finally, let’s wire this up to handle incoming API requests.
```js
// backend/routes/chat.js
const express = require('express');
const router = express.Router();
const { createQAChain } = require('../langchain/qaChain');

router.post('/', async (req, res) => {
  const { question } = req.body;
  if (!question) {
    return res.status(400).json({ error: 'Missing "question" in request body' });
  }

  try {
    // Built per request for simplicity; in production you'd create this once and reuse it
    const qaChain = await createQAChain();
    // RetrievalQAChain expects its input under the "query" key
    const response = await qaChain.call({ query: question });
    res.json({ answer: response.text });
  } catch (err) {
    console.error(err);
    res.status(500).json({ error: 'Failed to process question' });
  }
});

module.exports = router;
```
Then plug the route into your server.js:
```js
// backend/server.js
const chatRoute = require('./routes/chat');
app.use('/api/chat', chatRoute);
```
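One thing to double-check: the route reads req.body, so Express’s JSON body parser must be enabled. A minimal server.js might look like this (the port is an assumption):

```js
// backend/server.js — minimal sketch; adjust the port to your setup
const express = require('express');
const chatRoute = require('./routes/chat');

const app = express();
app.use(express.json()); // required so req.body.question is populated

app.use('/api/chat', chatRoute);

app.listen(3000, () => console.log('Chat API listening on port 3000'));
```

With the server running, POSTing `{ "question": "..." }` to /api/chat should return a JSON answer from the chain.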
✅ Next Steps (Part 6)
In the next part, we will:
- Build the frontend chat interface in React
- Connect the UI to the chat API
- Style and refine the user experience
🎨 Let’s make it look good and feel real!