Welcome back to the series! 👋
In Part 1, we built a basic Blazor app that lets you chat with a local LLM using the MaIN.NET framework. We connected to a local model (Gemma3 from Google), sent messages, and got responses — all without any external APIs.
In this part, we’re going to take it one step further by adding chat history support. Why is that important?
🗣️ Without conversation history, the model responds to each message as if it’s the first.
🧠 With history, it can remember context and carry on a more natural, coherent conversation.
By the end of this tutorial, your app will be able to:
- Keep track of previous messages in the session
- Send a complete conversation log to the model
- Generate responses that reflect the ongoing chat
Let’s dive in!
🛠 Step 1: Prepare for Chat History
Before we start making our chatbot smarter, let’s make sure everything from Part 1 is up and running. If you haven’t completed it yet, check it out here and follow along to get your base project ready.
Now that your app can talk to a local LLM, we’ll make some adjustments to support chat history.
👉 Update ChatLLM.razor
First, we need to import two additional namespaces to help us manage chat context and serialize our message history.

At the top of your `ChatLLM.razor` file, add the following `@using` statements:

```razor
@using MaIN.Core.Hub.Contexts
@using System.Text.Json
```
🧱 Step 2: Update the UI to Support Chat History
To display and differentiate between user and LLM messages, we need to update the HTML in `ChatLLM.razor`.
We’ll simplify the layout and introduce new styles so it’s easy to see who said what in the conversation.
🧼 1. Clean Up the Old HTML
Let's start by removing everything below the header in your `ChatLLM.razor`; we'll be rebuilding it from scratch. Keep only the `Chat with LLM` header and delete everything else below it.
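After the cleanup, the top of the file might look something like this. Note that the `@page` route shown here is illustrative — keep whatever route and `@using` lines you set up in Part 1 and Step 1:

```razor
@page "/chat"
@using MaIN.Core.Hub.Contexts
@using System.Text.Json

<h3>Chat with LLM</h3>
```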
💬 2. Add the Chat Message Display
Now, paste the following chat bubble layout directly under the header in your `ChatLLM.razor` file:

```razor
<div class="border rounded p-3 mb-3 bg-light">
    @foreach (var message in chatHistory)
    {
        <div class="mb-3 p-2 rounded @(message.IsUser ? "bg-primary text-white ms-5" : "bg-light me-5")">
            <strong>@(message.IsUser ? "You" : "LLM chat"):</strong>
            <p class="mb-0 mt-2">@message.Content</p>
        </div>
    }
</div>
```
🔍 What’s happening here?
We're looping through the `chatHistory` list. Each message is styled based on the sender:
- User messages appear on the right, with a blue background.
- LLM responses appear on the left, with a light gray background.
This helps visually distinguish who said what, making the conversation more readable and natural to follow.
📝 3. Add the Input Field and Send Button
Below the chat display, add the following input section:

```razor
<div class="d-flex gap-2">
    <input type="text" class="form-control"
           @bind="messageToLLM"
           @bind:event="oninput"
           @onkeydown="HandleKeyDown"
           placeholder="Type your message..." />
    <button class="btn btn-primary" @onclick="SendMessage">Send</button>
</div>
```
💡 This layout does the following:
- Binds the input to the `messageToLLM` variable.
- Allows users to press Enter to send a message using `HandleKeyDown`.
- Provides a Send button as an alternative method of submission.
In the next step, we'll define the message model, initialize the `chatHistory` list, and modify `SendMessage()` to send the full conversation context to the LLM.
🧠 Step 3: Manage Chat History in the Code Section
That's it for the front-end part. Now let's switch to the `@code` section of your `ChatLLM.razor` file. We'll refactor it heavily.

You can keep only the first line of your existing code block:

```csharp
string messageToLLM = "";
```
Now, directly below it, add the following:

```csharp
List<ChatMessage> chatHistory = new();
private ChatContext? chatInstance;

private class ChatMessage
{
    public string Content { get; set; } = "";
    public bool IsUser { get; set; }
}
```
🧾 What we just did:
- `chatHistory` – a list that stores the entire conversation, which we already connected to the front-end display.
- `ChatMessage` – a small class that holds the message content and a flag (`IsUser`) to determine whether the message came from the user or the model.

You could extract this class into a separate file later, especially as your project grows.
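We imported `System.Text.Json` back in Step 1 but haven't used it yet. Here's a hedged sketch of one way you could use it: serializing the conversation so it survives a page reload or can be written to disk. The `ChatHistoryStore`, `SaveHistory`, and `LoadHistory` names are illustrative, not part of MaIN.NET:

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

public class ChatMessage
{
    public string Content { get; set; } = "";
    public bool IsUser { get; set; }
}

public static class ChatHistoryStore
{
    // Serialize the conversation to JSON, e.g. before saving it to disk or browser storage.
    public static string SaveHistory(List<ChatMessage> history) =>
        JsonSerializer.Serialize(history);

    // Rebuild the list from a previously saved JSON string; fall back to an empty list.
    public static List<ChatMessage> LoadHistory(string json) =>
        JsonSerializer.Deserialize<List<ChatMessage>>(json) ?? new List<ChatMessage>();
}
```

A round trip (`LoadHistory(SaveHistory(chatHistory))`) returns an equivalent list, so the same `chatHistory` field can be restored on the next visit.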
🔁 Replace the `SendMessage()` Method

Next, we'll completely rewrite the `SendMessage()` method to handle message history and preserve context across requests. Replace your existing method with this:
```csharp
private async Task SendMessage()
{
    if (string.IsNullOrWhiteSpace(messageToLLM))
        return;

    // Add user message to history
    chatHistory.Add(new ChatMessage { Content = messageToLLM, IsUser = true });
    var userMessage = messageToLLM;
    messageToLLM = "";

    try
    {
        if (chatInstance == null)
        {
            chatInstance = AIHub.Chat()
                .WithModel("gemma3:4b");
        }

        var result = await chatInstance
            .WithMessage(userMessage)
            .CompleteAsync();

        // Add chat response to history
        chatHistory.Add(new ChatMessage { Content = result.Message.Content, IsUser = false });
    }
    catch (Exception ex)
    {
        chatHistory.Add(new ChatMessage { Content = $"Error: {ex.Message}", IsUser = false });
    }
}
```
🔍 What's happening here?
We check if the message is empty. If not, we:
- Add the user's message to the `chatHistory` list and mark it as `IsUser = true`.
- Clear the input field.
- If it's the first message, create a new `ChatContext` instance with our local model.
- Call `.WithMessage()` to send the message within the current chat session, maintaining context awareness.
- Add the LLM's response to the chat history as a model response (`IsUser = false`).
⌨️ Add Enter Key Support
To improve the UX of your chat, let’s allow users to press Enter to send a message.
Add this method inside your `@code` block:
```csharp
private async Task HandleKeyDown(KeyboardEventArgs e)
{
    if (e.Key == "Enter")
    {
        await SendMessage();
    }
}
```
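One optional refinement, not part of the tutorial code: pressing Enter repeatedly while the model is still generating will fire overlapping requests. A minimal sketch of a guard, assuming you add an `isSending` field to your `@code` block and use this variant of `HandleKeyDown`:

```csharp
// Hypothetical field: true while a request is in flight.
private bool isSending = false;

private async Task HandleKeyDown(KeyboardEventArgs e)
{
    // Ignore Enter while a previous message is still being answered.
    if (e.Key == "Enter" && !isSending)
    {
        isSending = true;
        try
        {
            await SendMessage();
        }
        finally
        {
            // Re-enable sending even if SendMessage throws.
            isSending = false;
        }
    }
}
```

You could also bind `isSending` to the Send button's `disabled` attribute for the same effect on clicks.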
🎉 You’re all set! Your chatbot now supports full conversation history and feels much more like ChatGPT.
✅ Summary
In this part, we took a big step forward by adding chat history support to our local ChatGPT-like app. Here's what we accomplished:
- Refactored the UI to display a styled conversation thread.
- Built a message model to manage chat state.
- Used `ChatContext` to keep the conversation alive between turns.
- Enabled sending messages with the Enter key for smoother UX.
Now, your chatbot can maintain context across multiple messages, just like real-world assistants!
🚀 What’s Next?
In Part 3, we’ll add file upload support, allowing users to upload PDFs and have conversations about the content inside them.
You’ll learn how to:
- Accept file uploads in Blazor
- Read and process PDF files
- Inject the content into the chat context dynamically
🗂️ Imagine uploading a document and asking the model:
"Can you summarize this?" or "What are the key takeaways?"
Stay tuned — it's going to get even more powerful!
💬 Got questions or feedback? Drop a comment below — I’d love to hear your thoughts!
👉 Follow me to stay updated so you don't miss Part 3!
<< To be continued >>