Imagine ChatGPT — but offline, private, and trained on your own files.

No cloud, no API keys, no monthly fees.

That's exactly what I built: a personal AI assistant that runs entirely on your computer, with voice support, document understanding, and memory.


🔧 What It Does

  • Runs offline using local models with Ollama
  • 📄 Understands uploaded documents (PDF or TXT)
  • 🧠 Remembers your chat and continues where you left off
  • 🎙️ Accepts voice input (speak to it!)
  • 🔊 Replies out loud using text-to-speech
  • 🎨 Customisable personality through a simple UI

All running inside a sleek Streamlit interface.


🛠 Tech Stack

  • 🧠 LangChain for chaining LLM logic, prompts, and memory
  • 🐍 Python backend
  • 🖥️ Ollama to run mistral:7b and other open-source LLMs locally
  • 📚 FAISS for vector search + document retrieval
  • 📄 PyPDFLoader + TextLoader to process files
  • 🎤 SpeechRecognition for microphone input
  • 🔈 pyttsx3 for voice output
  • 🎨 Streamlit for real-time UI

🖼️ Demo

https://shorturl.at/j36oQ


⚙️ How It Works

  1. Chat + Memory

    Conversations are saved to disk as JSON. Reload the app — it remembers you.
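The persistence layer can be sketched with nothing but the standard library. The file name and message shape below are illustrative, not necessarily the app's exact schema:

```python
import json
from pathlib import Path

HISTORY_FILE = Path("chat_history.json")  # hypothetical filename

def load_history() -> list[dict]:
    """Return prior messages, or an empty list on first run."""
    if HISTORY_FILE.exists():
        return json.loads(HISTORY_FILE.read_text(encoding="utf-8"))
    return []

def save_history(messages: list[dict]) -> None:
    """Persist the full conversation to disk after each turn."""
    HISTORY_FILE.write_text(json.dumps(messages, indent=2), encoding="utf-8")

# Each turn is appended as a role/content pair, ChatGPT-style,
# so reloading the app restores the whole conversation.
history = load_history()
history.append({"role": "user", "content": "Hello again!"})
save_history(history)
```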

  2. Document Q&A

    You can upload .pdf or .txt files. The app reads them, chunks the content, embeds it using Ollama embeddings, and uses FAISS to retrieve relevant context before responding.
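The real pipeline uses LangChain's loaders, Ollama embeddings, and FAISS; the sketch below reproduces the same chunk → embed → retrieve flow with a toy bag-of-words "embedding" standing in for the model, so the idea is visible without any dependencies:

```python
import math
from collections import Counter

def chunk(text: str, size: int = 80, overlap: int = 20) -> list[str]:
    """Split text into overlapping character windows, like a text splitter."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' (the app uses Ollama embeddings instead)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by similarity to the query (FAISS does this at scale)."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

doc = ("Ollama runs large language models locally. FAISS indexes embedding "
       "vectors for fast similarity search. Streamlit renders the chat UI.")
context = retrieve("how are vectors searched?", chunk(doc))
```

The top-ranked chunks are then stuffed into the prompt as context before the LLM answers.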

  3. Voice Input & Output

    • Click the 🎙️ button to speak.
    • The assistant replies aloud using TTS.
    • Both features are toggleable in the sidebar.
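Both directions can be sketched with the two libraries from the stack. Function names are illustrative; the imports are deferred into the functions so text-only mode works without mic dependencies. Note that `recognize_google` calls Google's free web API, so a fully offline setup would swap in an offline engine such as CMU Sphinx via `recognize_sphinx`:

```python
def listen_once(timeout: float = 5.0) -> str:
    """Capture one utterance from the default microphone and transcribe it."""
    import speech_recognition as sr  # lazy import: only needed when voice is on
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source, duration=0.5)
        audio = recognizer.listen(source, timeout=timeout)
    # recognize_google is online; use recognize_sphinx to stay fully local
    return recognizer.recognize_google(audio)

def speak(text: str) -> None:
    """Read the assistant's reply aloud with the system TTS voice."""
    import pyttsx3  # lazy import: only needed when TTS is toggled on
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()
```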

  4. Custom Personality

    You can modify the assistant’s tone with a system prompt (e.g., “You are a sarcastic genius.” 😏).
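One way to wire that up is to prepend the system prompt to every request, so each reply follows the chosen persona. The function and variable names here are hypothetical:

```python
DEFAULT_PERSONALITY = "You are a helpful assistant."  # e.g. editable in the sidebar

def build_messages(system_prompt: str, history: list[dict], user_input: str) -> list[dict]:
    """Prepend the personality as a system message ahead of the chat history."""
    return (
        [{"role": "system", "content": system_prompt or DEFAULT_PERSONALITY}]
        + history
        + [{"role": "user", "content": user_input}]
    )

messages = build_messages("You are a sarcastic genius.", [], "Explain recursion.")
```

Because the system message is rebuilt on every turn, changing the prompt in the UI takes effect immediately without restarting the app.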


💡 What I Learned

  • How to build a full offline AI assistant from scratch
  • How to integrate speech recognition + TTS in a web app
  • How to handle multi-turn memory using LangChain
  • How to combine document loaders, embeddings, and retrieval in one pipeline

📦 Want to Try It?

This project is fully open-source. You can find the code and setup instructions here:

👉 GitHub Repo: https://github.com/D-artisan/ai-chatbot

You’ll need:

  • Python
  • Ollama installed and a model pulled (e.g., ollama pull mistral)
  • A mic + speakers if you want voice

🚀 Future Features

  • 📂 Multi-file upload support
  • 🧾 Summarise documents before chat
  • 📜 Export conversations
  • 🌐 LAN deployment to access from any device

👋 Final Thoughts

This was a fun, hands-on way to get deeper into the LLM ecosystem — especially offline tools like Ollama and LangChain.

Let me know what you’d build with a chatbot like this — or if you’d like a guided version to follow along!


🔗 Connect With Me

Let’s chat on:


#AI #LangChain #LLM #Chatbot #Python #Ollama #VoiceAI #Streamlit #OpenSource