Hey dev community! 👋

Like many of you, I've been diving deep into the world of Large Language Models (LLMs). It's an incredibly exciting space, but let's be honest – managing interactions across different platforms like ChatGPT, Claude.ai, Gemini, plus maybe some local models via Ollama, can get messy fast.

I found myself constantly switching tabs, copy-pasting prompts, losing conversation context, and wishing I had more control over how I interacted with these powerful tools. Comparing model outputs on the same prompt was a chore, and organizing my experiments felt chaotic.

That frustration led me to build AIHubMatrix.

What is AIHubMatrix?

In simple terms, AIHubMatrix is a self-hostable web application (with a live demo!) designed to be your single, unified interface for interacting with multiple AI models from various providers (OpenAI, Anthropic, Google, DeepSeek, Grok, local models, and more).

It renders AI responses beautifully using Markdown and is built with:

  • Backend: Node.js / Express
  • Frontend: React (served directly by the backend for simplicity)
  • Database: PostgreSQL
  • Deployment: Docker & Docker Compose
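For a rough idea of how a stack like this wires together, here's a minimal Docker Compose sketch. Service names, ports, and environment variables are illustrative assumptions, not the project's actual compose file (see the README for that):

```yaml
# Hypothetical sketch of an Express + PostgreSQL compose setup.
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://postgres:postgres@db:5432/aihubmatrix
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```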

Solving the Fragmentation Problem

The core idea is to bring all your AI interactions under one roof, but I wanted to go beyond just being an aggregator. I focused on adding features I personally missed in other tools:

1. 🔄 Dynamic, In-Conversation Model Switching

This is the killer feature. Start a conversation with GPT-4o, get a response, and then switch to Claude 3.7 Sonnet to refine it or get a different perspective, all within the same chat history. This is invaluable for comparing models or using the best tool for each step.
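Conceptually, this works because the chat history can stay provider-agnostic while each request picks its own model. Here's a minimal sketch of that idea; the model names and `PROVIDERS` mapping are illustrative, not AIHubMatrix's actual code:

```javascript
// Sketch: per-message model routing. Only the *next* request's model changes;
// the shared history is reused across providers.
const PROVIDERS = {
  'gpt-4o': 'openai',
  'claude-3-7-sonnet': 'anthropic',
  'gemini-1.5-pro': 'google',
};

function buildRequest(history, model) {
  const provider = PROVIDERS[model];
  if (!provider) throw new Error(`Unknown model: ${model}`);
  return {
    provider,
    model,
    // Keep the history in a provider-neutral role/content shape:
    messages: history.map(({ role, content }) => ({ role, content })),
  };
}
```

Each assistant turn can also record which model produced it, so a single conversation ends up as an interleaved, attributed transcript.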

2. 🔬 Fine-Grained Message Control

Tired of deleting messages just to see how the AI responds without them? AIHubMatrix lets you exclude/include individual messages from the AI's context without permanently deleting them. Perfect for experimentation. You can also easily view messages as raw text or rendered Markdown, and copy as plain or rich text.
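The key design point is that exclusion is non-destructive: the message stays in storage, and a flag simply hides it from the context sent to the model. A sketch of that filter (the `excluded` field is an assumed shape, not the app's real schema):

```javascript
// Build the context window sent to the model, skipping excluded messages
// without touching the stored history.
function buildContext(messages) {
  return messages
    .filter((m) => !m.excluded)
    .map(({ role, content }) => ({ role, content }));
}
```

Toggling `excluded` back to `false` restores the message to the context on the next request, which is what makes rapid A/B experimentation cheap.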

3. 👁️ Universal Vision (Even for Text Models!)

Models like GPT-4o or Claude 3 handle images natively. But for text-only models? AIHubMatrix integrates the Google Vision API as a fallback. Upload an image, and the app automatically generates a description for the text model.
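The fallback logic is easy to picture: vision-capable models get the image as-is, and everything else gets a generated description injected as text. A sketch under stated assumptions; `describeImage` is a hypothetical wrapper around an image-description service (e.g. Google Vision), and the model list is illustrative:

```javascript
// Models assumed (for this sketch) to accept images natively.
const VISION_MODELS = new Set(['gpt-4o', 'claude-3-7-sonnet']);

async function imagePart(model, imageUrl, describeImage) {
  if (VISION_MODELS.has(model)) {
    // Native vision: pass the image through untouched.
    return { type: 'image_url', image_url: { url: imageUrl } };
  }
  // Text-only fallback: replace the image with a generated description.
  const description = await describeImage(imageUrl);
  return { type: 'text', text: `[Image description: ${description}]` };
}
```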

4. 📁 Folder Organization

Organize your chats into a familiar hierarchical folder structure using simple drag-and-drop. No more endless flat lists!

5. 🔌 Resilient Connections

Dropped your connection mid-request? AIHubMatrix keeps processing in the background and updates you when you reconnect.
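The usual pattern here is to decouple the model request from the client connection: the server tracks each generation as a job, and a reconnecting client polls (or re-subscribes) by job id. This is an illustrative sketch of that pattern, not the app's actual implementation:

```javascript
// In-memory job registry (a real server would persist this, e.g. in Postgres).
const jobs = new Map();
let nextId = 0;

// Kick off work that outlives the client connection; return a handle.
function startJob(work) {
  const id = ++nextId;
  jobs.set(id, { status: 'running', result: null });
  work().then(
    (result) => jobs.set(id, { status: 'done', result }),
    (err) => jobs.set(id, { status: 'error', result: String(err) })
  );
  return id; // client stores this and can check back after reconnecting
}

function pollJob(id) {
  return jobs.get(id) ?? { status: 'unknown', result: null };
}
```

Because the job keeps running server-side, a dropped WebSocket or closed tab doesn't lose the response; the client just asks for the job's status on reconnect.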

Other Key Features:

  • Wide file type support (PDF, DOCX, XLSX, code, images, ZIPs...) with in-chat previews.
  • Configurable PDF processing (Native, Text Extraction, Google Vision OCR).
  • Conversation Cache for pre-loading context/files.
  • Code syntax highlighting.
  • Streaming support.
  • Multi-user support & Admin panel.
  • Dark/Light mode.
  • Developer debug tools (JSON request preview).

Current Status & What's Next

AIHubMatrix is currently in a functional demo state. The core features are working. I'm planning to refine the UI, add more tests, and potentially integrate more AI providers based on feedback.

Try it Out & Give Me Feedback! 🙏

I'm excited to share this with the dev community and would love your feedback!

You can also run it locally using Docker (instructions in the README).

I'm particularly interested in:

  • How intuitive is the dynamic model switching?
  • Is the message exclusion feature useful?
  • What models/providers should be next?
  • Any bugs or UI quirks?

Feel free to open an issue on GitHub or drop a comment below. Thanks for reading!