Dive is a desktop application for Windows and Linux that works with any LLM capable of making tool calls. It is currently the easiest way to install and run MCP servers. With real-time tool invocation and tight system integration, Dive aims to give developers more flexible and efficient development tools.

0.6.0 → 0.7.3 Update Summary

  1. Multi-Model Support & Switching

Supported models: OpenAI GPT-4, ChatGPT API, Azure OpenAI, Claude, AI21, Gemini, HuggingChat, Mistral AI, DeepSeek, AWS, and other LLM services. Custom models are also supported.

Multi-Model Switching: Switch between multiple MCP servers. You can also keep multiple sets of API keys or different configurations for the same LLM provider and switch between them easily (see the sketch below).
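The idea of "multiple configurations for the same provider" can be pictured as a list of named entries. The TypeScript sketch below is purely illustrative; the field names (name, provider, model, apiKey, baseURL) are assumptions made for this example, not Dive's actual settings schema.

```ts
// Hypothetical illustration of keeping several configurations per provider.
// Field names are assumptions for this sketch, not Dive's real schema.
interface ModelConfig {
  name: string;      // label shown when switching
  provider: string;  // e.g. "openai", "anthropic", "ollama"
  model: string;     // e.g. "gpt-4"
  apiKey?: string;
  baseURL?: string;  // same provider, different endpoint
}

const configs: ModelConfig[] = [
  { name: "work-gpt4", provider: "openai", model: "gpt-4", apiKey: "sk-placeholder-work" },
  { name: "personal-gpt4", provider: "openai", model: "gpt-4", apiKey: "sk-placeholder-personal" },
  { name: "local-deepseek", provider: "ollama", model: "deepseek-r1", baseURL: "http://localhost:11434" },
];

// Switching models is then just selecting a different entry by name.
const active = configs.find((c) => c.name === "personal-gpt4");
console.log(`Active configuration: ${active?.name}`);
```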

  2. User Experience & Performance Optimization

Editable Messages: Modify messages that have already been sent.

Regenerate Responses: Supports regenerating AI responses.

Auto Updates: Now supports automatic updates to the latest version.

Interface and Operation Enhancements: Collapsible tool_calls and tool_result sections; pressing ESC while the sidebar is open now closes the sidebar first instead of interrupting the AI response.

API Key Configuration Improvements: Invalid inputs now display error messages in red, and the messages clear automatically when you switch providers.

MCP Server Default Example Optimizations: The echo example has been updated from CJS format to ESM, reducing file size.
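For reference, CJS modules use require/module.exports while ESM uses import/export, which works better with modern bundlers and tree-shaking. The sketch below shows what a minimal ESM-style echo server can look like with the official MCP TypeScript SDK; it is illustrative and not necessarily identical to the example Dive ships.

```ts
// Minimal ESM-style echo MCP server (illustrative, not Dive's exact bundled example).
// ESM `import` replaces CJS `require`, enabling tree-shaking and a smaller packaged file.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "echo", version: "1.0.0" });

// A single "echo" tool that returns its input unchanged.
server.tool("echo", { message: z.string() }, async ({ message }) => ({
  content: [{ type: "text", text: `Echo: ${message}` }],
}));

// Communicate with the host application (e.g. Dive) over stdio.
await server.connect(new StdioServerTransport());
```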

Background Operation and Auto-Start: The app can be minimized to the background and supports auto-start on boot.
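Background operation and auto-start are typically handled by the desktop shell. The Electron-style TypeScript sketch below illustrates the general pattern (hide-on-close plus a login item); it is an assumption-based illustration, not Dive's actual implementation, and on Linux auto-start usually requires a separate .desktop entry.

```ts
// Illustrative Electron-style sketch of tray minimization and auto-start.
// This is a general pattern, not Dive's actual code.
import { app, BrowserWindow, Tray, Menu } from "electron";

app.whenReady().then(() => {
  const win = new BrowserWindow({ width: 1024, height: 768 });
  win.loadURL("https://example.local"); // placeholder UI entry point

  // "Minimize to background": hide the window instead of quitting on close.
  win.on("close", (event) => {
    event.preventDefault();
    win.hide();
  });

  // Tray icon to restore the hidden window or quit for real.
  const tray = new Tray("icon.png"); // placeholder icon path
  tray.setContextMenu(
    Menu.buildFromTemplate([
      { label: "Show", click: () => win.show() },
      { label: "Quit", click: () => app.exit(0) },
    ])
  );

  // Auto-start on boot (Windows/macOS; Linux typically needs a .desktop entry).
  app.setLoginItemSettings({ openAtLogin: true });
});
```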
Try it out! 👇🏻
https://github.com/OpenAgentPlatform/Dive/releases