Originally published at GitHub
Introduction
In the rapidly evolving landscape of app development, artificial intelligence (AI) integration has become a critical feature for modern applications. Flutter developers seeking to incorporate AI capabilities into their apps face several challenges: choosing the right AI provider, managing API integrations, handling tokens and rate limits, and creating a smooth user experience.
The mcp_llm package addresses these challenges by providing a unified interface to multiple AI providers, along with powerful tools for building AI-powered Flutter applications. This article, the third in our Model Context Protocol (MCP) series following our explorations of mcp_server and mcp_client, introduces you to the mcp_llm package and demonstrates how to integrate AI capabilities into your Flutter apps.
Table of Contents
- Understanding mcp_llm
- Core Architecture
- Getting Started
- Basic Usage
- Building an AI Chat Application
- Advanced Features Preview
- Best Practices
- Conclusion
Understanding mcp_llm
The mcp_llm package is a powerful toolkit for integrating Large Language Models (LLMs) with Flutter and Dart applications. It builds upon the Model Context Protocol (MCP), providing a standardized way for AI models to interact with external tools, access resources, and communicate with your application.
Key Features
- Multiple LLM provider support: Claude (Anthropic), OpenAI, Together AI, and extensible custom providers
- Client/Server implementations: LlmClient for app integration, LlmServer for service provision
- MCP Integration: Seamless integration with mcp_client and mcp_server
- Plugin system: Extensible architecture for tools, prompts, and resources
- Parallel processing: Query multiple LLMs simultaneously and aggregate results
- RAG capabilities: Document storage and vector search integration
- Performance monitoring: Track response times, success rates, and more
Use Cases
The mcp_llm package enables a wide range of AI-powered applications:
- Intelligent chatbots and virtual assistants
- Document analysis and summarization systems
- Code generation and assistance tools
- Knowledge-based question-answering systems
- Multimodal content creation and analysis
- Enterprise data integration and analysis
Core Architecture
The core architecture of mcp_llm consists of several key components:
1. McpLlm
The main class that serves as the entry point for the package. It handles provider registration, client/server creation, plugin management, and more.
// Create an McpLlm instance
final mcpLlm = McpLlm();
// Register providers
mcpLlm.registerProvider('claude', ClaudeProviderFactory());
mcpLlm.registerProvider('openai', OpenAiProviderFactory());
2. LlmClient
The client-side implementation that communicates with AI models and integrates with mcp_client. It sends queries to the AI model, receives responses, and handles tool calls.
// Create an LlmClient
final client = await mcpLlm.createClient(
  providerName: 'claude',
  config: LlmConfiguration(
    apiKey: 'your-api-key',
    model: 'claude-3-haiku-20240307',
  ),
);
// Chat with the AI
final response = await client.chat("What's the weather like today?");
3. LlmServer
The server-side implementation that provides AI capabilities as a service and integrates with mcp_server. It processes requests from external clients and exposes AI functionality.
// Create an LlmServer
final server = await mcpLlm.createServer(
  providerName: 'openai',
  config: LlmConfiguration(
    apiKey: 'your-api-key',
    model: 'gpt-4',
  ),
);
// Register a local tool
server.registerLocalTool(
  name: 'calculator',
  description: 'Performs calculations',
  inputSchema: {...},
  handler: calculatorHandler,
);
4. LLM Providers
Implementation classes that communicate with specific LLM APIs. Each provider implements the LlmInterface and handles communication with a specific LLM service.
// Supported providers
mcpLlm.registerProvider('claude', ClaudeProviderFactory());
mcpLlm.registerProvider('openai', OpenAiProviderFactory());
mcpLlm.registerProvider('together', TogetherProviderFactory());
5. Plugin System
An extensible plugin system that allows registering and managing additional functionality through plugins.
// Register a plugin
await mcpLlm.registerPlugin(myToolPlugin);
6. RAG Components
Components that handle document storage, embedding management, vector search, and other RAG (Retrieval Augmented Generation) capabilities.
// Create a retrieval manager
final retrievalManager = mcpLlm.createRetrievalManager(
  providerName: 'openai',
  documentStore: documentStore,
);
Getting Started
Installation
Add the mcp_llm dependency to your pubspec.yaml file:
dependencies:
  mcp_llm: ^0.2.2
  flutter:
    sdk: flutter
Or install it from the command line:
flutter pub add mcp_llm
API Key Setup
You'll need API keys for the LLM providers you intend to use. Here's how to set up keys for the main providers:
// API key setup examples
final claudeConfig = LlmConfiguration(
  apiKey: 'your-claude-api-key',
  model: 'claude-3-haiku-20240307',
);
final openAiConfig = LlmConfiguration(
  apiKey: 'your-openai-api-key',
  model: 'gpt-4',
);
final togetherConfig = LlmConfiguration(
  apiKey: 'your-together-api-key',
  model: 'mixtral-8x7b-instruct',
);
For security, it's recommended to store API keys in environment variables or secure storage:
// Load API key from environment variable
final apiKey = Platform.environment['CLAUDE_API_KEY'] ??
    await secureStorage.read(key: 'claude_api_key');
Basic Usage
Creating an McpLlm Instance and Registering Providers
import 'package:mcp_llm/mcp_llm.dart';

void main() async {
  // Create McpLlm instance
  final mcpLlm = McpLlm();
  // Optional logging setup
  final logger = Logger.getLogger('mcp_llm.main');
  logger.setLevel(LogLevel.debug);
  // Register providers
  mcpLlm.registerProvider('claude', ClaudeProviderFactory());
  mcpLlm.registerProvider('openai', OpenAiProviderFactory());
  // Check available providers and capabilities
  final capabilities = mcpLlm.getProviderCapabilities();
  logger.info('Available providers: ${capabilities.keys.join(', ')}');
  // Clean up resources
  await mcpLlm.shutdown();
}
Creating and Using LlmClient
// Create LlmClient
final client = await mcpLlm.createClient(
  providerName: 'claude',
  config: LlmConfiguration(
    apiKey: 'your-claude-api-key',
    model: 'claude-3-haiku-20240307',
    options: {
      'temperature': 0.7,
      'max_tokens': 1500,
    },
  ),
  systemPrompt: 'You are a helpful assistant specialized in Flutter development.',
);
// Chat with AI
final response = await client.chat(
  "What's the best state management approach for Flutter?",
);
print('AI Response: ${response.text}');
Handling Streaming Responses
For real-time responses, you can use streaming:
// Streaming response
final responseStream = client.streamChat(
  "Explain how Flutter's widget tree works",
);
// Handle response chunks
await for (final chunk in responseStream) {
  // Process response chunk
  print('Chunk: ${chunk.textChunk}');
  // Check if complete
  if (chunk.isDone) {
    print('Response completed');
    break;
  }
}
Creating and Using LlmServer
// Create LlmServer
final server = await mcpLlm.createServer(
  providerName: 'openai',
  config: LlmConfiguration(
    apiKey: 'your-openai-api-key',
    model: 'gpt-4',
  ),
);
// Register local tool
server.registerLocalTool(
  name: 'calculator',
  description: 'Performs basic arithmetic operations',
  inputSchema: {
    'type': 'object',
    'properties': {
      'operation': {
        'type': 'string',
        'enum': ['add', 'subtract', 'multiply', 'divide'],
      },
      'a': {'type': 'number'},
      'b': {'type': 'number'},
    },
    'required': ['operation', 'a', 'b'],
  },
  handler: (args) async {
    final operation = args['operation'] as String;
    final a = args['a'] as num;
    final b = args['b'] as num;
    switch (operation) {
      case 'add': return {'result': a + b};
      case 'subtract': return {'result': a - b};
      case 'multiply': return {'result': a * b};
      case 'divide': return {'result': a / b};
      default: throw ArgumentError('Unknown operation: $operation');
    }
  },
);
// Process query
final result = await server.processQuery(
  query: "What's 25 + 17?",
  useLocalTools: true,
);
print('AI Response: ${result.text}');
Building an AI Chat Application
Now, let's build a simple AI chat application using Flutter and the mcp_llm package.
1. Project Setup
Create a new Flutter project and add the required dependencies:
flutter create ai_chat_app
cd ai_chat_app
flutter pub add mcp_llm
2. Secure API Key Management
In production, API keys belong in environment variables or secure storage. To keep this example focused, we'll hardcode the key directly, but you should never ship an app that way.
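If you want to keep the key out of source control even in a demo, one lightweight option is Dart's compile-time environment mechanism. A minimal sketch, assuming you pass a (hypothetical) CLAUDE_API_KEY define at build time:
// Build with: flutter run --dart-define=CLAUDE_API_KEY=your-key
// String.fromEnvironment reads the value at compile time,
// so the key never has to appear in your source files.
const apiKey = String.fromEnvironment('CLAUDE_API_KEY');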
3. Implementing the Chat App
Create a new file lib/main.dart with the following code:
import 'package:flutter/material.dart';
import 'package:mcp_llm/mcp_llm.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'AI Chat App',
      theme: ThemeData(
        colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
        useMaterial3: true,
      ),
      home: const ChatScreen(),
    );
  }
}

class ChatScreen extends StatefulWidget {
  const ChatScreen({super.key});

  @override
  State<ChatScreen> createState() => _ChatScreenState();
}

class _ChatScreenState extends State<ChatScreen> {
  final TextEditingController _textController = TextEditingController();
  final List<ChatMessage> _messages = [];
  late McpLlm _mcpLlm;
  LlmClient? _client;
  bool _isTyping = false;

  @override
  void initState() {
    super.initState();
    _initializeLlm();
  }

  Future<void> _initializeLlm() async {
    _mcpLlm = McpLlm();
    _mcpLlm.registerProvider('claude', ClaudeProviderFactory());
    // API key - replace with your actual key or use secure storage
    const apiKey = 'your-claude-api-key';
    if (apiKey.isEmpty) {
      _showError('API key not found');
      return;
    }
    try {
      _client = await _mcpLlm.createClient(
        providerName: 'claude',
        config: LlmConfiguration(
          apiKey: apiKey,
          model: 'claude-3-haiku-20240307',
          options: {
            'temperature': 0.7,
            'max_tokens': 1500,
          },
        ),
        systemPrompt: 'You are a helpful assistant. Be concise and friendly.',
      );
    } catch (e) {
      _showError('Failed to initialize AI: $e');
    }
  }

  void _showError(String message) {
    if (!mounted) return; // Guard: this may be called after an await
    ScaffoldMessenger.of(context).showSnackBar(
      SnackBar(content: Text(message)),
    );
  }

  void _handleSubmitted(String text) async {
    if (text.trim().isEmpty) return;
    _textController.clear();
    setState(() {
      _messages.add(ChatMessage(
        text: text,
        isUser: true,
      ));
      _isTyping = true;
    });
    if (_client == null) {
      _showError('AI client not initialized');
      setState(() {
        _isTyping = false;
      });
      return;
    }
    try {
      final response = await _client!.chat(text);
      if (!mounted) return; // Widget may have been disposed during the await
      setState(() {
        _messages.add(ChatMessage(
          text: response.text,
          isUser: false,
        ));
        _isTyping = false;
      });
    } catch (e) {
      _showError('Error getting AI response: $e');
      if (!mounted) return;
      setState(() {
        _isTyping = false;
      });
    }
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: const Text('AI Chat App'),
        backgroundColor: Theme.of(context).colorScheme.inversePrimary,
      ),
      body: Column(
        children: [
          Flexible(
            child: ListView.builder(
              padding: const EdgeInsets.all(8.0),
              reverse: true,
              itemCount: _messages.length,
              itemBuilder: (_, index) => _messages[_messages.length - 1 - index],
            ),
          ),
          if (_isTyping)
            const Padding(
              padding: EdgeInsets.all(8.0),
              child: Row(
                mainAxisAlignment: MainAxisAlignment.start,
                children: [
                  CircularProgressIndicator(),
                  SizedBox(width: 8),
                  Text('AI is typing...'),
                ],
              ),
            ),
          const Divider(height: 1.0),
          Container(
            decoration: BoxDecoration(
              color: Theme.of(context).cardColor,
            ),
            child: _buildTextComposer(),
          ),
        ],
      ),
    );
  }

  Widget _buildTextComposer() {
    return IconTheme(
      data: IconThemeData(color: Theme.of(context).colorScheme.primary),
      child: Container(
        margin: const EdgeInsets.symmetric(horizontal: 8.0),
        padding: const EdgeInsets.symmetric(horizontal: 8.0, vertical: 8.0),
        child: Row(
          children: [
            Flexible(
              child: TextField(
                controller: _textController,
                onSubmitted: _handleSubmitted,
                decoration: const InputDecoration.collapsed(
                  hintText: 'Send a message',
                ),
              ),
            ),
            Container(
              margin: const EdgeInsets.symmetric(horizontal: 4.0),
              child: IconButton(
                icon: const Icon(Icons.send),
                onPressed: () => _handleSubmitted(_textController.text),
              ),
            ),
          ],
        ),
      ),
    );
  }

  @override
  void dispose() {
    _mcpLlm.shutdown();
    _textController.dispose();
    super.dispose();
  }
}

class ChatMessage extends StatelessWidget {
  final String text;
  final bool isUser;

  const ChatMessage({
    super.key,
    required this.text,
    required this.isUser,
  });

  @override
  Widget build(BuildContext context) {
    return Container(
      margin: const EdgeInsets.symmetric(vertical: 10.0),
      child: Row(
        crossAxisAlignment: CrossAxisAlignment.start,
        children: [
          Container(
            margin: const EdgeInsets.only(right: 16.0),
            child: CircleAvatar(
              backgroundColor: isUser
                  ? Theme.of(context).colorScheme.primary
                  : Theme.of(context).colorScheme.secondary,
              child: Text(isUser ? 'You' : 'AI'),
            ),
          ),
          Expanded(
            child: Column(
              crossAxisAlignment: CrossAxisAlignment.start,
              children: [
                Text(
                  isUser ? 'You' : 'AI Assistant',
                  style: Theme.of(context).textTheme.titleMedium,
                ),
                Container(
                  margin: const EdgeInsets.only(top: 5.0),
                  child: Text(text),
                ),
              ],
            ),
          ),
        ],
      ),
    );
  }
}
4. Network Permissions for macOS
If you're running on macOS, you need to set up network permissions:
App Transport Security Settings
Add the following to macos/Runner/Info.plist:
<key>NSAppTransportSecurity</key>
<dict>
    <key>NSAllowsArbitraryLoads</key>
    <true/>
</dict>
Network Entitlements
Add the following to both macos/Runner/DebugProfile.entitlements and macos/Runner/Release.entitlements:
<key>com.apple.security.network.client</key>
<true/>
<key>com.apple.security.network.server</key>
<true/>
5. Run the App
Now you can run the app:
flutter run
The app provides a simple chat interface where users can send messages to the AI and receive responses. It includes features like:
- Message list with user and AI messages
- Loading indicator while the AI is responding
- Error handling and notifications
- Clean UI with Material Design 3
Advanced Features Preview
The mcp_llm package offers numerous advanced features that we'll explore in future articles:
1. Streaming Responses
Display AI responses in real-time for a better user experience:
final responseStream = client.streamChat("Explain Flutter widgets");
await for (final chunk in responseStream) {
  // Update UI
  setState(() {
    currentResponse += chunk.textChunk;
  });
}
2. mcp_client Integration
Access external tools and resources:
import 'package:mcp_client/mcp_client.dart';
// Create MCP client
final mcpClient = McpClient.createClient(...);
// Connect to LlmClient
final llmClient = await mcpLlm.createClient(
  providerName: 'claude',
  mcpClient: mcpClient,
  ...
);
3. mcp_server Integration
Provide AI capabilities as a service:
import 'package:mcp_server/mcp_server.dart';
// Create MCP server
final mcpServer = McpServer.createServer(...);
// Connect to LlmServer
final llmServer = await mcpLlm.createServer(
  providerName: 'openai',
  mcpServer: mcpServer,
  ...
);
4. Multiple LLM Providers
Leverage different AI models based on the task:
// Register multiple providers
mcpLlm.registerProvider('claude', ClaudeProviderFactory());
mcpLlm.registerProvider('openai', OpenAiProviderFactory());
mcpLlm.registerProvider('together', TogetherProviderFactory());
// Select client based on query
final client = mcpLlm.selectClient(query);
5. Parallel Processing
Query multiple LLMs simultaneously and aggregate results:
final response = await mcpLlm.executeParallel(
  "What are the pros and cons of Flutter?",
  providerNames: ['claude', 'openai', 'together'],
);
6. RAG (Retrieval Augmented Generation)
Integrate document search results with AI responses:
// Create retrieval manager
final retrievalManager = mcpLlm.createRetrievalManager(...);
// Add documents
await retrievalManager.addDocument(Document(...));
// Search and generate
final answer = await retrievalManager.retrieveAndGenerate(
  "Tell me about Flutter state management",
);
Best Practices
When working with mcp_llm, consider these best practices:
1. API Key Security
Always store API keys securely (a sketch follows this list):
- Use environment variables
- Use secure storage solutions
- Never hardcode keys in source code
- Don't include keys in version control
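As a concrete illustration, here is a minimal sketch that resolves the key at runtime. It assumes the flutter_secure_storage package; the claude_api_key key name and the helper itself are hypothetical, not part of mcp_llm:
import 'package:flutter_secure_storage/flutter_secure_storage.dart';

// Hypothetical helper: resolve the API key without hardcoding it.
Future<String?> loadApiKey() async {
  const storage = FlutterSecureStorage();
  // Prefer a key saved in the platform keychain/keystore.
  final stored = await storage.read(key: 'claude_api_key');
  if (stored != null && stored.isNotEmpty) return stored;
  // Fall back to a value injected at build time via --dart-define.
  const fromEnv = String.fromEnvironment('CLAUDE_API_KEY');
  return fromEnv.isEmpty ? null : fromEnv;
}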
2. Error Handling
Implement robust error handling (a retry sketch follows this list):
- Wrap AI calls in try-catch blocks
- Handle network issues gracefully
- Provide meaningful error messages to users
- Implement retry mechanisms for transient errors
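For instance, a generic retry helper with exponential backoff can wrap any call to client.chat. This is a sketch under our own naming, not an mcp_llm API; the attempt count and delays are arbitrary choices:
// Hypothetical helper: retry an async action on transient failures.
Future<T> withRetry<T>(
  Future<T> Function() action, {
  int maxAttempts = 3,
}) async {
  var delay = const Duration(seconds: 1);
  for (var attempt = 1; ; attempt++) {
    try {
      return await action();
    } catch (e) {
      if (attempt >= maxAttempts) rethrow; // Surface the error after the last try
      await Future.delayed(delay);
      delay *= 2; // Exponential backoff: 1s, 2s, 4s, ...
    }
  }
}

// Usage: final response = await withRetry(() => client.chat(text));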
3. Performance Optimization
Optimize performance (a caching sketch follows this list):
- Use streaming responses for better UX
- Implement caching where appropriate
- Monitor token usage and API costs
- Use the performance monitoring features
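As one example of caching, identical prompts can be served from memory instead of a paid API call. A deliberately naive sketch; a real app would bound the cache's size and expiry:
// Hypothetical in-memory cache keyed by prompt text.
final _responseCache = <String, String>{};

Future<String> cachedChat(LlmClient client, String prompt) async {
  final hit = _responseCache[prompt];
  if (hit != null) return hit; // Cache hit: no tokens spent
  final response = await client.chat(prompt);
  _responseCache[prompt] = response.text;
  return response.text;
}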
4. User Experience
Create a smooth user experience (a timeout sketch follows this list):
- Show typing indicators during AI processing
- Implement timeout handling
- Provide fallbacks for unavailable services
- Design intuitive interfaces for AI interactions
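For timeout handling, Dart's Future.timeout makes a simple guard. A sketch with an arbitrary 30-second threshold and a placeholder fallback message:
import 'dart:async';

// Hypothetical guard: return a fallback when the model is too slow.
Future<String> chatWithTimeout(LlmClient client, String prompt) async {
  try {
    final response =
        await client.chat(prompt).timeout(const Duration(seconds: 30));
    return response.text;
  } on TimeoutException {
    return 'The assistant is taking too long. Please try again.';
  }
}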
5. Testing
Thoroughly test your AI integrations (a testing sketch follows this list):
- Unit test your AI-related code
- Use mock providers for testing
- Test with different query types
- Verify error handling scenarios
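One practical pattern is to hide LlmClient behind an interface you own, so widgets can be tested against a deterministic fake with no network calls. A sketch; ChatService and its implementations are hypothetical names, not mcp_llm APIs:
// Hypothetical abstraction: widgets depend on this instead of LlmClient.
abstract class ChatService {
  Future<String> send(String message);
}

class LlmChatService implements ChatService {
  LlmChatService(this._client);
  final LlmClient _client;

  @override
  Future<String> send(String message) async => (await _client.chat(message)).text;
}

// Deterministic fake for widget and unit tests.
class FakeChatService implements ChatService {
  @override
  Future<String> send(String message) async => 'canned reply for: $message';
}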
Conclusion
The mcp_llm package offers a powerful, flexible way to integrate AI capabilities into Flutter applications. By providing a unified interface to multiple LLM providers and robust tools for building AI-powered features, it simplifies the process of creating intelligent apps.
In this article, we've explored the basic concepts and usage of mcp_llm, but we've only scratched the surface of its capabilities. Future articles in this series will delve deeper into advanced features such as LlmClient's tool integration, LlmServer implementation, plugin development, and RAG systems.
With mcp_llm, Flutter developers can leverage the power of large language models while maintaining a clean, modular architecture and a great user experience. Whether you're building a simple chatbot or a complex AI-powered application, mcp_llm provides the tools you need to succeed.
Resources
- Example GitHub Repository
- mcp_llm GitHub Repository
- Model Context Protocol
- Flutter Documentation
- Claude API Documentation
- OpenAI API Documentation
Support the Developer
If you found this article helpful, please consider supporting the development of more free content through Patreon. Your support makes a big difference!
Tags: #Flutter #AI #MCP #LLM #Dart #Claude #OpenAI #ModelContextProtocol #AIIntegration