Am I a bit late to talk about the MCP and A2A protocols? I hope not! Both have been all over the internet, and they are mind-blowing! A race is on right now, and nobody wants to be left behind in shipping new models and tools.

Anthropic released MCP (Model Context Protocol) for agents, which quickly gained community traction. Recently, we saw OpenAI's integration with MCP as well. MCP defines how an agent communicates with external APIs, which makes multi-tool calling much easier.

Now, Google has released an A2A (Agent2Agent) protocol to streamline agent communication. In short, A2A standardises agent-to-agent communication while MCP standardises agent-to-tools communication.

So, yes, they are not competing but complementing each other. Google has also extended support for MCP in its Agent Development Kit (ADK).

This blog post explains how they work together to standardise building production-ready AI agents.

Let’s first discuss MCP and then proceed with the A2A protocol to see how both work.


Understanding MCP – The Role of the Model Context Protocol

MCP stands for Model Context Protocol, an open standard developed by Anthropic. It defines a structured and efficient way for applications to provide external context to large language models (LLMs) like Claude and GPT. Think of it like USB for AI — it lets AI models connect to external tools and data sources in a standardised way.

MCP diagram

What’s the Core Problem MCP Solves?

MCP has three critical components:

  • Client: Lives inside the host application, maintains a 1-to-1 connection with each server, handles routing between the LLM and servers, and negotiates capabilities.
  • Server: Wraps API services, databases, logs, and other resources, exposing them as tools that LLMs can call to complete tasks.
  • Protocol: The core standard governing client and server communication.

For an in-depth guide on MCP, its architecture, and internal workings, see Model Context Protocol (MCP): Explained.

MCP architecture

In short, MCP lets client developers build apps (Cursor, Windsurf, etc.) and server developers build API servers without worrying about each other’s implementation. Any MCP client can connect to any MCP server and vice-versa.

The core problem MCP tackles is that every tool integration looks different:

  • Different field names (start_time vs event_time)
  • Different auth schemes (OAuth, API key, JWT, etc.)
  • Different error formats

MCP standardises how servers are built. You still write integration logic for each app (or use Composio), but MCP ensures any server can plug into any client. That makes life easier for millions of developers and abstracts away tool-by-tool quirks.
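
To make that concrete, here is a minimal sketch of what an MCP server can look like using the official MCP Python SDK's FastMCP helper. Treat it as an illustrative sketch: the calendar tool and its fields are made up for this example, and it is not a Composio or Anthropic reference implementation.

# minimal_mcp_server.py – illustrative sketch (pip install mcp)
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("calendar-demo")  # the server name is arbitrary

@mcp.tool()
def create_event(title: str, start_time: str, duration_minutes: int = 30) -> str:
    """Create a calendar event (stubbed out here for illustration)."""
    return f"Created '{title}' starting at {start_time} for {duration_minutes} minutes"

if __name__ == "__main__":
    mcp.run()  # defaults to the STDIO transport, so any MCP client can launch this script

Any MCP client that supports STDIO servers can now launch this script and call create_event without knowing anything about its internals.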

You can think of it like this:

  • User: Adds a Google Calendar MCP server to Cursor IDE.
  • Client: Fetches the server’s tools and injects them into the LLM context.
  • User: “Schedule a team sync on Thursday at 3 PM.”
  • MCP Client: The LLM decides it needs the calendar tool, fills in the parameters, and the client executes the call (after auth).
  • Calendar Server: Creates the meeting.

Instead of wiring services with brittle code, you now get a clean, modular interface.

Despite its merits, MCP can be tough in production — security, reliability, and multiple servers get hairy. That’s why we at Composio are building robust MCP infrastructure for your AI workflows.

Agent2Agent Protocol by Google

Google introduced the Agent-to-Agent Protocol (A2A), inspired by MCP. Where MCP focuses on agent-to-server calls, A2A focuses on agent-to-agent interoperability.

A2A diagram

Imagine a travel assistant planning a trip from Delhi to Mumbai. It can delegate to a train-booking agent, a hotel-booking agent, and a cab-service agent:

“Plan a full trip from Delhi to Mumbai, book my train, find a hotel near the station, and arrange local transport.”

Behind the scenes, A2A forms a mini-team of agents, each handling part of the job. That's modular, connected, and smarter.

A2A Design Principles

A2A enables flexible, secure communication between autonomous agents, regardless of vendor or ecosystem.

A2A in a nutshell

Key advantages

  1. True agentic behaviour – independent cooperation without shared state.
  2. Familiar tech stack – HTTP + SSE + JSON-RPC.
  3. Enterprise-grade security – built-in auth/authz like OpenAPI.
  4. Short- or long-running tasks – real-time progress, state tracking, and human-in-the-loop.
  5. Modality-agnostic – text, audio, video, etc.

How A2A Works

  • Capability discovery – Agents publish Agent Cards (JSON) describing their skills, modalities, and constraints.
  • Task lifecycle – A client agent delegates a task; the remote agent updates its status until it produces an artefact.
  • Collaboration – Agents exchange messages, artefacts, and context.
  • UX negotiation – Messages use typed parts (text, image, chart, form, …) tailored to the client's UI.

A2A lifecycle

Key Concepts of A2A Protocol

1. Multi-Agent Collaboration

  • Agents share tasks, results, and work across ecosystems.
  • E.g., a recruiting agent talking to a company's hiring agent, or a delivery agent coordinating with restaurant agents.

2. Open & Extensible

  • Open protocol with 50+ contributors (Atlassian, Box, LangChain, PayPal, etc.).
  • Uses standards like JSON-RPC and Service/Event descriptions.

3. Secure by Default

  • Auth / authz via OpenID Connect.
  • .well-known/agent.json discovery endpoints.

Working of A2A – Examples

Architecture Example

Three agents in a productivity suite:

  • Calendar Agent – hosted server, pulls availability via MCP.
  • Document Agent – fetches documents/notes via MCP.
  • Assistant Agent – user-facing LLM delegating tasks.

Flow

  1. Assistant → Calendar: check availability.
  2. Assistant → Document: fetch & summarise doc.

So A2A handles agent-to-agent chat, while MCP bridges agents to apps.

Agent Discovery (Inspired by OpenID Connect)

Agents advertise at:

yourdomain.com/.well-known/agent.json

It lists the agent's name, description, capabilities, sample queries, supported modalities, etc., so other agents can discover it and interact with it dynamically.
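
For illustration, here is roughly what such a card could contain for the dining assistant that appears later in this post, shown as a Python dict that would be served as JSON. The field names follow the public A2A samples, so treat the exact shape as an assumption rather than the canonical schema.

# Illustrative contents of https://yourdomain.com/.well-known/agent.json
agent_card = {
    "name": "Dining Assistant",
    "description": "Helps users find and book tables at restaurants.",
    "url": "https://yourdomain.com/a2a",  # endpoint that accepts A2A requests
    "capabilities": {"streaming": True},  # e.g. supports SSE task updates
    "defaultInputModes": ["text"],
    "defaultOutputModes": ["text"],
    "skills": [
        {
            "id": "table_booking",
            "name": "Table Booking",
            "description": "Search restaurants and book tables per user preferences.",
            "examples": ["Book a table for 4 at an Italian place this Friday night."],
        }
    ],
}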

Agent2Agent vs MCP

  • Communication – MCP: agent ↔ external APIs; A2A: agent ↔ agent
  • Goal – MCP: API integration; A2A: collaboration & interoperability
  • Layer – MCP: backend; A2A: mid-layer
  • Tech – MCP: JSON-RPC over STDIO / HTTP + SSE; A2A: JSON-RPC over HTTP(S) + SSE
  • Inspired by – MCP: LSP; A2A: OpenID Connect

MCP provides the tools each agent uses, while A2A handles the collaboration between agents. They complement each other: MCP covers the execution of individual tasks, and A2A covers the coordination of complex, multi-step work across agents, so the overall experience stays cohesive and efficient.

Both Anthropic's MCP and Google's A2A facilitate interaction between AI systems and external components, but they cater to different scenarios and architectures.

  • Objective – MCP: link one model to external tools; A2A: coordinate autonomous agents
  • Best fit – MCP: secure enterprise data access; A2A: distributed B2B coordination
  • Protocol – MCP: STDIO / HTTP + SSE; A2A: HTTP(S) + webhooks/SSE
  • Discovery – MCP: manual server settings; A2A: dynamic Agent Cards
  • Pattern – MCP: top-down tool calls; A2A: peer collaboration
  • Security – MCP: cross-boundary focus; A2A: the same, at multi-agent scope
  • Workflows – MCP: simple request-response; A2A: long-running and stateful

1. Communication

MCP: Structured Schemas
• In MCP (Model Context Protocol), the interaction is explicit and schema-driven.
• The assistant knows exactly which tool to call, what arguments to pass, and in what format.
• Flow: AI Assistant → Tool with structured input → Tool returns raw result.
MCP Flow:

• AI sends: get_weather_forecast(Tokyo, 2025-04-22)
• Tool returns: “Sunny, 22°C”
• AI just displays the result.
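
Under the hood, that structured call is a JSON-RPC message from the client to the server. Here is a rough sketch of what gets sent; tools/call is MCP's tool-invocation method, while get_weather_forecast and its arguments are made up for this example.

# Roughly what the MCP client sends for the weather call above, shown as a Python dict
mcp_tool_call = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",  # MCP's tool-invocation method
    "params": {
        "name": "get_weather_forecast",
        "arguments": {"city": "Tokyo", "date": "2025-04-22"},
    },
}
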
A2A: Natural Language
• A2A (Agent-to-Agent) is much more conversational, using natural-language tasks.
• Tasks are expressed like real user queries, and agents internally decide how to interpret them.
• Flow: User Agent → Task in plain English → Target Agent processes → Responds naturally.
A2A Flow:

• User says: “Can you tell me the weather in Tokyo on April 22nd and the current $NVDA price?”
• The agent routes each part of the request to the appropriate Finance/Weather agent
• Response might be: “Sure! The forecast for Tokyo on April 22nd is sunny with a high of 22°C, and $NVDA is currently trading at $101.42, down 0.064%.”
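
The equivalent A2A exchange wraps a natural-language task in a JSON-RPC envelope. Below is a hedged sketch of the request a client agent might POST; the tasks/send method and field names follow the public A2A samples and may differ across spec versions.

import uuid

# Illustrative A2A task request, shown as the Python dict that gets sent as JSON-RPC
a2a_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tasks/send",  # assumed method name from the A2A samples
    "params": {
        "id": str(uuid.uuid4()),  # task id chosen by the client agent
        "message": {
            "role": "user",
            "parts": [
                {
                    "type": "text",
                    "text": "Can you tell me the weather in Tokyo on April 22nd "
                            "and the current $NVDA price?",
                }
            ],
        },
    },
}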

2. Task Management

MCP: Single-Stage Execution
• MCP handles tasks like a classic function call.
• You call the function (or “tool”) and immediately get a response: either a success with the result or a failure (error/exception).
• The whole process is immediate and atomic, one shot, one answer.
A2A: Multi-Stage Lifecycle
• A2A treats tasks like long-running jobs.
• Tasks have multiple possible states:
  • pending → waiting to start
  • running → work in progress (it can even provide partial results!)
  • completed → final result ready
  • failed → something went wrong
• You can check back anytime to see progress, grab partial data, or wait for the full result.
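
As a rough sketch (not taken from any official SDK), a client agent could poll such a task like this, assuming a tasks/get JSON-RPC method and the state names listed above:

import time

import requests  # plain HTTP client; any would work

def wait_for_task(endpoint: str, task_id: str, poll_seconds: float = 2.0) -> dict:
    """Poll a hypothetical A2A endpoint until the task completes or fails."""
    while True:
        response = requests.post(
            endpoint,
            json={"jsonrpc": "2.0", "id": 1, "method": "tasks/get", "params": {"id": task_id}},
        ).json()
        task = response["result"]
        state = task["status"]["state"]  # "pending", "running", "completed" or "failed"
        if state in ("completed", "failed"):
            return task
        time.sleep(poll_seconds)  # still pending/running; partial artifacts could be read here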

3. Capability Specification

MCP: Low-Level, Instruction-Based
MCP capabilities are described with very strict schemas, usually in JSON Schema format. They are about precision and control, like telling a machine exactly what to do and how to do it.

{
  "name": "book_table",
  "description": "Books a table at a restaurant",
  "inputSchema": {
    "type": "object",
    "properties": {
      "restaurant": { "type": "string" },
      "date": { "type": "string", "format": "date" },
      "time": { "type": "string", "pattern": "^\\d{2}:\\d{2}$" },
      "party_size": { "type": "integer", "minimum": 1 }
    },
    "required": ["restaurant", "date", "time", "party_size"]
  }
}

A2A: High-Level, Goal-Oriented

In contrast, A2A uses an Agent Card to describe capabilities in terms of goals, roles, and expertise. It’s like explaining what someone is good at and trusting them to handle it.

agent_card = AgentCard(
    id="restaurant-agent",
    name="Dining Assistant",
    description="Helps users find and book tables at restaurants.",
    agent_skills=[
        AgentSkill(
            id="table_booking",
            name="Table Booking",
            description="Can search restaurants and book tables as per user preferences.",
            examples=[
                "Book a table for 4 at an Italian place this Friday night.",
                "Find a quiet restaurant near downtown and reserve for two people.",
            ],
        )
    ],
)

• MCP lets you add skills (API services, databases, records, etc.) to your agents.
• A2A gives you flexibility, judgment, and delegation power. Think of a team of thoughtful coworkers.
• They’re like pairing an engineer (MCP) with a project manager (A2A). One does exact work; the other handles the chaos.


How to Use MCP with A2A

One way to integrate MCP servers into A2A agents is with Google’s Agent Development Kit (ADK).


Install the ADK

pip install google-adk

Import the Required Modules

# ./adk_agent_samples/mcp_agent/agent.py
import asyncio
from dotenv import load_dotenv
from google.genai import types
from google.adk.agents.llm_agent import LlmAgent
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.adk.artifacts.in_memory_artifact_service import InMemoryArtifactService  # Optional
from google.adk.tools.mcp_tool.mcp_toolset import (
    MCPToolset,
    SseServerParams,
    StdioServerParameters,
)

# Load environment variables from a .env file
load_dotenv(".env")

Configure the MCP Server and Fetch Tools

# --- Step 1: import tools from an MCP server (HTTP SSE) ---
async def get_tools_async():
    """Gets tools from the Gmail MCP server."""
    print("Attempting to connect to MCP Filesystem server…")
    tools, exit_stack = await MCPToolset.from_server(
        connection_params=SseServerParams(
            url="https://mcp.composio.dev/gmail/tinkling-faint-car-f6g1zk"
        )
    )
    print("MCP Toolset created successfully.")
    return tools, exit_stack

Note: In the example above, we use the HTTP SSE endpoint for a Gmail MCP server hosted at mcp.composio.dev. If you connect to a different server, adjust the agent’s name, instruction, and the sample query in main() so they match that server’s tools.

For a STDIO-based server, the connection parameters look like this:

async def get_tools_async():
    """Gets tools from a local MCP filesystem server (STDIO)."""
    print("Attempting to connect to MCP Filesystem server…")
    tools, exit_stack = await MCPToolset.from_server(
        connection_params=StdioServerParameters(
            command="npx",
            args=[
                "-y",
                "@modelcontextprotocol/server-filesystem",
                "/path/to/your/folder",
            ],
        )
    )
    print("MCP Toolset created successfully.")
    return tools, exit_stack

Create the Agent

async def get_agent_async():
    """Creates an ADK agent equipped with tools from the MCP server."""
    tools, exit_stack = await get_tools_async()
    print(f"Fetched {len(tools)} tools from MCP server.")
    root_agent = LlmAgent(
        model="gemini-2.0-flash",  # Adjust the model if needed
        name="maps_assistant",     # Name/instruction assume a Maps-style MCP server;
                                   # rename them to match the server you actually connected
        instruction=(
            "Help the user with mapping and directions using the available tools."
        ),
        tools=tools,
    )
    return root_agent, exit_stack

Define main

async def async_main():
    session_service   = InMemorySessionService()
    artifacts_service = InMemoryArtifactService()  # Optional

    session = session_service.create_session(
        state={}, app_name="mcp_maps_app", user_id="user_maps"
    )

    # TODO: Use specific addresses for reliable results with this server
    query   = "What is the route from 1600 Amphitheatre Pkwy to 1165 Borregas Ave"
    print(f"User Query: '{query}'")

    content = types.Content(role="user", parts=[types.Part(text=query)])

    root_agent, exit_stack = await get_agent_async()

    runner = Runner(
        app_name="mcp_maps_app",
        agent=root_agent,
        artifact_service=artifacts_service,  # Optional
        session_service=session_service,
    )

    print("Running agent…")
    events_async = runner.run_async(
        session_id=session.id,
        user_id=session.user_id,
        new_message=content,
    )

    async for event in events_async:
        print(f"Event received: {event}")

    print("Closing MCP server connection…")
    await exit_stack.aclose()
    print("Cleanup complete.")


if __name__ == "__main__":
    try:
        asyncio.run(async_main())
    except Exception as e:
        print(f"An error occurred: {e}")

Run the Application

Execute the script and watch your ADK agent call the MCP-hosted tools in response to the user query.
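
Assuming the path from the header comment in the script, that is simply:

python adk_agent_samples/mcp_agent/agent.py

Make sure the .env file loaded at the top of the script contains the API key your model provider requires before running.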


Conclusion

MCP makes it easier for agents to communicate with the tools that wrap external application services, and Agent2Agent makes it easier for multiple agents to communicate and collaborate. Both are steps towards standardising agent development, and it will be interesting to see how they transform the agentic ecosystem.