A2A — One Stop beyond MCP
Author(s): Kelvin Lu
Originally published on Towards AI.
If you’ve been following the buzz around MCP and spent some time exploring its details, you might see it as a crucial building block for future agentic systems. However, you might also feel it doesn’t quite go far enough. Perhaps you perceive it primarily as a basic wrapper for local services, potentially finding it inadequate for enterprise-scale needs. For those who feel this way, there’s exciting news: Google announced their agent communication protocol at Google Next just a couple of weeks back.
MCP Introduction
Enter the Model Context Protocol (MCP), introduced by Anthropic as an open standard. Its goal was to tackle the growing challenge of getting LLMs to play nicely with external data sources and tools. Think about how it used to be: integrating AI often required building custom connectors for each application and writing lots of boiler-plate code for common tasks like formatting or handling errors. This traditional method often led to headaches with scaling, made data handling inconsistent, and opened the door to more security risks.
More importantly, without a standard, sharing these developments was a real challenge. MCP aimed to simplify all this by offering a standardised interface, allowing AI applications to interact uniformly with local tools, resources, and, of course, prompts.
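To make the "standardised interface" concrete: MCP messages are JSON-RPC 2.0, and a client invokes a server-side tool with a `tools/call` request. Below is a minimal sketch of what such a request looks like on the wire; the method and field names follow the published MCP spec, but the tool name and arguments are invented for illustration.

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP-style JSON-RPC 2.0 tools/call request.

    The envelope shape follows the MCP spec; the tool name and
    arguments here are hypothetical.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

request = make_tool_call(1, "get_weather", {"city": "Melbourne"})
print(request)
```

Because every tool call shares this envelope, a client can talk to any MCP server without bespoke connector code — which is exactly the boilerplate the protocol set out to eliminate.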
Why It’s Inadequate
Despite its benefits, MCP faces several limitations:
- Limited Integration Capabilities: Although the protocol's roadmap includes robust authentication and encryption, the current implementations only support local client-server communication.
- Complex Task Cooperation: MCP can expose local services for clients to connect to, but it lacks the higher-level capabilities needed for inter-agent communication.
All these limitations mean MCP is best suited for building low-level agents — think of them as the behind-the-scenes workers who handle specific, task-oriented jobs using local resources.
MCP in the Wild: The Insurance Agent Example
Picture a generative AI system for insurance approvals. The “boss” agent (the one users interact with) manages the big picture: workflow planning, status updates, and delegating tasks to specialised sub-agents. These sub-agents are the real MVPs here:
- Credit Check Agent: The financial detective.
- Fraud Detection Agent: The suspicious-minded skeptic.
- Pricing Agent: The number-crunching wizard.
- Customer Consolidation Agent: The expert who knows who is who.
MCP shines here because it’s basically the API of the agent world — it neatly wraps up messy implementation details and exposes only what’s needed, making it perfect for these bite-sized tasks.
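The delegation pattern above can be sketched in a few lines. This is a toy illustration only: the sub-agent names and return values are invented, and real sub-agents would call LLMs or MCP-wrapped tools rather than return canned dictionaries.

```python
# Toy sketch of the "boss" agent delegating to specialised sub-agents.
# Each sub-agent is a callable; in practice it would wrap an LLM or an
# MCP tool. All values below are invented for illustration.

def credit_check(application):
    return {"score": 720}

def fraud_detection(application):
    return {"flagged": False}

def pricing(application):
    return {"premium": 1250.0}

SUB_AGENTS = {
    "credit": credit_check,
    "fraud": fraud_detection,
    "pricing": pricing,
}

def boss_agent(application, plan):
    """Run each step of the workflow plan through the matching sub-agent."""
    results = {}
    for step in plan:
        results[step] = SUB_AGENTS[step](application)
    return results

report = boss_agent({"applicant": "Alice"}, ["credit", "fraud", "pricing"])
```

Note that the dispatch table is hardcoded and local — which is precisely the orchestration gap discussed next.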
But Here’s the Catch…
While MCP is great for individual tasks, it struggles with orchestration. Trying to build an MCP agent that bosses around other agents — even on the same machine — is like herding cats. And distributed deployment? Forget about it (for now). MCP doesn't make this kind of development harder than before; it is simply irrelevant in these scenarios.
So while MCP solves some integration headaches, it’s not the magic bullet for complex, multi-agent symphonies. It’s the soloist, not the conductor.
A2A: the Universal Interpreter
The Agent-to-Agent (A2A) protocol, introduced by Google, addresses a critical gap in AI ecosystems: fragmented workflows caused by isolated agents. Unlike MCP, which focuses on data-source integration, A2A standardises inter-agent communication, enabling collaboration across agents. Key advantages include:
- “Agent Cards” = AI Business Cards
Every agent introduces itself with a neat JSON profile (skills, endpoints, auth requirements). No more awkward “So… what exactly can you do?” conversations.
- Task Management That Doesn’t Ghost You
Clear status updates (“submitted” → “working” → “done”) keep workflows in sync. Even better? It handles long-running tasks (think hours or days of research) with live progress reports — no more wondering if your AI forgot about you.
- Works with Anything (Yes, Even Video)
Need text? Forms? Audio streaming? A2A doesn’t care. It’s modality-agnostic, so agents can negotiate how they chat on the fly.
- Security Without the Headache
Built on HTTP, SSE, and JSON-RPC, it’s as secure as your favorite enterprise API — but way more flexible.
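Here is a sketch of what an Agent Card might look like. The top-level field names (name, url, version, capabilities, skills) follow the published A2A specification, but the agent, URL, and skill entries are invented for this example.

```python
import json

# Illustrative A2A Agent Card. Field names follow the published A2A
# spec; the agent, URL, and skill details are hypothetical.
agent_card = {
    "name": "Credit Check Agent",
    "description": "Assesses applicant creditworthiness for insurance approvals.",
    "url": "https://agents.example.com/credit-check",
    "version": "1.0.0",
    "capabilities": {"streaming": True, "pushNotifications": False},
    "skills": [
        {
            "id": "credit-score",
            "name": "Credit scoring",
            "description": "Returns a credit score for a named applicant.",
            "tags": ["finance", "risk"],
        }
    ],
}

print(json.dumps(agent_card, indent=2))
```

A client agent fetches this card, reads the skills list, and decides whether this remote agent suits the task — the "AI business card" exchange in practice.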
In Google’s announcement of the release of A2A, they said A2A ‘complements MCP’.
Implementation
A2A leverages existing web standards for compatibility. Its capabilities are:
- Capability discovery: Agents can advertise their capabilities using an “Agent Card” in JSON format, allowing the client agent to identify the best agent that can perform a task and leverage A2A to communicate with the remote agent.
- Task management: The communication between a client and remote agent is oriented towards task completion, in which agents work to fulfil end-user requests. This “task” object is defined by the protocol and has a lifecycle. It can be completed immediately or, for long-running tasks, each of the agents can communicate to stay in sync with each other on the latest status of completing a task. The output of a task is known as an “artifact.”
- Collaboration: Agents can send each other messages to communicate context, replies, artifacts, or user instructions.
- User experience negotiation: Each message includes “parts”, each of which is a fully formed piece of content, like a generated image. Each part has a specified content type, allowing client and remote agents to negotiate the correct format needed and explicitly include negotiations of the user’s UI capabilities, e.g., iframes, video, web forms, and more.
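The task lifecycle described above can be modelled as a small state machine. The state names below match the task states A2A defines (submitted, working, input-required, completed, failed, canceled); the transition table is a simplified assumption for illustration, not taken verbatim from the spec.

```python
# Minimal sketch of the A2A task lifecycle. State names follow the
# protocol; the allowed-transition table is a simplifying assumption.

TRANSITIONS = {
    "submitted": {"working", "canceled"},
    "working": {"input-required", "completed", "failed", "canceled"},
    "input-required": {"working", "canceled"},
}

class Task:
    def __init__(self):
        self.state = "submitted"
        self.artifacts = []  # a task's outputs are "artifacts"

    def advance(self, new_state):
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

task = Task()
task.advance("working")
task.artifacts.append({"parts": [{"type": "text", "text": "Approved"}]})
task.advance("completed")
```

For long-running work, each status change is what the client receives as a progress update, and the artifact list carries the eventual result.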
Unlike many previous Google products that arrived fresh from the labs, A2A is an obvious strategic move. On the first day of its release, Google announced a long list of impactful enterprise users. This is another key difference between A2A and MCP — A2A was positioned as an enterprise solution.
Scanning through the long list of MCP servers vs. the Google partners contributing to the A2A protocol, it is easy to get the feeling that the MCP community reflects open-source culture: a smoother learning curve, more options, a larger community, varied quality, and a little bit of chaos. In contrast, A2A presents a commercial style: more authority, fewer options, higher quality, a smaller community, and better organisation.
You may never be shocked by the length of the A2A service catalogue the way you were when you first saw the MCP server list, but A2A is without question an important option for your enterprise solutions.
What Remains Unresolved
The combination of A2A and MCP presents a robust foundation for agent communication, yet their limitations reveal deeper challenges in agent-oriented development itself — not merely in protocol design, but in how we conceptualise and architect multi-agent systems.
The Naming Problem: A Sign of Immaturity
Consider the insurance approval agentic application: different agents serve distinct roles and require different design patterns, yet we lack standardised terminology to classify them. Should we adopt terms like managerial agent versus clerk agent? The absence of a shared vocabulary underscores how nascent this field remains.
Critical Challenges in MCP and A2A
While these protocols advance interoperability, several unresolved issues demand scrutiny:
- Security Risks in Agent Selection
Current systems rely on endpoint descriptions to determine suitability — a naive approach vulnerable to exploitation. If a malicious service disguises itself as legitimate, can existing frameworks reliably reject it? Agent security remains an underdeveloped frontier, ripe for novel attack vectors.
- Ambiguity in Service Descriptions
Unlike traditional APIs, where endpoints are explicitly defined by URLs, agentic systems route requests based on semantic descriptions. Overlapping or vague service definitions risk misdirection, as LLMs lack the discernment to resolve conflicts between similar offerings.
- The Versioning Dilemma
MLOps practices like blue-green deployment — where multiple model versions coexist — clash with A2A and MCP’s lack of versioning protocols. Should clients cache service descriptions or fetch them dynamically per invocation? Neither approach is standardised, leaving reliability uncertain.
- Inadequate Memory Control
Shared memory is a cornerstone of agent collaboration, yet neither protocol specifies how sensitive data should be partitioned. A legal agent may require contract details, while an accounting agent needs payment history — but cross-access must be restricted. Current implementations lack granular memory governance.
- Unclear Error Handling
Traditional software defines errors rigidly, but agentic systems operate in a gray zone. What if an LLM receives absurd inputs or insufficient data? And what if the client wants to challenge the result of an agent? The A2A protocol’s input-required tag enables multi-turn conversation, but deadlocks may arise if multiple agents stall awaiting mutual input.
- The Planning Deficit
Some implementations hardcode agent workflows, reducing them to deterministic, chatbot-like behavior. True agentic applications should leverage LLMs for dynamic planning — yet most today fail unpredictably when deviating from scripted paths. While this shortfall exceeds A2A and MCP’s scope, future protocol iterations must accommodate higher-level reasoning.
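The service-description ambiguity is easy to demonstrate. The toy router below scores agents by word overlap between a request and each agent's description — a deliberately naive stand-in for semantic routing, with invented agents and descriptions. Two near-identical descriptions score the same, and neither protocol tells the client how to break the tie or which version it will get.

```python
# Toy illustration of the service-description ambiguity problem:
# route a request to the agent whose description overlaps it most.
# Agents and descriptions are invented for illustration.

def overlap_score(request: str, description: str) -> int:
    """Count shared words between the request and a description."""
    return len(set(request.lower().split()) & set(description.lower().split()))

AGENTS = {
    "pricing-v1": "calculates the insurance premium for an application",
    "pricing-v2": "calculates the insurance premium and discounts for an application",
}

request = "calculate the premium for this insurance application"
scores = {name: overlap_score(request, desc) for name, desc in AGENTS.items()}
# Both agents score identically: the client has no principled way to
# choose between them, and no versioning rule to fall back on.
```

A malicious agent that copies a legitimate description would score just as well — which is the agent-selection security risk in miniature.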
Moving Forward
These gaps highlight that agentic development is not merely a technical challenge, but a paradigm shift requiring new design philosophies. As the field matures, protocols must evolve beyond connectivity — toward security, adaptability, and intentionality.