MCP is not a Magical Cure-all Panacea
Last Updated on May 5, 2025 by Editorial Team
Author(s): Maithri Vm
Originally published on Towards AI.
At least not just yet!
Now that the MCP waves are settling down, it is time for a reality check, reflecting on my learnings from building MCP prototypes.
This post is intentionally kept simple, skipping a detailed discourse on the what and how, as there is already a sheer volume of material on the internet. So let me move quickly through some abstractions until I get to its practical relevance and the way forward.
What is MCP?
The Model Context Protocol (MCP) is a protocol that sets standard specifications for providing context to, and interacting with, LLM agents.
What are the primary components of MCP?
- MCP Server: A mechanism for (local or remote) servers to publish resources, prompts, and tools to an LLM agent.
- MCP Client: An object that connects to MCP servers and binds them to the LLM interface to achieve the expected outcomes.
- MCP Host: An end application (native/web) where the client resides, enabling users to interact with the MCP client(s).
- Transport layer: To establish communication between client and servers, MCP supports stdio for local processes and HTTP with Server-Sent Events (SSE) for client-to-server and server-to-client communication, respectively.
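To make the transport layer concrete: MCP messages are JSON-RPC 2.0, and over stdio each message is a line the client writes to the server's stdin (and reads back from stdout). Below is a minimal sketch of that framing; the `tools/list` method name follows the MCP spec, but the server reply shown is purely illustrative.

```python
import json

def make_request(req_id, method, params=None):
    """Serialize a JSON-RPC 2.0 request as one line (stdio framing)."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

def parse_response(line):
    """Deserialize a JSON-RPC 2.0 response, surfacing protocol errors."""
    msg = json.loads(line)
    if "error" in msg:
        raise RuntimeError(msg["error"].get("message", "unknown error"))
    return msg["result"]

# A client asking a server which tools it publishes:
request = make_request(1, "tools/list")
# ...the request goes to the server's stdin; a hypothetical reply:
reply = '{"jsonrpc": "2.0", "id": 1, "result": {"tools": [{"name": "get_weather"}]}}'
tools = parse_response(reply)["tools"]
```

In a real client you would use an MCP SDK rather than hand-rolling this, but the wire format underneath is essentially the above.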
So in a nutshell, MCP is basically a standard for engineering AI applications around LLM agents.
What is an LLM Agent?
An LLM agent is an abstraction for “Sophisticated Context Engineering” 😛 This deserves a separate post in itself, but a lot has been written and discussed on this already, so let me pass on this one.
So, MCP is basically a standard for ‘Context engineering’!
How does MCP benefit enterprise AI applications?
A typical enterprise application involves connecting to multiple different services (internal and external) to offer comprehensive (often complex) solutions for various business outcomes.
- MCP brings uniformity across enterprise services so they operate seamlessly with LLM applications (the USB-C analogy): it acts as a standard protocol for an AI app to connect to these different servers, both for input context (resources) and output actions (tools), alongside prompts that can offer specific guidance to the invoking client in the form of prompt templates.
- MCP offers a decoupled architecture comprising the client (LLM app) and distributed servers (remote services that offer resources and tools). This provides much-needed modularity and scalability for the overall solution, with ‘plug-n-play’ services that an LLM app can connect to. In other words, it enables ‘dynamic discovery’ of services added to or removed from a server without significant implications for the end client applications.
- With agents becoming integral commodities of future web applications, setting a standard like MCP for LLM agents is certainly a necessity for interoperable agents across the ecosystem.
Does it mean that MCP is indeed a panacea for AI applications?
While the above virtues are indeed an absolute necessity, there are a few other aspects of LLM applications that remain core concerns for developers, extending beyond a spec like MCP.
1. Resource / Tool deluge: How do you teach an LLM agent to pick the correct resources and tools, or to generate a plan to solve a complex problem? This has been a core challenge for any AI application, and there is a multitude of solutions like RAG, function calling, fine-tuning, inference-time scaling, etc. (and the list keeps growing).
Just migrating an LLM agent to the MCP spec doesn’t solve any of these fundamental challenges. A point to consider here is that Claude 3.7 Sonnet leads in function-calling capability and is claimed to yield decent performance with up to 100 tools in a given context (per Mahesh Murag’s workshop). There is hope in betting on these advances on behalf of the MCP server/tools assembly, as models extend context limits, function-calling capabilities, and other such innovations at a rapid pace.
Yet this is an area the developer must evaluate against the applicable domain context and complexity, to check how current solutions can be migrated to an MCP world.
If the solution involves custom fine-tuned models for resource-based decision making or tool-based function calling, MCP’s dynamic resource and tool binding architecture would add extra overhead to keep such specialized models in sync. In a way, this would be a disservice to the dynamic plug-and-play design that MCP aims to enable.
2. Role of MCP in a multi-agent system: Well, this is where creative problem solving kicks in, because the short answer is that Agents and an MCP Registry are still roadmap items for MCP. However, I have explored a few design patterns on this front, just to check how far it can be extended.
a. Dumping all tools across different MCP servers into a custom MCP registry (a single DB), which enables RAG-based tool discovery with a search_tool-like API (this is one of the recommended options in the workshop). This is not a bad idea; however, it can become a problem of plenty, as discussed in point #1.
b. Treating each MCP server as an independent agent, while the application routes the request to the target MCP server hierarchically: that’s a decent approach to building self-encapsulated, modular agents, each handling its own tools and resources. However, that’s as far as it can go at the moment.
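Pattern (a) can be sketched in a few lines. Here a pooled registry of tool descriptions is queried by a search_tool-style function; real deployments would use embedding similarity for the retrieval step, and keyword overlap stands in for it here. All server and tool names are illustrative.

```python
# A toy "custom MCP registry": tool metadata from many servers pooled
# into one store, with a search API returning the top matches for a query.
REGISTRY = [
    {"server": "crm",     "name": "lookup_customer", "desc": "find a customer record by name or id"},
    {"server": "billing", "name": "create_invoice",  "desc": "create an invoice for a customer order"},
    {"server": "ops",     "name": "restart_service", "desc": "restart a failing backend service"},
]

def search_tool(query, top_k=2):
    """Rank registered tools by word overlap with the query; drop non-matches."""
    words = set(query.lower().split())
    scored = [(len(words & set(t["desc"].split())), t) for t in REGISTRY]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [t for score, t in scored[:top_k] if score > 0]

hits = search_tool("invoice for customer order")
```

Only the retrieved subset is then bound into the LLM context, which is exactly why this pattern inherits the “problem of plenty” from point #1: retrieval quality now gates tool-selection quality.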
A true agent, as we know, should accept a natural-language prompt with unstructured context as input and serve the best response using its own tools and resources in a conversational style of interaction. But wrapping an agent as an MCP server would involve exposing ‘/completion’ as a tool with a couple of params acting as context and agent config. With all the sophistication that agentic frameworks offer these days, it is hard to imagine all of that being managed viably as mere MCP tools.
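Pattern (b) looks roughly like the sketch below: each MCP server is wrapped as a self-contained “agent” with a description, and a router delegates the request to the best match. In practice an LLM would do the routing; a keyword score stands in for it, and the server names and handlers are hypothetical.

```python
class ServerAgent:
    """Wraps one MCP server as a routable unit with its own description."""
    def __init__(self, name, description, handler):
        self.name = name
        self.description = description
        self.handler = handler  # stands in for the server's actual tool calls

    def score(self, request):
        # Overlap between request words and the server's description.
        return len(set(request.lower().split()) & set(self.description.split()))

def route(request, agents):
    """Pick the agent whose description best matches, then delegate."""
    best = max(agents, key=lambda a: a.score(request))
    return best.name, best.handler(request)

agents = [
    ServerAgent("hr",      "employee leave payroll onboarding", lambda r: "handled by HR tools"),
    ServerAgent("finance", "invoice expense budget payment",    lambda r: "handled by finance tools"),
]
picked, answer = route("approve this expense payment", agents)
```

Note what the sketch cannot express: conversational back-and-forth with each sub-agent, shared memory, or multi-turn negotiation, which is precisely the gap described above.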
3. Workflow-style execution vs. simple delegation: As we know, for most enterprise scenarios, AI workflows are driving industry adoption over fully autonomous agents. Achieving such workflows requires agent collaboration with sequential flow management, human-in-the-loop interruptions and approvals, and state management to support them all. Building such an agentic workflow backed by MCP servers in lieu of a true agent is yet another challenge that has to be overcome with workarounds and scaffolding, as discussed in point #2. Yet again, this is another roadmap item for MCP (interactive workflows, agent graphs, etc.).
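The scaffolding in question is essentially a resumable state machine around the tool calls. A minimal sketch, with illustrative step names, where MCP would supply the tool invocations but the pause/persist/resume logic and the approval gate are entirely the developer's responsibility:

```python
def run_workflow(state):
    """Advance the workflow one step at a time, pausing for human approval."""
    if state["step"] == "draft":
        state["document"] = "generated draft"   # e.g. an MCP tool call
        state["step"] = "awaiting_approval"     # stop and wait for a human
    elif state["step"] == "awaiting_approval" and state.get("approved"):
        state["document"] += " (published)"     # e.g. another tool call
        state["step"] = "done"
    return state

state = run_workflow({"step": "draft"})
# ...the host app persists `state`, a human reviews the draft, then resumes:
state["approved"] = True
state = run_workflow(state)
```

Everything outside the two commented tool calls, the state dict, the gate, the resume path, is the scaffolding MCP does not yet standardize.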
4. Server discovery: Be it a hub-and-spoke model or an agentic mesh, with a growing number of agents in the ecosystem, an AI solution will increasingly involve interacting with various agents to accomplish the end goal. As the number of agents grows (just like websites on the internet), and with agent marketplaces on the horizon, agent discovery is going to be the next big thing, deserving a Google-like solution. The MCP architecture has taken the lead in laying the foundation for standards, and it also enables dynamic tool and resource updates on an MCP server on the fly.
However, a robust solution for dynamic discovery of the agents themselves is a roadmap item (the MCP Registry). While such a central registry caters to the hub-and-spoke model, other standards emerging in the industry also enable peer agent discovery for a mesh.
5. Finally, the elephant in the room: MCP server security, safety, and governance. I am consolidating all these system engineering concerns into a single bullet, as enough has already been said and debated about security loopholes in MCP and the risk mitigation plans a solution developer has to consider. (Refer to the appendix.)
In a nutshell, MCP has a long way to go on multiple fronts as far as security is concerned; if one is already considering production-grade deliveries using MCP, it is highly recommended to tread with caution. Own your security!
Even with MCP, you must secure your own server by building your own auth layer, trust management (both user-to-agent and agent-to-agent), session management, and measures against AI model jailbreaks to mitigate known risks. Engineering these aspects around the MCP client, host, and server is non-negotiable. Though some of these points are definitely on the MCP roadmap, the onus is on the dev team to fill the gaps until then.
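“Own your security” can start as small as a gate in front of every tool invocation. The sketch below checks a bearer token against a per-token allowlist before any tool runs; the token values and tool names are made up, and a real deployment would use OAuth plus a proper session store rather than an in-memory dict.

```python
# Per-token allowlist: which tools each caller may invoke.
ALLOWED = {"token-abc": {"read_docs"}}

def invoke_tool(token, tool, call):
    """Run `call` only if `token` is authorized for `tool`; refuse otherwise."""
    permitted = ALLOWED.get(token, set())
    if tool not in permitted:
        raise PermissionError(f"token not authorized for {tool}")
    return call()

# An authorized call goes through:
result = invoke_tool("token-abc", "read_docs", lambda: "doc contents")
```

The same choke point is also where audit logging, rate limiting, and jailbreak heuristics would attach, which is why owning this layer, rather than waiting for the spec, matters.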
So, what is the recommendation here?
MCP is not only poised to set the industry standard for the agentic world; it is also pioneering the thought leadership there. As a general trend, its successors and the industry giants will certainly enhance its current capabilities and make it more adoptable, eventually accelerating the agentic revolution.
Any standard or spec, in my opinion, succeeds only through wide adoption by the developer community, which in turn matures it and converges toward the winning standard(s). Given the known advantages and (interim) tradeoffs, all I can suggest to developers is to evaluate the options and make choices based on careful consideration. Subscribing to it with informed decisions about the additional measures you put in place is the more advisable path.
More standards have followed MCP, and it is fair to expect more in the near future. While MCP is off to a great start in defining a standard for an agent’s inner wiring, elevating such standards to true agent-to-agent interaction is the need of the hour!
Would love to hear your thoughts & findings from your learnings & experimentation in this space.
Appendix:
https://www.latent.space/p/why-mcp-won
Everything Wrong with MCP: Explaining the Model Context Protocol and everything that might go wrong (blog.sshh.io)
https://equixly.com/blog/2025/03/29/mcp-server-new-security-nightmare/