
Don’t Waste Your Time Building With MCP Until You’ve Read This
Last Updated on August 26, 2025 by Editorial Team
Originally published on Towards AI.

MCP looks like Elven magic, until it eats your time and your nights. ☕
So in this post I'll cut through the hype and ship the roadmap I wish I'd had: two months, three rewrites, and many long nights of experimenting and suffering. 🤯
Why all the buzz around MCP right now?
LLM tooling sprinted from demo to daily driver in just two years.
Teams once debating prompt-engineering now juggle agents, KBs, and tool calling. MCP slid into that chaos promising one spec to route them all — but the spec moves fast and the ecosystem is patchy. Half the GitHub repos stamped “MCP” still sit at v0.1, if versions exist at all (see this link).
Bottom line: the protocol is powerful, the footing is slippery.
Who actually uses MCP in 2025?
- Indie IDE tinkerer: Solo dev wiring up Claude Code to vibe-code and deploy from a single key combo.
- Automation orchestrator: No-code builder dropping MCP nodes inside n8n or Zapier-like flows to connect Slack ➜ Postgres ➜ GPT-enabled mailers.
- Enterprise engineer: Fortune 500 SRE pairing LangGraph agents with internal APIs so product teams can build advanced internal agents.
- AI pair-programming evangelist: Tech lead rolling GitHub Copilot across the org, bolting extra MCP servers for domain-specific dev.
All MCP users today are technical. Everyday ChatGPT users are not the target. Unless you aim for devs or power users, adoption will be slow.
Tips: If your crowd writes code, you're golden. Everyone else? Good luck.

Deploying Remote MCP is not plug-and-play
Remote MCP server ≠ classic server.
You can't just spin up a remote MCP server the way you would a classic API and call it a day. You need infra that handles long-lived streaming: think Cloudflare Workers, AWS Fargate, GCP Cloud Run, or a HF Space.
And if you want to expose both MCP and a classic API side by side, you'll need to think it through even more carefully.
Bottom line: One does not simply deploy an MCP server like an API. Careful thinking is needed.
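To make the streaming constraint concrete, here is a toy sketch (not the actual MCP transport): a minimal WSGI-style app that holds one response open and yields events over time. This long-lived, incremental shape is exactly what short-timeout serverless platforms and buffering proxies break, which is why the hosts above are the usual picks.

```python
# Illustrative only: MCP's streamable transports keep one HTTP response
# open and push events over time, instead of returning a single buffered body.
def streaming_app(environ, start_response):
    """Minimal WSGI app that streams server-sent events chunk by chunk."""
    start_response("200 OK", [("Content-Type", "text/event-stream")])

    def events():
        for i in range(3):
            # In a real MCP server these would be protocol messages,
            # emitted whenever the session produces output.
            yield f"data: event {i}\n\n".encode()

    return events()
```

If your hosting layer buffers this generator into one body, or kills the connection after a few seconds, remote MCP sessions will misbehave in ways a classic REST API never would.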
Local vs. remote: philosophy meets value
Local setups keep secrets offline and latency near zero — no wonder the community ❤️ them.
Remote MCPs shine when they unlock something you simply can’t do on a laptop: always-on agents, multi-tenant collab, heavy GPU orchestration, specific accesses.
If you dare go remote for a public tool, surface the bonus loud and clear (batch jobs, team workspaces). Anything less feels like vendor lock-in designed to make you pay.
Rule of thumb: Go remote only when it adds obvious value. Otherwise the community sticks to local.
One Does Not Simply deploy CRUD behind MCP
MCP speaks in tools, not in REST verbs:
- Think about your MCP core value.
- Craft atomic tools (“transcribe”, “rank”, “search”); fewer is better.
- Bundle them with as much description as possible (tool description, argument descriptions, examples, …)
- Let the model plan the flow.
Treat every tool like a CLI command — single-purpose, composable, well-documented, and with real examples. Remember: LLMs will launch them, so clarity matters.
Tips: To test your MCP tools, just connect the server to an AI coding tool or chat system and ask it to create and run a testing scenario.
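The “CLI command” mindset above can be sketched with plain Python. The `ToolSpec` class and `tool` registrar here are hypothetical stand-ins for what the MCP SDKs derive from your function signature, type hints, and docstring; the point is the shape of the metadata the LLM sees, not the SDK API.

```python
from dataclasses import dataclass

@dataclass
class ToolSpec:
    """Hypothetical metadata bundle; MCP SDKs derive similar information
    from your function signature, type hints, and docstring."""
    name: str
    description: str
    args: dict    # argument name -> human-readable description
    example: str

TOOLS: dict = {}

def tool(spec: ToolSpec):
    """Toy registrar standing in for an SDK decorator such as FastMCP's."""
    def register(fn):
        TOOLS[spec.name] = (spec, fn)
        return fn
    return register

@tool(ToolSpec(
    name="rank",
    description="Rank candidate strings by length, shortest first.",
    args={"candidates": "The strings to rank."},
    example='rank(candidates=["bb", "a"]) -> ["a", "bb"]',
))
def rank(candidates):
    # Single-purpose and composable, like a small CLI command.
    return sorted(candidates, key=len)
```

Every field in `ToolSpec` is something the model will read before deciding whether and how to call you, so skimping on descriptions directly hurts tool selection.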

Leverage the ecosystem (skip wheel reinvention)
Rolling your own parser feels heroic… until the spec bumps and breaks auth (which will happen often until MCP is stable). Instead lean on:
- Official MCP TypeScript SDK — everything needed to deploy MCP servers in TypeScript (what I used).
- Official MCP Python SDK — same as before but in Python.
- FastMCP — not official but widely used and fast; Python only.
If you really want to implement everything yourself (or you are a little crazy), here’s the official MCP specification.
Rule of thumb: Pick one of these (or another you like) and save your time for building the core value.
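For a taste of what the SDKs absorb for you: MCP messages are JSON-RPC 2.0 under the hood. Even this toy parser (an illustration, not production code) only covers the happy path; the SDKs also handle notifications, errors, capability negotiation, and spec changes.

```python
import json

def parse_jsonrpc(raw: str) -> dict:
    """Toy parse of a single JSON-RPC 2.0 message, the wire format MCP
    builds on. Deliberately minimal: no batching, notifications, or
    error objects, which real SDKs must track across spec revisions."""
    msg = json.loads(raw)
    if msg.get("jsonrpc") != "2.0":
        raise ValueError("unsupported JSON-RPC version")
    return {
        "method": msg["method"],
        "params": msg.get("params", {}),
        "id": msg.get("id"),
    }
```

Multiply this by every message type in the spec, then add auth and transport negotiation, and the case for the SDKs writes itself.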
Do you need a UI on top of MCP?
Ask three questions:
- Does the user need to see visualizations? (logs, graphs, diagrams, videos, …)
- Is the workflow long-running? (> 10 s feels like forever)
- Is chat alone not enough? (zero clicks, no advanced visualisation)
Two or more yes answers → build that UI. You have to accept that chat cannot always be the ONLY interface.
Tips: Validate with some early users that you really need the UI. Chat-only is always faster to develop and maintain.
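The three questions above reduce to a tiny decision rule. This helper is hypothetical, just the article’s heuristic written down:

```python
def needs_ui(visuals: bool, long_running: bool, chat_insufficient: bool) -> bool:
    """Encode the rule of thumb: two or more 'yes' answers -> build a UI.

    visuals:           user needs logs, graphs, diagrams, videos, ...
    long_running:      workflow regularly exceeds ~10 s
    chat_insufficient: chat-only cannot express the interaction
    """
    return sum([visuals, long_running, chat_insufficient]) >= 2
```

If it returns False, ship chat-only first; it is always cheaper to develop and maintain.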

MCP is plumbing — your value sits elsewhere
Repeat after me: protocol ≠ product.
Your real magic lives in:
- Creative tools (multimodal summarizers, context-aware code docs, …)
- Domain data (CPQ catalogs, private corpora)
- UX wow (one-click rollout, real-time diff)
Spend cycles here, not on tweaking tool arguments.
Rule of thumb: Think of MCP like HTTPS: you really need it, but it is not the core value.
Powerful… yet bounded by its cage
- Integration into ChatGPT — only possible in Deep Research and Agent mode, only with a Pro+ subscription, and only for retrieving data, with no data upload (see this link).
- Claude Desktop — a brutally tight ≤ 1 MB download limit, so bigger artifacts need other strategies (see this link).
Where you integrate your MCP server will be the biggest limiting factor on the UX you can offer the users you target. Currently, MCP is really for technical users.
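One hedged workaround for a hard size cap like Claude Desktop’s is to split large artifacts server-side and let the client fetch the pieces over several tool calls. This is a sketch of the idea, not an official pattern; the exact limit and reassembly flow depend on the client.

```python
LIMIT = 1_000_000  # assumed ~1 MB cap per downloaded artifact

def chunk_artifact(data: bytes, limit: int = LIMIT) -> list:
    """Split a large artifact into parts no bigger than `limit`,
    so a size-capped client can retrieve them one tool call at a time."""
    return [data[i:i + limit] for i in range(0, len(data), limit)]
```

The client (or the model driving it) then requests part 0, part 1, … and concatenates; alternatively, upload the artifact to object storage and return a link.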
Tips: Experiment, experiment, experiment… as early as possible.
Conclusion
Let’s recap one last time. MCP delivers controller superpowers (routing, orchestration, context management) but only when paired with thoughtful tooling, user-centric interfaces, and the right infrastructure. In the end, MCP is just a protocol; your value lives elsewhere.
👉 If you enjoyed this article and want to read more about AI, MCP, and Multi-Agent systems, follow me here on Medium or connect with me directly on LinkedIn!
Published via Towards AI
Note: Content contains the views of the contributing authors and not Towards AI.