
🧠 I Used MCP for 3 months: Everything You Need to Know + 24 Best Servers. New Anthropic DXT Extensions
Author(s): Damien Berezenko
Originally published on Towards AI.

This article covers:
- The MCP protocol basics, client config examples, servers, and proxy
- A curated list of the top MCP Servers available today
- Anthropic’s new Desktop Extensions (DXT) format
- Key limitations and considerations when using MCP
- A step-by-step guide to installing MCP Servers and connecting to a client
What Is MCP?
MCP (Model Context Protocol) is a simple yet powerful way to give external context to LLM-powered tools like ChatGPT, Cursor, or Claude. Out of the box, LLMs can’t access your personal or real-time data (e.g., emails, calendar, CRM, documents, and files), struggle with things they aren’t very good at (e.g., math, strict logic, maps), and can’t perform actions such as posting on your social media or sending an SMS or email. MCP addresses all of this, which is why MCP servers are currently such a hot topic in the AI space. With this external context, LLMs can do a much better job assisting you, whether that’s writing code; finding, reading, and writing files; or handling tasks that would normally require switching between different apps or tools.
This article is meant to help organize and explain these powerful tools clearly and practically.
Why Use MCP?
MCP lets you:
- Access your CRM or internal data
- Read local files or cloud storage (like Google Drive)
- Get real-time documentation or math help
- Integrate your browser or other apps
- Automate tasks and actions that usually require multiple tools
In short, MCP helps your LLM help you better.
🔐 Authentication and Authorization
MCP is still in its early stages but growing rapidly, with many features and standards evolving even as you read this. As a result, security, authentication (AuthN), and authorization (AuthZ) are not yet fully mature in many cases.
Often, you are required to store API keys, credentials, logins, and passwords in plain text — this is risky. To fix that, the community is moving toward OAuth 2.1 for secure authentication and authorization.
🔑 Key Terms:
- AuthN (Authentication): Verifies identity (e.g., “I’m John from Acme Corp”).
- AuthZ (Authorization): Defines what actions the user is allowed to take (e.g., read files, update CRM entries).
If your backend application doesn’t support OAuth 2.1 — such as a PostgreSQL database — you’ll still need to use username and password credentials for the database and save them in plain text. Until better integrations are available, you’ll have to rely on these less secure methods and hope for tools that can eventually bridge native app authentication with OAuth.
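As a sketch only — the server package and variable names here are illustrative, not a recommendation — a hypothetical PostgreSQL entry in a client config might look like this, with the password at least moved into an env block rather than inlined in the connection string (though it still sits in plain text on disk):

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://dbuser@localhost:5432/mydb"],
      "env": {
        "PGPASSWORD": "still-plain-text-on-disk"
      }
    }
  }
}
```

Anything in this file is readable by any process running as your user — which is exactly the weakness OAuth-based flows aim to remove.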
When is Auth needed or not?
- In some cases, Auth is not needed at all. For example, https://mcp.deepwiki.com/ works as a public source of documentation that MCP Clients only read from.
- Others require some form of authentication: login and password credentials or API keys stored in the MCP Client config, such as https://hf.co/mcp.
- OAuth can authenticate the user dynamically via a web browser, providing a quicker and more secure way to log in without keeping clear-text API keys or credentials.
🧩 MCP Key Components
- MCP Server: Provides contextual data for your LLM.
- MCP Client: Connects to MCP Servers and delivers that context to the LLM (often built into editors or AI tools).
- MCP Proxy: Converts MCP Transport protocols, e.g., STDIO ⇄ SSE.
How MCP Servers Work
An MCP Server often acts as a bridge between your language model and other applications. For example, if you’re using the Google Drive web app, an MCP Server for Google Drive will sit between your MCP Client and the actual Google Drive service, making the data accessible to the LLM.
In the future, most applications — both local and cloud-based — may support MCP natively. But for now, many need a separate MCP Server to enable this connection.
MCP Servers usually act as a bridge between your tools and your LLM, but not always:
— An MCP Server for Google Drive connects your files to the LLM.
— Some servers are standalone and need no external application at all (e.g., an MCP Calculator or an MCP Time server works entirely on its own).
— Some apps or services can natively speak MCP.
These MCP Servers collectively enable LLMs to interact with external systems in a structured manner. Each server includes these core components:
- Tools: Functions that LLMs can invoke to perform specific actions.
- Resources: Data the LLM can access, akin to GET endpoints in a REST API.
- Prompts: Predefined templates that guide the use of tools or resources.
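Under the hood, these components are exchanged as JSON-RPC 2.0 messages. As a rough sketch — the field names follow the MCP specification, but the add tool itself is a made-up example — a server answering a tools/list request returns something like:

```python
import json

# A simplified tools/list response, as a server might return it.
# Each tool advertises a name, a description, and a JSON Schema
# describing its input, so the LLM knows how to call it.
tools_list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "add",
                "description": "Add two numbers exactly",
                "inputSchema": {
                    "type": "object",
                    "properties": {
                        "a": {"type": "number"},
                        "b": {"type": "number"},
                    },
                    "required": ["a", "b"],
                },
            }
        ]
    },
}

# The client reads this off the wire as JSON and shows the
# tool list to the LLM:
wire = json.dumps(tools_list_response)
print(json.loads(wire)["result"]["tools"][0]["name"])  # add
```

The inputSchema is what lets the LLM produce a well-formed call instead of guessing at parameters.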
How to Run MCP Servers
Since MCP is still actively being developed, its ecosystem can be somewhat disorganized. But in general, there are two main ways to run an MCP Server:
1. Locally on Your Computer 🖥️
Some servers make the most sense to run locally. For instance, a File Server that lists and reads files from your local system must run on your machine to access that data. There are several ways to run MCP Servers on your local machine. Each method has its pros and cons, especially when it comes to ease of use, setup, and security.
There are a few main ways to run MCP Servers locally:
A. Using Dev Tools
This is the most common method. MCP Servers can be launched using tools like NPX/NPM, UV/UVX/PIP, BUN/BUNX, NODE, etc. These tools are popular because they allow developers to distribute MCP Servers with minimal extra work.
However, this method isn’t very beginner-friendly. You’ll need to install development tools and manually manage libraries, modules, and dependencies. Over time, different MCP Servers might conflict with each other when updated, breaking functionality.
Despite its complexity, this method is still widely used — and sometimes unavoidable.
A major downside is security: in many cases, you’ll have to provide credentials (such as API keys or usernames and passwords) in plain text, which is not secure.
✅ Popular with developers.
❌ Not beginner-friendly. Security concerns.
⚠️ Dependency conflicts can break other MCP Servers.
B. Using Containers (Docker, Podman)
Some MCP Servers are available as containers, which you can run using tools like Docker Desktop or Podman.
Containerization solves many dependency problems by isolating each MCP Server in its own environment. However, you’ll still need some technical knowledge to run containers from the command line (CLI). Example of a standalone container with hundreds of MCP Servers.
A major downside is security: in many cases, you’ll have to provide credentials (such as API keys or usernames and passwords) in plain text, which is not secure.
✅ Isolation prevents dependency conflicts; easier setup.
❌ Requires knowledge of CLI (Terminal).
⚠️ Credentials often still stored insecurely.
C. Docker Desktop Containers + MCP Toolkit (with GUI)
This is currently the most beginner-friendly option. The MCP Toolkit is a Docker Desktop extension that includes a graphical user interface (GUI) and a built-in marketplace of MCP Servers. This setup is suitable for personal or development use. OAuth is supported, which helps in production environments where security is critical, but most MCP Servers requiring Auth still do not support OAuth, and their credentials are stored in the Docker Desktop app.
✅ Easiest option for beginners, no terminal commands. Supports OAuth.
✅ UI-based install via a built-in marketplace (not all) — few clicks
⚠️ Still not perfect: credentials are handled better, but not securely enough for enterprise use. Only some MCP Servers support OAuth.
D. Anthropic’s Desktop Extensions (DXT)
Anthropic announced Desktop Extensions (DXT) v0.1 in beta on June 27, 2025, now available as a Claude directory. This new format packages locally running development tools with an MCP Server and configuration files into a ZIP archive with a .dxt extension, including dependencies and multi-platform OS support. The format is currently supported only by Claude Desktop. Anthropic is also building a directory of Desktop Extensions (I suspect based on the MCP Registry project specification).

Desktop Extensions simplify distribution for developers and installation for end users. Node.js-based extensions don’t need to include Node.js as a dependency since Claude Desktop ships with it, while Python and binary applications are also supported. Extensions update automatically, and secrets are stored securely in the OS keychain (macOS) with a user-friendly UI for entering credentials and environment variables when needed during extension installation.
For enterprises, Desktop Extensions will support Group Policies for Windows and MDM for macOS. Organizations can pre-install approved extensions, blacklist specific extensions or publishers, deploy private extension repositories, or disable the extension repository entirely.
✅ Easy for developers.
✅ Beginner-friendly for users.
✅ Personal use and Business-oriented.
✅ Secure by design (currently on macOS only).
⚠️ Still in Beta. Currently only supported with Claude Desktop and STDIO.
2. Remotely in Cloud ☁️
Other servers, like the Google Drive MCP Server, can run either locally or in the cloud. Since Google Drive already requires an internet connection, it often makes more sense to run its MCP Server in the cloud too. That way, you don’t need to install anything on your local device.
E. HTTP-based (SSE or streamable-HTTP)
It’s quite easy for the end user: no need to install anything (assuming your MCP Client supports SSE or Streamable-HTTP). Typically, endpoints ending with /mcp are Streamable-HTTP, while those ending with /sse are SSE. If your MCP Client does not support this transport, use a proxy.
✅ Relatively easy option for beginners.
✅ UI-based install of MCP Servers via a built-in marketplace (not all).
✅ AuthZ & AuthN with OAuth typically are included
⚠️ Might need to install MCP Transport Proxy
⚠️ Some are paid. Most current offerings are aimed at personal use.
⚠️ Self-hosting requires significant effort for enterprise setup, use, and support.
All options are valid — it depends on what kind of app you’re connecting to and your personal or project-specific needs.
Examples: Supermachine.ai, Databricks.com, Natoma.id, mcpfabric.com, glama.ai, Cloudflare (install, host, and support your own), Composio.dev, Pipedream.com and Plugged.in.
🔍 Where to Find MCP Servers
There is no single centralized place to find all available MCP Servers, but there are several reliable sources.
Local MCP Servers
You can run local MCP Servers using development tools or containers. Sources include:
- Dev tools (e.g., npx/npm, uv/pip)
- The new DXT format
- Standalone containers
Explore the Docker Desktop MCP Toolkit Marketplace.
Official Anthropic Claude directory.
Some applications have built-in marketplaces with MCP Servers, such as Cline, RooCode, Goose, and others.
Examples of Local MCP Servers:
- Filesystem on your computer
- Git repositories
- System time
- MCP for Docker
- Calculator — this may look silly, but LLMs do struggle with math, and an external tool usually does a much better job.
Remote MCP Servers ☁️
Some apps are beginning to support the MCP protocol natively, meaning the MCP Server is built into the service — such as Cloudflare’s own MCP Servers — so there’s no need to install or manage any third-party software.
However, the majority of services (as of June 2025) still do not support MCP natively. These require a separate “middleman” MCP Server to run externally.
Remote MCP Servers fall into these categories:
1. Business Use (Multi-User/Multi-Tenant)
- Recent research identifies potential vulnerabilities within the MCP ecosystem, such as tool poisoning, memory poisoning, and preference-manipulation adversarial attacks. Mitigations for these attacks are needed before MCP can become a trusted tool adopted by businesses.
- Designed to integrate with internal tools securely.
- Support enterprise-grade AuthN and AuthZ features beyond OAuth.
- Currently limited but growing.
- Self-hosting requires significant effort for enterprise setup, use, and support.
2. Personal Use
- More widely available due to fewer security requirements and lower complexity.
- Easier to set up and manage for individual users.
3. Public servers
- Services that are typically read-only, such as documentation providers like DeepWiki, Context7, and Microsoft Learn.
🔄 Transport Protocols
MCP supports a few different transport layers for connecting clients and servers:
STDIO
STDIO is the original and most basic transport protocol used by MCP. It connects to locally running MCP Servers via standard input/output streams. Most locally running MCP Servers support STDIO by default, and some also support SSE and Streamable-HTTP. For local servers such as a filesystem MCP, STDIO is typically the preferred option, as it operates solely on your machine — eliminating exposure of your sensitive files over a network and offering a more secure setup by design.
- Still widely used by many in-development MCP Servers
- Also used in the Docker Desktop MCP Toolkit Marketplace
Example:
# Ensure Docker Desktop app is running first. Access MCP Toolkit:
docker run -i --rm alpine/socat STDIO TCP:host.docker.internal:8811
# Filesystem:
npx -y @modelcontextprotocol/server-filesystem /tmp
SSE (Server-Sent Events)
The second generation of MCP transport, based on HTTP. It allows data to be streamed from the server to the client.
- Some local MCP Servers support SSE
- Some local MCP Servers support both STDIO and SSE
Example: https://mcp.deepwiki.com/sse
Streamable-HTTP
A newer and more flexible streaming protocol over HTTP. While SSE is simpler and still in use, Streamable-HTTP is preferred for distributed or advanced setups.
Example: https://mcp.context7.com/mcp
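To get a feel for what a Streamable-HTTP client actually sends, here is a sketch that only builds the opening initialize message and headers — no network call is made, and the protocolVersion and clientInfo values are illustrative:

```python
import json

MCP_ENDPOINT = "https://mcp.context7.com/mcp"  # the example endpoint above

# The first message in any MCP session is a JSON-RPC initialize request.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",  # illustrative spec revision
        "capabilities": {},
        "clientInfo": {"name": "sketch-client", "version": "0.1"},
    },
}

# Streamable-HTTP clients POST JSON and signal, via the Accept header,
# that they can consume either a plain JSON response or an SSE stream.
headers = {
    "Content-Type": "application/json",
    "Accept": "application/json, text/event-stream",
}

body = json.dumps(initialize_request)
# A real client would now POST `body` with `headers` to MCP_ENDPOINT
# and continue the handshake with the server's reply.
print(body[:60])
```

The dual Accept header is the key difference from plain REST: the server decides per-response whether to answer once or stream.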
Web Apps with Native MCP Support
Some web apps now natively support MCP using either SSE or Streamable-HTTP: GitHub, Cloudflare, HubSpot, Intercom, PayPal, Pipedream, Plaid, Shopify, Stripe, Square, Twilio and Zapier.
🏆 Top MCP Servers
👨‍💻 For Programmers
- Context7 — Up-to-date documentation about frameworks and libraries
- DeepWiki — Similar to Context7 from the Devin project
- Microsoft Learn MCP — Documentation for popular libraries, frameworks, and tools (Streamable-HTTP only)
- FastAPI MCP — Consume any FastAPI endpoints as MCP tools with built-in authentication support
- Mem0 / Neo4j / Cognee Graph — Graph-based long-term memory
- OpenMemory — personal long-term memories across chats
- Official GitHub — Enables LLMs to find, pull, and push code repositories directly from GitHub
- Pydantic-AI Run-Python — Secure, async-ready code execution generated by AI for testing; automatically installs required dependencies
- Netlify — Build/deploy hosted websites
- GitIngest — Turns a repo into a prompt-friendly text for LLM. Example
- Zilliztech/Code-Context — MCP Server and VS Code Extension for Codebase
🌍 Universal Use
- MCP LangFlow / Flowise — Drag-n-drop low-code AI flows as MCP
- Wikipedia MCP — General knowledge lookup
- Memory — Knowledge Graph Memory Server in JSON for long-term
- Basic Memory — persistent short-term memory in Markdown files
- Browser Use — Allows LLMs to browse the internet, click buttons, and extract data from web pages.
- Antvis MCP — Converts structured data into visual charts and graphs.
- WolframAlpha MCP — A paid service for advanced Math-powered reasoning
- WrenAI — Query any databases with natural language Text2SQL (Requires running an instance of WrenAI app)
- Sequential Thinking — Breaks down complex tasks into steps; supports idea branching, revision, and solution generation
Bonus
These might seem like long-forgotten languages, but they have very important qualities that all LLMs struggle with: strong logic, reasoning, and deduction:
- Prolog — Logic, Deduction, Precise Reasoning, Backtracking, Explainability, Rule Verification, Math, Dynamic Knowledge Graphs. Neuro-Symbolic AI represents a convergence of symbolic reasoning (exemplified by Prolog’s programming language relying on math and logic) and LLMs’ pattern recognition capabilities.
- Lisp — Like Prolog, Lisp is used in Neuro-Symbolic AI. It automates boilerplate code, enables abstractions and transformations, manipulates math expressions (symbolic computation), reasons about math and logical expressions (not just giving you a computational result), and explains them. It is used for expert systems, ontology engines, and the semantic web. It also offers metaprogramming (homoiconicity) — writing code that writes or manipulates other code and ability to introspect and modify itself.
MCP Clients
MCP Clients are available as ready-to-use applications:
Popular Clients
- For Developers: Cursor, Windsurf, OpenAI Codex, OpenCode, VS Code GitHub Copilot, and VS Code extensions such as Cline, Roo Code (previously Roo Cline), Continue, and others. Cline and Roo Code are among the easiest MCP Clients to start with, thanks to built-in server marketplaces.

- For Everyone: PerplexityAI for macOS, iPhone, Windows, and Android (STDIO + HTTP-based), Claude Desktop (STDIO), LangFlow, Flowise, AnythingLLM Desktop, Open WebUI (via mcpo), LM Studio (local models only), LocalAI (STDIO and local models only), Goose, Gemini CLI (Google Gemini API only), Letta ADE Desktop, MCP-CLI, and upcoming ChatGPT support (HTTP-based integrations, currently available only via API). And others.

MCP SDKs and Libraries for developers
You can find SDKs and libraries to incorporate MCP Client or MCP Server functionality into your own custom code in Python, TypeScript, Java, and other languages — such as FastMCP (now part of the official MCP module), the OpenAI Agents SDK, and many popular frameworks such as Microsoft AutoGen.
MCP Server Configuration
Most MCP Clients use a JSON config file (mcpServers.json) to load servers. Unfortunately, most do not follow the exact same structure, leading to differences between applications.
Example clients:
- Cursor, Cline, RooCode: Support config files (for STDIO & SSE)
- Continue: Similar, but syntax may differ
- VS Code GitHub Copilot: No dedicated global config for MCP; it is merged with VS Code User Settings. The workspace-based config is slightly different.
🚨 Need for Standardization:
We urgently need to standardize mcpServers.json across all MCP Clients. The differences aren’t just about formatting — some clients don’t support certain transport protocols at all. Ideally, we should have a single mcpServers.json config on a computer that all MCP Clients could reuse.
- Different clients support different protocols (STDIO vs. SSE vs. Streamable-HTTP)
- Some MCP Clients only support one transport (e.g., Claude Desktop only supports STDIO)
- Lack of syntax standardization across apps
- Similarly, most MCP Servers don’t support all the transports (e.g., Docker Desktop MCP Toolkit only supports STDIO)
🧪 Example mcpServers.json
Below is an example mcpServers.json configuration file compatible with clients like Cursor, Cline, and RooCode. VS Code is the same, with the only difference being that it uses "servers" instead of "mcpServers":
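A minimal sketch, reusing the filesystem and DeepWiki examples from earlier in this article (the exact key names for remote servers vary slightly between clients):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
    },
    "deepwiki": {
      "url": "https://mcp.deepwiki.com/sse"
    }
  }
}
```

The "filesystem" entry is a local STDIO server the client spawns itself; the "deepwiki" entry is a remote SSE endpoint the client connects to directly.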
💡 Some MCP Servers need a database or another external dependency running beforehand, such as the Docker app itself or a container that must be started and running. Unfortunately, there’s no easy way to do this from the MCP config right now, so you’ll have to automate it with a startup script or run it manually before use.
🔄 Transport Proxies
Unfortunately, some MCP Clients support only specific transport protocols. If your MCP Client does not support the transport used by your MCP Server, you’ll need a proxy to convert between protocols. So we basically create a middleman for the middleman…
You can use a proxy to convert STDIO ⇄ HTTP-based transports (which include SSE, Streamable-HTTP, and WebSockets).
Popular Proxies
- SuperGateway — Converts between STDIO and HTTP-based transports (SSE, Streamable-HTTP, WebSockets)
- MCP-Proxy — For SSE ⇄ STDIO or SSE ⇄ Streamable-HTTP
- IBM/mcp-context-forge — Versatile proxy with UI for power users and businesses
- MCP-Remote — Simple SSE & Streamable-HTTP ⇒ STDIO proxy with auth. Since most of the MCP Clients support STDIO, this is your go-to.
- MCP Smart Proxy and Unified-MCP-Tool-Graph are proxies that use semantic search to dynamically filter the list of available tools for each request. The advantage is that the LLM receives only the most relevant tools. The disadvantage is that the current MCP specification does not allow passing the original user request to the proxy before fetching the tool list; the current workaround is to first call a tool that performs the filtering and then returns the filtered list.
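For instance, MCP-Remote is launched by the client as an ordinary STDIO server and forwards everything to the remote endpoint. A sketch of such a config entry, reusing the DeepWiki SSE URL from earlier (flags and versions may differ):

```json
{
  "mcpServers": {
    "deepwiki-remote": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://mcp.deepwiki.com/sse"]
    }
  }
}
```

From the client’s point of view this is just another local STDIO server; the proxy handles the HTTP transport and auth behind the scenes.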
🚧 MCP’s Limitations
The first two items — Security and Complexity — are probably the main reasons holding enterprise businesses back from adopting MCP:
1. Security Risks
- Prompt / Tool Poisoning / Memory Poisoning: Malicious MCP servers can hide harmful instructions in tool descriptions — a user invokes a seemingly harmless tool, but the LLM executes hidden malicious actions. Attackers can tamper with long-term memory to manipulate behavior.
- Rug-Pulls & Tool Shadowing: Servers can alter behavior after install (“rug-pull”) or override trusted tools with malicious duplicates (“tool shadowing”).
- Command Injection: Poorly implemented servers may be vulnerable to arbitrary code execution via shell injection, path traversal, or SSRF attacks.
- Cleartext credentials: often, an MCP client is configured with a username and password or API keys saved as plain text in a config file, and passing this sensitive information via the MCP server adds security risks.
- Privilege Escalation: One tool can override others. Malicious plugins can intercept calls meant for trusted services (like Slack or Notion).
2. Engineering & Operational Complexity
- Setup Required: You must define servers, clients, state handling, and tool schemas manually.
- OAuth-only: MCP currently standardizes on OAuth 2.1 for AuthN and AuthZ, while real-world enterprises use a wide variety of auth mechanisms — OAuth is just one of many. Creating adapters between OAuth and your auth mechanism, or writing custom MCP servers, adds operational complexity and security risks.
3. Performance & Scalability
- Context Window Bloat: Multiple active servers can overwhelm the LLM’s limited token memory and attention, slowing down reasoning and increasing costs. Each MCP Server may expose a dozen tools, and more than 20–50 tools may significantly reduce quality.
4. Ecosystem Immaturity
- Limited Adoption: MCP is still new. Many SaaS platforms lack official servers, and community options vary in quality.
- Inconsistent Client Support: Few clients support MCP natively with inconsistent configs and transport support.
5. Governance & Standardization Gaps
- Protocol Drift: Rapid changes could break backward compatibility if no stable standard emerges — such as the recent transport changes from STDIO to SSE and then to Streamable-HTTP, and the inconsistencies in MCP JSON configs.
6. Multi-Tenancy Challenges
- Scaling Issues: MCP isn’t designed for multi-user or cloud-based deployments out of the box yet — handling sessions, isolation, and concurrency requires custom solutions.
⚠️ Security: Prompt poisoning, cleartext credentials
⚠️ Complexity: Setup overhead, transport dependencies, OAuth-only
️️️⚠️ Performance: Context-window and token limits
⚠️ Ecosystem: Limited adoption, inconsistent support
⚠️ Governance: MCP Standard is still rapidly changing
⚠️ Scale: Hard to support self-hosted multi-user/cloud deployments
The future of AI-powered productivity is here.
With so many moving parts — changing platforms, a wide variety of tools, multiple transport protocols, different authentication mechanisms, and many different ways to run MCP Servers — it’s clear that the ecosystem is still evolving. But one thing is certain:
MCP is here to stay.
We may never have a truly autonomous AGI, so AI might always need human input. But one more thing is also true:
Humans who are proficient in using AI will replace those who are not. Start using MCP today and future-proof your skills.
🎯 Start today with this step-by step guide:
MCP Practical Guide:
GitHub – qdrddr/damien-ai-4friends: Bootstrap guides in AI for friends, from Damien (github.com)
Enjoyed This Story?
If you like this article and you want to support me:
- Clap 👏 my article 10 times; that will help me out
- Share this article on social media ➡️🌐
- Please give me feedback in the comments 💬 below 👇 — it’ll help me understand whether this work was useful. Even a simple “thanks” or “+” will do. Good or bad, tell me what you think, as long as you tell me where and how to improve.
- Follow or connect with me on LinkedIn, Discord & BlueSky.
- Join our AI Sky Discord Server.
Published via Towards AI

Note: Content contains the views of the contributing authors and not Towards AI.