
🧠 I Used MCP for 3 months: Everything You Need to Know + 24 Best Servers. New Anthropic DTX Extensions

Last Updated on July 5, 2025 by Editorial Team

Author(s): Damien Berezenko

Originally published on Towards AI.

Picture by author: Damien Berezenko


What Is MCP?

MCP (Model Context Protocol) is a simple yet powerful way to give context to language models like ChatGPT, Cursor, or Claude. LLMs can't access your personal or real-time data (e.g., emails, calendar, CRM, documents, and files), struggle with tasks they aren't very good at (e.g., math, strict logic, maps), and can't perform actions such as posting to your social media or sending an SMS or email. MCP servers fill these gaps, which is why they are currently a hot topic in the AI space. With this external context, an LLM can do a much better job assisting you, whether that's writing code; finding, reading, and writing files; or handling tasks that would normally require switching between different apps or tools.

This article is meant to help organize and explain these powerful tools in a clear and practical way.

Why Use MCP?

MCP lets you:

  • Access your CRM or internal data
  • Read local files or cloud storage (like Google Drive)
  • Get real-time documentation or math help
  • Integrate your browser or other apps
  • Automate tasks and actions that usually require multiple tools

In short, MCP helps your LLM help you better.

🔐 Authentication and Authorization

MCP is still in its early stages but growing rapidly, with many features and standards evolving even as you read this. As a result, security, authentication (AuthN), and authorization (AuthZ) are not yet fully mature in many cases.

Often, you are required to store API keys, credentials, logins, and passwords in plain text, which is risky. To fix that, the community is moving toward OAuth 2.1 for secure authentication and authorization.

🔑 Key Terms:

  • AuthN (Authentication): Verifies identity (e.g., "I'm John from Acme Corp").
  • AuthZ (Authorization): Defines what actions the user is allowed to take (e.g., read files, update CRM entries).

If your backend application doesn't support OAuth 2.1 (a PostgreSQL database, for example), you'll still need username and password credentials for the database, saved in plain text. Until better integrations are available, you'll have to rely on these less secure methods and hope for tools that can eventually bridge native app authentication with OAuth.

When is auth needed?

  • In some cases no auth is needed; for example, https://mcp.deepwiki.com/ is a public documentation source that MCP Clients only read from.
  • Other servers require some form of authentication: login and password credentials or API keys stored in the MCP Client config.
  • OAuth can authenticate the user dynamically, providing a more secure login: no cleartext API keys or credentials, and the user authenticates quickly and simply via the web browser.

🧩 MCP Key Components

  • MCP Server: Provides contextual data for your LLM.
  • MCP Client: Connects to MCP Servers and delivers that context to the LLM (often built into editors or AI tools).
  • MCP Proxy: Converts between MCP transport protocols, e.g., STDIO ⇄ SSE.

How MCP Servers Work

An MCP Server often acts as a bridge between your language model and other applications. For example, if you're using the Google Drive web app, an MCP Server for Google Drive will sit between your MCP Client and the actual Google Drive service, making the data accessible to the LLM.

In the future, most applications, both local and cloud-based, may support MCP natively. But for now, many need a separate MCP Server to enable this connection.

To recap, MCP Servers usually act as a bridge between your tools and your LLM:
– An MCP Server for Google Drive connects your files to the LLM.
– Some servers are standalone (e.g., calculator, time) and work entirely on their own, without needing to connect to another app.
– Some apps or services can natively speak MCP.

These MCP Servers collectively enable LLMs to interact with external systems in a structured manner. Each server includes these core components:

  • Tools: Functions that LLMs can invoke to perform specific actions.
  • Resources: Data the LLM can read, akin to GET endpoints in a REST API.
  • Prompts: Predefined templates that guide the use of tools or resources.
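Under the hood, these components travel as plain JSON-RPC 2.0 messages between client and server. As a rough sketch (the tool name and arguments below are illustrative, not from any real server), a client's tools/call request can be built like this:

```python
import json

def make_tool_call(call_id: int, name: str, arguments: dict) -> str:
    """Build an MCP 'tools/call' JSON-RPC 2.0 request as a JSON string.

    'name' must match a tool advertised in the server's tools/list
    response; 'arguments' must satisfy that tool's input schema.
    """
    request = {
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }
    return json.dumps(request)

# Hypothetical call to an 'add' tool exposed by a calculator-style server:
print(make_tool_call(1, "add", {"a": 2, "b": 3}))
```

The server answers with a matching JSON-RPC response whose result carries the tool's output back into the LLM's context.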

How to Run MCP Servers

Since MCP is still actively being developed, its ecosystem can be somewhat disorganized. But in general, there are two main ways to run an MCP Server:

1. Locally on Your Computer 🖥️

Some servers make the most sense to run locally. For instance, a File Server that lists and reads files from your local system must run on your machine to access that data. There are several ways to run MCP Servers on your local machine. Each method has its pros and cons, especially when it comes to ease of use, setup, and security.

There are a few main ways to run MCP Servers locally:

A. Using Dev Tools

This is the most common method. MCP Servers can be launched using tools like npx/npm, uv/uvx/pip, bun/bunx, Node, etc. These tools are popular because they allow developers to distribute MCP Servers with minimal extra work.

However, this method isn't very beginner-friendly. You'll need to install development tools and manually manage libraries, modules, and dependencies. Over time, different MCP Servers might conflict with each other when updated, breaking functionality.

Despite its complexity, this method is still widely used, and sometimes unavoidable.

A major downside is security: in many cases, you'll have to provide credentials (such as API keys or usernames and passwords) in plain text, which is not secure.

✅ Popular with developers.
❌ Not beginner-friendly. Security concerns.
⚠️ Dependency conflicts can break other MCP Servers.

B. Using Containers (Docker, Podman)

Some MCP Servers are available as containers, which you can run using tools like Docker Desktop or Podman.

Containerization solves many dependency problems by isolating each MCP Server in its own environment. However, you'll still need some technical knowledge to run containers from the command line (CLI).

A major downside is security: in many cases, you'll have to provide credentials (such as API keys or usernames and passwords) in plain text, which is not secure.

✅ Isolation avoids dependency conflicts and eases setup.
❌ Requires knowledge of the CLI (Terminal).
⚠️ Credentials often still stored insecurely.

C. Docker Desktop Containers + MCP Toolkit (with GUI)

This is currently the most beginner-friendly option. The MCP Toolkit is a Docker Desktop extension with a graphical user interface (GUI) and a built-in marketplace of MCP Servers. This setup is suitable for personal or development use. OAuth is supported, so it could serve production environments where security is critical, but most MCP Servers that require auth still do not support OAuth, and their credentials are stored in the Docker Desktop app.

✅ Easiest option for beginners; no terminal commands. Supports OAuth.
✅ UI-based install via a built-in marketplace (not all servers), in a few clicks.
⚠️ Still not perfect: credentials are handled better, but not securely enough for enterprise use. Only some MCP Servers support OAuth.

D. Anthropic’s Desktop Extensions (DXT)

Anthropic announced Desktop Extensions (DXT) v0.1 in beta on June 27, 2025. This new format packages a locally running MCP Server, its configuration files, and its dependencies into a ZIP archive with a .dxt extension, with multi-platform OS support. Anthropic is building a directory of Desktop Extensions, currently supported only in Claude Desktop (not public yet).

Desktop Extensions simplify distribution for developers and installation for end users. Node.js-based extensions don't need to bundle Node.js as a dependency, since Claude Desktop ships with it; Python and binary applications are also supported. Extensions update automatically, and secrets are stored securely in the OS keychain (macOS), with a user-friendly UI for entering credentials and environment variables when needed at extension installation.

For enterprises, Desktop Extensions will support Group Policies for Windows and MDM for macOS. Organizations can pre-install approved extensions, blacklist specific extensions or publishers, deploy private extension repositories, or disable the extension repository entirely.

✅ Easy for developers.
✅ Beginner-friendly for users.
✅ Personal use and Business-oriented.
✅ Secure by design (currently on macOS only).
⚠️ Still in Beta. Currently only supported with Claude Desktop and STDIO.

2. Remotely in Cloud ☁️

Other servers, like the Google Drive MCP Server, can run either locally or in the cloud. Since Google Drive already requires an internet connection, it often makes more sense to run its MCP Server in the cloud too. That way, you don't need to install anything on your local device.

E. HTTP-based (SSE or streamable-HTTP)

It's quite easy for the end user: there is nothing to install (assuming your MCP Client supports SSE and streamable-HTTP). Typically, endpoints ending in /mcp are streamable-HTTP, while those ending in /sse are SSE. If your MCP Client does not support this transport, use a proxy.

✅ Relatively easy option for beginners.
✅ UI-based install of MCP Servers via a built-in marketplace (not all).
✅ AuthZ & AuthN with OAuth are typically included.
⚠️ Might need to install an MCP Transport Proxy.
⚠️ Some are paid. Most current offerings are mostly for personal use.
⚠️ Self-hosting requires a lot of effort for enterprise setup, use, and support.
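The /mcp-vs-/sse endpoint naming is only a convention, but it can be sketched as a small helper (a heuristic, not part of any spec):

```python
from urllib.parse import urlparse

def guess_transport(url: str) -> str:
    """Guess the MCP transport from the endpoint path, per common naming.

    A path ending in '/mcp' usually means streamable-HTTP and '/sse'
    means SSE; anything else must be checked against the server's docs.
    """
    path = urlparse(url).path.rstrip("/")
    if path.endswith("/mcp"):
        return "streamable-http"
    if path.endswith("/sse"):
        return "sse"
    return "unknown"

print(guess_transport("https://mcp.deepwiki.com/sse"))    # sse
print(guess_transport("https://mcp.context7.com/mcp"))    # streamable-http
```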

All options are valid; it depends on what kind of app you're connecting to and your personal or project-specific needs.

Examples: Supermachine.ai, Databricks.com, Natoma.id, mcpfabric.com, glama.ai, Cloudflare (install, host and support your own), Composio.dev.

🔍 Where to Find MCP Servers

There is no single centralized place to find all available MCP Servers, but there are several reliable sources.

Local MCP Servers

You can run local MCP Servers using development tools or containers. Sources include:

  • Dev tools (e.g., npx/npm, uv/pip)
  • The new DXT format
  • Standalone Containers

Explore the Docker Desktop MCP Toolkit Marketplace.

Official Anthropic Extension Directory (coming soon).

Examples of Local MCP Servers:

  • Filesystem on your computer
  • Git repositories
  • System time
  • MCP for Docker
  • Calculator – this may look silly, but LLMs do struggle with math, and an external tool will usually do a much better job.
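To illustrate the calculator point: a tool of this kind just exposes a deterministic function the LLM can call instead of guessing at arithmetic. A minimal sketch of such a function (not taken from any particular MCP server):

```python
import ast
import operator

# Safe arithmetic evaluator: the kind of deterministic function a
# calculator MCP tool would expose instead of letting the LLM "do" math.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def calculate(expression: str) -> float:
    """Evaluate a basic arithmetic expression without using eval()."""
    def _eval(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError("unsupported expression")
    return _eval(ast.parse(expression, mode="eval").body)

print(calculate("2 + 3 * 4"))   # 14
print(calculate("2 ** 10"))     # 1024
```

An MCP calculator server would register a function like this as a Tool, with a schema describing the `expression` argument.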

Remote MCP Servers ☁️

Some apps are beginning to support the MCP protocol natively, meaning the MCP Server is built into the service (such as Cloudflare's own MCP Servers), so there's no need to install or manage any third-party software.

However, the majority of services (as of June 2025) still do not support MCP natively. These require a separate "middle-man" MCP Server to run externally.

Remote MCP Servers fall into these categories:

1. Business Use (Multi-User/Multi-Tenant)

  • Recent research identifies potential vulnerabilities in the MCP ecosystem, such as tool poisoning, memory poisoning, and preference-manipulation adversarial attacks. These malicious attempts must be mitigated before MCP can become a trusted tool adopted by businesses.
  • Designed to integrate with internal tools securely.
  • Support enterprise-grade features such as AuthN and AuthZ beyond OAuth.
  • Currently limited but growing.
  • Self-hosting requires a lot of effort for enterprise setup, use, and support.

2. Personal Use

  • More widely available due to fewer security requirements and lower complexity.
  • Easier to set up and manage for individual users.

3. Public servers

🔄 Transport Protocols

MCP supports a few different transport layers for connecting clients and servers:

STDIO

STDIO is the original and most basic transport protocol used by MCP. It allows you to connect to locally running MCP servers via standard input/output streams, which are used by default. Most locally running MCP servers support the STDIO transport protocol but may also support SSE and streamable-HTTP. For local servers such as a filesystem MCP, STDIO is typically the preferred option, as it operates solely on your machine, eliminating the exposure of your sensitive files over a network and offering a more secure setup by design.

  • Still widely used by many in-development MCP Servers
  • Also used in the Docker Desktop MCP Toolkit Marketplace

Example:

# Ensure Docker Desktop app is running first. Access MCP Toolkit:
docker run -i --rm alpine/socat STDIO TCP:host.docker.internal:8811

# Filesystem:
npx -y @modelcontextprotocol/server-filesystem /tmp

SSE (Server-Sent Events)

The second generation of MCP transport, based on HTTP. It allows data to be streamed from the server to the client.

  • Some local MCP Servers support SSE
  • Some local MCP Servers support both STDIO and SSE

Example:
https://mcp.deepwiki.com/sse

Streamable-HTTP

A newer and more flexible streaming protocol over HTTP. While SSE is simpler and still in use, Streamable-HTTP is preferred for distributed or advanced setups.

Example:
https://mcp.context7.com/mcp

Web Apps with Native MCP Support

Some web apps now natively support MCP using either SSE or Streamable-HTTP: GitHub, Cloudflare, HubSpot, Intercom, PayPal, Pipedream, Plaid, Shopify, Stripe, Square, Twilio and Zapier.

🏆 Top MCP Servers

👨‍💻 For Programmers

  • Context7 – Up-to-date documentation about frameworks and libraries
  • DeepWiki – Similar to Context7, from the Devin project
  • Microsoft Learn MCP – Documentation for popular libraries, frameworks, and tools (streamable-HTTP only)
  • FastAPI MCP – Consume any FastAPI endpoints as MCP tools, with built-in authentication support
  • Mem0 / Neo4j / Cognee Graph – Graph-based long-term memory
  • OpenMemory – Personal long-term memories across chats
  • Official GitHub – Enables LLMs to find, pull, and push code repositories directly from GitHub
  • Pydantic-AI Run-Python – Secure, async-ready execution of AI-generated code for testing; automatically installs required dependencies
  • Netlify – Build and deploy hosted websites
  • GitIngest – Turns a repo into prompt-friendly text for LLMs

🌍 Universal Use

  • MCP LangFlow / Flowise – Drag-and-drop low-code AI flows as MCP
  • Wikipedia MCP – General knowledge lookup
  • Memory – Knowledge Graph Memory Server in JSON, for long-term memory
  • Basic Memory – Persistent short-term memory in Markdown files
  • Browser Use – Allows LLMs to browse the internet, click buttons, and extract data from web pages
  • Antvis MCP – Converts structured data into visual charts and graphs
  • WolframAlpha MCP – A paid service for advanced math-powered reasoning
  • WrenAI – Query any database in natural language via Text2SQL (requires a running instance of the WrenAI app)
  • Sequential Thinking – Breaks down complex tasks into steps; supports idea branching, revision, and solution generation

Bonus
These might seem like long-forgotten languages, but they have a very important qualities that all LLMs struggle with: strong logic, reasoning, and deduction:

  • Prolog β€” Logic, Deduction, Precise Reasoning, Backtracking, Explainability, Rule Verification, Math, Dynamic Knowledge Graphs. Neuro-Symbolic AI represents a convergence of symbolic reasoning (exemplified by Prolog’s programming language relying on math and logic) and LLMs’ pattern recognition capabilities.
  • Lisp β€” Like Prolog, Lisp is used in Neuro-Symbolic AI. It automates boilerplate code, enables abstractions and transformations, manipulates math expressions (symbolic computation), reasons about math and logical expressions (not just giving you a computational result), and explains them. It is used for expert systems, ontology engines, and the semantic web. It also offers metaprogramming (homoiconicity) β€” writing code that writes or manipulates other code and ability to introspect and modify itself.

MCP Clients

MCP Clients are available as ready-to-use applications:

Popular Clients

Animation by author: Damien Berezenko
Image by author: Damien Berezenko

MCP SDKs and Libraries for developers

You can find SDKs and libraries for incorporating MCP Client or MCP Server functionality into your own code in Python, TypeScript, Java, and other languages, such as FastMCP (now part of the official MCP module) and the OpenAI Agents SDK, as well as many popular frameworks such as Microsoft AutoGen.

MCP Server Configuration

Most MCP Clients use a JSON config file (mcpServers.json) to load servers. Unfortunately, most do not follow the exact same structure, leading to differences between applications.

Example clients:

  • Cursor, Cline, RooCode: Support config files (for STDIO & SSE)
  • Continue: Similar, but syntax may differ
  • VS Code GitHub Copilot: No dedicated global config file for MCP; it is merged with the VS Code User Settings. Workspace-based config is slightly different.

🚨 Need for Standardization:

We urgently need to standardize mcpServers.json across all MCP Clients. The differences aren't just about formatting: some clients don't support certain transport protocols at all. Ideally, there would be a single mcpServers.json config on a computer that all MCP Clients could reuse.

  • Different clients support different protocols (STDIO vs. SSE vs. Streamable-HTTP)
  • Some MCP Clients only support one transport (e.g., Claude Desktop, LM STDIO)
  • Lack of syntax standardization across apps
  • Similarly, most MCP Servers don’t support all the transports (e.g., Docker Desktop MCP Toolkit only supports STDIO)

🧪 Example mcpServers.json

Below is an example mcpServers.json configuration file compatible with clients like Cursor, Cline, and RooCode. VS Code uses the same structure, with the only difference being a top-level "servers" key instead of "mcpServers":
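A minimal sketch of such a config (the server names and paths are illustrative), with one STDIO server launched via npx and one remote SSE server:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
    },
    "deepwiki": {
      "url": "https://mcp.deepwiki.com/sse"
    }
  }
}
```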

💡 Some MCP Servers need a database or another external dependency running beforehand, such as the Docker app itself or a container that must already be started and running. Unfortunately, there's no easy way to do this from the MCP config right now, so you'll have to automate it with a startup script or run it manually before use.

🔄 Transport Proxies

Unfortunately, some MCP Clients support only specific transport protocols. If your MCP Client does not support the transport used by your MCP Server, you'll need a proxy to convert between protocols. So we basically create a middle-man for the middle-man…

You can use a proxy to convert STDIO ⇄ HTTP-based transports (which include SSE, streamable-HTTP, and WebSockets).

Popular Proxies

  • SuperGateway – Converts between STDIO and HTTP-based transports (SSE, streamable-HTTP, WebSockets)
  • MCP-Proxy – For SSE ⇄ STDIO or SSE ⇄ streamable-HTTP
  • IBM/mcp-context-forge – Versatile proxy with a UI, for power users and businesses
  • MCP-Remote – Simple SSE & streamable-HTTP → STDIO proxy with auth. Since most MCP Clients support STDIO, this is your go-to in most cases.
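For example, mcp-remote is typically wired into a client's config as a local STDIO command that forwards to the remote endpoint (the server name and URL below are placeholders):

```json
{
  "mcpServers": {
    "my-remote-server": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://mcp.example.com/sse"]
    }
  }
}
```

To the client this looks like an ordinary local STDIO server, while the proxy handles the HTTP transport and auth behind the scenes.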

🚧 MCP’s Limitations

The first two items, Security and Complexity, are probably the main reasons holding enterprise businesses back from adopting MCP:

1. Security Risks

  • Prompt / Tool Poisoning: Malicious MCP servers can hide harmful instructions in tool descriptions: a user invokes a seemingly harmless tool, but the LLM executes hidden malicious actions.
  • Rug-Pulls & Tool Shadowing: Servers can alter behavior after install ("rug-pull") or override trusted tools with malicious duplicates ("tool shadowing").
  • Command Injection: Poorly implemented servers may be vulnerable to arbitrary code execution via shell injection, path traversal, or SSRF attacks.
  • Cleartext Credentials: Often, an MCP client is configured with a username and password or API keys saved as plain text in a config file, and passing this sensitive information via the MCP server adds security risks.

2. Engineering & Operational Complexity

  • Setup Required: You must define servers, clients, state handling, and tool schemas manually.
  • OAuth-only: MCP currently focuses on OAuth 2.1 for AuthN and AuthZ, while real-world enterprise businesses use a large variety of auth mechanisms; OAuth is just one of many. Creating adapters between OAuth and your auth mechanism, or writing custom MCP servers, adds operational complexity and security risks.

3. Performance & Scalability

  • Context Window Bloat: Multiple active servers can overwhelm the LLM's limited token memory and attention, slowing down reasoning and increasing costs. Each MCP Server may bring a dozen tools, and more than 20–50 tools can significantly reduce quality.

4. Ecosystem Immaturity

  • Limited Adoption: MCP is still new. Many SaaS platforms lack official servers, and community options vary in quality.
  • Inconsistent Client Support: Few clients support MCP natively with inconsistent configs and transport support.

5. Governance & Standardization Gaps

  • Protocol Drift: Rapid changes could break backward compatibility if no stable standard emerges, such as the recent transport changes from STDIO to SSE and then to streamable-HTTP, and the inconsistencies in MCP JSON configs.

6. Multi-Tenancy Challenges

  • Scaling Issues: MCP isn't designed for multi-user or cloud-based deployments out of the box yet; handling sessions, isolation, and concurrency requires custom solutions.

⚠️ Security: Prompt poisoning, cleartext credentials
⚠️ Complexity: Setup overhead, transport dependencies, OAuth-only
⚠️ Performance: Context-window and token limits
⚠️ Ecosystem: Limited adoption, inconsistent support
⚠️ Governance: MCP Standard is still rapidly changing
⚠️ Scale: Hard to support self-hosted multi-user/cloud deployments

The future of AI-powered productivity is here.

With so many moving parts (changing platforms, a wide variety of tools, multiple transport protocols, different authentication mechanisms, and many different ways to run MCP Servers), it's clear that the ecosystem is still evolving. But one thing is certain:

MCP is here to stay.

We may never have a truly autonomous AGI, so AI might always need human input. But one more thing is also true:

Humans who are proficient in using AI will replace those who are not. Start using MCP today and future-proof your skills.

🎯 Start today with this step-by-step guide:

MCP Practical Guide:

GitHub – qdrddr/damien-ai-4friends: Bootstrap guides in AI for friends, from Damien (github.com)

Enjoyed This Story?

If you like this article and you want to support me:

  1. Clap 👏 my article 10 times; that will help me out
  2. Share this article on social media ➡️🌐
  3. Please give me feedback in the comments 💬 below 👇. It'll help me understand whether this work was useful. Even a simple "thanks" or "+" will do. Tell me good or bad, whatever you think, as long as you tell me where to improve and how.
  4. Follow or connect with me on LinkedIn, Discord & BlueSky.
  5. Join our AI Sky Discord Server.

Join thousands of data leaders on the AI newsletter. Join over 80,000 subscribers and keep up to date with the latest developments in AI. From research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.

Published via Towards AI
