If you want to understand the Model Context Protocol (MCP) ecosystem today, don’t start with buzzwords — start with the tools. MCP isn’t a single product: it’s a set of conventions that a growing number of projects and companies use to let LLMs call out to real systems. That means the ecosystem is naturally fragmented (and useful): there are tools for building MCP servers, tools and apps that act as MCP clients, services and platforms that host MCP servers, registries where you can find servers, gateways and control planes that manage them, and agent frameworks that orchestrate multi-step flows. Below is a walk through the actual tooling you’ll encounter in each of those roles, with practical notes on when you’d pick one over another.
Building MCP Servers: Frameworks and Developer Tooling
If you’re building an MCP server, a program that exposes “tools” and resources to LLMs, there are a few clear approaches you’ll see. On the developer side, lightweight SDKs and frameworks make it much faster to stand up a server. FastMCP is a Python-first example: it lets developers declare tools, generate schemas from type hints, and expose those tools over MCP transports. If you prefer a Postman-style workflow, Postman has added first-class MCP support so teams can prototype, save, share, and test MCP requests just like any other API workflow. These two approaches (code-first SDKs and request/collection tooling) are complementary: use a framework like FastMCP when you need production logic and developer ergonomics; use Postman-style tooling for API design, testing, and cross-team collaboration. A few practical notes: fast, iterative prototyping is usually done over local stdio transports or with a quick cloud deploy; if you plan to move to a hosted or remote model, plan for OAuth and remote transports early. Use the reference server repositories and the official MCP examples as templates; they show the expected protocol methods (tools/list, tools/call, resources/read) and common error modes.
MCP Clients — The Front Lines Where Users and Agents Interact
Clients are the applications you use to talk to MCP servers. Some are standalone apps; some are features inside larger IDEs, chat apps, or agent UIs. Claude Code is one clear example of an MCP-aware client: Anthropic documents how Claude Code can connect to MCP servers so models can call external tools and run coding tasks. Other popular clients and agents, like Cursor, Windsurf, Goose (a local, extensible agent), LibreChat, and various desktop MCP UIs, either already support MCP or have adapters to work with MCP servers. For many teams, the simplest pattern is: expose the tool via MCP and use a client (developer IDE, chat UI, or agent runner) that can call it. A few practical signals to watch: does the client support the transport you need (stdio is common for local dev; Streamable HTTP or SSE is needed for remote use)? Does the client support the authentication flows you require? Some MCP clients still expect local, unauthenticated servers; others already support OAuth or token-based remote MCPs. Cloudflare, for example, documents how to deploy remote MCP servers and connect them to clients with OAuth flows.
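To make the transport distinction concrete, here is a hypothetical client configuration sketch. Exact field names vary by client, but several MCP-aware clients follow a similar `mcpServers` convention; the server names and URL below are placeholders.

```json
{
  "mcpServers": {
    "local-dev": {
      "command": "python",
      "args": ["my_server.py"]
    },
    "remote-prod": {
      "url": "https://mcp.example.com/mcp"
    }
  }
}
```

The first entry launches a local server as a subprocess and talks to it over stdio; the second points at a remote endpoint, which is where Streamable HTTP transports and OAuth flows come into play.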
Hosting MCP Servers: Cloud Providers and Self-Hosting
Where your MCP servers run matters for latency, security, and who controls credentials. There are three common patterns:
- Self-hosted on Kubernetes or containers: gives IT full control and is the route many enterprises prefer. Obot is an open-source example of this type of project.
- Remote-hosted servers on a cloud platform: Cloudflare’s one-click remote MCP servers are a good example.
- Third-party managed MCP catalogs: some platforms host catalogs of remote MCP servers you can connect to via OAuth.
If you need enterprise governance, you’ll almost always end up wrapping hosting with a gateway or control plane (more on that below) so IT can audit and proxy calls regardless of where the underlying server runs. For a deeper dive on enterprise hosting strategies, see MCP Hosting: Building a Strategy for Deploying and Running MCP Servers in the Enterprise.
Finding MCP Servers: Registries and Marketplaces
Because MCP servers are decentralized, registries and directories have sprung up to make discovery practical. You can find and test a rich set of MCP servers on Obot Chat, and the GitHub MCP Registry acts like an app store for MCP servers, listing official implementations alongside community servers. Community directories like PulseMCP, mcp.so, and Glama index thousands of servers, often tagging them by service (GitHub, Datadog, Slack, etc.), transport type, and security posture; these are the fastest way to see what’s already available before you build. If you want to prototype quickly, copy an existing server from these registries and adapt it to your environment. For a practical guide to building your own registry, see Building an MCP Registry: Why It Matters and How to Get It Right. Practical tip: registries are handy, but treat third-party servers as untrusted until you can review their code and access model; don’t bind sensitive data to a server you haven’t vetted.
Private Registries and Control Planes (Where Obot Lives)
At enterprise scale, discovery isn’t the only problem — governance, role-based publishing, auditability, and deployment become the blockers. That’s why organizations run private MCP registries and gateways. Obot (our MCP Gateway) and IBM’s ContextForge project are examples of software that act as a management/control layer: they provide catalogs that IT controls, proxy traffic for auditing and policy enforcement, and in some cases host MCP servers on-demand in containers. If you’re building for enterprise adoption, plan a control-plane and registry early; it changes your operational model and what you can safely expose to agents. For a technical deep dive, see Deep dive into the Obot MCP Gateway.
Gateways and Proxies: Unify, Secure, and Observe
MCP gateways sit between clients and servers and solve federation, auth, rate limits, virtual tools, and observability. Obot includes a gateway and proxy designed to provide auditability, access control, OAuth integration, and an admin UI. Other gateway projects and proxies focus on simpler patterns, such as adding audit logging or enforcing token exchange, so choose based on the controls you need. Gateways also make it practical to convert legacy REST APIs into virtual MCP tools for agents to use. For more on securing MCP access, see What Is Secure MCP Access? Why It Matters for Enterprise AI.
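The core proxying pattern is simple to sketch. The toy wrapper below (plain Python with hypothetical names, not Obot's actual implementation) shows the shape of what a gateway adds around every tool call: an allow-list policy check and an audit record before the request is forwarded upstream.

```python
import datetime
import json

# Toy sketch of a gateway-style wrapper around MCP tool calls: enforce an
# allow-list policy and write an audit record before forwarding. Real
# gateways layer OAuth, rate limits, and an admin UI on this basic pattern.
AUDIT_LOG = []

def gateway_call(user: str, allowed_tools: set, forward, tool: str, arguments: dict):
    """Check policy, record an audit entry, then forward the tool call upstream."""
    entry = {
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "arguments": arguments,
    }
    if tool not in allowed_tools:
        entry["decision"] = "denied"
        AUDIT_LOG.append(entry)
        raise PermissionError(f"{user} is not allowed to call {tool}")
    entry["decision"] = "allowed"
    AUDIT_LOG.append(entry)
    return forward(tool, arguments)  # hand off to the upstream MCP server

# Stand-in for the upstream MCP server's tool dispatcher.
def upstream(tool, arguments):
    return {"content": [{"type": "text", "text": f"{tool} ok"}]}

result = gateway_call("alice", {"search_docs"}, upstream, "search_docs", {"q": "mcp"})
print(json.dumps(result))
```

Because every call flows through one choke point, the same wrapper is where you'd attach token exchange, rate limiting, or the REST-to-virtual-tool translation mentioned above.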
Agent Frameworks and Orchestration: Nanobot, LangChain, LangGraph, n8n
If your goal is to orchestrate multi-step workflows or compose MCP tools into larger behaviors, you’ll look to agent frameworks. LangChain has added MCP adapters so agents can call MCP tools directly. Nanobot is an agent framework purpose-built around MCP and MCP-UI for richer, chat-driven experiences. For more on Nanobot, see Introducing Nanobot: A New Framework for Turning MCP Servers into AI Agents. Workflow platforms like n8n also provide MCP servers and client nodes so agents can generate and validate real automation workflows (n8n has MCP server/trigger integrations that let agents build n8n flows). These frameworks are where “tools” become workflows.
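The core idea these frameworks share is a loop: something picks a tool, the framework calls it over MCP, and the result feeds the next step. Below is a toy version in plain Python, with a scripted plan standing in for the LLM and hypothetical tool names; real frameworks like LangChain or Nanobot replace the plan with model decisions and speak actual MCP to remote servers.

```python
# Toy agent loop: a scripted plan stands in for the LLM's tool choices, and
# local lambdas stand in for MCP tool calls. Each step's result is saved into
# a shared context so later steps can reference it by name.
TOOLS = {
    "fetch_issue": lambda args: {"id": args["id"], "title": f"Issue {args['id']}"},
    "summarize": lambda args: "Summary of " + args["issue"]["title"],
}

def run_agent(plan):
    """Execute a sequence of tool calls, feeding each result into later steps."""
    context = {}
    for step in plan:
        tool = TOOLS[step["tool"]]
        # Resolve argument values: names found in context are replaced by
        # earlier results; everything else is passed through as a literal.
        args = {k: context.get(v, v) for k, v in step["args"].items()}
        context[step["save_as"]] = tool(args)
    return context

result = run_agent([
    {"tool": "fetch_issue", "args": {"id": 42}, "save_as": "issue"},
    {"tool": "summarize", "args": {"issue": "issue"}, "save_as": "summary"},
])
print(result["summary"])  # prints "Summary of Issue 42"
```

The chaining of results through a shared context is exactly what makes "tools become workflows": each MCP tool stays simple and stateless, and the orchestration layer supplies the memory between calls.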
A Few Pragmatic Recommendations
- Prototype with a maintained reference server from the GitHub MCP Registry or a curated directory rather than building from scratch: faster insight, fewer surprises.
- Plan for authentication and proxying early; remote MCP with OAuth is already a real path (Cloudflare documents how to deploy remote MCP servers and secure them).
- Treat registries as discovery, not trust: vet any third-party server you plan to rely on.
- Use private registries and gateways for production-sensitive MCPs (Obot is an example).
- Pick the right agent framework for your needs: LangChain adapters are great if you already use LangChain; Nanobot and n8n are better when you want richer chat UIs or workflow automation out of the box.
Where to Go Next
If you’re looking to get hands-on, a good next step is to prototype an MCP server using FastMCP or Postman, host it on Cloudflare, and connect it through a gateway like Obot. Once it’s live, you can wire it into an agent framework like Nanobot or LangChain and see how MCP-based tools can be shared safely and consistently inside your organization.