If you’re reading this, chances are you’ve discovered MCP servers running in your organization—probably more than you expected, and possibly without IT’s blessing.
You’re not alone. Over the past few months, I’ve talked to dozens of IT leaders who’ve had the same realization: developers are already using Model Context Protocol servers extensively. Some organizations have found hundreds of developers running MCP servers on their laptops, each with their own scattered credentials and zero audit trails.
The question isn’t whether your organization will use MCP—it’s already happening. The question is whether you’ll manage it centrally or let it remain shadow IT.
The Problem: MCP Adoption Is Outpacing Governance
Here’s what we’re seeing across organizations at different stages of their MCP maturity journey:
Shadow IT at scale: Developers are connecting Claude Desktop, GitHub Copilot, and other AI assistants to MCP servers that give them access to internal systems—databases, APIs, file systems, cloud infrastructure. IT often discovers this usage weeks or months after it’s widespread.
Fragmented credentials: Each developer manages their own API keys, OAuth tokens, and service account credentials. These secrets live in config files, environment variables, or worse—hardcoded in scripts. When someone leaves, those credentials rarely get revoked properly.
Zero visibility: Without centralized management, you have no idea which MCP servers are being used, who’s accessing what systems, or what data is flowing through these integrations. Audit logging doesn’t exist. Compliance officers get nervous.
The productivity dilemma: Teams using MCP servers report massive productivity gains—30-50% time savings on routine tasks, faster debugging, better code quality. Banning MCP usage would eliminate these gains and frustrate your best engineers. But allowing unmanaged usage creates unacceptable security and compliance risks.
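The "fragmented credentials" problem is easy to picture. Many MCP clients are configured through a local JSON file along these lines (the server entry and token are placeholders; the exact schema varies by client):

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<personal-access-token>"
      }
    }
  }
}
```

Multiply that file across every developer and every integration, and you have long-lived secrets scattered over hundreds of laptops with no rotation or revocation story.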
This is the tension every organization faces: enable innovation or maintain control.
The answer isn’t choosing between them—it’s building infrastructure that provides both.
Why Centralized MCP Management Matters
A centralized approach to MCP server hosting solves these problems systematically:
Visibility and Control: See every MCP server in use, who’s accessing them, and what operations they’re performing. Enforce policies consistently across all integrations. Revoke access instantly when needed.
Security Without Friction: Manage authentication centrally so developers never handle raw credentials. Enforce OAuth flows, rotate tokens automatically, and maintain complete audit trails for compliance.
Enable Adoption: Make approved MCP servers instantly available through a secure catalog. Remove the friction of setup and credential management. Let teams move fast within guardrails you define.
Cost Optimization: Consolidate infrastructure instead of running hundreds of individual MCP server instances. Share resources across teams. Make informed decisions about what to host versus what to proxy.
Compliance and Audit: Satisfy regulatory requirements with comprehensive logging of all MCP interactions. Demonstrate control over AI tool usage to auditors and compliance teams.
This is what an MCP gateway provides: a single control plane for all your MCP servers, whether you’re running them yourself or proxying to third-party services.
But once you’ve decided to centralize management, the next question emerges: where should these servers actually run?
The Two Core Hosting Approaches
When it comes to MCP server hosting, you have two fundamental options:
Self-hosted: You run the MCP servers on your own infrastructure
Remote: You proxy connections to MCP servers hosted by third parties
Most organizations eventually use both approaches for different integrations. Let’s understand each one.
Self-Hosted: Full Control, Full Responsibility
Self-hosting means your MCP servers run on infrastructure you control—typically Docker containers or Kubernetes clusters. An MCP gateway like Obot deploys, manages, and monitors these servers, then makes them available to your AI clients.
How It Works
With Obot’s MCP hosting capabilities, you deploy servers as containers in your environment. The platform supports Node.js, Python, and Go-based servers, plus any containerized implementation.
Get started with Docker in one command:
docker run -d -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock ghcr.io/obot-platform/obot:latest
For production, deploy to Kubernetes: Obot integrates with your existing K8s infrastructure, deploying internal MCP servers automatically as containers that run on demand with auto-scaling and high availability.
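As a rough sketch of what that looks like, a minimal Deployment manifest might resemble the following. The image and port mirror the Docker example above; the replica count and labels are illustrative, and the official Obot documentation is the authority on supported manifests:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: obot-gateway
spec:
  replicas: 2                     # two replicas for availability (illustrative)
  selector:
    matchLabels:
      app: obot-gateway
  template:
    metadata:
      labels:
        app: obot-gateway
    spec:
      containers:
        - name: obot
          image: ghcr.io/obot-platform/obot:latest   # same image as the Docker example
          ports:
            - containerPort: 8080
```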
Why Self-Host?
Complete Control: You own the server lifecycle—configuration, updates, security policies, everything.
Performance: Servers running in your cluster typically see 10-50ms latency. No external network hops.
Data Sovereignty: Data never leaves your infrastructure. Critical for compliance requirements (HIPAA, SOC 2, GDPR).
Cost Predictability: Cloud server costs average $7-20 per vCPU per month. At scale, this is significantly cheaper than managed services.
Security: Everything stays within your infrastructure perimeter. You control network segmentation, authentication, and access policies.
The Trade-offs
Self-hosting isn’t free:
You’re responsible for infrastructure management and maintenance
You need container orchestration expertise for production
Your team bears the operational burden
Hosting costs scale with usage (though predictably)
When to Self-Host
Self-hosting makes sense when:
You’re building proprietary or internal integrations
Data cannot leave your infrastructure (compliance, security policies)
You need low latency for critical operations
You have existing container infrastructure to leverage
You’re running high-volume workloads where economics favor self-hosting
Remote Hosting: Simplicity Without Infrastructure
Remote hosting flips the script. Instead of running servers yourself, you proxy connections to MCP servers hosted by third parties—services like Slack, GitHub, Jira, or any provider offering an MCP endpoint.
How It Works
Your MCP gateway acts as a secure proxy between clients and external servers. When a user invokes a tool, the request flows through your gateway, where Obot enforces policies, logs the interaction, and proxies the request to the third-party server.
Authentication credentials never reach end users—the gateway manages token exchange and security. This is “universal MCP proxying”: centralized control without hosting infrastructure.
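That request path can be sketched in a few lines of plain Python with a stubbed upstream. The policy table, error code, and `fake_upstream` are all illustrative assumptions, not Obot's actual implementation:

```python
from datetime import datetime, timezone

# Hypothetical allow-list policy: which tools each proxied server may expose.
ALLOWED_TOOLS = {"github": {"list_issues", "create_issue"}}

audit_log = []  # every forwarded call gets a record here

def gateway_proxy(server, request, upstream):
    """Check policy, write an audit entry, then forward the JSON-RPC call upstream."""
    tool = request.get("params", {}).get("name")
    if tool not in ALLOWED_TOOLS.get(server, set()):
        # Denied calls never leave the gateway.
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32001, "message": f"tool '{tool}' not permitted"}}
    audit_log.append({"ts": datetime.now(timezone.utc).isoformat(),
                      "server": server, "tool": tool})
    # Credentials would be attached here, server-side; the client never sees them.
    return upstream(request)

def fake_upstream(request):
    """Stand-in for a real third-party MCP server."""
    return {"jsonrpc": "2.0", "id": request["id"], "result": {"ok": True}}

resp = gateway_proxy("github",
                     {"jsonrpc": "2.0", "id": 1, "method": "tools/call",
                      "params": {"name": "list_issues"}},
                     fake_upstream)
```

The point of the sketch is the shape of the flow: policy check first, audit second, credential injection and forwarding last, so the client only ever holds a session with the gateway.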
Why Remote Hosting?
Zero Infrastructure: No servers to maintain, no containers to orchestrate, no scaling to worry about.
Instant Access: Tap into the ecosystem of third-party MCP servers without custom development.
Vendor Expertise: Leverage providers’ SLAs, reliability, and maintenance.
Automatic Updates: Vendors handle server updates, bug fixes, and new features.
Low Upfront Cost: No infrastructure investment required to get started.
The Trade-offs
Remote hosting comes with different constraints:
Higher latency (100-500ms depending on provider location)
Dependency on third-party uptime
Less control over updates and changes
Data flows outside your infrastructure
Potential vendor lock-in
That said, Obot’s gateway mitigates many concerns. All traffic flows through your gateway for inspection, you maintain complete audit trails, and you can revoke access instantly or filter requests before they leave your network.
When to Use Remote Hosting
Remote hosting is the pragmatic choice when:
You’re integrating with SaaS platforms (Slack, GitHub, Zapier)
You want to test new MCP servers before committing infrastructure
Operational simplicity trumps absolute control
Your team lacks DevOps resources
The integration isn’t latency-sensitive
A Note on Serverless Platforms
You might be wondering about serverless platforms like Cloudflare Workers or AWS Lambda. They offer quick deployment—Cloudflare advertises “under 5 minutes” from template to live.
The reality: serverless excels for prototyping but has significant limitations in production. CPU-time limits (roughly 10-50ms per request on Cloudflare, depending on plan), storage constraints, and usage-based pricing that can reach $1,000-$6,000+/month for enterprise managed services make it less attractive than self-hosted infrastructure at scale.
For experiments, serverless is great. For production, Docker or Kubernetes give you better economics and flexibility.
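Back-of-the-envelope arithmetic makes the economics concrete. The per-vCPU and managed-service figures come from this article; the 16-vCPU cluster size is a hypothetical example:

```python
# Figures cited in this article: $7-20 per vCPU per month self-hosted,
# $1,000-$6,000+/month for enterprise managed serverless offerings.
# The 16-vCPU cluster is a hypothetical workload, not a benchmark.
vcpus = 16
self_low, self_high = vcpus * 7, vcpus * 20   # self-hosted monthly range
managed_low, managed_high = 1_000, 6_000      # managed-service monthly range

print(f"self-hosted: ${self_low}-${self_high}/month")
print(f"managed:     ${managed_low}-${managed_high}/month")
```

Even at the top of the self-hosted range, a modest cluster comes in well under the floor of enterprise managed pricing, which is why the economics tilt toward self-hosting as volume grows.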
Reality: Most Teams Use Both (Hybrid)
Here’s what we actually see in practice: most organizations use a hybrid architecture—some servers self-hosted, others proxied remotely, all managed through a single gateway.
Why? Because not all integrations are created equal.
Your internal HR system handling sensitive employee data has different requirements than a weather API providing public information. A hybrid architecture lets you optimize each integration independently.
Common Patterns
Security-Tiered: High-security systems (financial data, HR, internal databases) run self-hosted. Low-risk services (weather APIs, public data sources) are proxied remotely.
Gradual Migration: Start with remote hosting for speed. Migrate critical services to self-hosting as usage patterns and business value become clear.
Why Obot Excels at Hybrid
Obot’s MCP gateway provides a single control plane managing both self-hosted and remote servers with unified authentication, authorization, and audit logging.
From the user’s perspective, it’s seamless. They see a catalog of MCP servers and invoke tools—completely unaware that some requests hit your Kubernetes cluster while others proxy to external providers.
From your perspective, you get:
Consistent policy enforcement regardless of hosting location
Complete visibility into all MCP interactions
Flexibility to move servers between hosting models as needs evolve
Cost optimization by self-hosting only what matters
How to Choose Your Approach
Here’s a practical framework:
Start with remote hosting if:
You’re integrating primarily with SaaS platforms
You want to move fast and prove value quickly
Your team is small or lacks DevOps expertise
Move to self-hosting for:
Proprietary integrations you’re building internally
Systems handling sensitive or regulated data
High-volume integrations where economics favor self-hosting
Operations requiring low latency
Embrace hybrid as you mature:
Optimize each integration independently
Balance control against operational simplicity
Maintain flexibility to adapt as requirements evolve
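The framework above can be condensed into a few lines of code. This is a sketch of the article's heuristics, not an Obot feature; the inputs and ordering are deliberate simplifications:

```python
def recommend_hosting(sensitive_data: bool, latency_critical: bool,
                      high_volume: bool, saas_integration: bool,
                      has_devops: bool) -> str:
    """Encode the article's decision framework as a rough heuristic."""
    # Compliance, latency, or favorable at-scale economics push toward self-hosting.
    if sensitive_data or latency_critical or (high_volume and has_devops):
        return "self-hosted"
    # SaaS integrations and thin DevOps teams favor remote proxying.
    if saas_integration or not has_devops:
        return "remote"
    # Everything else: keep both options open behind one gateway.
    return "hybrid"
```

For example, an internal HR integration (`sensitive_data=True`) comes out self-hosted, while a Slack integration on a small team comes out remote; a real decision would of course weigh more factors than five booleans.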
Start Simple, Evolve Gradually
Most teams start with remote hosting for speed, then migrate critical services to self-hosting as usage patterns emerge. Monitor your MCP usage through Obot’s analytics, identify candidates for self-hosting based on traffic and business value, and re-evaluate quarterly.
Architecture isn’t a one-time decision—it evolves with your needs.
Getting Started with Obot
The first step isn’t choosing between self-hosted and remote—it’s centralizing management so you have visibility and control.
Try Obot locally with Docker:
docker run -d -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock ghcr.io/obot-platform/obot:latest
Deploy to Kubernetes for production: Visit the Obot documentation for Kubernetes deployment guides, architecture references, and best practices.
See it in action:
Try the live demo to experience the gateway firsthand
Join the Obot Discord community to discuss your architecture decisions with teams solving similar challenges
Wrapping Up
MCP adoption is happening in your organization whether IT knows about it or not. The urgency isn’t adopting MCP—it’s bringing existing usage under centralized management before security and compliance risks accumulate further.
Once you’ve established that control plane, the hosting decision becomes straightforward: self-host for control and performance, use remote hosting for simplicity and ecosystem access, and embrace hybrid as your needs mature.
The organizations succeeding with MCP aren’t the ones who moved fastest—they’re the ones who built the right foundation. Start with centralized management through a gateway like Obot, then optimize hosting decisions based on actual usage patterns.
As the Model Context Protocol ecosystem matures, having that foundation gives you the flexibility to evolve your architecture as requirements change and new patterns emerge.