I was at MCP Dev Summit North America today, and one talk stood out above the rest.
Meghana Somasundara (Agentic AI Lead) and Rush Tehrani (Head of Engineering, Agentic AI Platform) from Uber took the keynote stage to share what a mature enterprise MCP strategy actually looks like in practice. No hype. Just hard-won lessons from one of the most complex engineering organizations in the world.
I’ll be honest — I got a little lightheaded listening to it. Because everything they’ve built at Uber is exactly what we’re building at Obot AI, but for everyone else.
The Scale That Makes Enterprise MCP Strategy Non-Optional
Uber isn’t a typical company dealing with AI integration. They’re dealing with it at a scale most organizations can’t fathom:
- 5,000+ engineers, with 90%+ monthly AI tool usage
- 10,000+ internal services — each a silo of trapped knowledge
- 1,500+ monthly active agents built by teams across the company
- 60,000+ agent executions per week
Without a standardization layer, this is chaos. Every new agent has to independently figure out how to talk to every service. Teams reinvent the wheel constantly. You end up with thousands of bespoke, non-reusable integrations – and no way to govern, secure, or maintain any of it.
That’s the problem MCP (Model Context Protocol) solves. MCP is an open standard that lets AI agents connect to tools and services through a consistent interface. At Uber’s scale, having a coherent enterprise MCP strategy isn’t optional — it’s the only thing standing between a functional AI platform and complete chaos.
The Challenges They Had to Solve
Meghana laid out three categories of problems that get worse the bigger you get:
- MCP Life Cycle — No standard way to develop and deploy MCP servers. Teams reinventing the wheel. Version control at scale becomes a nightmare.
- Security & Privacy — No visibility into all MCP servers and their call patterns. Unauthorized access risks. Third-party server vulnerabilities you didn’t even know existed.
- Discovery & Quality — Both humans and AI agents need to find high-quality, vetted MCPs. A trusted ecosystem doesn’t build itself.
These aren’t small problems. They’re the exact problems that stop enterprises from shipping AI at scale. If you’re thinking about why MCP governance can’t wait, Uber’s experience is a useful data point.
Their Answer: MCP Gateway and Registry
Uber built a purpose-built MCP Gateway and Registry: a central system that sits between their services and their agents and handles everything.
On the lifecycle side: A config-driven approach automatically translates Uber service endpoints into MCP tools. Service owners choose which tools to enable and fine-tune descriptions. Changes are committed as code. There’s a tiered gating system for first-party vs. third-party MCPs, and a central registry serves as the source of truth for versioning and discovery.
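To make the config-driven translation concrete, here’s a rough sketch of what mapping a service endpoint to an MCP tool definition could look like. The field names and structure are my illustration, not Uber’s actual schema — only the general shape (endpoint spec in, MCP tool definition with a JSON Schema out) follows what they described.

```python
# Hypothetical sketch: turning an internal service endpoint definition
# into an MCP tool definition. Field names are illustrative, not Uber's.

def endpoint_to_mcp_tool(endpoint: dict) -> dict:
    """Map a service endpoint spec to an MCP tool definition."""
    return {
        "name": f"{endpoint['service']}_{endpoint['method']}",
        "description": endpoint.get("description", ""),
        "inputSchema": {
            "type": "object",
            "properties": {
                p["name"]: {"type": p["type"], "description": p.get("doc", "")}
                for p in endpoint.get("params", [])
            },
            "required": [
                p["name"] for p in endpoint.get("params", []) if p.get("required")
            ],
        },
    }

endpoint = {
    "service": "trips",
    "method": "get_trip_status",
    "description": "Look up the status of a trip by ID.",
    "params": [{"name": "trip_id", "type": "string", "required": True}],
}
tool = endpoint_to_mcp_tool(endpoint)
print(tool["name"])                     # trips_get_trip_status
print(tool["inputSchema"]["required"])  # ['trip_id']
```

The point of the pattern: service owners edit a declarative spec and commit it as code, and the tool definitions agents see are generated from it — no hand-written glue per agent.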
On the security side: Auto-auth is on by default for sensitive data. A PII Redactor Service handles automatic data protection. Continuous code scanning runs in the background. Full observability – logging, metrics, tracing – comes out of the box. Guardrails block mutable endpoints and rate-limit all write operations.
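A PII redactor pass like the one they mentioned can be pictured as a simple pattern-substitution layer in front of tool responses. This is a toy sketch — a production service like Uber’s would use far more sophisticated detection than two regexes — but it shows the shape of the idea.

```python
import re

# Illustrative sketch of a PII-redaction pass. Real systems use much
# richer detection (NER models, structured-field awareness) than this.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII spans with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Rider jane@example.com called from 415-555-0182"))
# Rider [EMAIL] called from [PHONE]
```

Because the redactor sits in the gateway, every agent gets this protection automatically — no team has to remember to add it.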
The architecture is worth studying. An IDL Crawler Workflow crawls Uber’s internal service repository and feeds an MCP Definition Generator, which stores definitions the Gateway then serves. Users submit tool changes through a UI, triggering a security scan before anything gets deployed. Clean. Auditable. Scalable.
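Stripped to its essentials, that crawl → generate → scan → serve flow is a gated pipeline. The sketch below is my reduction of it — every function name is hypothetical, and the real system is a set of internal services rather than one script — but the control flow matches what was described: nothing reaches the registry without passing the security gate.

```python
# Hypothetical reduction of the IDL Crawler -> Definition Generator ->
# security scan -> registry flow. All names are illustrative.

def crawl_idl(repo: str) -> list[dict]:
    # Stub: a real crawler parses the service's interface definitions.
    return [{"service": repo, "endpoint": "status"}]

def generate_definition(idl: dict) -> dict:
    return {"name": f"{idl['service']}_{idl['endpoint']}", "source": idl["service"]}

def security_scan(definition: dict) -> bool:
    # Stub gate: a real scan inspects code, dependencies, call patterns.
    return not definition["source"].startswith("untrusted")

def run_pipeline(repos: list[str]) -> dict:
    registry: dict = {}
    for repo in repos:
        for idl in crawl_idl(repo):
            d = generate_definition(idl)
            if security_scan(d):         # nothing unscanned reaches the Gateway
                registry[d["name"]] = d  # the registry is the source of truth
    return registry

registry = run_pipeline(["trips", "eats", "untrusted_vendor"])
print(sorted(registry))  # ['eats_status', 'trips_status']
```

The auditable part comes from the same structure: every definition in the registry traces back to a crawled source and a scan result.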
Want to see how Obot’s MCP Gateway compares to what Uber built?
Get expert guidance on deploying Obot as your enterprise MCP Gateway — lifecycle management, security, and discovery, without the multi-year build.
Three Ways Agents Consume MCPs at Uber
Rush walked through how MCP gets used across Uber’s three agent surfaces:
- Uber Agent Builder — A no-code platform for operational agents. Teams search for MCP servers by mentioning them with @, select specific tools to scope what the agent can use, and override parameters so they aren’t LLM-populated. Enterprise-grade agent building without needing engineering resources.
- Uber Agent SDK — A code-first approach for teams that need more control. Config-driven, with the same tool selection and parameter override capabilities as the no-code builder.
- Coding Agents — Claude Code and Cursor, integrated via a single CLI command (`aifx mcp add code-mcp`) that installs local and remote MCPs the coding agent can access.
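The tool-scoping and parameter-override pattern shared by the Agent Builder and the SDK is worth pausing on. Here’s one way to sketch it — the API names are mine, not Uber’s SDK: an agent only sees a whitelisted subset of a server’s tools, and pinned parameters are injected by the platform rather than populated by the LLM.

```python
# Hypothetical sketch of tool scoping + parameter overrides. The agent
# sees only allowed tools; pinned params are platform-injected, never
# left for the LLM to fill in.

def scope_tools(server_tools: dict, allowed: set, overrides: dict) -> dict:
    scoped = {}
    for name, call in server_tools.items():
        if name not in allowed:
            continue  # tool is hidden from the agent entirely
        pinned = overrides.get(name, {})
        # Pinned values win over anything the model supplies.
        scoped[name] = lambda call=call, pinned=pinned, **kw: call(**{**kw, **pinned})
    return scoped

server_tools = {
    "get_trip": lambda trip_id, region: f"trip {trip_id} in {region}",
    "delete_trip": lambda trip_id: f"deleted {trip_id}",
}
agent_tools = scope_tools(
    server_tools,
    allowed={"get_trip"},                           # delete_trip never exposed
    overrides={"get_trip": {"region": "us-west"}},  # not LLM-populated
)
print(agent_tools["get_trip"](trip_id="T123"))  # trip T123 in us-west
print("delete_trip" in agent_tools)             # False
```

Scoping and overrides are what make "no-code" safe: the blast radius of any one agent is decided by its builder, not by the model at runtime.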
One standardization layer. Three consumption surfaces. Every agent, every team, every use case covered. This is what mature MCP management looks like when it’s working.
The Part That Really Got My Attention: Skills Registry
But Uber isn’t stopping at infrastructure. Their roadmap slide was where I started paying very close attention.
They’re building a Skills Registry — a registry of shareable Skills that combine multiple MCPs to accomplish specific tasks. Paired with Skills Evaluations (output quality scoring, correctness of skill invocation, A/B testing), this is the next layer of intelligence on top of MCP infrastructure.
They’re also extending the MCP Registry to include evaluation metrics, server SLAs, and dynamic discovery on demand — plus systematic MCP Evaluations to improve tool descriptions over time.
This is where the whole industry is heading. Not just “here are your tools,” but “here are your tools, here’s how well they work, and here’s a curated library of proven workflows built on top of them.”
Obot already has Skills support and a Skills Registry is shipping soon. If you don’t want to wait for Uber’s roadmap to materialize, you won’t have to.
Why This Matters Beyond Uber: Your Enterprise MCP Strategy
Uber had to build all of this themselves. They had the engineering resources to do it, and it still took serious investment to get right.
Most companies don’t have that. They have AI ambitions, a growing pile of MCP servers, and no clear path to governing any of it. The gap between “we’re experimenting with MCP” and “we have a real enterprise MCP strategy” is exactly where organizations get stuck.
The MCP Maturity Model is a useful framework for understanding where you are and what the path forward looks like. And Obot is built to close that gap — an open-source MCP Gateway that brings Uber-level infrastructure to any organization ready to build seriously with AI agents.
Whether you’re at Stage 1 (shadow adoption) or actively scaling to hundreds of agents, the infrastructure patterns Uber proved out are available to you today — without a multi-year build. If you want to go deeper on the architecture, the MCP Enterprise Architecture Reference Guide covers the full stack.
What Uber proved today is that this infrastructure isn’t a nice-to-have. It’s what makes AI usable at scale. Full stop.
Ready to build your enterprise MCP strategy?
Obot is the open-source MCP Gateway that gives your organization Uber-level infrastructure — without the engineering headcount. Try it free on GitHub or schedule a demo to see it in action.