Earlier this April, David Soria Parra, one of MCP’s co-creators, gave the opening keynote at the MCP Dev Summit in New York. It was the first edition of the summit under the Linux Foundation’s Agentic AI Foundation, following Obot’s donation of the event late last year. A fitting venue for a talk about MCP becoming community-led infrastructure.
David opened with a number: 110 million SDK downloads per month. React took three years to reach that. MCP got there in sixteen months. The point wasn’t to brag. It was that the ecosystem had been waiting for a standard, and developers piled in the moment one existed. The alternative was building the same integration sixteen times for sixteen proprietary APIs.
What grew up in those sixteen months is really a community: server builders, client maintainers, framework authors, enterprise operators. The roadmap David laid out reads like that community’s working agenda for the year ahead.
Watch the full keynote here 👇:
👉 Want to see how teams are operationalizing MCP beyond the keynote? Try Obot.
The Real Story Is Behind the Firewall
Spend a few hours on Twitter and you’d think the MCP ecosystem is mostly Notion integrations, Blender demos, and the occasional 3D printer. Those are fun, but they’re not where the volume is.
David’s most pointed observation: “behind every corporate firewall, we’re quietly wiring MCPs” into Salesforce, Jira, Confluence, Snowflake, internal wikis, HR systems. Even inside the companies most active in building MCP, the most popular internal servers aren’t the public reference implementations. They’re the ones connected to the company knowledge base and Slack — the ones helping knowledge workers do their jobs.
This deployment surface doesn’t show up on Hacker News. It’s also where the governance problem lives. The protocol grew up the moment it stopped being about file system servers and started being about Salesforce CRM.
Why 2026 Is Different
David framed it cleanly. 2025 answered the question of whether something like MCP was needed. The answer is yes. 2026 is about whether the protocol is ready to support production agentic systems at enterprise scale.
“Does this work” and “can I run this with audit logs, identity integration, and a CISO who’ll sign off” are very different questions. Here’s what’s on the roadmap that matters for the second one.
Cross-App Access: The OAuth Endgame
If you’ve deployed MCP servers in a real organization, you know OAuth is where the friction lives. Each server has its own auth flow. Users get prompted constantly. Tokens drift. Every new server multiplies the problem.
Cross-app access, which David teased for the June spec revision, makes most of this disappear from the user’s view. The user is already logged in to their identity provider. When they connect to an MCP server, the server gets the right token issued without an OAuth screen. Authentication moves to the identity layer, where it belongs.
For enterprises, this is the unlock. It’s the difference between MCP being something users notice every time they connect a tool and MCP being invisible infrastructure.
It doesn’t eliminate the need for a control plane. Token brokering, policy enforcement, audit logging, and which user can connect to which server are still gateway problems. But cross-app access removes the worst of the user-facing pain that’s been blocking enterprise adoption.
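Cross-app access isn't in the spec yet, but the shape David described resembles OAuth 2.0 Token Exchange (RFC 8693): the client presents the token the user already holds from the identity provider and receives one scoped to a specific MCP server, with no new consent screen. A minimal sketch of what building that request could look like — the endpoint, audience URI, and token values here are illustrative, not from any published MCP spec:

```python
from urllib.parse import urlencode

# Hypothetical identity-provider endpoint -- illustrative only.
TOKEN_ENDPOINT = "https://idp.example.com/oauth2/token"

def build_token_exchange_request(subject_token: str, audience: str) -> dict:
    """Build an RFC 8693 token-exchange form body: trade the token the
    user already holds for one scoped to a specific MCP server,
    with no additional OAuth consent screen."""
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": subject_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        # The MCP server this token should be valid for:
        "audience": audience,
    }

body = build_token_exchange_request("eyJ...user-idp-token", "https://crm.internal/mcp")
# POST as application/x-www-form-urlencoded to TOKEN_ENDPOINT, e.g.
# requests.post(TOKEN_ENDPOINT, data=body)
print(urlencode(body)[:40])
```

The user never sees any of this; the exchange happens between the client, the identity provider, and the server, which is exactly why the OAuth screens disappear.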
Agents That Don’t Sit Still
Two related items on the roadmap matter for how enterprise AI evolves this year.
First, a new task primitive for long-running, autonomous work. Today’s MCP servers mostly handle short synchronous calls. As agents start running multi-hour workflows on their own, the protocol needs primitives that match.
Second, triggers — webhooks for MCP. Servers will be able to proactively notify clients of new data or new work, inverting the current client-initiated pattern.
Both have governance implications that aren’t getting enough airtime. An agent running a thirty-minute task on stale credentials is a control plane problem. A server pushing events to clients is a new authentication and rate-limiting question. As the protocol moves toward more autonomy and more server-initiated communication, the infrastructure between agents and servers becomes more important.
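Triggers aren't specified yet either, but any server-initiated push raises the authentication question above: how does a client know an incoming event actually came from its server? A common answer in existing webhook systems (GitHub, Slack) is a shared-secret HMAC over the payload. A sketch of that pattern using only the Python standard library — the secret handling and payload are illustrative, not an MCP wire format:

```python
import hmac
import hashlib

def sign_event(secret: bytes, payload: bytes) -> str:
    """Signature a server would attach to a pushed event."""
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_event(secret: bytes, payload: bytes, signature: str) -> bool:
    """Client-side check before acting on a server-initiated event.
    compare_digest avoids leaking the match via timing."""
    expected = sign_event(secret, payload)
    return hmac.compare_digest(expected, signature)

secret = b"shared-secret-from-registration"  # illustrative
payload = b'{"event": "new_invoice", "id": 42}'
sig = sign_event(secret, payload)

assert verify_event(secret, payload, sig)                 # genuine event
assert not verify_event(secret, b'{"event": "x"}', sig)   # tampered payload
```

Rate limiting and replay protection (timestamps, nonces) sit on top of this, and they're exactly the kind of thing a gateway between agents and servers is positioned to enforce once.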
Skills Over MCP: Servers That Teach
David confirmed skills over MCP is shipping as an extension in the next few weeks. The idea: an MCP server can bundle not just tools but instructions for how to use them — guidance the agent on the other end can pick up and apply.
This changes what an MCP server is. Today it exposes capabilities and trusts the client to figure out how to use them well. Tomorrow it can ship its own usage manual. For large servers with many tools, this solves a real problem. How do you tell an agent that these three tools always go together, or that this path through the API only makes sense for invoice reconciliation?
For enterprises building internal MCP servers, skills are going to be load-bearing. Your finance system’s MCP server can carry the institutional knowledge of how to use it correctly, instead of leaving it on a Confluence page nobody reads.
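The extension hasn't shipped, so the format below is invented for illustration — but it shows the kind of thing a server could bundle: a named skill that groups the tools that "always go together" with usage guidance an agent reads before calling them.

```python
# Hypothetical skill bundle -- field names are illustrative, not the
# actual "skills over MCP" extension format, which is still unreleased.
invoice_reconciliation_skill = {
    "name": "invoice_reconciliation",
    "description": "Match incoming invoices against purchase orders.",
    "instructions": (
        "Always call search_purchase_orders before create_credit_memo. "
        "Use fetch_invoice only for invoices flagged 'pending_review'."
    ),
    # The three tools this skill says always go together:
    "tools": ["fetch_invoice", "search_purchase_orders", "create_credit_memo"],
}

def tools_for_skill(skill: dict) -> list[str]:
    """Agent-side helper: which tools does this skill cover?"""
    return skill["tools"]

assert "search_purchase_orders" in tools_for_skill(invoice_reconciliation_skill)
```

The point is where the knowledge lives: the guidance ships with the server, versioned alongside the tools it describes, instead of sitting in a wiki page that drifts out of date.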
The Context Bloat Criticism Is Misplaced
One part of the talk pushed back on a common piece of MCP criticism worth amplifying.
The complaint: MCP servers expose dozens or hundreds of tools, all of which get loaded into the model’s context window, which fills up and degrades performance. Therefore MCP is bloated.
David’s response: this is a client implementation problem, not a protocol problem. Progressive discovery — loading tools only when they’re needed, via tool search — already solves it. Claude Code’s earlier implementation took up roughly a fifth of a 200K-token context window with MCP tool definitions. The current version defers all of it and loads on demand. Same protocol, very different context economics.
If a client you’re evaluating dumps every tool into context, that’s the client. Ask vendors how they handle progressive discovery. Ask whether your gateway can scope tool sets to specific agents or roles, so the client never sees tools the user wasn’t going to use. This gets solved at the infrastructure layer.
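Progressive discovery is easy to sketch: instead of injecting every tool's full JSON schema into context up front, the client keeps a lightweight name-and-description index and resolves full definitions only when a search hits. The class and catalog below are illustrative, not the MCP SDK's actual API:

```python
class ProgressiveToolIndex:
    """Keep only one-line summaries in context; fetch full schemas on demand."""

    def __init__(self, summaries: dict[str, str], fetch_schema):
        self.summaries = summaries          # tool name -> short description
        self.fetch_schema = fetch_schema    # callable: name -> full definition
        self.loaded: dict[str, dict] = {}   # schemas pulled into context so far

    def search(self, query: str) -> list[str]:
        """Cheap keyword match over summaries -- this is all the model
        sees up front, instead of hundreds of full tool definitions."""
        q = query.lower()
        return [n for n, d in self.summaries.items()
                if q in n.lower() or q in d.lower()]

    def load(self, name: str) -> dict:
        """Pull a full schema into context only when the tool is needed."""
        if name not in self.loaded:
            self.loaded[name] = self.fetch_schema(name)
        return self.loaded[name]

# Illustrative catalog: three tools summarized, none loaded until requested.
schemas = {
    "jira_create_issue": {"params": {"project": "str", "summary": "str"}},
    "jira_search":       {"params": {"jql": "str"}},
    "sf_get_account":    {"params": {"account_id": "str"}},
}
index = ProgressiveToolIndex(
    {"jira_create_issue": "Create a Jira issue",
     "jira_search":       "Search Jira with JQL",
     "sf_get_account":    "Fetch a Salesforce account"},
    fetch_schema=schemas.__getitem__,
)

hits = index.search("jira")    # only the two Jira tools match
index.load(hits[0])
assert len(index.loaded) == 1  # one full schema in context, not three
```

Same protocol on the wire; the context economics are entirely a client design choice, which is David's point.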
The Best of MCP Is Still Ahead
MCP is now the default integration protocol for agentic systems. OpenAI, LangChain, and Pydantic AI all pull it in as a dependency. Every major model provider speaks it. The protocol lives in neutral, community-led governance under the Agentic AI Foundation, the same foundation Obot helped establish through the MCP Dev Summit donation.
What makes David’s roadmap optimistic isn’t the feature list. It’s the posture behind it. The new transport spec exists because community members raised concerns and were heard. SDK v2 is shipping because the maintainers openly admitted the current shapes weren’t good enough. The talk closed with a call for more feedback and more criticism. That’s how protocols stay healthy.
Adoption is being decided by your developers, your vendors, and your AI tools, whether or not anyone has scheduled a meeting about it. The real question is whether you adopt it with a control plane or without one.
Without one: OAuth tokens scattered across personal accounts, MCP servers wired into production with no audit trail, sprawl that compounds faster than any team can manage. With one: the same MCP ecosystem your developers want, governed the way your security team needs.
That’s the work for 2026. The protocol is ready. The roadmap is ahead of where most enterprises are. The gap is governance, and that’s what an MCP gateway is for.