When you’re running Claude Code, Gemini, and Codex in parallel, something uncomfortable becomes obvious fast. The AI isn’t the slow part. You are. Teams that need to manage multiple AI coding agents across parallel sessions quickly discover that the real friction isn’t model capability — it’s coordination.
That’s the insight behind a coding agent session manager that manages itself via its own MCP, posted to r/mcp and r/ClaudeAI in March 2026. The developer described the problem directly: “the biggest bottleneck is switching between sessions, deciding what to start next, advancing tasks when a phase finishes.” Not the models. Not the context windows. The human triaging work between them.
How to Manage Multiple AI Coding Agents Without Losing Your Mind
This is the pattern showing up across engineering teams right now. Developers assign different models to different phases of a workflow (Gemini for research, Claude for implementation, Codex for review) and then spend their time doing work that should be automated: deciding what's ready to hand off, advancing tasks between stages, keeping track of where everything stands. The cognitive overhead compounds as the number of parallel sessions grows.
The community response was telling. Developers are building their own orchestration layers because nothing purpose-built exists in their toolchain. The agtx project, described in the same post, tackles this with a terminal-native kanban board and an orchestrator agent that manages the board via its own MCP server. Add tasks to the backlog, press one key, and according to the developer, “you come back to PRs ready for merge.”
The underlying architecture is straightforward: an orchestrator instance talks to an MCP server over stdio, which reads and writes to a SQLite database, which drives a TUI the developer actually looks at. Each layer has a clear job. Using an AI agent session manager to coordinate other agents automatically is exactly what the next generation of developer tooling needs to support.
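The post doesn't publish agtx's internals, so the following is only a minimal sketch of that state layer: a SQLite table of tasks with a phase column, and an advance step that performs the transition a human would otherwise do by hand between sessions. The table name and phase names are assumptions for illustration.

```python
import sqlite3

# Illustrative phase pipeline; not agtx's actual schema.
PHASES = ["backlog", "research", "implement", "review", "done"]

def init_db(path=":memory:"):
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS tasks "
        "(id INTEGER PRIMARY KEY, title TEXT, phase TEXT)"
    )
    return db

def add_task(db, title):
    # New work always enters the backlog.
    db.execute("INSERT INTO tasks (title, phase) VALUES (?, 'backlog')", (title,))
    db.commit()

def advance(db, task_id):
    # Move a task to its next phase, clamping at the final one.
    # This is the hand-off an orchestrator automates via the MCP server.
    (phase,) = db.execute(
        "SELECT phase FROM tasks WHERE id = ?", (task_id,)
    ).fetchone()
    nxt = PHASES[min(PHASES.index(phase) + 1, len(PHASES) - 1)]
    db.execute("UPDATE tasks SET phase = ? WHERE id = ?", (nxt, task_id))
    db.commit()
    return nxt

db = init_db()
add_task(db, "Add retry logic to sync worker")
print(advance(db, 1))  # backlog -> research
```

The point of the sketch is the division of labor: the database is the single source of truth, the orchestrator mutates it through tool calls, and the TUI only reads it.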
The models are capable. The pipelines mostly work. The gap is coordination infrastructure, the layer that decides what runs when, tracks state across sessions, and advances work without a human in the loop at every transition.
Why Developers Are Building Their Own Orchestration Layers
When a developer invests real time building a terminal kanban board with its own MCP server, phase-specific model assignment, and a TOML-based plugin system, they’re not scratching an itch for fun. They’re documenting an absence. The toolchain didn’t give them what they needed, so they built it themselves. That’s how shadow infrastructure gets created, one custom solution at a time.
The plugin system lets teams snap in spec-driven frameworks like GSD, Spec-kit, OpenSpec, or BMAD through a single TOML file, or define custom workflows from scratch. That’s a non-trivial system to design and maintain. The developer built it anyway because the coordination layer didn’t exist in any off-the-shelf form.
What agtx reveals, concretely, is what it takes to manage multiple AI coding agents without that infrastructure: manual state tracking, human-driven phase transitions, and constant context-switching between sessions that should be running on their own. The project addresses each of those failure points directly, with automatic agent switching, orchestrated task delegation, and a persistent state layer that survives across sessions.
Self-built solutions like agtx solve the immediate problem for individual developers, but they also inherit everything that comes with custom infrastructure: maintenance burden, no shared security model, no audit trail, and a governance story that amounts to “trust whoever wrote this.”
You Don’t Need to Build This. Discobot Already Does It
Discobot is the direct answer to the gap that agtx was built to fill. It lets developers run, monitor, and manage AI coding agents across isolated sandboxed sessions simultaneously, without assembling a custom SQLite backend, MCP server, and TUI from components that weren’t designed to work together.
Manage Multiple AI Coding Agents Without the Maintenance Tax
The structural difference between Discobot and custom orchestration isn’t the feature list; it’s ownership. Custom solutions solve the immediate problem and immediately create a new one: you now own that infrastructure. Every dependency update, every edge case in session state, every time a new model’s CLI behavior changes slightly, that’s your problem to debug.
Discobot is maintained by Obot, which means it sits inside a broader platform that already handles the concerns individual open-source projects never quite get around to: security boundaries, audit trails, access controls. Perfecto’s analysis of MCP security frames the core value proposition clearly: “well-defined permissions and boundaries for AI-tool interaction” and interoperability across providers. A SQLite database managed by a solo developer provides none of that. An AI agent session manager backed by a platform that treats governance as a first-class concern provides all of it.
A solo open-source project can’t absorb the operational surface area that comes with enterprise use at scale. Discobot can, because the security and governance infrastructure is already there, maintained by a team whose entire focus is making agentic AI safe and manageable in production environments.
The Hidden Risk of DIY MCP Infrastructure
The governance problem with DIY orchestration tooling isn’t theoretical. It’s structural.
CIO.com’s reporting on MCP’s rise to executive agendas identifies the core mechanism precisely: MCP integrations can be created by anyone experimenting with AI tooling, expanding the attack surface beyond enterprise-approved systems to an ecosystem of community-built connectors that may never undergo security review. That’s not a warning about bad actors inside your organization. It’s a description of what happens when motivated developers do exactly what they’re supposed to do: solve problems and ship working code.
agtx is a good example of this pattern. The developer built something that works. But that SQLite database, that stdio transport layer, that custom MCP server: none of those components were evaluated by procurement. None of them have an audit trail. The access boundaries are whatever the developer decided to implement, and the security model lives in a README that may or may not reflect what the code does.
The New Stack’s coverage of MCP’s production readiness challenges framed the 2025-era MCP ecosystem as great for hackers but not yet palatable for the CISO. That gap closes when the tooling developers reach for is built with governance as a first-class concern rather than an afterthought.
This is where the Obot MCP Gateway addresses the problem that self-assembled orchestration stacks cannot. The challenge of coordinating multiple AI coding agents across teams isn’t just a developer ergonomics problem; it’s a visibility problem at the organizational level. Who is connecting what to which systems, under what permissions, with what audit trail? A custom orchestration layer built by one developer on one team answers none of those questions for the CISO managing the organization’s full MCP exposure.
Each bespoke session manager added to a team’s workflow is another integration point outside the approved stack, another connector that may never get a security review, another surface that expands organizational exposure without corresponding visibility. The architectural choice to build your own coordination layer feels like a local decision. At scale, it becomes an enterprise governance problem.
MCP Is Becoming Infrastructure. Treat It Like That
Red Hat doesn’t typically move early. When OpenShift AI 3 ships with native MCP support baked directly into the platform, alongside MCP servers for Ansible Automation Platform, Red Hat Enterprise Linux, and Red Hat Lightspeed, that’s an enterprise infrastructure signal, not a developer preview. Red Hat is treating MCP as a foundational layer, the same way it treated Kubernetes before Kubernetes was boring.
MCP Server Support Is Becoming Table Stakes
Predictions for MCP in 2026 point toward a near future where MCP server support is expected from SaaS products as a baseline capability, not a differentiator. OpenShift’s DevHub plug-in will surface available MCP servers directly inside developer workspaces. That’s not a convenience feature; that’s an interface convention becoming standard, the same shift that happened when REST APIs stopped being noteworthy and started being assumed.
According to analysis of MCP orchestration patterns, MCP servers enable structured permissions and real-time data access across complex workflows, making them the natural coordination layer for specialized agents operating in parallel. Security agents, DevOps agents, and coding agents can each operate within defined tool boundaries while communicating through a shared protocol, handling specialization without requiring every agent to carry full context about everything else.
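As a rough illustration of what "defined tool boundaries" means in practice, here is a deny-by-default authorization check of the kind a gateway enforces before forwarding a tool call. The agent names and tool scopes are invented for the example, not any product's real configuration.

```python
# Illustrative tool boundaries: each agent may only call tools
# inside its declared scope. All names here are assumptions.
ALLOWED_TOOLS = {
    "security-agent": {"scan_dependencies", "read_sbom"},
    "devops-agent":   {"deploy", "read_logs"},
    "coding-agent":   {"read_file", "write_file", "run_tests"},
}

def authorize(agent: str, tool: str) -> bool:
    # Deny by default: a call outside the agent's boundary is
    # rejected rather than silently forwarded to the backing system.
    if tool not in ALLOWED_TOOLS.get(agent, set()):
        raise PermissionError(f"{agent} may not call {tool}")
    return True

print(authorize("coding-agent", "run_tests"))  # True
```

The check is trivial; the hard part, which the surrounding sections argue for, is having one place where these boundaries are declared, enforced, and audited rather than one ad hoc copy per team.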
When you need to manage multiple AI coding agents across functional domains at enterprise scale, the question isn’t whether to have a control plane. The question is whether your control plane was designed for that responsibility or inherited it by accident. Centralized visibility into which agents are connecting to which systems, under what permissions, with what audit trail, has to be built into the foundation. Retrofitting it onto distributed, team-by-team orchestration stacks is the harder path, and organizations that choose it will spend more time on infrastructure than on the work that infrastructure is supposed to enable.
The OpenShift integration signals something the broader market is beginning to accept: MCP is infrastructure. And infrastructure gets governed, or it creates risk at the rate it gets adopted.
Intent Architects Need Real Infrastructure
The pattern visible in agtx, in the r/mcp thread, in Red Hat’s OpenShift integration, points in one direction. Developers building their own session managers are solving a real problem with real skill. They’re also building maintenance debt and security surface area that compounds quietly until it doesn’t.
Discobot and the Obot MCP Gateway exist precisely at this intersection: developer velocity and organizational control, treated as the same problem rather than competing priorities. The coordination layer is built. The governance layer is built. Neither requires a SQLite database you maintain yourself.
When your team needs to manage multiple AI coding agents across parallel workflows without inheriting the operational overhead of a hand-assembled stack, that infrastructure is already there.