We are excited to announce the release of Obot v0.16.0, a significant update focused on enterprise readiness and operational control. This release introduces API key authentication for programmatic access, fine-grained model access policies, enhanced Azure OpenAI support, and Kubernetes capacity monitoring. Together, these features give administrators the tools to solve real enterprise problems.
Obot now supports API keys for programmatic access to MCP servers. This enables machine-to-machine communication without requiring interactive browser-based OAuth flows—perfect for CI/CD pipelines, automation scripts, monitoring tools, and other integrations.
Each API key:
Is tied to a specific user account
Can be scoped to specific MCP servers or all servers
Supports optional expiration dates
Users can create and manage their own API keys from the profile menu, while administrators have full visibility into all keys across the organization from the User Management section.
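To make the workflow concrete, here is a minimal sketch of calling an MCP server from a script with an API key. The gateway URL, server path, and the assumption that the key is presented as a Bearer token are illustrative, not documented Obot specifics; MCP itself speaks JSON-RPC, which is what the payload below models.

```python
import json
import urllib.request

def build_mcp_request(url: str, api_key: str, method: str, params: dict) -> urllib.request.Request:
    """Build a JSON-RPC request to an MCP server authenticated with an API key."""
    payload = json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": method,
        "params": params,
    }).encode()
    return urllib.request.Request(
        url,
        data=payload,
        headers={
            "Content-Type": "application/json",
            # Assumed convention: the key is sent as a Bearer token.
            "Authorization": f"Bearer {api_key}",
        },
    )

# Hypothetical gateway URL and key created from the profile menu.
req = build_mcp_request(
    "https://obot.example.com/mcp/my-server",
    "obot-key-from-profile-menu",
    "tools/list",
    {},
)
# urllib.request.urlopen(req) would then send the request from a CI job
# or monitoring script with no interactive OAuth step.
```

Because the request is plain HTTPS with a static credential, the same pattern drops into CI/CD pipelines, cron jobs, or any HTTP client.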
Administrators now have fine-grained control over which language models users and groups can access in chat. Model Access Policies replace the previous “Allowed Models” and “Default Model” settings with a more flexible policy-based approach.
Each policy defines:
Who can use the models – individual users, authentication provider groups, or all Obot users
Which models they can access – specific models, default model aliases, or all available models
This enables organizations to limit access to higher-cost or more powerful models to specific teams while maintaining a standard experience for most users. Policies can reference “default model aliases” like “Language Model (Chat)” that automatically resolve to whichever model is currently configured as the default – so changing defaults doesn’t require updating every policy.
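The following sketch illustrates the policy idea described above; the field names and model identifiers are illustrative, not Obot's actual schema. The key point it demonstrates is alias expansion: a policy that grants "Language Model (Chat)" resolves to whatever the current default is, so swapping the default never requires editing policies.

```python
# Illustrative alias table: maps a default model alias to the model
# currently configured as the default (model name is hypothetical).
DEFAULT_ALIASES = {"Language Model (Chat)": "gpt-4o"}

# Illustrative policies: who can use which models.
policies = [
    {"subjects": ["group:ml-team"], "models": ["o1", "gpt-4o"]},
    {"subjects": ["*"], "models": ["Language Model (Chat)"]},
]

def resolve_models(subject: str) -> set[str]:
    """Union of models granted to a subject, expanding default aliases."""
    granted = set()
    for policy in policies:
        if "*" in policy["subjects"] or subject in policy["subjects"]:
            for model in policy["models"]:
                granted.add(DEFAULT_ALIASES.get(model, model))
    return granted
```

A member of `group:ml-team` gets the higher-cost models plus the default, while every other user resolves only to the current default model.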
For existing installations, your previous allowed models and default model settings are automatically migrated to a new policy, so no action is required.
Enhanced Azure OpenAI Support
The Azure OpenAI model provider now supports two authentication methods, giving enterprises flexibility in how they integrate with Azure:
API Key Authentication works with both Azure OpenAI resources and Microsoft Foundry deployments. Provide your API key, endpoint URL, and deployment names in the format name:type (e.g., gpt-5.2:reasoning-llm).
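A small sketch of parsing the name:type format, assuming multiple deployments are supplied as a comma-separated list (the separator and the second deployment name below are assumptions for illustration):

```python
def parse_deployments(spec: str) -> dict[str, str]:
    """Split comma-separated name:type entries into a {name: type} map."""
    result = {}
    for entry in spec.split(","):
        name, _, usage = entry.strip().partition(":")
        result[name] = usage
    return result
```

Note that `partition(":")` splits only on the first colon, so deployment names containing dots or dashes (like `gpt-5.2`) parse cleanly.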
Microsoft Entra Authentication uses a service principal approach for organizations that prefer identity-based access. Configure your Entra app’s Client ID, Client Secret, and Tenant ID along with your Azure OpenAI resource details. Unlike API key authentication, Entra credentials can automatically discover your deployments.
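Under the hood, service principal authentication is the standard Microsoft identity platform client-credentials flow: Obot exchanges the Entra app's credentials for an access token. The sketch below shows what that token request looks like (Obot handles this for you; the function is illustrative):

```python
import urllib.parse

# Standard Microsoft identity platform v2.0 token endpoint.
TOKEN_ENDPOINT = "https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"
# Scope used for Azure OpenAI / Cognitive Services access tokens.
SCOPE = "https://cognitiveservices.azure.com/.default"

def build_token_request(tenant_id: str, client_id: str, client_secret: str):
    """Return (url, form_body) for a client-credentials token request."""
    url = TOKEN_ENDPOINT.format(tenant=tenant_id)
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": SCOPE,
    })
    return url, body
```

The resulting bearer token is what also lets Entra-authenticated installations enumerate deployments on the Azure OpenAI resource, which is why automatic discovery works with this method but not with a bare API key.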
For Kubernetes deployments, the admin interface now includes capacity monitoring that aggregates requested CPU and memory across all MCP servers. This visibility helps administrators:
Monitor cluster utilization at a glance
Prevent resource overcommitment before it causes issues
Plan capacity for growing MCP server deployments
The dashboard appears in the Deployments & Connections section for administrators. When a ResourceQuota is configured in your cluster, Obot displays utilization against your quota limits; otherwise, it shows total resource requests across your MCP servers.
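The aggregation behind the dashboard can be sketched as summing the CPU and memory requests declared by each MCP server pod, using Kubernetes quantity conventions ("500m" for half a CPU core, "256Mi" for mebibytes). This is an illustration of the idea, not Obot's implementation:

```python
# Binary memory suffixes as defined by Kubernetes resource quantities.
MEMORY_UNITS = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}

def parse_cpu(q: str) -> float:
    """CPU quantity in cores: "500m" -> 0.5, "2" -> 2.0."""
    return int(q[:-1]) / 1000 if q.endswith("m") else float(q)

def parse_memory(q: str) -> int:
    """Memory quantity in bytes: "256Mi" -> 268435456."""
    for suffix, factor in MEMORY_UNITS.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)  # plain byte count

def total_requests(pods: list[dict]) -> tuple[float, int]:
    """Aggregate requested CPU (cores) and memory (bytes) across pods."""
    cpu = sum(parse_cpu(p["cpu"]) for p in pods)
    mem = sum(parse_memory(p["memory"]) for p in pods)
    return cpu, mem
```

Comparing these totals against a ResourceQuota (or the cluster's allocatable capacity) is what surfaces overcommitment before scheduling failures occur.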
Additional Improvements
This release includes over 80 merged pull requests with improvements across the platform:
Auditor role enhancements: Auditors now have read-only access to API keys, group role assignments, and capacity information