What Are Model Context Protocol (MCP) Tools?
MCP tools are functions exposed by a Model Context Protocol (MCP) server that allow AI models (like Large Language Models) to interact with external systems, perform actions, and access data. These tools enable AI agents to do things like query databases, call APIs, write files, or trigger workflows by calling server-side functions with specific commands.
What MCP tools do:
- Enable external interaction: Tools bridge the gap between an AI model and the outside world, allowing it to act beyond just processing text.
- Standardize communication: MCP provides a standardized way for AI agents to access various tools and data sources without needing custom integrations for each one.
- Allow actions and side effects: They let AI agents perform actions in the real world, such as creating a file or sending a request to another service.
Tools in MCP are not limited to traditional APIs or cloud services. They can also represent functions within an application, device control endpoints, or workflows that chain together multiple operations. By abstracting these functionalities as MCP tools, developers allow models to interact with the external world in a controlled and auditable manner, while keeping responsibilities, permissions, and data flows transparent.
What MCP Tools Do
Enable External Interaction
MCP tools provide a structured conduit for AI models to interact with the wider digital ecosystem. Models can make requests to external systems or data sources, extending their utility beyond language processing. For example, a model might use an MCP tool to check a user’s calendar, retrieve market prices, or trigger a workflow in a business application using standardized protocol messages and well-described tool capabilities.
Through this mechanism, models transition from being isolated conversational agents to active participants in larger system workflows. Instead of manually copying and pasting information, users benefit from models that can directly gather data, automate tasks, or take contextual actions based on conversation or user intent.
Standardize Communication
Standardization is a core goal of the MCP design. By defining uniform message formats, parameter structures, and response types, MCP eliminates the risks and confusion associated with custom tool integrations. Models can interact with any compliant tool using predictable protocols, which reduces errors and simplifies debugging for developers. Standardization also supports tool discovery, dynamic capability negotiation, and upgrades.
Because all tools adhere to a shared specification, validation and monitoring become much more manageable. Service providers can rely on type-safe communication, consistent error reporting, and accountability in production environments. Standardizing the way tools talk to models lays a solid foundation for security reviews, audit trails, and regulatory compliance.
Allow Actions and Side Effects
MCP tools are not limited to passive data retrieval; they can trigger real actions and produce side effects in external systems. Models can initiate state changes, update records, start processes, or send notifications, expanding their role from information providers to true autonomous agents. This ability is carefully managed through explicit tool definitions and capability declarations.
With clear specification of side effects, developers and users maintain trust in the system. Auditable logs of tool invocations, defined input/output types, and error handling mechanisms make it possible to track exactly what the model is doing at any given time. This is especially critical when tools have significant privileges, such as making financial transactions or modifying user accounts.
MCP Tools Definition and Implementation with Code Examples
Let’s see how to define and implement tools in an MCP environment. Instructions and code examples are adapted from the official MCP specification.
Tool Definition and Capabilities
In MCP, each tool is defined using a structured schema that describes its identity, purpose, and how it should be invoked. This definition allows clients (and the LLMs they host) to discover what the tool does, what inputs it accepts, and how to use it safely.
Here’s a basic example of a tool definition in JSON:
{
  "name": "send_email",
  "description": "Sends an email to a specified address.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "recipient": { "type": "string" },
      "subject": { "type": "string" },
      "body": { "type": "string" }
    },
    "required": ["recipient", "subject", "body"]
  }
}
This definition exposes a tool named send_email. The inputSchema uses JSON Schema to define what parameters are expected. The model must supply a recipient, subject, and body when invoking this tool.
Each tool is uniquely identified by its name and may optionally include a description to help users and models understand its behavior. This self-description ensures tools can be used dynamically and with minimal hardcoding.
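To make the schema concrete, here is a minimal, illustrative Python sketch of server-side argument validation against the send_email inputSchema above. The checker is hand-rolled and deliberately incomplete; a production server would typically use a full JSON Schema validator (for example, the third-party jsonschema package). The function name check_arguments is hypothetical.

```python
# Hypothetical tool definition matching the send_email example above.
SEND_EMAIL_TOOL = {
    "name": "send_email",
    "description": "Sends an email to a specified address.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "recipient": {"type": "string"},
            "subject": {"type": "string"},
            "body": {"type": "string"},
        },
        "required": ["recipient", "subject", "body"],
    },
}

def check_arguments(tool: dict, arguments: dict) -> list[str]:
    """Return a list of validation errors (empty if the arguments are valid).

    Minimal sketch: checks only required fields and a few JSON Schema types.
    A real server would use a complete JSON Schema validator.
    """
    schema = tool["inputSchema"]
    errors = []
    for field in schema.get("required", []):
        if field not in arguments:
            errors.append(f"missing required field: {field}")
    type_map = {"string": str, "number": (int, float), "boolean": bool, "object": dict}
    for field, spec in schema.get("properties", {}).items():
        if field in arguments:
            expected = type_map.get(spec.get("type"))
            if expected and not isinstance(arguments[field], expected):
                errors.append(f"field {field!r} should be {spec['type']}")
    return errors

print(check_arguments(SEND_EMAIL_TOOL, {"recipient": "a@b.com", "subject": "Hi"}))
# ['missing required field: body']
```

Validating before dispatch lets the server reject malformed calls with a protocol error instead of letting a half-valid invocation reach the tool itself.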
In addition to individual tool definitions, servers that support tools must declare the tools capability. This is advertised during capability negotiation at initialization so clients know that tool invocation is supported.
{
  "capabilities": {
    "tools": {
      "listChanged": true
    }
  }
}
The listChanged flag indicates whether the server will notify the client if the set of available tools changes. This is useful for dynamic environments where tools may be registered or deregistered during a session.
Protocol Messages
The Model Context Protocol defines standardized JSON-RPC messages for discovering, invoking, and updating tools. These messages allow AI clients to interact with tools consistently and predictably.
1. Listing tools
To retrieve the list of available tools, clients send a tools/list request. This supports pagination using an optional cursor.
Request:
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list",
  "params": {
    "cursor": "optional-cursor-value"
  }
}
Response:
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "get_weather",
        "title": "Weather Information Provider",
        "description": "Get current weather information for a location",
        "inputSchema": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "City name or zip code"
            }
          },
          "required": ["location"]
        }
      }
    ],
    "nextCursor": "next-page-cursor"
  }
}
The response includes an array of tools, each with a name, optional title, description, and an inputSchema specifying required parameters. If there are more tools to list, nextCursor is provided for pagination.
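The cursor-following loop can be sketched from the client side as follows. The send_request function and the two-page PAGES fixture are stand-ins for a real JSON-RPC transport; only the pagination logic reflects the protocol described above.

```python
# Fake two-page server response, keyed by cursor (None = first page).
PAGES = {
    None: {"tools": [{"name": "get_weather"}], "nextCursor": "page-2"},
    "page-2": {"tools": [{"name": "send_email"}]},  # no nextCursor: last page
}

def send_request(method: str, params: dict) -> dict:
    """Stand-in for the client's JSON-RPC transport layer."""
    assert method == "tools/list"
    return PAGES[params.get("cursor")]

def list_all_tools() -> list[dict]:
    """Keep requesting tools/list, following nextCursor until it is absent."""
    tools, cursor = [], None
    while True:
        params = {} if cursor is None else {"cursor": cursor}
        result = send_request("tools/list", params)
        tools.extend(result["tools"])
        cursor = result.get("nextCursor")
        if cursor is None:
            return tools

print([t["name"] for t in list_all_tools()])  # ['get_weather', 'send_email']
```

The absence of nextCursor, rather than an empty tools array, is what signals the final page.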
2. Calling tools
To invoke a tool, the client sends a tools/call request specifying the tool name and its input arguments.
Request:
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": {
      "location": "New York"
    }
  }
}
Response:
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Current weather in New York:\nTemperature: 72°F\nConditions: Partly cloudy"
      }
    ],
    "isError": false
  }
}
The server returns structured content, which may include text, media, or other data types depending on the tool. The isError flag indicates whether the call was successful.
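Server-side dispatch for tools/call might look like the following sketch. The get_weather handler and the TOOL_HANDLERS registry are illustrative stand-ins (a real tool would call an external weather service); the response shapes mirror the JSON examples above.

```python
def get_weather(arguments: dict) -> str:
    # Stand-in handler returning canned data instead of calling a real API.
    return f"Current weather in {arguments['location']}: 72°F, partly cloudy"

TOOL_HANDLERS = {"get_weather": get_weather}

def handle_tools_call(request: dict) -> dict:
    params = request["params"]
    handler = TOOL_HANDLERS.get(params["name"])
    if handler is None:
        # Unknown tool: reported as a JSON-RPC protocol error.
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32602,
                          "message": f"Unknown tool: {params['name']}"}}
    try:
        text = handler(params.get("arguments", {}))
        return {"jsonrpc": "2.0", "id": request["id"],
                "result": {"content": [{"type": "text", "text": text}],
                           "isError": False}}
    except Exception as exc:
        # Tool ran but failed: reported inside result via isError, not as a JSON-RPC error.
        return {"jsonrpc": "2.0", "id": request["id"],
                "result": {"content": [{"type": "text", "text": str(exc)}],
                           "isError": True}}

resp = handle_tools_call({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
                          "params": {"name": "get_weather",
                                     "arguments": {"location": "New York"}}})
print(resp["result"]["content"][0]["text"])
```

Note how the two failure paths diverge: a missing tool never executes and yields an error object, while a handler exception still produces a well-formed result.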
3. List changed notification
If the server has declared support for dynamic tool updates using the listChanged capability, it should notify clients when the tool list changes.
Notification:
{
  "jsonrpc": "2.0",
  "method": "notifications/tools/list_changed"
}
This notification prompts the client to re-fetch the tool list, ensuring access to the most up-to-date set of capabilities.
Data Types
MCP defines structured data types for both tool definitions and tool results. These types ensure that clients and models can interpret, validate, and act on data in a consistent way.
Tool definition
A tool is described with the following fields:
{
  "name": "get_weather_data",
  "title": "Weather Data Retriever",
  "description": "Get current weather data for a location",
  "inputSchema": { ... },
  "outputSchema": { ... },
  "annotations": { ... }
}
- name: Unique identifier for the tool (required).
- title: Optional human-readable name for display purposes.
- description: Text description of the tool’s function.
- inputSchema: A JSON Schema defining required input parameters.
- outputSchema: (Optional) JSON Schema specifying the structure of the output.
- annotations: (Optional) Metadata about the tool’s behavior. Clients must treat these as untrusted unless they come from a verified source.
Tool result
Results from tools may include both unstructured and structured outputs.
Unstructured content
Returned in the content field as an array. Each item includes a type and payload.
Text example:
{
  "type": "text",
  "text": "Tool result text"
}
Image example:
{
  "type": "image",
  "data": "base64-encoded-data",
  "mimeType": "image/png",
  "annotations": {
    "audience": ["user"],
    "priority": 0.9
  }
}
Audio example:
{
  "type": "audio",
  "data": "base64-encoded-audio-data",
  "mimeType": "audio/wav"
}
Resource link example:
{
  "type": "resource_link",
  "uri": "file:///project/src/main.rs",
  "name": "main.rs",
  "description": "Primary application entry point",
  "mimeType": "text/x-rust",
  "annotations": {
    "audience": ["assistant"],
    "priority": 0.9
  }
}
Embedded resource example:
{
  "type": "resource",
  "resource": {
    "uri": "file:///project/src/main.rs",
    "mimeType": "text/x-rust",
    "text": "fn main() {\n println!(\"Hello world!\");\n}",
    "annotations": {
      "audience": ["user", "assistant"],
      "priority": 0.7,
      "lastModified": "2025-10-12T11:20:00Z"
    }
  }
}
Each content type supports annotations such as audience, priority, and timestamps for downstream handling.
Structured content
Returned in the structuredContent field as a JSON object:
"structuredContent": {
  "temperature": 22.5,
  "conditions": "Partly cloudy",
  "humidity": 65
}
Tools that provide structured content should also return the same data as a text block for backward compatibility.
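A helper that emits both forms might look like this sketch. The make_weather_result function and its fields are hypothetical, but the pairing of a structuredContent object with an equivalent serialized text block follows the backward-compatibility recommendation above.

```python
import json

def make_weather_result(temperature: float, conditions: str, humidity: int) -> dict:
    """Build a tool result carrying the same data in structured and text form."""
    structured = {"temperature": temperature, "conditions": conditions,
                  "humidity": humidity}
    return {
        # Text fallback for clients that do not understand structuredContent.
        "content": [{"type": "text", "text": json.dumps(structured)}],
        "structuredContent": structured,
        "isError": False,
    }

result = make_weather_result(22.5, "Partly cloudy", 65)
# The text block round-trips to exactly the structured payload.
assert json.loads(result["content"][0]["text"]) == result["structuredContent"]
```

Newer clients read structuredContent directly; older ones can still parse the text block.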
Output schema
If an output schema is defined, servers must conform to it, and clients should validate responses. This supports type-checking, better developer experience, and safer model usage.
Error Handling
MCP supports two categories of error reporting to distinguish between protocol-level issues and tool-specific failures.
Protocol errors
These are standard JSON-RPC errors. They occur when the request itself is invalid, such as referencing a non-existent tool or sending malformed arguments.
Example: Unknown Tool
{
  "jsonrpc": "2.0",
  "id": 3,
  "error": {
    "code": -32602,
    "message": "Unknown tool: invalid_tool_name"
  }
}
These errors prevent the tool from running at all.
Tool execution errors
If a tool runs but fails during execution, the response uses the isError: true flag. The failure is described in the content array.
Example: Execution Error
{
  "jsonrpc": "2.0",
  "id": 4,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Failed to fetch weather data: API rate limit exceeded"
      }
    ],
    "isError": true
  }
}
These errors may occur due to external API failures, invalid user input, or business logic constraints.
By clearly separating protocol errors from execution failures, MCP allows clients to distinguish between transport issues and operational problems, enabling more precise handling and better user feedback.
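On the client side, the two categories can be told apart by checking for a top-level error object before inspecting isError, as in this sketch (classify_response is a hypothetical helper, not part of the protocol):

```python
def classify_response(response: dict) -> str:
    """Label a tools/call response as a protocol error, a tool failure, or a success."""
    if "error" in response:
        # Protocol error: JSON-RPC error object, the tool never ran.
        err = response["error"]
        return f"protocol error {err['code']}: {err['message']}"
    result = response["result"]
    text = " ".join(c["text"] for c in result["content"] if c["type"] == "text")
    if result.get("isError"):
        # Execution error: the tool ran but failed.
        return f"tool failed: {text}"
    return f"tool succeeded: {text}"

print(classify_response({"jsonrpc": "2.0", "id": 3,
                         "error": {"code": -32602,
                                   "message": "Unknown tool: invalid_tool_name"}}))
print(classify_response({"jsonrpc": "2.0", "id": 4,
                         "result": {"content": [{"type": "text",
                                                 "text": "API rate limit exceeded"}],
                                    "isError": True}}))
```

A protocol error usually indicates a client-side bug or a stale tool list, while an execution failure is often transient and may warrant a retry or user-facing message.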
Best Practices for Implementing MCP Tools
When implementing MCP tools, following structured best practices ensures reliability, safety, and interoperability. Below are key recommendations grouped by implementation focus:
1. Tool Design and Implementation
- Use clear and consistent naming: Assign each tool a unique, descriptive name that reflects its function. Avoid abbreviations or ambiguous terms to ensure clarity in both development and usage contexts.
- Define detailed input schemas: Use JSON Schema definitions to validate all input parameters. Include type information, required fields, and descriptions to guide model usage and improve error handling.
- Keep tools atomic and focused: Design tools to perform a single, well-scoped task. Avoid combining unrelated operations in one tool, which complicates validation, error handling, and reuse.
- Include descriptive metadata and examples: Document the tool’s purpose, expected inputs, outputs, and side effects. Include usage examples in the description field to help models understand invocation patterns.
- Validate outputs against output schemas: If the tool defines an outputSchema, ensure all results conform to it. This supports structured processing, UI rendering, and downstream validation.
- Use appropriate timeouts and cancellations: Set clear timeout thresholds for tool execution to avoid indefinite hangs. Support cancellation mechanisms where possible, especially for long-running tools.
- Implement rate limiting for costly operations: Tools that access third-party APIs or perform resource-heavy tasks should implement rate limits to prevent overuse and ensure fairness across users.
- Log tool usage and failures: Maintain logs of tool invocations, parameters, and responses. This aids in debugging, monitoring, and incident response.
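One of these practices, rate limiting, can be sketched as a wrapper around a tool handler. This is illustrative only and not part of the MCP specification; RateLimiter is a simple token bucket and call_with_limits is a hypothetical helper that also folds failures into the isError result shape.

```python
import time

class RateLimiter:
    """Allow at most `rate` calls per `per` seconds (token bucket)."""
    def __init__(self, rate: int, per: float):
        self.rate, self.per = rate, per
        self.tokens, self.last = float(rate), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens proportionally to elapsed time, capped at the bucket size.
        self.tokens = min(self.rate,
                          self.tokens + (now - self.last) * self.rate / self.per)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

def call_with_limits(handler, arguments: dict, limiter: RateLimiter) -> dict:
    """Run a tool handler behind a rate limit, reporting failures via isError."""
    if not limiter.allow():
        return {"content": [{"type": "text", "text": "Rate limit exceeded"}],
                "isError": True}
    try:
        return {"content": [{"type": "text", "text": handler(arguments)}],
                "isError": False}
    except Exception as exc:
        return {"content": [{"type": "text", "text": str(exc)}], "isError": True}

limiter = RateLimiter(rate=2, per=60.0)
for _ in range(3):
    print(call_with_limits(lambda a: "ok", {}, limiter)["isError"])
# False, False, True
```

In production, timeouts and cancellation would typically sit in the same wrapper layer, for example via an async task with a deadline.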
2. Tool Lifecycle and Discovery
- Support dynamic tool discovery: Implement the tools/list and tools/list_changed messages correctly to allow clients to discover and adapt to changing tool sets in real time.
- Maintain backward compatibility: When updating tool definitions, avoid breaking changes. If incompatible changes are needed, register a new tool version with a distinct name.
- Annotate tools with version and stability metadata: Use annotations to signal versioning, stability level (e.g., experimental, stable), and deprecation notices. This helps clients manage expectations and dependencies.
3. Testing and Validation
- Perform end-to-end functional testing: Verify that tools work with valid inputs and fail gracefully with invalid ones. Include edge cases and malformed data scenarios.
- Mock external dependencies in tests: For tools that rely on external systems, use mocks to simulate failure conditions and test recovery logic.
- Conduct security and stress testing: Validate tools under high load, simulate abuse scenarios, and ensure resilience to attacks or misconfigurations.
- Ensure tool results are well-formed: Test structured and unstructured outputs for completeness and adherence to defined schemas, including annotations and metadata.
4. Error Handling and Resilience
- Use isError correctly for execution failures: Distinguish between protocol-level errors (e.g., invalid method) and tool execution errors (e.g., API timeout) using the isError field and descriptive content messages.
- Avoid leaking internal details: Sanitize error messages returned to clients. Avoid exposing stack traces, internal paths, or sensitive configuration details.
- Gracefully handle partial failures: For tools that aggregate data from multiple sources, handle partial failures cleanly and inform the client which parts failed and why.
5. Security Considerations
- Perform strict input validation: Validate all inputs against the inputSchema. Apply additional checks for data type limits, enumeration values, string lengths, and expected formats.
- Harden against injection attacks: Sanitize any inputs that will be used in file paths, shell commands, or API requests to prevent command injection or path traversal vulnerabilities.
- Implement authentication and authorization: For tools with side effects or privileged operations, ensure the invoking context has the necessary permissions. Use scoped access tokens or context-aware authorization.
- Monitor for abuse and misuse: Track usage patterns, apply request throttling, and monitor for unusual invocation behaviors. Integrate alerting for repeated failures or abuse patterns.
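The path-traversal point above can be illustrated with a small sketch: resolve the user-supplied path and confirm it stays under an allowed base directory before the tool touches the filesystem. BASE_DIR and safe_join are hypothetical names for this example.

```python
import os

# Hypothetical sandbox root that file-handling tools are confined to.
BASE_DIR = "/srv/mcp-workspace"

def safe_join(base: str, user_path: str) -> str:
    """Join user input onto base, rejecting escapes like '../../etc/passwd'."""
    candidate = os.path.normpath(os.path.join(base, user_path))
    # Reject anything that normalizes to a location outside the sandbox,
    # including absolute paths, which os.path.join passes through unchanged.
    if candidate != base and not candidate.startswith(base + os.sep):
        raise ValueError(f"path escapes sandbox: {user_path!r}")
    return candidate

print(safe_join(BASE_DIR, "notes/todo.txt"))  # /srv/mcp-workspace/notes/todo.txt
try:
    safe_join(BASE_DIR, "../../etc/passwd")
except ValueError as exc:
    print(exc)
```

The same normalize-then-check pattern applies to other injection surfaces: canonicalize the input first, then compare against an allowlist rather than scanning for known-bad substrings.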