Your agent's built-in tools are limited to what you have implemented. When you need to connect to an external service (a code repository, a database dashboard, a deployment system) you have two options: write a custom tool for each integration, or use a protocol that lets external services expose tools in a standard format your agent already understands.
The Model Context Protocol (MCP) is that standard format. An MCP server advertises its capabilities: names, schemas, and behavioral hints. Your agent connects, discovers what the server offers, and constructs tool objects that are structurally identical to your built-in tools. From that point on, the agent loop dispatches MCP tools and built-in tools through exactly the same code path. The loop never knows the difference.
This guide walks through connecting an MCP server from scratch: choosing a transport, establishing the connection, bridging the server's tools into your agent's format, and wiring them into your existing dispatch. By the end, your agent will have access to tools it did not ship with.
What MCP Gives You
Without MCP, every external integration is a custom adapter. You write the API client, handle authentication, translate the response format, implement error recovery, and manage the connection lifecycle. Each new service is a new tool with its own maintenance cost.
With MCP, the external service owns all of that. The server implements the API client, handles auth, formats responses, and manages its own lifecycle. Your agent's responsibility is narrow: connect to the server, discover its tools, and route calls through the existing dispatch pipeline.
The result is that adding a new external capability (say, a payment processing service or a monitoring dashboard) requires zero new tool implementations on your side. You add a server configuration entry, and the tools appear.
Choose a Transport
Your agent connects to an MCP server through a transport layer that handles the raw communication. The two most common options are:
stdio. The server runs as a local subprocess. Your agent spawns it, communicates via stdin/stdout, and kills it when done. This is the simplest transport and the right default for servers you run locally. Startup cost is one process spawn. No network configuration needed.
HTTP (Streamable HTTP). The server runs remotely. Communication happens over HTTP with bidirectional support. Use this for remote services, shared team servers, or cloud-hosted integrations. Requires network access and potentially authentication.
The transport is selected at configuration time, not at runtime. Once selected, all communication goes through the same request() / notify() interface regardless of which transport is underneath. The tool bridge pattern works identically for both.
Here is a transport selection function:
```
function create_transport(config: ServerConfig) -> Transport:
    if config.type == "stdio":
        return StdioTransport(
            command=config.command,
            args=config.args,
            env=config.env
        )
    elif config.type == "http":
        return HttpTransport(
            url=config.url,
            headers=config.headers
        )
    else:
        raise Error(f"Unknown transport type: {config.type}")
```

Tip: Start with stdio for local development and testing. Switch to http when you need to share the server across multiple agents or deploy it remotely. The tool bridge code does not change when you switch transports.
Connect and Discover
The connection lifecycle has three steps: create the transport, establish the connection, and discover what the server offers. The server advertises its capabilities in response to a tools/list request.
The following connects to an MCP server and retrieves its tool list:
```
async function connect_mcp_server(name: str, config: ServerConfig) -> McpConnection:
    # Step 1: Create transport
    transport = create_transport(config)

    # Step 2: Connect
    client = McpClient()
    await client.connect(transport)

    # Step 3: Discover capabilities
    response = await client.request("tools/list")

    return McpConnection(
        name=name,
        client=client,
        raw_tools=response.tools
    )
```

The tools/list response contains an array of tool definitions, each with a name, description, input schema (JSON Schema format), and optional annotations. The annotations carry behavioral hints: whether the tool is read-only, whether it is destructive, and other metadata that maps to your agent's concurrency and permission system.
What can go wrong at this stage: The server might not start (bad command, missing binary), the connection might fail (network error, auth required), or tools/list might return tools with malformed schemas. Handle all three cases before proceeding to the bridge.
The connection has a state that determines whether tools are available:
```
function get_tools_for_connection(connection: McpConnection) -> list:
    if connection.state == "connected":
        return connection.bridged_tools
    return []  # failed, pending, or disabled: return empty
```

All non-connected states return an empty tool list. The agent loop gets a consistent interface regardless of server health. A server can fail, require auth, or be disabled, and the loop never sees an error. It simply has fewer tools available.
Bridge the Tools
The tool bridge is the core pattern. It takes the MCP server's tool definitions and constructs tool objects that are structurally identical to your built-in tools. The agent loop and dispatcher cannot tell the difference.
The following bridges MCP tools into your agent's internal format:
```
function bridge_mcp_tools(connection: McpConnection) -> list:
    bridged = []
    for mcp_tool in connection.raw_tools:
        # Validate the schema before bridging
        if not is_valid_json_schema(mcp_tool.input_schema):
            log_warning(f"Skipping {mcp_tool.name}: invalid schema")
            continue

        agent_tool = {
            name: f"mcp__{connection.name}__{mcp_tool.name}",
            description: truncate(mcp_tool.description, max_chars=2048),
            input_schema: mcp_tool.input_schema,
            concurrency_class: "READ_ONLY" if mcp_tool.annotations.read_only_hint else "WRITE_EXCLUSIVE",
            behavioral_flags: {
                is_destructive: mcp_tool.annotations.destructive_hint or False,
                requires_permission: not mcp_tool.annotations.read_only_hint,
            },
            call: create_mcp_call(connection.client, mcp_tool.name)
        }
        bridged.append(agent_tool)
    return bridged
```

Four things happen in this bridge:
Namespacing. The tool name becomes mcp__{server}__{tool}. This prevents collisions across multiple servers and makes tool ownership traceable in logs. When you see mcp__payments__refund_transaction in a trace, you know immediately which server and which operation.
Description truncation. MCP servers control their own descriptions, and some are verbose. A tool description that consumes 10,000 tokens poisons the context window. The model wastes attention on that one description at the expense of everything else. Truncating to 2,048 characters is a safety measure, not a limitation.
Annotation passthrough. The server's hints (read_only_hint, destructive_hint) map directly to your concurrency and permission system. A tool marked read-only can run concurrently with other read-only tools, while a tool marked destructive triggers the permission cascade.
Schema validation. If the server returns a tool with a malformed schema, skip it rather than crashing. A single bad tool definition should not prevent the other tools from being available.
Tip: Always validate MCP tool schemas at connection time. A malformed schema does not fail loudly. It fails at dispatch time when the model tries to call the tool and the input cannot be parsed. By then, the model has already committed to a plan that includes that tool. Validate early, skip bad tools, and log the problem.
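What is_valid_json_schema actually checks is left open above. Here is a minimal structural sketch, not a full JSON Schema validation; a production version might instead use a complete validator such as the jsonschema package's schema checker:

```python
def is_valid_json_schema(schema) -> bool:
    """Minimal structural check (a sketch, not full JSON Schema validation):
    reject anything that is not a dict, any declared type that is not a
    string or list, and object schemas whose properties are not a dict."""
    if not isinstance(schema, dict):
        return False
    declared_type = schema.get("type", "object")
    if not isinstance(declared_type, (str, list)):
        return False
    if declared_type == "object" and not isinstance(schema.get("properties", {}), dict):
        return False
    return True
```

Even this shallow check catches the common failure shapes (a null schema, a string where an object was expected) before the model ever plans around the tool.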
Wire Into Dispatch
The final step is merging bridged MCP tools into your existing dispatch pipeline. The dispatcher already knows how to route built-in tools. Adding MCP tools means extending the tool list, not changing the dispatch logic.
The following shows how to combine built-in tools with bridged MCP tools:
```
function build_tool_list(builtin_tools: list, mcp_connections: list) -> list:
    all_tools = list(builtin_tools)
    for connection in mcp_connections:
        bridged = bridge_mcp_tools(connection)
        all_tools.extend(bridged)
    return all_tools

# The dispatch function is unchanged. It handles all tools identically.
async function dispatch_tool(name: str, args: dict, context: ToolContext) -> ToolResult:
    tool = find_tool(name, context.all_tools)
    if tool is None:
        return error_result(f"Unknown tool: {name}")

    parsed = tool.input_schema.parse(args)
    if not parsed.success:
        return error_result(f"Invalid arguments: {parsed.error}")

    return await tool.call(parsed.data, context)
```

The dispatch function does not distinguish between built-in tools and MCP tools. Both are tool objects with a name, schema, and call function. The only difference is that an MCP tool's call function routes through the MCP client to the remote server, while a built-in tool's call function runs local code. This is the power of the bridge pattern. The abstraction boundary absorbs the complexity.
Putting It Together
Here is the complete flow from configuration to a working dispatch with MCP tools:
```
# 1. Configure the MCP server
server_config = {
    name: "github",
    type: "stdio",
    command: "npx",
    args: ["-y", "@mcp/github-server"]
}

# 2. Connect and discover
connection = await connect_mcp_server("github", server_config)

# 3. Bridge tools
connection.bridged_tools = bridge_mcp_tools(connection)

# 4. Merge with built-in tools
all_tools = build_tool_list(builtin_tools, [connection])

# 5. Pass to the agent loop. Dispatch handles everything.
response = await agent_loop(question, tools=all_tools)
```

Five steps: configure, connect, bridge, merge, run. The agent now has access to every tool the MCP server exposes, dispatched through the same pipeline as built-in tools.
When you add a second MCP server (say, a deployment service), the change is one new configuration entry and one new connection. The bridge, merge, and dispatch steps handle it automatically. This is how MCP turns the cost of integration from linear (one custom tool per service) to constant (one bridge pattern for all services).
Related
- MCP Integration. The full MCP architecture: transport details, connection state machines, batched startup for many servers, config scope hierarchy, auth caching, and session expiry handling.
- Tool System. How bridged tools integrate with the dispatch algorithm, concurrency partitioning, and behavioral flag composition.
- Safety and Permissions. How the permission cascade applies to MCP tools, including the destructive_hint and read_only_hint annotations.