How Anthropic's Model Context Protocol Facilitates AI Data Access
Streamlining AI Integration with Local and Remote Data Sources Using the Model Context Protocol
TL;DR: Anthropic open-sourced the Model Context Protocol (MCP), a standard for defining tools that interact with data sources. If you’ve worked with agents or function calling, the concepts will feel familiar.
Quick Context
Anthropic recently open-sourced their Model Context Protocol (MCP), an open standard that enables AI assistants to communicate with external data sources like local files, SQL databases, and third-party APIs (such as GitHub and Google Drive).
The Model Context Protocol has four main components:
MCP Server: Functions like an API gateway, exposing specific capabilities through tools. For example, a "list_tables" tool provides access to database tables.
MCP Host: An AI application (like Claude Desktop or an IDE) that manages the connection between the Client and Server processes.
MCP Client: Handles one-to-one connections with each server process within the host application.
MCP Transport: The mechanism that carries messages between clients and servers. It serializes outgoing MCP messages into JSON-RPC format and deserializes incoming JSON-RPC messages back into MCP messages.
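To make the transport’s role concrete, here’s a sketch of the JSON-RPC 2.0 envelope a tool call travels in. The `tools/call` method name comes from the MCP spec, but the `JsonRpcRequest` interface below is a simplified stand-in for illustration, not the SDK’s actual type:

```typescript
// Simplified shape of a JSON-RPC 2.0 request (not the SDK's type).
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
}

// Hypothetical tool-call request: ask the server to run "list_tables".
const request: JsonRpcRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: { name: "list_tables", arguments: {} },
};

// The transport serializes the message before sending it over stdio
// (or another channel) and parses responses back the same way.
const wire = JSON.stringify(request);
console.log(wire);
```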
Building an MCP Server
My friend Dexter and I built an MCP Server for the Alpaca trading API (check out the draft PR here: https://github.com/modelcontextprotocol/servers/pull/51).
At a high level, the code does three main things:
Defines Tools: We create tool definitions that specify what operations are available through the Alpaca API. Each tool has a name, description, and an input schema that defines the required parameters. For example:
```typescript
const getLatestQuoteTool: Tool = {
  name: "get_latest_quote",
  description: "Get the latest quote for a stock symbol",
  inputSchema: {
    // Schema definition
  }
};
```
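The schema itself is elided above. As a hypothetical sketch (the field names here are illustrative, not copied from the PR), the input schema for a quote lookup might be a JSON Schema object requiring a single `symbol` string:

```typescript
// Hypothetical input schema for get_latest_quote: a JSON Schema object
// that requires one string parameter, the ticker symbol.
const getLatestQuoteInputSchema = {
  type: "object",
  properties: {
    symbol: {
      type: "string",
      description: "The stock ticker symbol, e.g. AAPL",
    },
  },
  required: ["symbol"],
};

console.log(JSON.stringify(getLatestQuoteInputSchema.required));
```

The MCP client uses this schema to tell the model which arguments a tool expects and to validate what the model supplies.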
Registers Available Tools: We tell the server which tools are available by registering a handler for tool listing requests:
```typescript
server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: [getAccountInfoTool, getAssetBySymbolTool, getLatestQuoteTool, placeOrderTool]
  };
});
```
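For context, a client listing tools would receive back something shaped like this (simplified to one tool, with a stubbed schema):

```typescript
// Simplified sketch of a tool-listing response: an object with a
// "tools" array, one entry per registered tool.
const listToolsResponse = {
  tools: [
    {
      name: "get_latest_quote",
      description: "Get the latest quote for a stock symbol",
      inputSchema: { type: "object" },
    },
  ],
};

console.log(listToolsResponse.tools.map((t) => t.name).join(","));
```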
Implements Tool Execution: We register a handler for tool execution requests. When a tool is called, this handler:
Creates a new authenticated Alpaca client
Routes the request to the appropriate tool implementation based on the tool name
Returns the result in a standardized format
```typescript
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const client = new AlpacaClient(API_KEY, API_SECRET_KEY);
  switch (request.params.name) {
    case "get_latest_quote": {
      // Tool implementation
    }
    // ... other cases
  }
});
```
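As a hedged sketch of what one of those cases might do: fetch the quote, then wrap it in MCP’s content-block result format. `FakeAlpacaClient` and its `getLatestQuote` method are stand-ins here so the sketch is self-contained, not the real Alpaca SDK:

```typescript
type Quote = { symbol: string; askPrice: number; bidPrice: number };

// Stand-in for the real Alpaca client, so this sketch runs on its own.
class FakeAlpacaClient {
  async getLatestQuote(symbol: string): Promise<Quote> {
    return { symbol, askPrice: 101.5, bidPrice: 101.4 };
  }
}

// One tool implementation: fetch the quote, return it as a text
// content block, the standardized result shape MCP tools use.
async function handleGetLatestQuote(
  client: FakeAlpacaClient,
  args: { symbol: string }
) {
  const quote = await client.getLatestQuote(args.symbol);
  return {
    content: [{ type: "text", text: JSON.stringify(quote) }],
  };
}

handleGetLatestQuote(new FakeAlpacaClient(), { symbol: "AAPL" }).then((r) =>
  console.log(r.content[0].text)
);
```

Returning a content array rather than a raw value is what lets the host render tool results uniformly, whatever the underlying API returned.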
Here’s an example in action, where Claude checks a stock price and purchases shares if a certain condition is met.
Now, our AI system—Claude Desktop, in this case—has access to Alpaca and can place orders on our behalf. Neat!
How will this affect the future of agents?
It’s tough to say. A few organizations, like LangChain and the AI Engineer Foundation, have introduced protocols aimed at standardizing agent interactions. If these gain widespread adoption, maintaining AI systems in production could become easier, thanks to the many integrations contributed by the open-source community.
But if adoption doesn’t take off, it might just be another protocol that fades away. In that case, developers will likely keep focusing on pressing issues like agent evaluation, reliability, and implementing step-up authentication.
What excites me most is the potential for AI to leverage traditional computer systems, especially in industries like healthcare. With a protocol like MCP, assistants like Claude could directly interact with local Electronic Medical Record (EMR) systems that don’t have APIs. This means AI could operate seamlessly with existing software, even accessing local SQL databases, to deliver meaningful improvements without requiring major infrastructure changes.
Big thanks to Dex, Max, and Aditya for chatting with me about this!
Paulo