OpenAI MCP: How to Connect MCP Servers to OpenAI Agents SDK
Introduction to MCP Servers and OpenAI Agents SDK
The Model Context Protocol (MCP) has emerged as a powerful standard for connecting large language models (LLMs) to external tools and data sources through a uniform interface. While initially launched by Anthropic for Claude models, the protocol's elegant design and robust security model make it an attractive option for integration with other AI systems. This article provides a comprehensive technical exploration of integrating MCP servers with OpenAI's Agents SDK, enabling developers to leverage the growing ecosystem of MCP tools while building agent-based applications powered by OpenAI models. We'll explore the architectural considerations, implementation approaches, protocol adaptations, and best practices for creating this integration, offering developers a roadmap to combine these powerful technologies.
For implementation details and access to the comprehensive collection of available MCP servers, visit the official MCP GitHub Repository: https://github.com/modelcontextprotocol/servers
Technical Foundation: MCP and OpenAI Agents SDK
MCP Architecture Overview
The Model Context Protocol implements a client-server architecture where:
- MCP Servers expose tools (functions) and resources (data) via a standardized JSON-based interface
- MCP Clients (typically LLMs) communicate with these servers to extend their capabilities
The protocol supports multiple transport layers:
- STDIO for local process communication with minimal latency
- Server-Sent Events (SSE) for network-based communication
- HTTP/WebSocket for more complex deployments
MCP messages follow a well-defined structure for operations like listing tools, invoking functions, and retrieving resources.
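Concretely, MCP frames these operations as JSON-RPC 2.0 messages. The shapes below are illustrative; the id values, tool name, and arguments are invented for the example:

```typescript
// Illustrative MCP request shapes (JSON-RPC 2.0 framing).
// The tool name and arguments are hypothetical examples.
const listToolsRequest = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/list'
};

const callToolRequest = {
  jsonrpc: '2.0',
  id: 2,
  method: 'tools/call',
  params: {
    name: 'read_file',                     // example tool name
    arguments: { path: '/tmp/notes.txt' }  // example tool arguments
  }
};
```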
OpenAI Agents SDK Architecture
OpenAI's Agents SDK provides a framework for building AI agents that can:
- Execute complex, multi-step tasks
- Use tools to interact with external systems
- Maintain state across conversation turns
- Follow structured workflows
The SDK's tool integration mechanism functions through:
- Function Calling: A structured JSON interface for tool definition
- Tool Execution: Mechanism for executing external code
- Assistants API: API endpoints for managing assistant instances
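For reference, a tool definition in the SDK's function-calling format looks like the following; the weather tool itself is invented for illustration:

```typescript
// Illustrative OpenAI function definition; the tool is hypothetical.
// Parameters are expressed as a JSON Schema object.
const exampleTool = {
  type: 'function',
  function: {
    name: 'get_weather',
    description: 'Look up the current weather for a city',
    parameters: {
      type: 'object',
      properties: {
        city: { type: 'string', description: 'City name' }
      },
      required: ['city']
    }
  }
};
```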
Bridging the Gap: MCP-to-Agents Adapter Architecture
Architectural Patterns
To connect MCP servers with OpenAI's Agents SDK, we employ an adapter pattern that translates between the two protocols. The architectural approaches include:
1. Direct Integration (Server-side)
┌───────────┐ ┌───────────────┐ ┌─────────────┐
│ OpenAI │ │ MCP-to-Agents │ │ MCP │
│ Agents SDK│◄────┤ Adapter │◄────┤ Server │
└───────────┘ └───────────────┘ └─────────────┘
2. Proxy Pattern (Client-side)
┌───────────┐ ┌───────────────┐ ┌─────────────┐
│ OpenAI │ │ Tool-to-MCP │ │ MCP │
│ Agents SDK│◄────┤ Proxy │◄────┤ Server │
└───────────┘ └───────────────┘ └─────────────┘
3. Bridge Implementation
┌───────────┐ ┌───────────────┐ ┌─────────────┐
│ OpenAI │ │ MCP Bridge │ │ Multiple │
│ Agents SDK│◄────┤ Server │◄────┤ MCP Servers │
└───────────┘ └───────────────┘ └─────────────┘
Protocol Mapping
The core of the integration involves mapping between MCP's protocol and OpenAI's function calling specification:
| MCP Concept | OpenAI Agents SDK Equivalent |
|---|---|
| Tool | Function Definition |
| Tool Parameters | Function Parameters |
| Tool Execution | Function Call |
| Resource | Custom Implementation Required |
Implementation: Building the MCP Adapter
Core Components
- Tool Discovery: Retrieving and translating MCP tool definitions
- Invocation Relay: Forwarding function calls to MCP servers
- Response Handling: Processing and formatting MCP responses
- Resource Management: Adapting MCP resources to OpenAI's context
Discovery Implementation
This code demonstrates how to discover tools from MCP servers and convert them to OpenAI function definitions:
import { MCPClient } from '@modelcontextprotocol/sdk';
// Note: the 'openai' package does not export a tool type under this name;
// define the expected shape locally instead
type OpenAIAssistantTool = {
  type: 'function';
  function: {
    name: string;
    description?: string;
    parameters: Record<string, any>;
  };
};
async function discoverMCPTools(mcpServer: MCPClient): Promise<OpenAIAssistantTool[]> {
// Get available tools from MCP server
const tools = await mcpServer.listTools();
// Convert MCP tool definitions to OpenAI function definitions
return tools.map(tool => {
return {
type: 'function',
function: {
name: tool.name.replace(/\./g, '_'), // OpenAI names disallow dots; replace all of them
description: tool.description,
parameters: convertMCPSchemaToOpenAISchema(tool.parameters)
}
};
});
}
function convertMCPSchemaToOpenAISchema(schema: any): any {
// Handle schema conversion (may require adjustments for specific schema elements)
// OpenAI accepts JSON Schema format with some restrictions
return {
...schema,
// Add any necessary transformations here
};
}
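The stub above leaves the transformation open. As one hedged sketch of what it might do, the helper below strips a few JSON Schema keywords and forces a top-level object type; the keyword list is an assumption for illustration, not an official list of OpenAI's restrictions:

```typescript
// Sketch: remove JSON Schema keywords that a function-calling endpoint
// may reject. Which keywords need stripping is an assumption; check the
// current OpenAI documentation before relying on this list.
type JSONSchema = Record<string, any>;

const UNSUPPORTED_KEYWORDS = ['$schema', 'default', 'examples'];

function stripUnsupportedKeywords(schema: JSONSchema): JSONSchema {
  const result: JSONSchema = {};
  for (const [key, value] of Object.entries(schema)) {
    if (UNSUPPORTED_KEYWORDS.includes(key)) continue;
    if (key === 'properties' && value && typeof value === 'object') {
      // Recurse into each property's sub-schema
      result.properties = Object.fromEntries(
        Object.entries(value).map(([name, sub]) => [
          name,
          stripUnsupportedKeywords(sub as JSONSchema)
        ])
      );
    } else {
      result[key] = value;
    }
  }
  // Function parameters must be an object schema at the top level
  if (!result.type) result.type = 'object';
  return result;
}
```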
Invocation Relay
This shows how to handle OpenAI function calls and relay them to MCP servers:
import { MCPClient } from '@modelcontextprotocol/sdk';
async function handleFunctionCall(
mcpClient: MCPClient,
functionName: string,
parameters: Record<string, any>
): Promise<any> {
// Convert the OpenAI function name back to MCP tool name format.
// Note: this reversal is lossy if original tool names contain underscores.
const mcpToolName = functionName.replace(/_/g, '.');
try {
// Invoke the MCP tool with the provided parameters
const result = await mcpClient.invokeTool(mcpToolName, parameters);
return {
status: 'success',
data: result
};
} catch (error) {
return {
status: 'error',
error: error.message
};
}
}
Resource Adapter
Resources require special handling as they don't directly map to OpenAI's function calling mechanism:
import { MCPClient } from '@modelcontextprotocol/sdk';
async function fetchMCPResource(
mcpClient: MCPClient,
resourceId: string
): Promise<string> {
try {
// Retrieve the resource from the MCP server
const resource = await mcpClient.getResource(resourceId);
// Format the resource data for inclusion in OpenAI context
return formatResourceForOpenAI(resource);
} catch (error) {
return `Error retrieving resource: ${error.message}`;
}
}
function formatResourceForOpenAI(resource: any): string {
// Format the resource as text for inclusion in the context
// This depends on the resource type
if (typeof resource === 'string') {
return resource;
} else {
return JSON.stringify(resource, null, 2);
}
}
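Since resources arrive as plain text, one simple way to surface them to the model is to prepend them to the user's message as labeled context blocks. A sketch; the labeling format here is a choice made for illustration, not part of either protocol:

```typescript
// Sketch: wrap each resource in a labeled block so the model can
// distinguish injected context from the user's actual request.
function buildMessageWithResources(
  userMessage: string,
  resources: Array<{ id: string; text: string }>
): string {
  const contextBlocks = resources
    .map(r => `[Resource: ${r.id}]\n${r.text}`)
    .join('\n\n');
  return contextBlocks
    ? `${contextBlocks}\n\n---\n\n${userMessage}`
    : userMessage;
}
```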
Complete Integration Example
Here's a comprehensive example demonstrating how to connect OpenAI's Agents SDK with MCP servers:
import { OpenAI } from 'openai';
import { MCPClient } from '@modelcontextprotocol/sdk';
class MCPAgentAdapter {
private openai: OpenAI;
private mcpClients: Map<string, MCPClient>;
constructor(openaiApiKey: string, mcpServers: Record<string, MCPClient>) {
this.openai = new OpenAI({ apiKey: openaiApiKey });
this.mcpClients = new Map(Object.entries(mcpServers));
}
async createAgent(agentName: string): Promise<string> {
// Discover all tools from all MCP servers
const allTools = [];
for (const [serverName, client] of this.mcpClients.entries()) {
const tools = await this.discoverMCPTools(client);
// Tag tools with server name for routing
tools.forEach(tool => {
tool.function.name = `${serverName}_${tool.function.name}`;
});
allTools.push(...tools);
}
// Create OpenAI Assistant with discovered tools
const assistant = await this.openai.beta.assistants.create({
name: agentName,
instructions: "You are an agent that can use external tools via MCP.",
model: "gpt-4-turbo",
tools: allTools
});
return assistant.id;
}
async processToolCalls(toolCalls: any[]): Promise<any[]> {
const results = [];
for (const call of toolCalls) {
const { name, arguments: args } = call.function;
// Extract server name and function name from the combined name.
// split('_', 2) would drop everything after the second underscore,
// so split on the first underscore only.
const separatorIndex = name.indexOf('_');
const serverName = name.slice(0, separatorIndex);
const functionName = name.slice(separatorIndex + 1);
const mcpClient = this.mcpClients.get(serverName);
if (!mcpClient) {
results.push({
tool_call_id: call.id,
output: `Error: MCP server '${serverName}' not found`
});
continue;
}
try {
// Convert function name back to MCP format (all underscores to dots;
// lossy if original tool names contain underscores)
const mcpToolName = functionName.replace(/_/g, '.');
const result = await mcpClient.invokeTool(mcpToolName, JSON.parse(args));
results.push({
tool_call_id: call.id,
output: JSON.stringify(result)
});
} catch (error) {
results.push({
tool_call_id: call.id,
output: `Error: ${error.message}`
});
}
}
return results;
}
private async discoverMCPTools(mcpClient: MCPClient): Promise<any[]> {
// Implementation as shown in previous sections
// ...
}
}
Advanced Implementation Considerations
Authentication and Security
When connecting MCP servers to OpenAI's Agents SDK, security becomes a crucial consideration:
- Authentication Bridging:
  - Store MCP server credentials securely
  - Implement token-based authentication for both systems
  - Consider OAuth flows for secure credential delegation

- Permission Scoping:

// Example of a permission-scoped MCP client factory
function createScopedMCPClient(serverConfig: MCPServerConfig, userPermissions: string[]): MCPClient {
  return new MCPClient({
    ...serverConfig,
    allowedTools: filterToolsByUserPermissions(serverConfig.tools, userPermissions)
  });
}

- Input Validation:

import Ajv from 'ajv';

function validateParameters(parameters: any, schema: any): boolean {
  const ajv = new Ajv();
  const validate = ajv.compile(schema);
  return validate(parameters) as boolean;
}
Performance Optimization
For production deployments, consider these optimizations:
- Caching:

class CachedMCPClient extends MCPClient {
  private toolCache: Map<string, any> = new Map();

  async listTools(): Promise<any[]> {
    if (this.toolCache.has('tools')) {
      return this.toolCache.get('tools');
    }
    const tools = await super.listTools();
    this.toolCache.set('tools', tools);
    return tools;
  }
}
- Connection Pooling:

class MCPConnectionPool {
  private connections: Map<string, MCPClient> = new Map();
  private maxConnections: number;

  constructor(maxConnections = 10) {
    this.maxConnections = maxConnections;
  }

  async getConnection(serverUri: string): Promise<MCPClient> {
    if (!this.connections.has(serverUri)) {
      if (this.connections.size >= this.maxConnections) {
        // Implement connection reuse strategy
      }
      const client = new MCPClient(serverUri);
      await client.connect();
      this.connections.set(serverUri, client);
    }
    return this.connections.get(serverUri)!;
  }
}
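The pool above leaves the reuse strategy as a gap; one common choice is least-recently-used (LRU) eviction. A generic sketch, independent of the MCP SDK (the caller is responsible for closing whatever connection is evicted):

```typescript
// Sketch: evict the least-recently-used entry when the pool is full.
// Works for any connection type; how to close an evicted connection
// is left to the caller.
class LRUPool<T> {
  private entries = new Map<string, T>();

  constructor(private maxSize: number) {}

  get(key: string): T | undefined {
    const value = this.entries.get(key);
    if (value !== undefined) {
      // Re-insert to mark this key as most recently used
      this.entries.delete(key);
      this.entries.set(key, value);
    }
    return value;
  }

  set(key: string, value: T): string | undefined {
    let evictedKey: string | undefined;
    if (this.entries.size >= this.maxSize && !this.entries.has(key)) {
      // Map iteration order is insertion order, so the first key is the LRU
      evictedKey = this.entries.keys().next().value;
      this.entries.delete(evictedKey!);
    }
    this.entries.set(key, value);
    return evictedKey; // caller should close the evicted connection
  }
}
```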
- Asynchronous Processing:

// Process tool calls in parallel for better performance
async function processBatchedToolCalls(calls: any[]): Promise<any[]> {
  return Promise.all(calls.map(call => processToolCall(call)));
}
Error Handling and Resilience
Robust error handling is essential for production deployments:
class ResilientMCPClient {
private client: MCPClient;
private maxRetries: number;
constructor(client: MCPClient, maxRetries = 3) {
this.client = client;
this.maxRetries = maxRetries;
}
async invokeTool(toolName: string, parameters: any): Promise<any> {
let lastError;
for (let attempt = 0; attempt < this.maxRetries; attempt++) {
try {
return await this.client.invokeTool(toolName, parameters);
} catch (error) {
if (isTransientError(error)) {
// Exponential backoff
await delay(100 * Math.pow(2, attempt));
lastError = error;
} else {
// Non-transient errors should be thrown immediately
throw error;
}
}
}
throw lastError || new Error('Failed after multiple attempts');
}
}
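The snippet above references two helpers, `isTransientError` and `delay`, that neither SDK provides. One possible sketch; which errors count as transient is an assumption you should adjust for your transport and servers:

```typescript
// Sketch of the helpers used by ResilientMCPClient. The error codes and
// status ranges below are assumptions, not part of the MCP protocol.
function isTransientError(error: any): boolean {
  const transientCodes = ['ECONNRESET', 'ETIMEDOUT', 'ECONNREFUSED'];
  if (transientCodes.includes(error?.code)) return true;
  // HTTP-style status codes that usually warrant a retry
  const status = error?.status ?? error?.statusCode;
  return status === 429 || (status >= 500 && status < 600);
}

function delay(ms: number): Promise<void> {
  return new Promise(resolve => setTimeout(resolve, ms));
}
```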
Deployment Strategies
Docker Containerization
Package your MCP-OpenAI adapter as a Docker container for easier deployment:
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 3000
CMD ["node", "dist/server.js"]
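Assuming the image is tagged `mcp-openai-adapter` (a name chosen here for illustration), building and running the container might look like:

```shell
# Build the adapter image (tag name is illustrative)
docker build -t mcp-openai-adapter .

# Run it, passing the OpenAI key through the environment
docker run -p 3000:3000 \
  -e OPENAI_API_KEY="sk-..." \
  mcp-openai-adapter
```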
Serverless Deployment
For serverless environments, consider this AWS Lambda example:
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';
import { MCPAgentAdapter } from './adapter';
// Initialize adapter outside handler for connection reuse
const adapter = new MCPAgentAdapter(process.env.OPENAI_API_KEY!, {
filesystem: createMCPClient('filesystem'),
database: createMCPClient('database')
});
export async function handler(
event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> {
try {
const body = JSON.parse(event.body || '{}');
switch (body.action) {
case 'create_agent':
const agentId = await adapter.createAgent(body.name);
return { statusCode: 200, body: JSON.stringify({ agent_id: agentId }) };
case 'process_tool_calls':
const results = await adapter.processToolCalls(body.tool_calls);
return { statusCode: 200, body: JSON.stringify({ results }) };
default:
return { statusCode: 400, body: 'Invalid action' };
}
} catch (error) {
console.error(error);
return { statusCode: 500, body: `Error: ${error.message}` };
}
}
Conclusion
Integrating MCP servers with OpenAI's Agents SDK represents a powerful combination that extends AI agent capabilities while leveraging the growing ecosystem of MCP-compatible tools. By following the architectural patterns and implementation approaches outlined in this article, developers can create sophisticated AI agents that seamlessly combine OpenAI's advanced models with the diverse capabilities provided by MCP servers.
This integration opens new possibilities for AI applications that can access specialized tools, databases, APIs, and file systems through a consistent interface. As both the OpenAI Agents ecosystem and the MCP server landscape continue to evolve, this integration approach provides a flexible foundation that can adapt to future developments in both technologies.
The technical considerations presented here—from authentication and security to performance optimization and deployment strategies—provide a comprehensive framework for building production-ready integrations that are secure, efficient, and maintainable. By adopting these best practices, developers can create robust AI applications that combine the strengths of both platforms while minimizing the technical challenges of cross-protocol integration.