Skip to main content
The Vercel AI SDK supports MCP servers through the @ai-sdk/mcp package, enabling your AI applications to use MCP tools.

Installation

npm install @ai-sdk/mcp @ai-sdk/openai ai

HTTP Transport (Production)

Use HTTP transport for production deployments with remote MCP servers:
import { experimental_createMCPClient } from '@ai-sdk/mcp';
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const mcpClient = await experimental_createMCPClient({
  transport: {
    type: 'http',
    url: 'https://mcp.runlayer.com/github-a1b2c3/mcp',
    headers: {
      'x-runlayer-api-key': 'your-api-key'
    }
  }
});

try {
  const tools = await mcpClient.tools();

  const result = await generateText({
    model: openai('gpt-4o'),
    tools,
    prompt: 'Get info about the vercel/ai repository'
  });

  console.log(result.text);
} finally {
  await mcpClient.close();
}

Stdio Transport (Local Development)

For local Runlayer MCP servers:
import { experimental_createMCPClient } from '@ai-sdk/mcp';
import { Experimental_StdioMCPTransport } from 'ai/mcp-stdio';

const mcpClient = await experimental_createMCPClient({
  transport: new Experimental_StdioMCPTransport({
    command: 'runlayer',
    args: [
      '936ac3e8-bb75-428d-8db1-b1f08ff07816',
      '--secret', 'your-secret-key',
      '--host', 'https://mcp.runlayer.com'
    ]
  })
});

Streaming Responses

Use streamText for streaming responses with proper cleanup:
import { experimental_createMCPClient } from '@ai-sdk/mcp';
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const mcpClient = await experimental_createMCPClient({
  transport: {
    type: 'http',
    url: 'https://mcp.runlayer.com/github-a1b2c3/mcp',
    headers: { 'x-runlayer-api-key': 'your-api-key' }
  }
});

const tools = await mcpClient.tools();

const result = await streamText({
  model: openai('gpt-4o'),
  tools,
  prompt: 'List recent issues in vercel/ai',
  onFinish: async () => {
    await mcpClient.close();
  }
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

Multiple MCP Servers

Combine tools from multiple servers:
import { experimental_createMCPClient } from '@ai-sdk/mcp';
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const [githubClient, linearClient] = await Promise.all([
  experimental_createMCPClient({
    transport: {
      type: 'http',
      url: 'https://mcp.runlayer.com/github-a1b2c3/mcp',
      headers: { 'x-runlayer-api-key': 'your-api-key' }
    }
  }),
  experimental_createMCPClient({
    transport: {
      type: 'http',
      url: 'https://mcp.runlayer.com/linear-d4e5f6/mcp',
      headers: { 'x-runlayer-api-key': 'your-api-key' }
    }
  })
]);

try {
  const [githubTools, linearTools] = await Promise.all([
    githubClient.tools(),
    linearClient.tools()
  ]);

  const tools = { ...githubTools, ...linearTools };

  const result = await generateText({
    model: openai('gpt-4o'),
    tools,
    prompt: 'Create a GitHub issue and a Linear ticket for the bug report'
  });

  console.log(result.text);
} finally {
  await Promise.all([
    githubClient.close(),
    linearClient.close()
  ]);
}

Next.js API Route

Use in a Next.js API route:
// app/api/chat/route.ts
import { experimental_createMCPClient } from '@ai-sdk/mcp';
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const mcpClient = await experimental_createMCPClient({
    transport: {
      type: 'http',
      url: 'https://mcp.runlayer.com/github-a1b2c3/mcp',
      headers: { 'x-runlayer-api-key': 'your-api-key' }
    }
  });

  const tools = await mcpClient.tools();

  const result = await streamText({
    model: openai('gpt-4o'),
    tools,
    messages,
    onFinish: async () => {
      await mcpClient.close();
    }
  });

  return result.toDataStreamResponse();
}

Resources