
MCP Servers

Agent Runtimes provides comprehensive support for MCP Servers, enabling agents to access external tools and data sources through a standardized interface.

Overview

MCP servers are external processes that provide tools, resources, and prompts to AI agents. Agent Runtimes supports two types of MCP server configurations:

MCP Config (from mcp.json)

MCP Config servers are user-defined servers configured in ~/.datalayer/mcp.json. These servers:

  • Start automatically when the agent runtime starts
  • Are fully customizable with your own commands, arguments, and environment variables
  • Appear in the agent form where users can select which servers to include as toolsets
  • Support any MCP-compatible server - if it follows the MCP specification, it will work
  • Are stored separately from catalog servers, allowing the same ID in both without conflict

MCP Catalog (predefined servers)

MCP Catalog servers are predefined server configurations included with Agent Runtimes. These servers:

  • Are NOT started automatically - users must explicitly enable them via API
  • Can be enabled on-demand using the /api/v1/mcp/servers/catalog/{server_name}/enable endpoint
  • Provide common tools like web search, file system access, etc.
  • Have their own storage separate from config servers

Which to use?

For most users, MCP Config is recommended. Add your servers to ~/.datalayer/mcp.json and they'll be available automatically when the agent runtime starts.

The same server ID can exist in both config and catalog - they are tracked independently.

Key Features

  • Automatic Server Lifecycle — Config servers start with the application and stop on shutdown
  • Retry with Backoff — Transient failures trigger automatic retries with exponential backoff
  • Sequential Startup — Multiple MCP servers start sequentially to avoid resource conflicts
  • Status Monitoring — Real-time status of all MCP toolsets via API endpoint
  • Separate Storage — Config and catalog servers are stored independently, allowing same IDs in both
  • Runtime Updates — Dynamically add/remove MCP servers from running agents via PATCH API
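The retry-with-backoff behavior can be sketched as follows. This is an illustrative sketch only: `start_with_retry`, `MAX_RETRIES`, and the delay schedule are assumptions, not the actual agent_runtimes internals.

```python
import asyncio

# Illustrative sketch of retry-with-exponential-backoff; the real lifecycle
# manager's constants and function names may differ.
MAX_RETRIES = 3
BASE_DELAY = 1.0  # seconds


def backoff_delays(retries: int = MAX_RETRIES, base: float = BASE_DELAY) -> list[float]:
    """Delay before each retry: base, 2*base, 4*base, ..."""
    return [base * (2 ** attempt) for attempt in range(retries)]


async def start_with_retry(start_server, server_id: str, base: float = BASE_DELAY):
    """Try to start an MCP server, retrying transient failures with backoff."""
    last_error = None
    for delay in backoff_delays(base=base):
        try:
            return await start_server(server_id)
        except Exception as exc:  # e.g. a BrokenResourceError from a crashed process
            last_error = exc
            await asyncio.sleep(delay)
    raise last_error
```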

MCP Server Examples

Agent Runtimes supports any MCP-compatible server—if it follows the MCP specification, it will work. The table below shows a few popular examples to get you started:

| Server | URL | Type | Description |
| --- | --- | --- | --- |
| Tavily | docs | Remote | Web search and content extraction |
| Filesystem | modelcontextprotocol/servers | Local | File system access |
| GitHub | github/github-mcp-server | Local | GitHub repository access |
| Google Workspace | taylorwilsdon/google_workspace_mcp | Local | Google Workspace (Gmail, Gdrive, etc.) access |
| Slack | datalayer/slack-mcp-server | Local | Slack workspace access |
| Kaggle | docs | Remote | Kaggle datasets, models, competitions, notebooks |
| AlphaVantage | docs | Local | Financial market data |
| Chart | antvis/mcp-server-chart | Local | Charting and visualization |
| Brave Search | modelcontextprotocol/servers | Local | Web search |
| LinkedIn | stickerdaniel/linkedin-mcp-server | Local | LinkedIn profile, company, and job data |

Local vs Remote MCP Servers
  • Local servers run as child processes on your machine (started via npx or uvx)
  • Remote servers are hosted externally and accessed over HTTP (e.g., Kaggle MCP)

Both types are configured in the same mcp.json file, but remote servers use mcp-remote as a bridge.

See the MCP Servers Directory for more options.

Quick Start

Configuring MCP Config Servers

MCP Config servers are configured in ~/.datalayer/mcp.json. These servers start automatically when the agent runtime starts and appear in the agent creation form.

Here's a minimal example:

{
  "mcpServers": {
    "tavily-remote": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://mcp.tavily.com/mcp/?tavilyApiKey=<your-api-key>"
      ]
    }
  }
}

Environment variables are automatically expanded using ${VAR_NAME} syntax.
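A minimal sketch of that expansion (illustrative; the actual agent_runtimes implementation may behave differently for unset variables):

```python
import os
import re

# Replace ${VAR_NAME} occurrences with values from the environment.
# Unset variables are left as-is in this sketch.
_VAR = re.compile(r"\$\{(\w+)\}")


def expand_env(value: str) -> str:
    return _VAR.sub(lambda m: os.environ.get(m.group(1), m.group(0)), value)
```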

Using MCP Tools in Agents

Once configured, agents automatically receive access to all running MCP toolsets:

from pydantic_ai import Agent
from agent_runtimes.mcp import get_mcp_toolsets

# Get pre-loaded MCP toolsets
mcp_toolsets = get_mcp_toolsets()

# Create agent with MCP tools
agent = Agent(
    "anthropic:claude-sonnet-4-20250514",
    system_prompt="You are a helpful assistant.",
    toolsets=mcp_toolsets,
)

Full Configuration Example

Here's a complete configuration with multiple servers:

{
  "mcpServers": {
    "tavily-remote": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://mcp.tavily.com/mcp/?tavilyApiKey=<your-api-key>"
      ]
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
    },
    "linkedin": {
      "command": "uvx",
      "args": [
        "--from",
        "git+https://github.com/stickerdaniel/linkedin-mcp-server",
        "linkedin-mcp-server"
      ]
    },
    "kaggle": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://www.kaggle.com/mcp",
        "--header",
        "Authorization: Bearer <KAGGLE_TOKEN>"
      ]
    }
  }
}

Server-Specific Setup

Tavily MCP Server

The Tavily MCP Server provides web search and content extraction tools.

Configuration

{
  "mcpServers": {
    "tavily-remote": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://mcp.tavily.com/mcp/?tavilyApiKey=<your-api-key>"
      ]
    }
  }
}

Replace <your-api-key> with your Tavily API key, available from your account at https://app.tavily.com/.

Filesystem MCP Server

The Filesystem MCP Server provides tools for interacting with the local filesystem.

Configuration

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
    }
  }
}

GitHub MCP Server

The GitHub MCP Server provides tools for interacting with GitHub repositories.

Configuration

{
  "mcpServers": {
    "github": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "GITHUB_TOKEN",
        "ghcr.io/github/github-mcp-server"
      ],
      "env": {
        "GITHUB_TOKEN": "<GITHUB_TOKEN>"
      }
    }
  }
}

Replace <GITHUB_TOKEN> with a GitHub personal access token with appropriate permissions. You can create one at https://github.com/settings/personal-access-tokens/new. Note that some tools may be disabled if your token has limited permissions (e.g., read-only access).

Google Workspace MCP Server

The Google Workspace MCP Server provides tools for interacting with Google Workspace services like Gmail and Google Drive.

Configuration

{
  "mcpServers": {
    "google-workspace": {
      "command": "uvx",
      "args": ["workspace-mcp"],
      "env": {
        "GOOGLE_OAUTH_CLIENT_ID": "<your-client-id>",
        "GOOGLE_OAUTH_CLIENT_SECRET": "<your-client-secret>"
      }
    }
  }
}

Replace <your-client-id> and <your-client-secret> with your Google OAuth credentials. To set up OAuth credentials:

1. Create OAuth 2.0 Credentials

Visit the Google Cloud Console:

  1. Create a new project (or use an existing one)
  2. Navigate to APIs & Services → Credentials
  3. Click Create Credentials → OAuth Client ID
  4. Choose Desktop Application as the application type (no redirect URIs needed!)
  5. Download the credentials and note the Client ID and Client Secret

2. Enable Required APIs

In APIs & Services → Library, search for and enable the Google Workspace APIs you plan to use (Gmail, Drive, etc.).

3. Configure Environment

Set your OAuth credentials as environment variables:

export GOOGLE_OAUTH_CLIENT_ID="your-client-id"
export GOOGLE_OAUTH_CLIENT_SECRET="your-client-secret"

Slack MCP Server

The Slack MCP Server provides tools for interacting with Slack workspaces.

Configuration

{
  "mcpServers": {
    "slack": {
      "command": "npx",
      "args": ["-y", "@datalayer/slack-mcp-server"],
      "env": {
        "SLACK_BOT_TOKEN": "<your-slack-bot-token>",
        "SLACK_TEAM_ID": "<your-slack-team-id>",
        "SLACK_CHANNEL_IDS": "<your-slack-channel-ids>"
      }
    }
  }
}

To get the credentials, follow the Slack bot setup instructions at https://github.com/zencoderai/slack-mcp-server?tab=readme-ov-file#slack-bot-setup.

Kaggle MCP Server

The Kaggle MCP Server is a remote HTTP server that provides access to Kaggle datasets, models, competitions, notebooks, and benchmarks.

Configuration Options

Option 1: Token Authentication (recommended for Agent Runtimes)

{
  "mcpServers": {
    "kaggle": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://www.kaggle.com/mcp",
        "--header",
        "Authorization: Bearer <KAGGLE_TOKEN>"
      ]
    }
  }
}

To get your token:

  1. Go to kaggle.com/settings/account
  2. Scroll to API section → Click Create New Token
  3. Replace <KAGGLE_TOKEN> with the generated token value

For Agent Runtimes identity integration, see the Kaggle section in the Identity documentation.

Option 2: Browser OAuth (auto-login)

{
  "mcpServers": {
    "kaggle": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://www.kaggle.com/mcp"]
    }
  }
}

This triggers a browser-based login on first tool call. Tokens are cached automatically.

Available Tools

| Category | Description |
| --- | --- |
| Notebooks | Create, run, and manage Kaggle notebooks |
| Datasets | Search, download, and explore datasets |
| Models | Access and use Kaggle models |
| Competitions | Browse competitions, download data, submit predictions |
| Benchmarks | Access Kaggle benchmark tools |

AlphaVantage MCP Server

The AlphaVantage MCP Server provides financial market data tools.

Configuration

{
  "mcpServers": {
    "alphavantage": {
      "command": "uvx",
      "args": ["av-mcp==0.2.1", "<YOUR_API_KEY>"],
      "env": {"MAX_RESPONSE_TOKENS": "100000"}
    }
  }
}

Replace <YOUR_API_KEY> with your AlphaVantage API key, available from https://www.alphavantage.co/support/#api-key.

Chart MCP Server

The Chart MCP Server provides charting and visualization tools.

Configuration

{
  "mcpServers": {
    "chart": {
      "command": "npx",
      "args": ["-y", "@antv/mcp-server-chart"]
    }
  }
}

LinkedIn MCP Server

The LinkedIn MCP server requires browser automation via Playwright.

Setup Steps

Step 1: Install Playwright Chromium

uvx --from playwright playwright install chromium

Step 2: Create a Session File

uvx --from git+https://github.com/stickerdaniel/linkedin-mcp-server linkedin-mcp-server --get-session

This opens a browser window for manual LinkedIn login (handles 2FA, captcha, etc.). You have 5 minutes to complete authentication. The session is saved to ~/.linkedin-mcp/session.json.

Step 3: Add to Configuration

{
  "mcpServers": {
    "linkedin": {
      "command": "uvx",
      "args": [
        "--from",
        "git+https://github.com/stickerdaniel/linkedin-mcp-server",
        "linkedin-mcp-server"
      ]
    }
  }
}

Session Expiration

Sessions may expire over time. If you encounter authentication errors, run --get-session again.

Alternative: Cookie Authentication

You can also authenticate using your li_at cookie:

"env": {
  "LINKEDIN_COOKIE": "${LINKEDIN_COOKIE}"
}

To get the cookie: DevTools (F12) → Application → Cookies → linkedin.com → copy li_at value. However, session file authentication is more reliable.

Architecture

┌─────────────────────────────────────────────────────────────┐
│ FastAPI Application │
│ (agent_runtimes) │
└─────────────────────────────────────────────────────────────┘

↓ Lifespan startup
┌─────────────────────────────────────────────────────────────┐
│ MCP Lifecycle Manager │
│ (agent_runtimes/mcp/lifecycle.py) │
│ ┌────────────────────┐ ┌────────────────────┐ │
│ │ Config Servers │ │ Catalog Servers │ │
│ │ (from mcp.json) │ │ (predefined) │ │
│ │ _config_servers │ │ _catalog_servers │ │
│ └────────────────────┘ └────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
│ │ │
↓ ↓ ↓
┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ Tavily │ │ LinkedIn │ │ Kaggle │
│ MCP Server │ │ MCP Server │ │ MCP Server │
│ (config) │ │ (catalog) │ │ (remote) │
└─────────────┘ └─────────────┘ └─────────────┘

Server Startup Sequence

  1. Load Configuration — Read ~/.datalayer/mcp.json and expand environment variables
  2. Sequential Start — Start each MCP server one at a time to avoid resource conflicts
  3. Separate Storage — Config servers go to _config_servers, catalog servers to _catalog_servers
  4. Retry Logic — If a server fails with BrokenResourceError, retry up to 3 times with backoff
  5. Tool Discovery — Once connected, list available tools from each server (prefixed with server ID)
  6. Status Tracking — Track ready/failed servers for monitoring

MCP Tool Proxy

The MCP Tool Proxy provides an HTTP endpoint that allows remote code execution environments (like Jupyter kernels in separate containers) to call MCP tools running as stdio subprocesses in the agent-runtimes container.

Two-Container Architecture

In production deployments (e.g., Kubernetes), agent-runtimes and Jupyter often run in separate containers:

┌─────────────────────────────────────────────────────────────────────────────┐
│ Pod │
│ ┌────────────────────────────────┐ ┌────────────────────────────────┐ │
│ │ agent-runtimes :8765 │ │ jupyter :2300 │ │
│ │ ┌──────────────────────────┐ │ │ ┌──────────────────────────┐ │ │
│ │ │ MCP Servers (stdio) │ │ │ │ Jupyter Kernel │ │ │
│ │ │ - github │ │ │ │ - Executes Python code │ │ │
│ │ │ - filesystem │◀─┼────┼──│ - Calls tools via HTTP │ │ │
│ │ │ - tavily │ │HTTP│ │ │ │ │
│ │ └──────────────────────────┘ │ │ └──────────────────────────┘ │ │
│ │ ┌──────────────────────────┐ │ │ │ │
│ │ │ /api/v1/mcp/proxy/* │ │ │ Shared Volume: │ │
│ │ │ HTTP proxy for tools │ │ │ /mnt/shared-agent/ │ │
│ │ └──────────────────────────┘ │ │ ├── generated/ (bindings) │ │
│ └────────────────────────────────┘ │ └── skills/ (user code) │ │
│ └────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────────────┘

The problem: Jupyter kernel cannot directly access stdio MCP servers running in another container.

The solution: MCP Tool Proxy exposes MCP tools via HTTP, allowing the Jupyter kernel to call tools through REST API.

How It Works

  1. Agent-runtimes starts MCP servers as stdio subprocesses
  2. MCP Proxy endpoint receives HTTP requests from Jupyter kernel
  3. Routes the request to the appropriate stdio MCP server
  4. Returns the result as JSON
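The routing step can be sketched as a pure function (hypothetical names; the real endpoint is implemented inside agent_runtimes). Here `servers` maps server names to callables standing in for stdio MCP clients:

```python
from typing import Callable

# Dispatch a proxied tool call to the matching stdio MCP server and wrap
# the outcome in the response shape documented below.
def route_tool_call(
    servers: dict[str, Callable[[str, dict], str]],
    server_name: str,
    tool_name: str,
    arguments: dict,
) -> dict:
    server = servers.get(server_name)
    if server is None:
        return {"success": False, "result": f"Unknown server: {server_name}", "is_error": True}
    try:
        return {"success": True, "result": server(tool_name, arguments), "is_error": False}
    except Exception as exc:
        return {"success": False, "result": str(exc), "is_error": True}
```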

API Endpoints

Call a Tool via Proxy

POST /api/v1/mcp/proxy/{server_name}/tools/{tool_name}
Content-Type: application/json

{
  "arguments": {
    "owner": "datalayer",
    "repo": "agent-runtimes"
  }
}

Response:

{
  "success": true,
  "result": "Repository starred successfully",
  "is_error": false
}
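Client code can interpret that response shape like this (a sketch based only on the fields shown above; `unwrap_proxy_response` is a hypothetical helper):

```python
# Raise on failure, otherwise return the tool result string.
def unwrap_proxy_response(response: dict) -> str:
    if not response.get("success") or response.get("is_error"):
        raise RuntimeError(str(response.get("result", "tool call failed")))
    return response["result"]
```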

List Available Proxy Servers

GET /api/v1/mcp/proxy/servers

{
  "servers": ["github", "filesystem", "tavily"],
  "count": 3
}

Health Check

GET /api/v1/mcp/proxy/health

{
  "status": "healthy",
  "servers_available": 3
}

Configuration

The MCP proxy URL is configured via environment variable or API:

| Variable | Default | Description |
| --- | --- | --- |
| AGENT_RUNTIMES_MCP_PROXY_URL | http://0.0.0.0:8765/api/v1/mcp/proxy | MCP proxy endpoint URL |

When configuring a Jupyter sandbox, the proxy URL is automatically set:

# In runtimes-companion (Kubernetes)
payload["jupyter_sandbox"] = "http://0.0.0.0:2300?token=xxx"
payload["mcp_proxy_url"] = "http://0.0.0.0:8765/api/v1/mcp/proxy"

Codemode Integration

When using Codemode with a Jupyter sandbox, the executor automatically uses HTTP proxy mode:

# CodeModeConfig with mcp_proxy_url
config = CodeModeConfig(
    mcp_proxy_url="http://0.0.0.0:8765/api/v1/mcp/proxy"
)

# Generated code in Jupyter kernel calls tools via HTTP:
# from generated.mcp.github import star_repo
# result = await star_repo(owner="datalayer", repo="ui")
# ↓ internally becomes ↓
# POST http://0.0.0.0:8765/api/v1/mcp/proxy/github/tools/star_repo
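A generated binding along those lines might look like the following sketch. The actual code Codemode generates may differ; `build_tool_request` is a hypothetical helper that only assembles the URL and JSON body shown above.

```python
import json

MCP_PROXY_URL = "http://0.0.0.0:8765/api/v1/mcp/proxy"


# Build the HTTP request a generated binding would send for one tool call.
def build_tool_request(server: str, tool: str, arguments: dict) -> tuple[str, bytes]:
    url = f"{MCP_PROXY_URL}/{server}/tools/{tool}"
    body = json.dumps({"arguments": arguments}).encode()
    return url, body

# A binding like star_repo(owner=..., repo=...) would then POST `body` to
# `url` with Content-Type: application/json, using any HTTP client.
```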

When to Use MCP Proxy

| Scenario | Use MCP Proxy? |
| --- | --- |
| Local development (single process) | No - direct stdio works |
| Local Jupyter sandbox | Yes - recommended for consistency |
| Kubernetes with separate containers | Yes - required |
| Docker Compose with separate services | Yes - required |

API Endpoints

MCP Config Servers (from mcp.json)

List MCP Config Servers

GET /api/v1/mcp/servers/config

Returns only servers from ~/.datalayer/mcp.json that are currently running:

[
  {
    "id": "tavily",
    "name": "Tavily Search",
    "enabled": true,
    "isRunning": true,
    "isAvailable": true,
    "isConfig": true,
    "tools": [
      {
        "name": "tavily-search",
        "description": "Search the web using Tavily"
      }
    ]
  }
]
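Consuming that response, e.g. to collect tool names per running server, is straightforward (`tools_by_server` is an illustrative helper, not part of the API):

```python
# Map each running config server to its tool names.
def tools_by_server(servers: list[dict]) -> dict[str, list[str]]:
    return {s["id"]: [t["name"] for t in s.get("tools", [])] for s in servers}
```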

MCP Catalog Servers (predefined)

List Catalog Servers

GET /api/v1/mcp/servers/catalog

Returns all predefined catalog servers (whether running or not):

[
  {
    "id": "tavily",
    "name": "Tavily Search",
    "description": "Web search and research capabilities",
    "isAvailable": false,
    "requiredEnvVars": ["TAVILY_API_KEY"]
  }
]

Enable a Catalog Server

POST /api/v1/mcp/servers/catalog/{server_name}/enable

Starts an MCP server from the catalog for the current session.

Disable a Catalog Server

DELETE /api/v1/mcp/servers/catalog/{server_name}/disable

Stops an MCP server and removes it from the current session.

Status and Info

Get MCP Toolsets Status

GET /api/v1/configure/mcp-toolsets-status

{
  "initialized": true,
  "ready_count": 2,
  "failed_count": 0,
  "ready_servers": ["tavily", "kaggle"],
  "failed_servers": {}
}
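A simple readiness check against that status payload could look like this (a sketch using only the documented fields):

```python
# True when toolsets are initialized and no server has failed.
def toolsets_ready(status: dict) -> bool:
    return bool(status.get("initialized")) and status.get("failed_count", 0) == 0
```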

Get MCP Toolsets Info

GET /api/v1/configure/mcp-toolsets-info

[
  {
    "type": "MCPServerStdio",
    "id": "tavily",
    "command": "npx",
    "args": ["-y", "tavily-mcp@0.1.3"]
  }
]

General Server Management

| Endpoint | Method | Description |
| --- | --- | --- |
| /api/v1/mcp/servers | GET | List all running MCP servers |
| /api/v1/mcp/servers/{id} | GET | Get specific MCP server details |
| /api/v1/mcp/servers | POST | Add a new MCP server |
| /api/v1/mcp/servers/{id} | PUT | Update an MCP server |
| /api/v1/mcp/servers/{id} | DELETE | Remove an MCP server |
| /api/v1/agents/{id}/mcp-servers | PATCH | Update agent's MCP servers at runtime |

Runtime MCP Server Updates

You can dynamically update which MCP servers an agent uses without recreating the agent:

PATCH /api/v1/agents/{agent_id}/mcp-servers
Content-Type: application/json

{
  "selected_mcp_servers": ["tavily", "filesystem"]
}
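From Python, the request can be assembled like this; `build_mcp_update` is a hypothetical helper that only reproduces the path and body documented above, and the result can be sent with any HTTP client:

```python
import json

# Build the path and JSON body for the runtime MCP-server update.
def build_mcp_update(agent_id: str, selected: list[str]) -> tuple[str, str]:
    path = f"/api/v1/agents/{agent_id}/mcp-servers"
    body = json.dumps({"selected_mcp_servers": selected})
    return path, body
```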

This is useful for:

  • Adding tools to a running agent based on user needs
  • Removing unused tools to reduce context size
  • Switching between different MCP server configurations

UI Integration

The Agent Details panel in the chat UI displays real-time MCP toolsets status:

  • ✓ Ready servers with green checkmarks
  • ✗ Failed servers with error details
  • Auto-refresh every 5 seconds

Troubleshooting

Debug Logging

Enable debug logging to see detailed MCP startup information:

python -m agent_runtimes --debug

Common Issues

| Issue | Cause | Solution |
| --- | --- | --- |
| Timeout during startup | First-time package downloads can take minutes | Wait for the download to complete; default timeout is 5 minutes |
| BrokenResourceError | MCP server process crashed | Automatic retry (up to 3 times); check server logs |
| Server not starting | Missing command or env vars | Verify npx/uvx exists; check environment variables |
| LinkedIn browser error | Playwright not installed | Run uvx --from playwright playwright install chromium |
| Kaggle permission error | Not authenticated | Complete OAuth flow or set KAGGLE_TOKEN |