
🤖 🚀 Agent Runtimes

Agent Runtimes is a flexible framework for building and deploying AI agents with multiple transport protocols, model providers, and MCP integrations.

Package Scope

Agent Runtimes is the top-level orchestration layer in the Datalayer AI stack:

┌─────────────────────────────────────────────────────────────┐
│                       agent-runtimes                        │ ◀── You are here
│               (Agent hosting, protocols, UI)                │
├──────────────────────────┬──────────────────────────────────┤
│      agent-codemode      │           agent-skills           │
│   (discovery, codegen)   │       (skills management)        │
├──────────────────────────┴──────────────────────────────────┤
│                       code-sandboxes                        │
│              (Safe code execution environment)              │
└─────────────────────────────────────────────────────────────┘

Responsibilities:

  • ✅ Agent hosting and lifecycle management
  • ✅ Multiple transport protocols (AG-UI, Vercel AI, ACP, A2A)
  • ✅ Model provider integration (Anthropic, OpenAI, Azure, Bedrock)
  • ✅ MCP server management and tool routing
  • ✅ React UI components (ChatBase, ChatSidebar, ChatFloating)
  • ✅ Extensions (A2UI, MCP-UI, MCP Apps)
  • ✅ Integration layer for agent-codemode and agent-skills

Not Responsible For:

  • ❌ MCP tool binding generation (→ agent-codemode)
  • ❌ Skill CRUD and lifecycle (→ agent-skills)
  • ❌ Raw code execution (→ code-sandboxes)

Overview

Agent Runtimes provides:

  • 🔌 Multiple Transport Protocols — Connect via AG-UI, Vercel AI, ACP (WebSocket), or A2A for agent-to-agent communication
  • 🤖 Multi-Provider Model Support — Use models from Anthropic, OpenAI, Azure OpenAI, or AWS Bedrock
  • 🛠️ MCP Integration — Connect to Model Context Protocol servers for extended capabilities
  • 📡 Streaming Responses — Real-time streaming for responsive chat experiences
  • 🔄 Per-Request Model Selection — Switch models dynamically without restarting agents
  • 🎨 Ready-to-Use UI Components — React components for building chat interfaces
  • 🧩 Extensions — A2UI, MCP-UI, and MCP Apps support for rich UI experiences
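
Per-request model selection from the list above typically means the client names a model in each prompt request. A hypothetical request body might look like the following; the `model` field name and the `provider:model` identifier format are assumptions for illustration, not the documented API:

```json
{
  "prompt": "Summarize the latest run",
  "model": "anthropic:claude-sonnet-4"
}
```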

Integration with Other Packages

Agent Runtimes provides an integration layer for the other packages:

from agent_runtimes.integrations.codemode import CodemodeIntegration

# Initialize with agent-runtimes MCP infrastructure
integration = CodemodeIntegration()
await integration.setup()

# Access agent-codemode features
result = await integration.execute_code('''
from generated.mcp.filesystem import read_file
content = await read_file({"path": "/data.txt"})
print(content)
''')

# Access agent-skills features
skills = integration.list_skills()
result = await integration.execute_skill("data-analyzer", {"path": "/data.csv"})

Quick Start

# Install
pip install agent-runtimes

# Start the server
python -m agent_runtimes

Agent Specs (agentspecs)

Agent Runtimes pulls agent, MCP server, skill, and env var specs from the agentspecs repository. The specs are cloned locally and used to generate the runtime catalogs.

See the Agent Specs guide for details on cloning and generation.

# Clone or update agentspecs and regenerate catalogs
make specs

This command:

  • Clones the agentspecs repository into ./agentspecs (or pulls updates)
  • Generates Python and TypeScript catalogs for agents, MCP servers, skills, and env vars

If you already have agentspecs checked out elsewhere, copy or symlink it to ./agentspecs before running make specs.

Configure your model provider:

# Choose one (or more) providers
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."
export AZURE_OPENAI_API_KEY="..."
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com"
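
A small sketch of how a startup check might decide which of the providers above are usable, mirroring the environment variables shown. The required-key mapping and precedence are illustrative, not what Agent Runtimes actually does internally:

```python
# Detect which model providers have all of their required env vars set.
# The PROVIDER_ENV_KEYS mapping below is an assumption for illustration.
import os

PROVIDER_ENV_KEYS = {
    "anthropic": ["ANTHROPIC_API_KEY"],
    "openai": ["OPENAI_API_KEY"],
    "azure": ["AZURE_OPENAI_API_KEY", "AZURE_OPENAI_ENDPOINT"],
}

def configured_providers(env=None):
    """Return providers whose required env vars are all non-empty."""
    env = os.environ if env is None else env
    return [
        name for name, keys in PROVIDER_ENV_KEYS.items()
        if all(env.get(key) for key in keys)
    ]

# Example with a fake environment: Azure is incomplete (no endpoint),
# so only OpenAI counts as configured.
fake_env = {"OPENAI_API_KEY": "sk-...", "AZURE_OPENAI_API_KEY": "..."}
print(configured_providers(fake_env))  # -> ['openai']
```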

Key Components

API Endpoints

Agent Runtimes exposes a comprehensive REST API:

| Endpoint | Description |
| --- | --- |
| `POST /api/v1/agents/{id}/prompt` | Send prompts with streaming response |
| `GET /api/v1/agents` | List all available agents |
| `GET /api/v1/configure/mcp-toolsets-status` | Check MCP server status |
| `GET /api/v1/configure/config` | Get system configuration |

See the API Endpoints documentation for the full API reference.
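
The prompt endpoint streams its response; a client consumes it as a sequence of events. Below is a stdlib-only sketch of parsing a Server-Sent Events (SSE) stream. The `{"type": ..., "text": ...}` payload shape is hypothetical, not the documented wire format:

```python
# Parse `data:` payloads from an SSE stream; a blank line ends each event.
import io
import json

def iter_sse_events(lines):
    """Yield JSON-decoded `data:` payloads from an iterable of SSE lines."""
    data_lines = []
    for raw in lines:
        line = raw.rstrip("\n")
        if line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "" and data_lines:
            yield json.loads("\n".join(data_lines))
            data_lines = []

# Simulated stream (in practice this would be the HTTP response body).
sample = io.StringIO(
    'data: {"type": "text-delta", "text": "Hello"}\n\n'
    'data: {"type": "text-delta", "text": ", world"}\n\n'
)
chunks = [event["text"] for event in iter_sse_events(sample)]
print("".join(chunks))  # -> Hello, world
```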

MCP Integration

Connect to MCP servers for tools like web search, file access, and more:

mcp_servers:
  - name: tavily
    command: uvx
    args: ["mcp-server-tavily"]
    env:
      TAVILY_API_KEY: "${TAVILY_API_KEY}"

MCP servers are managed with:

  • Automatic retry — 3 attempts with exponential backoff
  • Health monitoring — Status endpoint for checking server readiness
  • Graceful shutdown — Clean resource management on exit

See the MCP Servers documentation for configuration details.
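
The retry behavior described above (3 attempts with exponential backoff) follows a standard pattern. A minimal stdlib sketch; the delays and exception type are illustrative, not the actual Agent Runtimes implementation:

```python
# Retry a server-start callable, doubling the delay after each failure.
import time

def start_with_retry(start_fn, attempts=3, base_delay=0.1):
    """Call start_fn, retrying on failure with exponential backoff."""
    for attempt in range(1, attempts + 1):
        try:
            return start_fn()
        except RuntimeError:
            if attempt == attempts:
                raise  # out of attempts: propagate the failure
            time.sleep(base_delay * (2 ** (attempt - 1)))  # 0.1s, 0.2s, ...

# Simulate an MCP server that fails twice before starting.
calls = {"n": 0}
def flaky_start():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("server not ready")
    return "started"

print(start_with_retry(flaky_start))  # prints "started" after two failures
```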

Extensions

| Extension | Purpose |
| --- | --- |
| A2UI | Agent-to-UI bidirectional communication |
| MCP-UI | Browse and execute MCP tools |
| MCP Apps | Full application experiences via MCP |

See the Extensions documentation for integration guides.

Architecture

┌─────────────────────────────────────────────────────────────┐
│                      Frontend (React)                       │
│         ChatBase, Protocol Adapters, UI Components          │
└─────────────────────────────────────────────────────────────┘
                              │
        ┌─────────────────────┼─────────────────────┐
        ↓                     ↓                     ↓
┌───────────────┐     ┌───────────────┐     ┌───────────────┐
│     AG-UI     │     │   Vercel AI   │     │      ACP      │
│   Transport   │     │   Transport   │     │   Transport   │
└───────────────┘     └───────────────┘     └───────────────┘
        │                     │                     │
        └─────────────────────┼─────────────────────┘
                              │
┌─────────────────────────────────────────────────────────────┐
│                       Agent Framework                       │
│           Pydantic AI (+ more based on feedback)            │
└─────────────────────────────────────────────────────────────┘
                              │
             ┌────────────────┴────────────────┐
             ↓                                 ↓
┌─────────────────────────┐       ┌─────────────────────────┐
│     Model Providers     │       │       MCP Servers       │
│ Anthropic, OpenAI, etc. │       │  Tavily, Fetch, Custom  │
└─────────────────────────┘       └─────────────────────────┘

Built on Pydantic AI

Agent Runtimes is currently built on top of Pydantic AI, a powerful Python agent framework that provides:

  • Type-safe agents — Full type checking with Pydantic models
  • Structured outputs — Reliable JSON responses from LLMs
  • Tool calling — First-class support for function tools and MCP
  • Multi-model support — Anthropic, OpenAI, Google, and more

Community-Driven Expansion

We've chosen Pydantic AI as our initial foundation, but we're open to expanding support for other agent frameworks based on community feedback. If you'd like to see support for Google ADK, LangChain, CrewAI, or other frameworks, please open a discussion or contribute!

Features at a Glance

| Feature | Description |
| --- | --- |
| Transports | AG-UI, Vercel AI, ACP (WebSocket), A2A |
| Model Providers | Anthropic, OpenAI, Azure OpenAI, AWS Bedrock |
| Agent Framework | Pydantic AI (more frameworks based on community feedback) |
| MCP Servers | Tavily, Fetch, custom servers |
| Extensions | A2UI, MCP-UI, MCP Apps |
| Streaming | Real-time SSE and WebSocket streaming |
| UI | React components with Primer design system |

Documentation