# 🤖 🚀 Agent Runtimes
Agent Runtimes is a flexible framework for building and deploying AI agents with multiple transport protocols, model providers, and MCP integrations.
## Package Scope
Agent Runtimes is the top-level orchestration layer in the Datalayer AI stack:
```
┌─────────────────────────────────────────────────────────────┐
│                       agent-runtimes                        │ ◀── You are here
│                 (Agent hosting, protocols, UI)              │
├──────────────────────────┬──────────────────────────────────┤
│      agent-codemode      │           agent-skills           │
│   (discovery, codegen)   │        (skills management)       │
├──────────────────────────┴──────────────────────────────────┤
│                       code-sandboxes                        │
│              (Safe code execution environment)              │
└─────────────────────────────────────────────────────────────┘
```
**Responsibilities:**
- ✅ Agent hosting and lifecycle management
- ✅ Multiple transport protocols (AG-UI, Vercel AI, ACP, A2A)
- ✅ Model provider integration (Anthropic, OpenAI, Azure, Bedrock)
- ✅ MCP server management and tool routing
- ✅ React UI components (ChatBase, ChatSidebar, ChatFloating)
- ✅ Extensions (A2UI, MCP-UI, MCP Apps)
- ✅ Integration layer for agent-codemode and agent-skills
**Not Responsible For:**
- ❌ MCP tool binding generation (→ agent-codemode)
- ❌ Skill CRUD and lifecycle (→ agent-skills)
- ❌ Raw code execution (→ code-sandboxes)
## Overview
Agent Runtimes provides:
- 🔌 Multiple Transport Protocols — Connect via AG-UI, Vercel AI, ACP (WebSocket), or A2A for agent-to-agent communication
- 🤖 Multi-Provider Model Support — Use models from Anthropic, OpenAI, Azure OpenAI, or AWS Bedrock
- 🛠️ MCP Integration — Connect to Model Context Protocol servers for extended capabilities
- 📡 Streaming Responses — Real-time streaming for responsive chat experiences
- 🔄 Per-Request Model Selection — Switch models dynamically without restarting agents
- 🎨 Ready-to-Use UI Components — React components for building chat interfaces
- 🧩 Extensions — A2UI, MCP-UI, and MCP Apps support for rich UI experiences
## Integration with Other Packages
Agent Runtimes provides an integration layer for the other packages:
```python
from agent_runtimes.integrations.codemode import CodemodeIntegration

# Initialize with agent-runtimes MCP infrastructure
integration = CodemodeIntegration()
await integration.setup()

# Access agent-codemode features
result = await integration.execute_code('''
from generated.mcp.filesystem import read_file

content = await read_file({"path": "/data.txt"})
print(content)
''')

# Access agent-skills features
skills = integration.list_skills()
result = await integration.execute_skill("data-analyzer", {"path": "/data.csv"})
```
## Quick Start
```bash
# Install
pip install agent-runtimes

# Start the server
python -m agent_runtimes
```
## Agent Specs (agentspecs)
Agent Runtimes pulls agent, MCP server, skill, and env var specs from the agentspecs repository. The specs are cloned locally and used to generate the runtime catalogs.
See the Agent Specs guide for details on cloning and generation.
```bash
# Clone or update agentspecs and regenerate catalogs
make specs
```
This command:
- Clones the agentspecs repository into ./agentspecs (or pulls updates)
- Generates Python and TypeScript catalogs for agents, MCP servers, skills, and env vars
If you already have agentspecs checked out elsewhere, copy or symlink it to ./agentspecs before running make specs.
Configure your model provider:
```bash
# Choose one (or more) providers
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."
export AZURE_OPENAI_API_KEY="..."
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com"
```
## Key Components
### API Endpoints
Agent Runtimes exposes a comprehensive REST API:
| Endpoint | Description |
|---|---|
| `POST /api/v1/agents/{id}/prompt` | Send prompts with streaming response |
| `GET /api/v1/agents` | List all available agents |
| `GET /api/v1/configure/mcp-toolsets-status` | Check MCP server status |
| `GET /api/v1/configure/config` | Get system configuration |
See the API Endpoints documentation for the full API reference.
### MCP Integration
Connect to MCP servers for tools like web search, file access, and more:
```yaml
mcp_servers:
  - name: tavily
    command: uvx
    args: ["mcp-server-tavily"]
    env:
      TAVILY_API_KEY: "${TAVILY_API_KEY}"
```
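The `"${TAVILY_API_KEY}"` value above is resolved from the environment at startup. A minimal sketch of that kind of placeholder expansion (not the actual Agent Runtimes implementation) looks like this:

```python
import os
import re

# Matches ${VAR_NAME} placeholders in config values.
_PLACEHOLDER = re.compile(r"\$\{([A-Za-z0-9_]+)\}")

def expand_env(value: str) -> str:
    """Replace ${VAR} placeholders with environment variable values."""
    return _PLACEHOLDER.sub(lambda m: os.environ.get(m.group(1), ""), value)
```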
MCP servers are managed with:
- Automatic retry — 3 attempts with exponential backoff
- Health monitoring — Status endpoint for checking server readiness
- Graceful shutdown — Clean resource management on exit
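The retry policy above (3 attempts with exponential backoff) can be sketched as follows; this is an illustration of the strategy, not the runtime's internal code:

```python
import asyncio

async def start_with_retry(start, attempts: int = 3, base_delay: float = 1.0):
    """Retry an async start() call with exponential backoff (1s, 2s, 4s, ...)."""
    for attempt in range(attempts):
        try:
            return await start()
        except Exception:
            if attempt == attempts - 1:
                raise  # all attempts exhausted; surface the failure
            await asyncio.sleep(base_delay * 2 ** attempt)
```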
See the MCP Servers documentation for configuration details.
### Extensions
| Extension | Purpose |
|---|---|
| A2UI | Agent-to-UI bidirectional communication |
| MCP-UI | Browse and execute MCP tools |
| MCP Apps | Full application experiences via MCP |
See the Extensions documentation for integration guides.
## Architecture
```
┌─────────────────────────────────────────────────────────────┐
│                      Frontend (React)                       │
│         ChatBase, Protocol Adapters, UI Components          │
└─────────────────────────────────────────────────────────────┘
                            │
      ┌─────────────────────┼─────────────────────┐
      ↓                     ↓                     ↓
┌───────────────┐   ┌───────────────┐   ┌───────────────┐
│     AG-UI     │   │   Vercel AI   │   │      ACP      │
│   Transport   │   │   Transport   │   │   Transport   │
└───────────────┘   └───────────────┘   └───────────────┘
      │                     │                     │
      └─────────────────────┼─────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────────┐
│                       Agent Framework                       │
│             Pydantic AI (+ more based on feedback)          │
└─────────────────────────────────────────────────────────────┘
                            │
        ┌───────────────────┼───────────────────┐
        ↓                                       ↓
┌─────────────────────────┐       ┌─────────────────────────┐
│     Model Providers     │       │       MCP Servers       │
│ Anthropic, OpenAI, etc. │       │  Tavily, Fetch, Custom  │
└─────────────────────────┘       └─────────────────────────┘
```
## Built on Pydantic AI
Agent Runtimes is currently built on top of Pydantic AI, a powerful Python agent framework that provides:
- Type-safe agents — Full type checking with Pydantic models
- Structured outputs — Reliable JSON responses from LLMs
- Tool calling — First-class support for function tools and MCP
- Multi-model support — Anthropic, OpenAI, Google, and more
We've chosen Pydantic AI as our initial foundation, but we're open to expanding support for other agent frameworks based on community feedback. If you'd like to see support for Google ADK, LangChain, CrewAI, or other frameworks, please open a discussion or contribute!
## Features at a Glance
| Feature | Description |
|---|---|
| Transports | AG-UI, Vercel AI, ACP (WebSocket), A2A |
| Model Providers | Anthropic, OpenAI, Azure OpenAI, AWS Bedrock |
| Agent Framework | Pydantic AI (more frameworks based on community feedback) |
| MCP Servers | Tavily, Fetch, custom servers |
| Extensions | A2UI, MCP-UI, MCP Apps |
| Streaming | Real-time SSE and WebSocket streaming |
| UI | React components with Primer design system |
## Documentation
📄️ Transports
Agent Runtimes supports multiple transport protocols for communicating with AI agents. Each transport has different characteristics suited for various use cases.
📄️ Identity
AI agents that act on behalf of users need secure identity and authorization mechanisms to access external services like GitHub, Gmail, Kaggle, or enterprise APIs. This section describes the identity strategy for Agent Runtimes.
📄️ Models
Agent Runtimes supports multiple AI model providers through pydantic-ai. Models are configured via environment variables and can be selected per-request (except for A2A protocol).
📄️ MCP Servers
Agent Runtimes provides comprehensive support for MCP Servers, enabling agents to access external tools and data sources through a standardized interface.
📄️ Agent Specs
Agent Runtimes uses the agentspecs repository as the source of agent, MCP server, skill, and env var specs.
📄️ Programmatic Tools
📄️ Extensions
Agent Runtimes supports several extension protocols that enable rich user interfaces and inter-agent communication.
📄️ Hooks
Agent Runtimes provides a comprehensive set of React hooks for building AI agent interfaces. These hooks are organized into three categories based on their purpose.
📄️ Integrations
Agent Runtimes provides a flexible agent architecture built on top of Pydantic AI.
📄️ CLI
The agent-runtimes package provides a command-line interface for starting and managing the Agent Runtimes server.
📄️ API Endpoints
Agent Runtimes exposes a comprehensive REST API for managing agents, executing prompts, and monitoring system status.