Agent Runtimes
Agent Runtimes is a flexible framework for building and deploying AI agents with multiple transport protocols, model providers, and MCP integrations.
Overview
Agent Runtimes provides:
- 🔌 Multiple Transport Protocols — Connect via AG-UI, Vercel AI, ACP (WebSocket), or A2A for agent-to-agent communication
- 🤖 Multi-Provider Model Support — Use models from Anthropic, OpenAI, Azure OpenAI, or AWS Bedrock
- 🛠️ MCP Integration — Connect to Model Context Protocol servers for extended capabilities
- 📡 Streaming Responses — Real-time streaming for responsive chat experiences
- 🔄 Per-Request Model Selection — Switch models dynamically without restarting agents
- 🎨 Ready-to-Use UI Components — React components for building chat interfaces
- 🧩 Extensions — A2UI, MCP-UI, and MCP Apps support for rich UI experiences
Quick Start
```bash
# Install
pip install agent-runtimes

# Start the server
python -m agent_runtimes
```
Configure your model provider:
```bash
# Choose one (or more) providers
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."
export AZURE_OPENAI_API_KEY="..."
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com"
```
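Once the server is up, a quick sanity check is to list the registered agents. A minimal sketch in Python using the `requests` library, assuming the server listens on `localhost:8000` (adjust the base URL to your deployment):

```python
import requests

# Assumed default address; change this if the server runs elsewhere.
BASE_URL = "http://localhost:8000"

# List all available agents (GET /api/v1/agents).
resp = requests.get(f"{BASE_URL}/api/v1/agents")
resp.raise_for_status()
print(resp.json())
```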
Key Components
API Endpoints
Agent Runtimes exposes a comprehensive REST API:
| Endpoint | Description |
|---|---|
| `POST /api/v1/agents/{id}/prompt` | Send prompts with streaming responses |
| `GET /api/v1/agents` | List all available agents |
| `GET /api/v1/configure/mcp-toolsets-status` | Check MCP server status |
| `GET /api/v1/configure/config` | Get system configuration |
See the Endpoints documentation for the full API reference.
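As an illustration, a prompt can be sent and streamed back with any HTTP client. The sketch below assumes an agent id of `default`, a JSON body with `prompt` and an optional `model` field for per-request model selection, and a line-oriented stream; the exact request and stream formats are specified in the Endpoints documentation:

```python
import requests

BASE_URL = "http://localhost:8000"  # assumed default address
AGENT_ID = "default"                # hypothetical id; use one returned by GET /api/v1/agents

# Illustrative request body; the field names are assumptions, not the exact schema.
payload = {
    "prompt": "Summarize the configured MCP tools.",
    "model": "anthropic:claude-sonnet-4-5",  # optional per-request model override
}

# POST /api/v1/agents/{id}/prompt returns a streaming response.
with requests.post(
    f"{BASE_URL}/api/v1/agents/{AGENT_ID}/prompt",
    json=payload,
    stream=True,
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        if line:
            print(line)  # wire format depends on the chosen transport
```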
MCP Integration
Connect to MCP servers for tools like web search, file access, and more:
```yaml
mcp_servers:
  - name: tavily
    command: uvx
    args: ["mcp-server-tavily"]
    env:
      TAVILY_API_KEY: "${TAVILY_API_KEY}"
```
MCP servers are managed with:
- Automatic retry — 3 attempts with exponential backoff
- Health monitoring — Status endpoint for checking server readiness
- Graceful shutdown — Clean resource management on exit
See the MCP documentation for configuration details.
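To confirm that the configured servers came up (and that any retries succeeded), you can poll the status endpoint listed above. Another small sketch, again assuming `localhost:8000` and leaving the response shape unspecified:

```python
import requests

BASE_URL = "http://localhost:8000"  # assumed default address

# GET /api/v1/configure/mcp-toolsets-status reports MCP server readiness.
status = requests.get(f"{BASE_URL}/api/v1/configure/mcp-toolsets-status")
status.raise_for_status()
print(status.json())
```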
Extensions
| Extension | Purpose |
|---|---|
| A2UI | Agent-to-UI bidirectional communication |
| MCP-UI | Browse and execute MCP tools |
| MCP Apps | Full application experiences via MCP |
See the Extensions documentation for integration guides.
Architecture
```
┌─────────────────────────────────────────────────────────────┐
│                      Frontend (React)                        │
│         ChatBase, Protocol Adapters, UI Components           │
└─────────────────────────────────────────────────────────────┘
                               │
          ┌────────────────────┼────────────────────┐
          ↓                    ↓                    ↓
  ┌───────────────┐    ┌───────────────┐    ┌───────────────┐
  │     AG-UI     │    │   Vercel AI   │    │      ACP      │
  │   Transport   │    │   Transport   │    │   Transport   │
  └───────────────┘    └───────────────┘    └───────────────┘
          │                    │                    │
          └────────────────────┼────────────────────┘
                               ↓
┌─────────────────────────────────────────────────────────────┐
│                       Agent Framework                        │
│           Pydantic AI (+ more based on feedback)             │
└─────────────────────────────────────────────────────────────┘
                               │
             ┌─────────────────┼─────────────────┐
             ↓                                   ↓
┌─────────────────────────┐         ┌─────────────────────────┐
│     Model Providers     │         │       MCP Servers       │
│ Anthropic, OpenAI, etc. │         │  Tavily, Fetch, Custom  │
└─────────────────────────┘         └─────────────────────────┘
```
Built on Pydantic AI
Agent Runtimes is currently built on top of Pydantic AI, a powerful Python agent framework that provides:
- Type-safe agents — Full type checking with Pydantic models
- Structured outputs — Reliable JSON responses from LLMs
- Tool calling — First-class support for function tools and MCP
- Multi-model support — Anthropic, OpenAI, Google, and more
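As a rough illustration of what this foundation looks like, here is a minimal standalone Pydantic AI sketch (not Agent Runtimes code) showing a typed agent with structured output; note that parameter and attribute names have shifted between pydantic-ai releases:

```python
from pydantic import BaseModel
from pydantic_ai import Agent


class CityInfo(BaseModel):
    """Structured output the model must return."""
    city: str
    country: str


# A typed agent; `output_type` is called `result_type` on older pydantic-ai releases.
agent = Agent("openai:gpt-4o", output_type=CityInfo)

result = agent.run_sync("Where were the 2024 Summer Olympics held?")
print(result.output)  # a validated CityInfo instance (`.data` on older releases)
```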
We've chosen Pydantic AI as our initial foundation, but we're open to expanding support for other agent frameworks based on community feedback. If you'd like to see support for Google ADK, LangChain, CrewAI, or other frameworks, please open a discussion or contribute!
Features at a Glance
| Feature | Description |
|---|---|
| Transports | AG-UI, Vercel AI, ACP (WebSocket), A2A |
| Model Providers | Anthropic, OpenAI, Azure OpenAI, AWS Bedrock |
| Agent Framework | Pydantic AI (more frameworks based on community feedback) |
| MCP Servers | Tavily, Fetch, custom servers |
| Extensions | A2UI, MCP-UI, MCP Apps |
| Streaming | Real-time SSE and WebSocket streaming |
| UI | React components with Primer design system |
Documentation
📄️ Transports
Agent Runtimes supports multiple transport protocols for communicating with AI agents. Each transport has different characteristics suited for various use cases.
📄️ Models
Agent Runtimes supports multiple AI model providers through pydantic-ai. Models are configured via environment variables and can be selected per request (except for the A2A protocol).
📄️ Model Context Protocol
Agent Runtimes provides comprehensive support for the Model Context Protocol (MCP), enabling agents to access external tools and data sources through a standardized interface.
📄️ Extensions
Agent Runtimes supports several extension protocols that enable rich user interfaces and inter-agent communication.
📄️ Agents
Agent Runtimes provides a flexible agent architecture built on top of Pydantic AI.
📄️ Endpoints
Agent Runtimes exposes a comprehensive REST API for managing agents, executing prompts, and monitoring system status.