API Endpoints

Agent Runtimes exposes a comprehensive REST API for managing agents, executing prompts, and monitoring system status.

Base URL

http://localhost:8765/api/v1

Agents

Manage AI agents and their configurations.

List Agents

GET /api/v1/agents

Returns a list of all registered agents.

Response:

{
  "agents": [
    {
      "id": "pydantic-ai",
      "name": "Pydantic AI Agent",
      "transport": "streaming",
      "model": "gpt-4o"
    }
  ]
}
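
For example, a minimal sketch of listing agents with Python's requests library; the base URL and response shape follow the examples above:

import requests

BASE_URL = "http://localhost:8765/api/v1"

# Fetch all registered agents and print their ids and models.
resp = requests.get(f"{BASE_URL}/agents")
resp.raise_for_status()
for agent in resp.json()["agents"]:
    print(agent["id"], agent["model"])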

Get Agent

GET /api/v1/agents/{agent_id}

Get details for a specific agent.

Response:

{
  "id": "pydantic-ai",
  "name": "Pydantic AI Agent",
  "transport": "streaming",
  "model": "gpt-4o",
  "mcp_toolsets": ["tavily", "fetch"]
}

Create Agent

POST /api/v1/agents
Content-Type: application/json

Create a new agent.

Request Body:

{
  "id": "my-agent",
  "name": "My Custom Agent",
  "model": "gpt-4o",
  "system_prompt": "You are a helpful assistant."
}
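
As a sketch, creating an agent from Python might look like this. The request body mirrors the example above; the shape of the response body is not documented here, so the sketch only prints it:

import requests

BASE_URL = "http://localhost:8765/api/v1"

new_agent = {
    "id": "my-agent",
    "name": "My Custom Agent",
    "model": "gpt-4o",
    "system_prompt": "You are a helpful assistant.",
}

# POST the agent definition; a 201 response indicates the agent was created.
resp = requests.post(f"{BASE_URL}/agents", json=new_agent)
resp.raise_for_status()
print(resp.status_code, resp.json())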

Update Agent

PUT /api/v1/agents/{agent_id}
Content-Type: application/json

Update an existing agent configuration.

Delete Agent

DELETE /api/v1/agents/{agent_id}

Remove an agent from the system.

Prompts

Execute prompts against agents and manage conversations.

Send Prompt (Streaming)

POST /api/v1/agents/{agent_id}/prompt
Content-Type: application/json

Send a prompt to an agent and receive a streaming response.

Request Body:

{
  "message": "What is the weather in Paris?",
  "conversation_id": "optional-conversation-id"
}

Response: Server-Sent Events (SSE) stream

data: {"type": "text", "content": "Based on "}
data: {"type": "text", "content": "the current weather..."}
data: {"type": "tool_call", "name": "get_weather", "args": {...}}
data: {"type": "tool_result", "result": {...}}
data: {"type": "done"}

Send Prompt (Non-Streaming)

POST /api/v1/agents/{agent_id}/prompt/sync
Content-Type: application/json

Send a prompt and wait for the complete response.

Response:

{
  "response": "Based on the current weather data...",
  "tool_calls": [...],
  "usage": {
    "prompt_tokens": 150,
    "completion_tokens": 200
  }
}
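
A corresponding sketch for the synchronous endpoint, which simply waits for the full JSON response; field names follow the example above:

import requests

BASE_URL = "http://localhost:8765/api/v1"

resp = requests.post(
    f"{BASE_URL}/agents/pydantic-ai/prompt/sync",
    json={"message": "What is the weather in Paris?"},
)
resp.raise_for_status()
data = resp.json()

# The response carries the full answer plus token usage, as shown above.
print(data["response"])
print("tokens:", data["usage"]["prompt_tokens"], "+", data["usage"]["completion_tokens"])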

Configuration

System configuration and status endpoints.

Get Configuration

GET /api/v1/configure/config

Get the current system configuration.

Response:

{
  "agents": [...],
  "mcp_servers": [...],
  "models": [...]
}

MCP Toolsets Status

GET /api/v1/configure/mcp-toolsets-status

Get the status of MCP toolsets initialization.

Response:

{
  "initialized": true,
  "ready_count": 2,
  "total_count": 2,
  "servers": {
    "tavily": {
      "ready": true,
      "tools": ["tavily_search", "tavily_extract"]
    },
    "fetch": {
      "ready": true,
      "tools": ["fetch"]
    }
  }
}
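
For example, a small sketch that checks which MCP toolsets are ready before sending prompts; field names follow the response above:

import requests

BASE_URL = "http://localhost:8765/api/v1"

status = requests.get(f"{BASE_URL}/configure/mcp-toolsets-status").json()

# Report overall readiness and list any servers that are not ready yet.
print(f"{status['ready_count']}/{status['total_count']} toolsets ready")
for name, server in status["servers"].items():
    if not server["ready"]:
        print(f"waiting on MCP server: {name}")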

MCP Toolsets Info

GET /api/v1/configure/mcp-toolsets-info

Get detailed information about all MCP toolsets, including their available tools.

Response:

{
  "toolsets": [
    {
      "name": "tavily",
      "tools": [
        {
          "name": "tavily_search",
          "description": "Search the web using Tavily",
          "parameters": {...}
        }
      ]
    }
  ]
}

Extensions

A2UI Endpoints

GET /api/v1/a2ui/          # A2UI protocol
GET /api/v1/a2ui/agents    # List A2UI agents

MCP-UI Endpoints

GET /api/v1/mcp-ui/          # MCP-UI protocol
GET /api/v1/mcp-ui/agents    # List MCP-UI agents

Conversations

Manage conversation history and context.

Get Conversation

GET /api/v1/conversations/{conversation_id}

Retrieve a conversation by ID.

List Conversations

GET /api/v1/conversations

List all conversations, optionally filtered by agent.

Delete Conversation

DELETE /api/v1/conversations/{conversation_id}

Delete a conversation and its history.
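
As a sketch, listing conversations and then deleting one by ID. The shape of the list response and the agent filter parameter are not documented above, so the example only prints the raw JSON and uses a placeholder conversation ID:

import requests

BASE_URL = "http://localhost:8765/api/v1"

# List conversations; the exact response shape is not documented here,
# so we just print the raw JSON.
print(requests.get(f"{BASE_URL}/conversations").json())

# Delete a conversation by its ID (placeholder value for illustration).
conversation_id = "example-conversation-id"
resp = requests.delete(f"{BASE_URL}/conversations/{conversation_id}")
resp.raise_for_status()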

Health & Status

Health Check

GET /api/v1/health

Response:

{
  "status": "healthy",
  "version": "0.4.5"
}

Readiness Check

GET /api/v1/ready

Returns 200 when the service is ready to accept requests.
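
A simple startup wait loop against the readiness endpoint might look like this; the polling interval and timeout are arbitrary choices:

import time
import requests

BASE_URL = "http://localhost:8765/api/v1"

# Poll /ready until the service accepts requests, for at most ~30 seconds.
for _ in range(30):
    try:
        if requests.get(f"{BASE_URL}/ready", timeout=2).status_code == 200:
            print("service is ready")
            break
    except requests.ConnectionError:
        pass
    time.sleep(1)
else:
    raise RuntimeError("service did not become ready in time")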

Error Responses

All endpoints return errors in a consistent JSON format:

{
  "detail": "Agent not found",
  "status_code": 404
}
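
A sketch of handling these error bodies in a client: check for non-2xx responses and surface the detail field when present:

import requests

BASE_URL = "http://localhost:8765/api/v1"

resp = requests.get(f"{BASE_URL}/agents/does-not-exist")
if not resp.ok:
    # Error bodies carry a human-readable "detail" field, as shown above;
    # fall back to the raw text if the body is not JSON.
    try:
        detail = resp.json().get("detail", resp.text)
    except ValueError:
        detail = resp.text
    print(f"request failed ({resp.status_code}): {detail}")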

Common Status Codes

Code  Description
200   Success
201   Created
400   Bad Request
404   Not Found
422   Validation Error
500   Internal Server Error

Authentication

Authentication is optional and configurable. When enabled, include the API key in the request header:

Authorization: Bearer your-api-key
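
When authentication is enabled, a client can attach the key to every request, for example via a requests.Session; the key value below is a placeholder:

import requests

BASE_URL = "http://localhost:8765/api/v1"

session = requests.Session()
# Placeholder key; supply the real API key configured for your deployment.
session.headers["Authorization"] = "Bearer your-api-key"

print(session.get(f"{BASE_URL}/agents").json())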

Rate Limiting

Rate limiting can be configured per endpoint. When rate limited, you'll receive:

HTTP/1.1 429 Too Many Requests
Retry-After: 60
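
A minimal retry sketch that respects the Retry-After header on 429 responses; the retry count and fallback delay are arbitrary:

import time
import requests

BASE_URL = "http://localhost:8765/api/v1"

def get_with_retry(path, attempts=3):
    # Retry on 429, honouring the Retry-After header when the server sends it.
    for _ in range(attempts):
        resp = requests.get(f"{BASE_URL}{path}")
        if resp.status_code != 429:
            return resp
        time.sleep(int(resp.headers.get("Retry-After", 1)))
    return resp

print(get_with_retry("/agents").status_code)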

OpenAPI Documentation

Interactive API documentation is available at:

  • Swagger UI: http://localhost:8765/docs
  • ReDoc: http://localhost:8765/redoc
  • OpenAPI JSON: http://localhost:8765/openapi.json