API Endpoints
Agent Runtimes exposes a comprehensive REST API for managing agents, executing prompts, and monitoring system status.
Base URL
http://localhost:8765/api/v1
Most API endpoints should be called without trailing slashes. However, mounted protocol endpoints require trailing slashes:
Standard endpoints (no trailing slash):
- ✅ /api/v1/agents
- ✅ /api/v1/mcp/servers
Protocol endpoints (trailing slash required):
- ✅ /api/v1/ag-ui/{agent_id}/ - AG-UI protocol (mounted Starlette apps)
- ✅ /api/v1/a2a/{agent_id}/ - A2A protocol (FastA2A compatibility)
- ✅ /api/v1/examples/{example_name}/ - AG-UI example agents
Without the trailing slash, mounted apps return a 307 redirect which may break streaming clients.
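Clients can guard against the 307 by normalizing mounted-app URLs before connecting. A minimal sketch (the helper name is ours, not part of the API):

```python
def ensure_trailing_slash(url: str) -> str:
    """Append a trailing slash to the path so mounted Starlette apps
    do not answer with a 307 redirect (which breaks SSE streaming)."""
    base, sep, query = url.partition("?")
    if not base.endswith("/"):
        base += "/"
    return base + sep + query

# Mounted protocol endpoints get the slash; query strings are preserved.
print(ensure_trailing_slash("http://localhost:8765/api/v1/ag-ui/pydantic-ai"))
```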
Agents
Manage AI agents and their configurations.
List Agents
GET /api/v1/agents
Returns a list of all registered agents.
Response:
{
"agents": [
{
"id": "pydantic-ai",
"name": "Pydantic AI Agent",
"transport": "streaming",
"model": "gpt-4o"
}
]
}
Get Agent
GET /api/v1/agents/{agent_id}
Get details for a specific agent.
Response:
{
"id": "pydantic-ai",
"name": "Pydantic AI Agent",
"transport": "streaming",
"model": "gpt-4o",
"mcp_toolsets": ["tavily", "fetch"]
}
Create Agent
POST /api/v1/agents
Content-Type: application/json
Create a new agent.
Request Body:
{
"id": "my-agent",
"name": "My Custom Agent",
"model": "gpt-4o",
"system_prompt": "You are a helpful assistant."
}
Update Agent
PUT /api/v1/agents/{agent_id}
Content-Type: application/json
Update an existing agent configuration.
Update Agent MCP Servers (Runtime)
PATCH /api/v1/agents/{agent_id}/mcp-servers
Content-Type: application/json
Dynamically update an agent's selected MCP servers without recreating the agent. This allows adding or removing MCP tools from a running agent.
Request Body:
{
"selected_mcp_servers": ["tavily", "filesystem"]
}
Response:
{
"agent_id": "my-agent",
"selected_mcp_servers": ["tavily", "filesystem"],
"message": "MCP servers updated successfully"
}
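The same update can be issued from Python with the standard library alone. A sketch that only constructs the PATCH request (nothing is sent here; the function name is ours):

```python
import json
import urllib.request

def build_mcp_update(agent_id: str, servers: list[str]) -> urllib.request.Request:
    """Build the PATCH request that swaps an agent's MCP servers at runtime."""
    body = json.dumps({"selected_mcp_servers": servers}).encode()
    return urllib.request.Request(
        f"http://localhost:8765/api/v1/agents/{agent_id}/mcp-servers",
        data=body,
        headers={"Content-Type": "application/json"},
        method="PATCH",
    )

req = build_mcp_update("my-agent", ["tavily", "filesystem"])
print(req.get_method(), req.full_url)
```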
Start Agent MCP Servers
Start catalog MCP servers defined for running agents. Environment variables can be provided to configure the servers (e.g., API keys). If the agent has Codemode enabled, the Codemode toolset will be rebuilt to include the newly started servers as programmatic tools.
In a Kubernetes pod, agent-runtimes starts before the Jupyter container is ready. The code sandbox goes through two phases:
| Phase | Trigger | Sandbox Variant | Code Execution |
|---|---|---|---|
| 1 — Startup | Dockerfile CMD (--codemode, no --jupyter-sandbox) | local-eval | In-process eval() (fallback) |
| 2 — Companion call | POST /api/v1/agents/mcp-servers/start with jupyter_sandbox + mcp_proxy_url | local-jupyter | Jupyter kernel (:2300) |
Phase 1: The agent-runtimes container starts with --codemode and --no-catalog-mcp-servers. Because no --jupyter-sandbox flag is provided, the CodeSandboxManager defaults to variant="local-eval".
Phase 2: Once all containers in the pod are healthy, the runtimes-companion sidecar calls this endpoint with:
- jupyter_sandbox: http://127.0.0.1:2300?token=<token> (the Jupyter kernel URL)
- mcp_proxy_url: http://127.0.0.1:8765/api/v1/mcp/proxy (the HTTP-to-stdio proxy)
The endpoint calls CodeSandboxManager.configure_from_url(), which switches the variant to local-jupyter and rebuilds the Codemode toolset. All subsequent code execution runs inside the Jupyter kernel instead of local eval.
┌──────────────────────────────────────────────────────────────────────┐
│ Pod │
│ │
│ companion ──POST /mcp-servers/start──▶ agent-runtimes :8765 │
│ (jupyter_sandbox, │ │
│ mcp_proxy_url, ├─ starts MCP servers │
│ env_vars) ├─ switches to local-jupyter │
│ └─ rebuilds codemode │
│ │
│ agent-runtimes :8765 ──run_code() ──▶ jupyter :2300 │
│ jupyter :2300 ──HTTP tool call──▶ agent-runtimes /api/v1/mcp/proxy │
│ │
│ Shared Volume: /mnt/shared-agent/ │
│ ├── generated/ (Python tool bindings written by codemode) │
│ └── skills/ (SKILL.md files for agent skills) │
└──────────────────────────────────────────────────────────────────────┘
Start MCP Servers for All Agents
POST /api/v1/agents/mcp-servers/start
Content-Type: application/json
Start catalog MCP servers for all running agents.
Request Body:
{
"env_vars": [
{"name": "TAVILY_API_KEY", "value": "tvly-xxx"},
{"name": "GITHUB_TOKEN", "value": "ghp_xxx"}
],
"jupyter_sandbox": "http://localhost:8888?token=my-token",
"mcp_proxy_url": "http://127.0.0.1:8765/api/v1/mcp/proxy"
}
| Field | Type | Required | Description |
|---|---|---|---|
env_vars | array | No | List of environment variables to set before starting servers |
env_vars[].name | string | Yes | Environment variable name |
env_vars[].value | string | Yes | Environment variable value |
jupyter_sandbox | string | No | Jupyter server URL with token for code execution. When provided, switches the sandbox from local-eval to local-jupyter (Phase 2 of the bootstrap) |
mcp_proxy_url | string | No | HTTP proxy URL the Jupyter kernel uses to call MCP tools. Defaults to http://127.0.0.1:8765/api/v1/mcp/proxy when jupyter_sandbox is set |
Example Request:
curl -X POST http://localhost:8765/api/v1/agents/mcp-servers/start \
-H "Content-Type: application/json" \
-d '{"env_vars": [{"name": "TAVILY_API_KEY", "value": "tvly-xxx"}]}'
Response: 200 OK
{
"agents_processed": ["agent-1", "agent-2"],
"started_servers": ["tavily", "github"],
"stopped_servers": [],
"already_running": [],
"already_stopped": [],
"failed_servers": [],
"codemode_rebuilt": true,
"sandbox_configured": true,
"sandbox_variant": "local-jupyter",
"message": "Started 2 server(s) across 2 agent(s), sandbox=local-jupyter"
}
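Client code usually only needs to know whether everything came up. A small checker over the documented response shape (field names taken from the tables in this section; the helper name is ours):

```python
def start_ok(resp: dict) -> bool:
    """True when no server failed and the sandbox (if reported) was configured."""
    return not resp.get("failed_servers") and resp.get("sandbox_configured", True)

sample = {
    "agents_processed": ["agent-1", "agent-2"],
    "started_servers": ["tavily", "github"],
    "failed_servers": [],
    "codemode_rebuilt": True,
    "sandbox_configured": True,
    "sandbox_variant": "local-jupyter",
}
print(start_ok(sample))
```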
Start MCP Servers for a Specific Agent
POST /api/v1/agents/{agent_id}/mcp-servers/start
Content-Type: application/json
Start catalog MCP servers for a specific running agent.
Path Parameters:
| Parameter | Type | Description |
|---|---|---|
agent_id | string | The agent identifier |
Request Body:
{
"env_vars": [
{"name": "TAVILY_API_KEY", "value": "tvly-xxx"},
{"name": "GITHUB_TOKEN", "value": "ghp_xxx"}
]
}
| Field | Type | Required | Description |
|---|---|---|---|
env_vars | array | No | List of environment variables to set before starting servers |
env_vars[].name | string | Yes | Environment variable name |
env_vars[].value | string | Yes | Environment variable value |
Example Request:
curl -X POST http://localhost:8765/api/v1/agents/my-agent/mcp-servers/start \
-H "Content-Type: application/json" \
-d '{"env_vars": [{"name": "TAVILY_API_KEY", "value": "tvly-xxx"}]}'
Response: 200 OK
{
"agent_id": "my-agent",
"started_servers": ["tavily"],
"stopped_servers": [],
"already_running": ["github"],
"already_stopped": [],
"failed_servers": [
{"server_id": "filesystem", "error": "Server config not found"}
],
"codemode_rebuilt": true,
"message": "Started 1 server(s), 1 already running, 1 failed"
}
Response Fields
| Field | Type | Description |
|---|---|---|
agent_id | string | The agent identifier (only for single-agent endpoint) |
agents_processed | array | List of agent IDs that were processed (only for all-agents endpoint) |
started_servers | array | List of server IDs that were successfully started |
already_running | array | List of server IDs that were already running |
failed_servers | array | List of servers that failed to start with error details |
codemode_rebuilt | boolean | Whether the Codemode toolset was rebuilt |
sandbox_configured | boolean | Whether the code sandbox was (re)configured |
sandbox_variant | string | The sandbox variant after configuration (local-eval or local-jupyter) |
message | string | Summary message |
Error Responses:
| Code | Description |
|---|---|
404 | Agent not found (single-agent endpoint only) |
500 | Failed to start MCP servers |
Stop Agent MCP Servers
Stop catalog MCP servers for running agents.
Stop MCP Servers for All Agents
POST /api/v1/agents/mcp-servers/stop
Stop catalog MCP servers for all running agents.
Example Request:
curl -X POST http://localhost:8765/api/v1/agents/mcp-servers/stop
Response: 200 OK
{
"agents_processed": ["agent-1", "agent-2"],
"started_servers": [],
"stopped_servers": ["tavily", "github"],
"already_running": [],
"already_stopped": [],
"failed_servers": [],
"codemode_rebuilt": false,
"message": "Stopped 2 server(s) across 2 agent(s)"
}
Stop MCP Servers for a Specific Agent
POST /api/v1/agents/{agent_id}/mcp-servers/stop
Stop catalog MCP servers for a specific running agent.
Path Parameters:
| Parameter | Type | Description |
|---|---|---|
agent_id | string | The agent identifier |
Example Request:
curl -X POST http://localhost:8765/api/v1/agents/my-agent/mcp-servers/stop
Response: 200 OK
{
"agent_id": "my-agent",
"started_servers": [],
"stopped_servers": ["tavily", "github"],
"already_running": [],
"already_stopped": ["filesystem"],
"failed_servers": [],
"codemode_rebuilt": false,
"message": "Stopped 2 server(s), 1 already stopped"
}
Response Fields
| Field | Type | Description |
|---|---|---|
agent_id | string | The agent identifier (only for single-agent endpoint) |
agents_processed | array | List of agent IDs that were processed (only for all-agents endpoint) |
stopped_servers | array | List of server IDs that were successfully stopped |
already_stopped | array | List of server IDs that were already stopped |
failed_servers | array | List of servers that failed to stop with error details |
message | string | Summary message |
Error Responses:
| Code | Description |
|---|---|
404 | Agent not found (single-agent endpoint only) |
500 | Failed to stop MCP servers |
Code Sandbox Configuration
The Code Sandbox Manager controls how code is executed when using Code Mode or Skills. It supports two variants:
- local-eval: Uses Python exec() for simple code execution (default)
- local-jupyter: Connects to an external Jupyter server for persistent kernel state
Get Sandbox Status
GET /api/v1/agents/sandbox/status
Get the current status of the code sandbox manager.
Example Request:
curl http://localhost:8765/api/v1/agents/sandbox/status
Response: 200 OK
{
"variant": "local-eval",
"jupyter_url": null,
"jupyter_token_set": false,
"sandbox_running": true
}
| Field | Type | Description |
|---|---|---|
variant | string | Current sandbox variant (local-eval or local-jupyter) |
jupyter_url | string | Jupyter server URL if configured |
jupyter_token_set | boolean | Whether a Jupyter token is configured |
sandbox_running | boolean | Whether a sandbox instance is currently active |
Configure Sandbox
POST /api/v1/agents/sandbox/configure
Content-Type: application/json
Configure the code sandbox manager at runtime. If a sandbox is running with a different configuration, it will be stopped and recreated on next use.
Request Body:
{
"variant": "local-jupyter",
"jupyter_url": "http://localhost:8888?token=my-token"
}
| Field | Type | Required | Description |
|---|---|---|---|
variant | string | No | Sandbox variant: local-eval (default) or local-jupyter |
jupyter_url | string | Required for local-jupyter | Jupyter server URL (can include token as query param) |
jupyter_token | string | No | Jupyter token (overrides token in URL if provided) |
Example Requests:
# Configure for Jupyter sandbox
curl -X POST http://localhost:8765/api/v1/agents/sandbox/configure \
-H "Content-Type: application/json" \
-d '{"variant": "local-jupyter", "jupyter_url": "http://localhost:8888?token=my-token"}'
# Reset to local-eval sandbox
curl -X POST http://localhost:8765/api/v1/agents/sandbox/configure \
-H "Content-Type: application/json" \
-d '{"variant": "local-eval"}'
Response: 200 OK
{
"variant": "local-jupyter",
"jupyter_url": "http://localhost:8888",
"jupyter_token_set": true,
"sandbox_running": false
}
Error Responses:
| Code | Description |
|---|---|
400 | jupyter_url is required when variant is local-jupyter |
500 | Failed to configure sandbox |
Restart Sandbox
POST /api/v1/agents/sandbox/restart
Restart the code sandbox with current configuration. This stops any running sandbox and creates a new instance.
Example Request:
curl -X POST http://localhost:8765/api/v1/agents/sandbox/restart
Response: 200 OK
{
"variant": "local-jupyter",
"jupyter_url": "http://localhost:8888",
"jupyter_token_set": true,
"sandbox_running": true
}
Error Responses:
| Code | Description |
|---|---|
500 | Failed to restart sandbox |
Delete Agent
DELETE /api/v1/agents/{agent_id}
Remove an agent from the system.
Prompts
Execute prompts against agents and manage conversations.
Send Prompt (Streaming)
POST /api/v1/agents/{agent_id}/prompt
Content-Type: application/json
Send a prompt to an agent and receive a streaming response.
Request Body:
{
"message": "What is the weather in Paris?",
"conversation_id": "optional-conversation-id"
}
Response: Server-Sent Events (SSE) stream
data: {"type": "text", "content": "Based on "}
data: {"type": "text", "content": "the current weather..."}
data: {"type": "tool_call", "name": "get_weather", "args": {...}}
data: {"type": "tool_result", "result": {...}}
data: {"type": "done"}
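The stream above is easy to consume without extra dependencies: each `data:` line carries one JSON event. A minimal parser sketch (event types match the example; real payloads may carry more fields):

```python
import json

def parse_sse(raw: str) -> list[dict]:
    """Collect the JSON payloads of 'data:' lines from an SSE body."""
    events = []
    for line in raw.splitlines():
        if line.startswith("data:"):
            events.append(json.loads(line[len("data:"):].strip()))
    return events

stream = """data: {"type": "text", "content": "Based on "}
data: {"type": "text", "content": "the current weather..."}
data: {"type": "done"}
"""
# Join the text deltas into the final answer.
text = "".join(e["content"] for e in parse_sse(stream) if e["type"] == "text")
print(text)
```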
Send Prompt (Non-Streaming)
POST /api/v1/agents/{agent_id}/prompt/sync
Content-Type: application/json
Send a prompt and wait for the complete response.
Response:
{
"response": "Based on the current weather data...",
"tool_calls": [...],
"usage": {
"prompt_tokens": 150,
"completion_tokens": 200
}
}
Configuration
System configuration and status endpoints.
Get Configuration
GET /api/v1/configure
Get the current system configuration including available models and MCP servers.
Response:
{
"models": [...],
"mcp_servers": [...],
"tools": [...]
}
MCP Toolsets Status
GET /api/v1/configure/mcp-toolsets-status
Get the status of MCP toolsets initialization.
Response:
{
"initialized": true,
"ready_count": 3,
"failed_count": 0,
"ready_servers": ["tavily", "linkedin", "kaggle"],
"failed_servers": {}
}
MCP Toolsets Info
GET /api/v1/configure/mcp-toolsets-info
Get detailed information about running MCP toolsets.
Response:
[
{
"type": "MCPServerStdio",
"id": "tavily",
"command": "npx",
"args": ["-y", "tavily-mcp@0.1.3"]
}
]
MCP Servers
Manage MCP (Model Context Protocol) servers for tool integration. There are two types of MCP servers:
- MCP Config: User-defined servers from ~/.datalayer/mcp.json that start automatically
- MCP Catalog: Predefined servers that can be enabled on demand
Config and catalog servers are stored separately, allowing the same server ID to exist in both without conflict. For example, you can have a custom tavily server in your mcp.json while also having access to the predefined tavily in the catalog.
When you start the server with the --no-config-mcp-servers CLI flag, config MCP servers from ~/.datalayer/mcp.json are not started automatically. You can then use the endpoints below to dynamically enable MCP servers from the catalog at runtime:
# Start server without config MCP servers
python -m agent_runtimes --no-config-mcp-servers
# Then enable MCP servers via API
curl -X POST http://localhost:8765/api/v1/mcp/servers/catalog/tavily/enable
curl -X POST http://localhost:8765/api/v1/mcp/servers/catalog/github/enable
This is useful for scenarios where you want fine-grained control over which MCP servers are running, or when you need to manage resources carefully.
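For more than a couple of servers, scripting the enable calls is convenient. A sketch that builds the enable URLs (we only print them rather than POSTing; the helper name is ours):

```python
BASE = "http://localhost:8765/api/v1"

def enable_urls(server_names: list[str]) -> list[str]:
    """One POST URL per catalog server to enable at runtime."""
    return [f"{BASE}/mcp/servers/catalog/{name}/enable" for name in server_names]

for url in enable_urls(["tavily", "github"]):
    print("POST", url)
```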
List MCP Config Servers
GET /api/v1/mcp/servers/config
Get all running MCP Config servers from ~/.datalayer/mcp.json. These servers start automatically when the agent runtime starts.
Response:
[
{
"id": "tavily",
"name": "Tavily Search",
"description": "Web search and research capabilities",
"enabled": true,
"tools": [
{
"name": "tavily-search",
"description": "Search the web using Tavily",
"enabled": true
}
],
"isAvailable": true,
"isRunning": true,
"isConfig": true,
"transport": "stdio"
}
]
List Catalog Servers
GET /api/v1/mcp/servers/catalog
Get all predefined MCP servers from the catalog. These are NOT started automatically.
Response:
[
{
"id": "tavily",
"name": "Tavily Search",
"description": "Web search and research capabilities via Tavily API",
"enabled": true,
"tools": [],
"command": "npx",
"args": ["-y", "tavily-mcp"],
"requiredEnvVars": ["TAVILY_API_KEY"],
"isAvailable": false,
"isConfig": false,
"transport": "stdio"
}
]
List All Available Servers
GET /api/v1/mcp/servers/available
Get all available MCP servers - combines catalog servers with running config servers. Since config and catalog servers are stored separately, the same ID can appear in both (as separate entries).
Response:
[
{
"id": "tavily",
"name": "Tavily Search",
"isAvailable": true,
"isRunning": true,
"isConfig": true,
"transport": "stdio"
},
{
"id": "tavily",
"name": "Tavily Search",
"description": "Web search via Tavily API",
"isAvailable": false,
"isRunning": false,
"isConfig": false,
"transport": "stdio"
}
]
Note: The same ID (tavily) can appear twice - once from config (user's mcp.json) and once from catalog (predefined).
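Because the same ID can appear twice, clients should key entries on (id, isConfig) rather than id alone. A sketch over the documented shape:

```python
def index_servers(servers: list[dict]) -> dict[tuple[str, bool], dict]:
    """Key available servers by (id, isConfig) so config and catalog
    entries with the same ID do not clobber each other."""
    return {(s["id"], s["isConfig"]): s for s in servers}

available = [
    {"id": "tavily", "isConfig": True, "isRunning": True},
    {"id": "tavily", "isConfig": False, "isRunning": False},
]
idx = index_servers(available)
print(len(idx), idx[("tavily", False)]["isRunning"])
```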
List Running Servers
GET /api/v1/mcp/servers
Get all currently running MCP servers (both config and catalog).
Response:
[
{
"id": "tavily",
"name": "Tavily Search",
"enabled": true,
"tools": [
{
"name": "tavily-search",
"description": "A powerful web search tool...",
"enabled": true,
"inputSchema": {...}
}
],
"isAvailable": true
}
]
Get Server
GET /api/v1/mcp/servers/{server_id}
Get details for a specific MCP server.
Enable Catalog Server
POST /api/v1/mcp/servers/catalog/{server_name}/enable
Start an MCP server from the catalog for the current session. Only works for catalog servers (not config servers which start automatically).
This is particularly useful when:
- You started the server with --no-config-mcp-servers
- You want to dynamically add tools to an agent at runtime
- You need fine-grained control over which MCP servers are running
Path Parameters:
| Parameter | Type | Description |
|---|---|---|
server_name | string | The name/ID of the MCP server from the catalog (e.g., tavily, github, filesystem) |
Example Request:
curl -X POST http://localhost:8765/api/v1/mcp/servers/catalog/tavily/enable
Response: 201 Created
{
"id": "tavily",
"name": "Tavily Search",
"enabled": true,
"tools": [
{
"name": "tavily-search",
"description": "Search the web using Tavily"
}
],
"isRunning": true,
"isConfig": false
}
Error Responses:
| Code | Description |
|---|---|
404 | Server not found in catalog |
500 | Failed to start the MCP server |
Disable Catalog Server
DELETE /api/v1/mcp/servers/catalog/{server_name}/disable
Stop an MCP server and remove it from the current session. This stops the MCP server process and frees up resources.
Path Parameters:
| Parameter | Type | Description |
|---|---|---|
server_name | string | The name/ID of the MCP server to disable |
Example Request:
curl -X DELETE http://localhost:8765/api/v1/mcp/servers/catalog/tavily/disable
Response: 204 No Content
Error Responses:
| Code | Description |
|---|---|
404 | Server is not currently enabled |
500 | Failed to stop the MCP server |
Create Server
POST /api/v1/mcp/servers
Content-Type: application/json
Add a new MCP server configuration.
Request Body:
{
"id": "my-server",
"name": "My Custom Server",
"command": "npx",
"args": ["-y", "my-mcp-server"],
"enabled": true
}
Update Server
PUT /api/v1/mcp/servers/{server_id}
Content-Type: application/json
Update an existing MCP server configuration.
Delete Server
DELETE /api/v1/mcp/servers/{server_id}
Remove an MCP server configuration.
Response: 204 No Content
AG-UI Protocol
The AG-UI (Agent-User Interface) protocol provides streaming agent communication using Server-Sent Events (SSE). Each agent is mounted as a Starlette application.
AG-UI endpoints are mounted Starlette apps and require a trailing slash. Without it, you'll get a 307 redirect that breaks streaming.
List AG-UI Agents
GET /api/v1/ag-ui/agents
List all agents available via AG-UI protocol.
Response:
{
"protocol": "ag-ui",
"version": "0.1.0",
"agents": [
{
"agent_id": "pydantic-ai",
"endpoint": "/api/v1/ag-ui/pydantic-ai/"
}
],
"agents_endpoint": "/api/v1/ag-ui/agents",
"terminate_endpoint": "/api/v1/ag-ui/terminate",
"note": "Each agent is mounted at /api/v1/ag-ui/{agent_id}/ (trailing slash required)"
}
AG-UI Protocol Info
GET /api/v1/ag-ui/
Get AG-UI protocol information and list of available agents.
Send Message (Streaming)
POST /api/v1/ag-ui/{agent_id}/
Content-Type: application/json
Send a message to an AG-UI agent and receive streaming SSE response.
Request Body:
{
"thread_id": "thread-123",
"run_id": "run-456",
"messages": [
{
"id": "msg-1",
"role": "user",
"content": "Hello, how are you?"
}
],
"state": {},
"tools": [],
"context": [],
"forwardedProps": {}
}
Response: Server-Sent Events (SSE) stream with AG-UI events:
event: RUN_STARTED
data: {"type": "RUN_STARTED", "thread_id": "thread-123", "run_id": "run-456"}
event: TEXT_MESSAGE_START
data: {"type": "TEXT_MESSAGE_START", "message_id": "msg-2", "role": "assistant"}
event: TEXT_MESSAGE_CONTENT
data: {"type": "TEXT_MESSAGE_CONTENT", "message_id": "msg-2", "delta": "Hello! "}
event: TEXT_MESSAGE_CONTENT
data: {"type": "TEXT_MESSAGE_CONTENT", "message_id": "msg-2", "delta": "I'm doing well."}
event: TEXT_MESSAGE_END
data: {"type": "TEXT_MESSAGE_END", "message_id": "msg-2"}
event: RUN_FINISHED
data: {"type": "RUN_FINISHED", "thread_id": "thread-123", "run_id": "run-456"}
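Streaming clients typically accumulate TEXT_MESSAGE_CONTENT deltas per message_id. A sketch over the event dicts shown above (the helper name is ours):

```python
def assemble_messages(events: list[dict]) -> dict[str, str]:
    """Concatenate TEXT_MESSAGE_CONTENT deltas per message_id."""
    out: dict[str, str] = {}
    for ev in events:
        if ev["type"] == "TEXT_MESSAGE_CONTENT":
            out[ev["message_id"]] = out.get(ev["message_id"], "") + ev["delta"]
    return out

events = [
    {"type": "TEXT_MESSAGE_START", "message_id": "msg-2", "role": "assistant"},
    {"type": "TEXT_MESSAGE_CONTENT", "message_id": "msg-2", "delta": "Hello! "},
    {"type": "TEXT_MESSAGE_CONTENT", "message_id": "msg-2", "delta": "I'm doing well."},
    {"type": "TEXT_MESSAGE_END", "message_id": "msg-2"},
]
print(assemble_messages(events))
```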
Terminate Run
POST /api/v1/ag-ui/terminate
Content-Type: application/json
Terminate an active AG-UI run.
Request Body:
{
"thread_id": "thread-123",
"run_id": "run-456"
}
AG-UI Event Types
| Event | Description |
|---|---|
RUN_STARTED | Run has started processing |
RUN_FINISHED | Run completed successfully |
RUN_ERROR | Run encountered an error |
TEXT_MESSAGE_START | Beginning of a text message |
TEXT_MESSAGE_CONTENT | Incremental text content (streaming) |
TEXT_MESSAGE_END | End of a text message |
TOOL_CALL_START | Tool invocation started |
TOOL_CALL_ARGS | Tool call arguments (streaming) |
TOOL_CALL_END | Tool invocation completed |
TOOL_CALL_RESULT | Result from tool execution |
STATE_SNAPSHOT | Current state snapshot |
STATE_DELTA | Incremental state update |
CUSTOM | Custom event type |
Example: Using curl
curl -X POST "http://localhost:8765/api/v1/ag-ui/pydantic-ai/" \
-H "Content-Type: application/json" \
-d '{
"thread_id": "test-thread",
"run_id": "test-run",
"messages": [{"id": "1", "role": "user", "content": "Hello!"}],
"state": {},
"tools": [],
"context": [],
"forwardedProps": {}
}'
Example: Using TypeScript
import { AGUIAdapter } from '@datalayer/agent-runtimes';
const adapter = new AGUIAdapter({
baseUrl: 'http://localhost:8765/api/v1/ag-ui/pydantic-ai/',
// Note: The adapter automatically ensures trailing slash
});
await adapter.sendMessage('Hello!', {
onToken: (token) => console.log(token),
onComplete: (message) => console.log('Done:', message),
});
Example: Using Python
from agent_runtimes.transports.clients import AGUIClient
async with AGUIClient("http://localhost:8765/api/v1/ag-ui/pydantic-ai/") as client:
async for event in client.run("Hello!"):
print(event)
Extensions
A2UI Endpoints
GET /api/v1/a2ui/ # A2UI protocol
GET /api/v1/a2ui/agents # List A2UI agents
MCP-UI Endpoints
GET /api/v1/mcp-ui/ # MCP-UI protocol
GET /api/v1/mcp-ui/agents # List MCP-UI agents
Conversations
Manage conversation history and context.
Get Conversation
GET /api/v1/conversations/{conversation_id}
Retrieve a conversation by ID.
List Conversations
GET /api/v1/conversations
List all conversations, optionally filtered by agent.
Delete Conversation
DELETE /api/v1/conversations/{conversation_id}
Delete a conversation and its history.
Health & Status
Health Check
GET /api/v1/health
Response:
{
"status": "healthy",
"version": "0.4.5"
}
Readiness Check
GET /api/v1/ready
Returns 200 when the service is ready to accept requests.
Error Responses
All endpoints return standard error responses:
{
"detail": "Agent not found",
"status_code": 404
}
Common Status Codes
| Code | Description |
|---|---|
200 | Success |
201 | Created |
400 | Bad Request |
404 | Not Found |
422 | Validation Error |
500 | Internal Server Error |
Authentication
Authentication is optional and configurable. When enabled, include the API key in the request header:
Authorization: Bearer your-api-key
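From Python, the header is attached per request. A stdlib sketch (the key value is a placeholder; nothing is sent here):

```python
import urllib.request

def authed_request(url: str, api_key: str) -> urllib.request.Request:
    """Attach the Bearer token expected by the optional auth layer."""
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {api_key}"})

req = authed_request("http://localhost:8765/api/v1/agents", "your-api-key")
print(req.get_header("Authorization"))
```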
Rate Limiting
Rate limiting can be configured per endpoint. When rate limited, you'll receive:
HTTP/1.1 429 Too Many Requests
Retry-After: 60
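A client can honor Retry-After with a small helper (a sketch that only parses the header; the function name and fallback behavior are our assumptions):

```python
def retry_delay(headers: dict[str, str], default: float = 1.0) -> float:
    """Seconds to wait before retrying a 429, honoring Retry-After."""
    try:
        return max(0.0, float(headers.get("Retry-After", default)))
    except ValueError:
        # Retry-After may also be an HTTP-date; fall back to the default.
        return default

print(retry_delay({"Retry-After": "60"}))
```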
OpenAPI Documentation
Interactive API documentation is available at:
- Swagger UI: http://localhost:8765/docs
- ReDoc: http://localhost:8765/redoc
- OpenAPI JSON: http://localhost:8765/openapi.json