
Agents are the core building block in Orbiter. Each agent wraps an LLM model, a set of tools, optional handoff targets, and lifecycle hooks. When executed, an agent runs an LLM-tool loop: it calls the LLM, executes any requested tool calls in parallel, feeds the results back, and repeats until the LLM produces a text-only response or the step limit is reached.
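The loop can be sketched in plain Python. This is an illustrative model of the control flow only, not Orbiter's actual implementation; the `llm` callable, the message shapes, and the `tools` dict below are stand-ins.

```python
import asyncio

async def agent_loop(llm, tools: dict, prompt: str, max_steps: int = 10) -> str:
    """Schematic of the LLM-tool loop: call the model, run any requested
    tools in parallel, feed results back, repeat until a text-only reply."""
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_steps):
        reply = await llm(messages)              # one LLM call
        calls = reply.get("tool_calls", [])
        if not calls:                            # text-only response: done
            return reply["content"]
        results = await asyncio.gather(          # tool calls run in parallel
            *(tools[c["name"]](**c["args"]) for c in calls)
        )
        messages.append(reply)
        messages += [{"role": "tool", "content": r} for r in results]
    raise RuntimeError("step limit reached")
```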

Basic Usage

```python
from orbiter.agent import Agent
from orbiter.tool import tool

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"Sunny, 72F in {city}"

agent = Agent(
    name="weather_bot",
    model="openai:gpt-4o",
    instructions="You are a helpful weather assistant.",
    tools=[get_weather],
)
```

To execute an agent, use run() (see Running Agents):

```python
from orbiter.runner import run

result = await run(agent, "What's the weather in Tokyo?")
print(result.output)
```

Constructor Parameters

All parameters are keyword-only. Only name is required.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| name | str | required | Unique identifier for this agent |
| model | str | "openai:gpt-4o" | Model string in "provider:model_name" format |
| instructions | str \| Callable[..., str] | "" | System prompt, or a callable that returns one |
| tools | list[Tool] \| None | None | Tools available to this agent |
| handoffs | list[Agent] \| None | None | Other agents this agent can delegate to |
| hooks | list[tuple[HookPoint, Hook]] \| None | None | Lifecycle hooks as (HookPoint, Hook) tuples |
| output_type | type[BaseModel] \| None | None | Pydantic model for structured output validation |
| max_steps | int | 10 | Maximum LLM-tool round-trips before stopping |
| temperature | float | 1.0 | LLM sampling temperature |
| max_tokens | int \| None | None | Maximum output tokens per LLM call |
| memory | Any | None | Optional memory store for persistent memory |
| context | Any | None | Optional context engine for hierarchical state |
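For example, output_type can be paired with a Pydantic model to get validated structured output instead of free text. The WeatherReport model below is made up for illustration, and the Agent construction is shown in comments since it depends on the constructor described above:

```python
from pydantic import BaseModel

class WeatherReport(BaseModel):
    city: str
    temperature_f: float
    conditions: str

# Passing the model as output_type asks the agent to validate its
# final response against this schema:
# agent = Agent(
#     name="weather_bot",
#     model="openai:gpt-4o",
#     tools=[get_weather],
#     output_type=WeatherReport,
# )

# The validation step itself is plain Pydantic:
report = WeatherReport.model_validate(
    {"city": "Tokyo", "temperature_f": 72.0, "conditions": "Sunny"}
)
```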

Dynamic Instructions

The instructions parameter can be a callable. This is useful when the system prompt needs to change based on runtime context:

```python
def make_instructions(agent_name: str) -> str:
    return f"You are {agent_name}. Today is Monday. Be concise."

agent = Agent(
    name="dynamic_bot",
    instructions=make_instructions,
)
```

When instructions is callable, it is invoked with the agent’s name as the argument at the start of each run() call.
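Because the callable is re-evaluated on every run, it can fold in state that changes between runs. A hypothetical date-aware prompt:

```python
from datetime import date

def make_instructions(agent_name: str) -> str:
    # Re-evaluated at the start of each run, so the weekday is
    # always current rather than frozen at construction time.
    today = date.today().strftime("%A")
    return f"You are {agent_name}. Today is {today}. Be concise."

# agent = Agent(name="daily_bot", instructions=make_instructions)
```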

Agent.describe()

The describe() method returns a summary dict of the agent’s capabilities. This is useful for debugging, logging, and capability advertisement in multi-agent systems.

```python
agent = Agent(
    name="helper",
    model="anthropic:claude-sonnet-4-20250514",
    tools=[get_weather],
    max_steps=5,
)

print(agent.describe())
# {
#     "name": "helper",
#     "model": "anthropic:claude-sonnet-4-20250514",
#     "tools": ["get_weather"],
#     "handoffs": [],
#     "max_steps": 5,
#     "output_type": None,
# }
```

Tool Registration

Tools are indexed by name for O(1) lookup. Duplicate tool names raise AgentError:

```python
from orbiter.agent import Agent, AgentError
from orbiter.tool import tool

@tool
def greet(name: str) -> str:
    """Say hello."""
    return f"Hello, {name}!"

@tool(name="greet")  # duplicate!
def greet_v2(name: str) -> str:
    """Say hello differently."""
    return f"Hi there, {name}!"

try:
    agent = Agent(name="bot", tools=[greet, greet_v2])
except AgentError as e:
    print(e)  # "Duplicate tool name 'greet' on agent 'bot'"
```

Handoffs

Handoffs allow an agent to delegate to another agent. The target agents are registered by name:

```python
researcher = Agent(name="researcher", instructions="You research topics deeply.")
writer = Agent(name="writer", instructions="You write clearly and concisely.")

coordinator = Agent(
    name="coordinator",
    instructions="Delegate research to 'researcher' and writing to 'writer'.",
    handoffs=[researcher, writer],
)
```

Handoffs are primarily used within Swarm orchestration in handoff mode.

The Agent.run() Method

The run() method executes the agent’s LLM-tool loop directly:

```python
output = await agent.run(
    "What's the weather?",
    provider=my_provider,   # required: object with async complete()
    messages=history,       # optional: prior conversation
    max_retries=3,          # default: 3 retry attempts
)
# output is an AgentOutput with .text, .tool_calls, .usage
```

In most cases, you should use the top-level run() function from orbiter.runner instead, which adds state tracking, loop detection, and auto-resolved providers. See Running Agents.
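Since the provider only needs an async complete() method, tests can pass a canned stub. The request and response shapes below are assumptions; check your provider's actual protocol:

```python
import asyncio

class FakeProvider:
    """Canned provider for tests: any object exposing an async
    complete() method satisfies agent.run()'s provider argument."""

    def __init__(self, reply: str) -> None:
        self.reply = reply

    async def complete(self, messages, **kwargs):
        # A real provider would call an LLM API here. Returning a
        # fixed text-only reply ends the loop after one step.
        return {"role": "assistant", "content": self.reply}

# output = await agent.run("Hello", provider=FakeProvider("Hi!"))
```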

AgentConfig

For serializable configuration, use AgentConfig from orbiter.config:

```python
from orbiter.config import AgentConfig

config = AgentConfig(
    name="my_agent",
    model="openai:gpt-4o",
    instructions="Be helpful.",
    temperature=0.7,
    max_steps=5,
)
```

| Field | Type | Default |
| --- | --- | --- |
| name | str | required |
| model | str | "openai:gpt-4o" |
| instructions | str | "" |
| temperature | float | 1.0 (range: 0.0-2.0) |
| max_tokens | int \| None | None |
| max_steps | int | 10 (min: 1) |
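Assuming AgentConfig is a Pydantic model (the range and minimum constraints above suggest it), config.model_dump_json() would serialize the example above to a plain JSON document along these lines, which could later be reloaded with AgentConfig.model_validate_json():

```json
{
  "name": "my_agent",
  "model": "openai:gpt-4o",
  "instructions": "Be helpful.",
  "temperature": 0.7,
  "max_tokens": null,
  "max_steps": 5
}
```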

Error Handling

Agent errors raise AgentError, a subclass of OrbiterError:

  • Duplicate tools: Registering two tools with the same name
  • Duplicate handoffs: Registering two handoff targets with the same name
  • No provider: Calling run() without a provider
  • Retries exhausted: All retry attempts failed for an LLM call
  • Context length exceeded: Input exceeds the model’s context window

```python
from orbiter.agent import Agent, AgentError

try:
    result = await agent.run("Hello", provider=None)
except AgentError as e:
    print(f"Agent error: {e}")
```

API Summary

| Symbol | Module | Description |
| --- | --- | --- |
| Agent | orbiter.agent | Core agent class |
| AgentError | orbiter.agent | Agent-level error |
| AgentConfig | orbiter.config | Serializable agent configuration |
| AgentOutput | orbiter.types | Output from a single LLM call |
| RunResult | orbiter.types | Final result of run() |