Parameters

  • model (Union[str, Model], default "openai/gpt-4o"): Model identifier or Model instance
  • name (Optional[str], default None): Agent name for identification
  • memory (Optional[Memory], default None): Memory instance for conversation history
  • db (Optional[DatabaseBase], default None): Database instance (overrides memory if provided)
  • session_id (Optional[str], default None): Session identifier for tracking conversations
  • user_id (Optional[str], default None): User identifier for multi-user scenarios
  • debug (bool, default False): Enable debug logging
  • debug_level (int, default 1): Debug level (1 = standard, 2 = detailed); only used when debug=True
  • company_url (Optional[str], default None): Company URL for context
  • company_objective (Optional[str], default None): Company objective for context
  • company_description (Optional[str], default None): Company description for context
  • company_name (Optional[str], default None): Company name for context
  • system_prompt (Optional[str], default None): Custom system prompt
  • reflection (bool, default False): Enable reflection capabilities
  • compression_strategy (Literal["none", "simple", "llmlingua"], default "none"): Method for context compression
  • compression_settings (Optional[Dict[str, Any]], default None): Settings for the chosen strategy. For "simple": {"max_length": 2000}; for "llmlingua": {"ratio": 0.5, "model_name": "...", "instruction": "..."}
  • reliability_layer (Optional[Any], default None): Reliability layer for robustness
  • agent_id_ (Optional[str], default None): Specific agent ID
  • canvas (Optional[Canvas], default None): Canvas instance for visual interactions
  • retry (int, default 1): Number of retry attempts
  • mode (RetryMode, default "raise"): Retry mode behavior ("raise" or "return_false")
  • role (Optional[str], default None): Agent role
  • goal (Optional[str], default None): Agent goal
  • instructions (Optional[str], default None): Specific instructions
  • education (Optional[str], default None): Agent education background
  • work_experience (Optional[str], default None): Agent work experience
  • feed_tool_call_results (bool, default False): Include tool results in memory
  • show_tool_calls (bool, default True): Display tool calls
  • tool_call_limit (int, default 5): Maximum tool calls per execution
  • enable_thinking_tool (bool, default False): Enable orchestrated thinking
  • enable_reasoning_tool (bool, default False): Enable reasoning capabilities
  • tools (Optional[list], default None): List of tools to register with this agent (functions, ToolKits, or other agents)
  • user_policy (Optional[Union[Policy, List[Policy]]], default None): User input safety policy (single policy or list of policies)
  • agent_policy (Optional[Union[Policy, List[Policy]]], default None): Agent output safety policy (single policy or list of policies)
  • tool_policy_pre (Optional[Union[Policy, List[Policy]]], default None): Tool safety policy for pre-execution validation (single policy or list of policies)
  • tool_policy_post (Optional[Union[Policy, List[Policy]]], default None): Tool safety policy for post-execution validation (single policy or list of policies)
  • user_policy_feedback (bool, default False): Enable feedback loop for user policy violations (returns a helpful message instead of blocking)
  • agent_policy_feedback (bool, default False): Enable feedback loop for agent policy violations (re-executes the agent with feedback)
  • user_policy_feedback_loop (int, default 1): Maximum retry count for user policy feedback
  • agent_policy_feedback_loop (int, default 1): Maximum retry count for agent policy feedback
  • settings (Optional[ModelSettings], default None): Model-specific settings
  • profile (Optional[ModelProfile], default None): Model profile configuration
  • reflection_config (Optional[ReflectionConfig], default None): Configuration for reflection and self-evaluation
  • model_selection_criteria (Optional[Dict[str, Any]], default None): Default criteria dictionary for recommend_model_for_task() (see SelectionCriteria)
  • use_llm_for_selection (bool, default False): Default flag for whether to use an LLM in recommend_model_for_task()
  • reasoning_effort (Optional[Literal["low", "medium", "high"]], default None): Reasoning effort level for OpenAI models
  • reasoning_summary (Optional[Literal["concise", "detailed"]], default None): Reasoning summary type for OpenAI models
  • thinking_enabled (Optional[bool], default None): Enable thinking for Anthropic/Google models
  • thinking_budget (Optional[int], default None): Token budget for thinking (Anthropic: budget_tokens; Google: thinking_budget)
  • thinking_include_thoughts (Optional[bool], default None): Include thoughts in output (Google models)
  • reasoning_format (Optional[Literal["hidden", "raw", "parsed"]], default None): Reasoning format for Groq models
  • culture_manager (Optional[CultureManager], default None): CultureManager instance for cultural knowledge operations
  • add_culture_to_context (bool, default False): Add cultural knowledge to the system prompt
  • update_cultural_knowledge (bool, default False): Extract cultural knowledge after runs
  • enable_agentic_culture (bool, default False): Give the agent tools to update culture
  • metadata (Optional[Dict[str, Any]], default None): Agent metadata (passed to the prompt)
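
The retry and mode parameters govern what happens when an execution fails. A minimal sketch of how they interact (an illustration of the documented semantics, not the library's actual implementation):

```python
def run_with_retry(fn, retry=1, mode="raise"):
    """Illustrative retry loop: call fn() up to `retry` times, then
    either re-raise the last error or return False, per `mode`."""
    last_err = None
    for _ in range(retry):
        try:
            return fn()
        except Exception as e:
            last_err = e
    if mode == "raise":
        raise last_err
    return False  # mode == "return_false"
```

With mode="return_false", a caller can branch on the result instead of wrapping every execution in try/except.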

Functions

do

Execute a task synchronously. Parameters:
  • task (Union[str, Task]): Task to execute (can be a Task object or a string description)
  • model (Optional[Union[str, Model]]): Override model for this execution
  • debug (bool): Enable debug mode
  • retry (int): Number of retries
  • return_output (bool): If True, return full AgentRunOutput. If False (default), return content only.
Returns:
  • Any: Task content (str, BaseModel, etc.) if return_output=False; the full AgentRunOutput if return_output=True
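
The return_output switch chooses between the bare content and the full run object. A toy illustration of that contract (AgentRunOutput here is a simplified stand-in, not the library's real class):

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class AgentRunOutput:  # simplified stand-in for the library's type
    content: Any
    run_id: str

def finish(output: AgentRunOutput, return_output: bool = False):
    # do() hands back just the content by default,
    # or the whole output object when return_output=True
    return output if return_output else output.content
```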

do_async

Execute a task asynchronously using the pipeline architecture. Parameters:
  • task (Union[str, Task]): Task to execute
  • model (Optional[Union[str, Model]]): Override model for this execution
  • debug (bool): Enable debug mode
  • retry (int): Number of retries
  • return_output (bool): If True, return full AgentRunOutput. If False (default), return content only.
  • state (Optional[State]): Graph execution state
  • graph_execution_id (Optional[str]): Graph execution identifier
  • _resume_context (Optional[AgentRunContext]): Internal - context for HITL resumption
  • _resume_step_index (Optional[int]): Internal - step index to resume from
Returns:
  • Any: Task content (str, BaseModel, etc.) if return_output=False; the full AgentRunOutput if return_output=True

stream

Stream task execution synchronously, yielding events or text as they arrive. Parameters:
  • task (Union[str, Task]): Task to execute
  • model (Optional[Union[str, Model]]): Override model for this execution
  • debug (bool): Enable debug mode
  • retry (int): Number of retries
  • events (bool): If True, yield AgentEvent objects. If False (default), yield text chunks.
  • state (Optional[State]): Graph execution state
  • event (Optional[bool]): Deprecated; use events instead.
Returns:
  • Iterator[Union[str, AgentStreamEvent]]: AgentEvent if events=True, str if events=False
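
How a caller consumes the two yield modes can be sketched as follows (fake_stream below stands in for the iterator returned by stream() and is not part of the library):

```python
def fake_stream():
    # stands in for stream(task) with events=False: yields text chunks
    yield "Hel"
    yield "lo"

def consume(chunks, events=False):
    if events:
        return list(chunks)   # keep event objects as they arrive
    return "".join(chunks)    # stitch text chunks into the final answer
```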

astream

Stream task execution asynchronously, yielding events or text as they arrive. Note: HITL (Human-in-the-Loop) features are not supported in streaming mode; use do_async() for HITL functionality. Parameters:
  • task (Union[str, Task]): Task to execute
  • model (Optional[Union[str, Model]]): Override model for this execution
  • debug (bool): Enable debug mode
  • retry (int): Number of retries
  • events (bool): If True, yield AgentEvent objects. If False (default), yield text chunks.
  • state (Optional[State]): Graph execution state
  • event (Optional[bool]): Deprecated; use events instead.
Returns:
  • AsyncIterator[Union[str, AgentStreamEvent]]: AgentEvent if events=True, str if events=False
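
Consuming the async iterator follows the usual async-for pattern (fake_astream below stands in for the iterator returned by astream() and is not part of the library):

```python
import asyncio

async def fake_astream():
    # stands in for astream(task) with events=False
    for chunk in ["Hel", "lo"]:
        yield chunk

async def collect(astream):
    parts = []
    async for chunk in astream:  # chunks arrive as the model streams
        parts.append(chunk)
    return "".join(parts)
```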

Execute a task synchronously and print the result. Parameters:
  • task (Task): Task to execute
  • model (Optional[Union[str, Model]]): Override model for this execution
  • debug (bool): Enable debug mode
  • retry (int): Number of retries
Returns:
  • Any: The result object (with output printed to console)

Execute a task asynchronously and print the result. Parameters:
  • task (Task): Task to execute
  • model (Optional[Union[str, Model]]): Override model for this execution
  • debug (bool): Enable debug mode
  • retry (int): Number of retries
Returns:
  • Any: The result object (with output printed to console)

continue_run

Continue a paused agent run (synchronous wrapper). Automatically detects whether the original run was streaming and continues in the same mode; you can override this with the streaming parameter. Supports all HITL continuation scenarios:
  1. External tool execution: Pass task object with external results filled
  2. Durable execution (error recovery): Pass run_id to load from storage
  3. Cancel run resumption: Pass run_id to load from storage
Parameters:
  • task (Optional[Task]): Task object (for external tool execution with results)
  • run_id (Optional[str]): Run ID to load from storage (for durable/cancel)
  • model (Optional[Union[str, Model]]): Override model
  • debug (bool): Enable debug mode
  • retry (int): Number of retries
  • return_output (bool): If True, return full AgentRunOutput. If False (default), return content only.
  • streaming (Optional[bool]): If True, return list of events/text. If False, return result. If None (default), auto-detect from original run.
  • event (bool): If True (with streaming), return list of AgentEvent objects. If False (with streaming), return list of text chunks.
  • external_tool_executor (Optional[Callable[[RunRequirement], str]]): Optional function that executes external tools. When provided, if the agent pauses again with NEW external tool requirements, the executor is called automatically for each requirement.
Returns:
  • For direct mode: Task content if return_output=False; AgentRunOutput if return_output=True
  • For streaming mode: List of events (if event=True) or text chunks (if event=False)
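
The external_tool_executor callback can be pictured as a resolver invoked once per outstanding requirement. A sketch using a simplified stand-in for RunRequirement (the field names here are assumptions, not the library's real schema):

```python
from dataclasses import dataclass, field

@dataclass
class RunRequirement:  # simplified stand-in; real fields may differ
    tool_name: str
    arguments: dict = field(default_factory=dict)

def resolve_requirements(requirements, external_tool_executor):
    # Mirrors the documented behavior: when the run pauses with NEW
    # external tool requirements, the executor is called for each one
    # and its string result is recorded per requirement.
    return {r.tool_name: external_tool_executor(r) for r in requirements}
```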

continue_run_async

Continue a paused agent run using StepResult-based intelligent resumption. Note: HITL continuation is only supported in direct call mode (streaming=False). Supports all HITL continuation scenarios:
  1. External tool execution: Resume from MessageBuildStep with tool results
  2. Durable execution (error recovery): Resume from exact failed step
  3. Cancel run resumption: Resume from exact cancelled step
Parameters:
  • task (Optional[Task]): Task object with external results (for external tool continuation)
  • run_id (Optional[str]): Run ID to load from storage (for durable/cancel continuation)
  • model (Optional[Union[str, Model]]): Override model
  • debug (bool): Enable debug mode
  • retry (int): Number of retries
  • return_output (bool): If True, return full AgentRunOutput. If False, return content only.
  • state (Optional[State]): Graph execution state
  • streaming (bool): Must be False. Streaming mode not supported for HITL continuation.
  • event (bool): Ignored (streaming not supported)
  • external_tool_executor (Optional[Callable[[RunRequirement], str]]): Optional function that executes external tools. When provided, if the agent pauses again with NEW external tool requirements, the executor is called automatically for each requirement.
  • graph_execution_id (Optional[str]): Graph execution identifier
Returns:
  • Any: Task content or AgentRunOutput (if return_output=True)
Raises:
  • ValueError: If streaming=True is passed

recommend_model_for_task

Get a model recommendation for a specific task (synchronous version of recommend_model_for_task_async). Parameters:
  • task (Union[Task, str]): Task object or task description string
  • criteria (Optional[Dict[str, Any]]): Optional criteria dictionary for model selection
  • use_llm (Optional[bool]): Optional flag to use LLM for selection
Returns:
  • ModelRecommendation: Object containing recommendation details

recommend_model_for_task_async

Get a model recommendation for a specific task. This method analyzes the task and returns a recommendation for the best model to use. The user can then decide whether to use the recommended model or stick with the default. Parameters:
  • task (Union[Task, str]): Task object or task description string
  • criteria (Optional[Dict[str, Any]]): Optional criteria dictionary for model selection (overrides agent’s default)
  • use_llm (Optional[bool]): Optional flag to use LLM for selection (overrides agent’s default)
Returns:
  • ModelRecommendation: Object containing:
    • model_name: Recommended model identifier
    • reason: Explanation for the recommendation
    • confidence_score: Confidence level (0.0 to 1.0)
    • selection_method: “rule_based” or “llm_based”
    • estimated_cost_tier: Cost estimate (1-10)
    • estimated_speed_tier: Speed estimate (1-10)
    • alternative_models: List of alternative model names
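
The shape of a recommendation, and the spirit of rule-based selection, can be sketched like this (the heuristic and the field values are invented for illustration; only the field names come from the list above):

```python
def rule_based_recommendation(task_description: str) -> dict:
    # Toy heuristic: complex-sounding tasks earn a higher confidence score.
    complex_task = any(w in task_description.lower()
                       for w in ("analyze", "refactor", "prove"))
    return {
        "model_name": "openai/gpt-4o",  # placeholder identifier
        "reason": "complex task" if complex_task else "simple task",
        "confidence_score": 0.8 if complex_task else 0.6,
        "selection_method": "rule_based",
        "alternative_models": [],
    }
```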

get_agent_id

Get display-friendly agent ID. Returns:
  • str: Agent name or formatted agent ID

get_cache_stats

Get cache statistics for this agent’s session. Returns:
  • Dict[str, Any]: Cache statistics

clear_cache

Clear the agent’s session cache.

get_run_output

Get the AgentRunOutput from the last execution. Returns:
  • Optional[AgentRunOutput]: The complete run output, or None if no run has been executed

get_run_id

Get the current run ID. Returns:
  • Optional[str]: The current run ID, or None if no run is active.

cancel_run

Cancel a run by its ID. If no run_id is provided, cancels the current run. Parameters:
  • run_id (Optional[str]): The ID of the run to cancel. If None, cancels the current run.
Returns:
  • bool: True if the run was found and cancelled, False otherwise.

get_last_model_recommendation

Get the last model recommendation made by the agent. Returns:
  • Optional[ModelRecommendation]: ModelRecommendation object or None if no recommendation was made

add_tools

Dynamically add tools to the agent and register them. Parameters:
  • tools (Union[Any, List[Any]]): A single tool or list of tools to add
Raises:
  • DisallowedOperation: If any tool is blocked by the safety policy
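
Since add_tools accepts either a single tool or a list, implementations of this pattern typically normalize the input first. An illustrative sketch (not the library's code):

```python
def normalize_tools(tools):
    # Accept a single tool or a list/tuple of tools; always return a list
    if tools is None:
        return []
    return list(tools) if isinstance(tools, (list, tuple)) else [tools]
```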

remove_tools

Remove tools from the agent. Supports removing:
  • Tool names (strings)
  • Function objects
  • Agent objects
  • MCP handlers (and all their tools)
  • Class instances (ToolKit or regular classes, and all their tools)
  • Builtin tools (AbstractBuiltinTool instances)
Parameters:
  • tools (Union[str, List[str], Any, List[Any]]): Single tool or list of tools to remove (any type)

get_tool_defs

Get the tool definitions for all currently registered tools. Returns:
  • List[ToolDefinition]: List of tool definitions from the ToolManager