
Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| model | Union[str, Model] | "openai/gpt-4o" | Model identifier or Model instance |
| name | Optional[str] | None | Agent name for identification |
| memory | Optional[Memory] | None | Memory instance for conversation history |
| debug | bool | False | Enable debug logging |
| company_url | Optional[str] | None | Company URL for context |
| company_objective | Optional[str] | None | Company objective for context |
| company_description | Optional[str] | None | Company description for context |
| system_prompt | Optional[str] | None | Custom system prompt |
| reflection | Optional[bool] | False | Enable reflection capabilities |
| compression_strategy | Literal["none", "simple", "llmlingua"] | "none" | Method for context compression |
| compression_settings | Optional[Dict[str, Any]] | None | Settings for the chosen strategy. For "simple": max_length (default 2000). For "llmlingua": ratio (default 0.5), model_name, instruction |
| reliability_layer | Optional[Any] | None | Reliability layer for robustness |
| agent_id_ | Optional[str] | None | Specific agent ID |
| canvas | Optional[Canvas] | None | Canvas instance for visual interactions |
| retry | int | 1 | Number of retry attempts |
| mode | RetryMode | "raise" | Retry mode behavior ("raise" or "return_false") |
| role | Optional[str] | None | Agent role |
| goal | Optional[str] | None | Agent goal |
| instructions | Optional[str] | None | Specific instructions |
| education | Optional[str] | None | Agent education background |
| work_experience | Optional[str] | None | Agent work experience |
| feed_tool_call_results | bool | False | Include tool results in memory |
| show_tool_calls | bool | True | Display tool calls |
| tool_call_limit | int | 5 | Maximum tool calls per execution |
| enable_thinking_tool | bool | False | Enable orchestrated thinking |
| enable_reasoning_tool | bool | False | Enable reasoning capabilities |
| user_policy | Optional[Policy] | None | Safety policy for user input |
| agent_policy | Optional[Policy] | None | Safety policy for agent output |
| settings | Optional[ModelSettings] | None | Model-specific settings |
| profile | Optional[ModelProfile] | None | Model profile configuration |
| reflection_config | Optional[ReflectionConfig] | None | Configuration for reflection and self-evaluation |
| model_selection_criteria | Optional[Dict[str, Any]] | None | Default criteria dictionary for recommend_model_for_task (see SelectionCriteria) |
| use_llm_for_selection | bool | False | Default flag for whether to use the LLM in recommend_model_for_task |
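Taken together, constructing an agent might look like the sketch below. The import path and keyword names are assumptions inferred from the parameter table above, not confirmed by this page.

```python
# Hypothetical usage sketch: the import path is an assumption.
from upsonic import Agent

agent = Agent(
    model="openai/gpt-4o",                       # model identifier (the default)
    name="research-assistant",                   # display name
    role="Research analyst",
    goal="Summarize sources accurately",
    retry=3,                                     # retry attempts per execution
    mode="return_false",                         # return False instead of raising on failure
    tool_call_limit=5,                           # cap tool calls per execution
    compression_strategy="simple",
    compression_settings={"max_length": 2000},   # settings for the "simple" strategy
)
```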

Functions

do

Execute a task synchronously. Parameters:
  • task (Task): Task to execute
  • model (Optional[Union[str, Model]]): Override model for this execution
  • debug (bool): Enable debug mode
  • retry (int): Number of retries
Returns:
  • Any: The task response
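A minimal synchronous call might look like this sketch; the import path and the Task constructor are assumptions, not confirmed by this page.

```python
# Hypothetical usage sketch: import path and Task signature are assumptions.
from upsonic import Agent, Task

agent = Agent(model="openai/gpt-4o")
task = Task("List three common uses of an in-memory cache.")

result = agent.do(task, retry=2)  # blocks until the task response is available
print(result)
```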

do_async

Execute a task asynchronously with complete framework integration. Parameters:
  • task (Task): Task to execute
  • model (Optional[Union[str, Model]]): Override model for this execution
  • debug (bool): Enable debug mode
  • retry (int): Number of retries
  • state (Optional[State]): Graph execution state
  • graph_execution_id (Optional[str]): Graph execution identifier
Returns:
  • Any: The task response
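The asynchronous variant fits naturally inside an asyncio event loop. As above, the import path and Task constructor are assumptions; state and graph_execution_id are shown only as comments because their semantics are graph-specific.

```python
# Hypothetical usage sketch: import path and Task signature are assumptions.
import asyncio

from upsonic import Agent, Task

async def main():
    agent = Agent(model="openai/gpt-4o")
    task = Task("Summarize the quarterly report.")
    # state= and graph_execution_id= are only relevant inside graph executions
    result = await agent.do_async(task, retry=2)
    print(result)

asyncio.run(main())
```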

stream

Stream task execution with StreamRunResult wrapper. Parameters:
  • task (Task): Task to execute
  • model (Optional[Union[str, Model]]): Override model for this execution
  • debug (bool): Enable debug mode
  • retry (int): Number of retries
Returns:
  • StreamRunResult: Advanced streaming result wrapper

stream_async

Stream task execution asynchronously with StreamRunResult wrapper. Parameters:
  • task (Task): Task to execute
  • model (Optional[Union[str, Model]]): Override model for this execution
  • debug (bool): Enable debug mode
  • retry (int): Number of retries
  • state (Optional[State]): Graph execution state
  • graph_execution_id (Optional[str]): Graph execution identifier
Returns:
  • StreamRunResult: Advanced streaming result wrapper
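A streaming call might be consumed as sketched below. The import path is an assumption, and iterating the StreamRunResult to receive incremental text chunks is an inferred usage pattern, not confirmed by this page.

```python
# Hypothetical usage sketch: import path and iteration semantics are assumptions.
from upsonic import Agent, Task

agent = Agent(model="openai/gpt-4o")
task = Task("Write a haiku about caching.")

stream_result = agent.stream(task)      # returns a StreamRunResult wrapper
for chunk in stream_result:             # assumed: yields incremental output chunks
    print(chunk, end="", flush=True)
```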

Execute a task synchronously and print the result. Parameters:
  • task (Task): Task to execute
  • model (Optional[Union[str, Model]]): Override model for this execution
  • debug (bool): Enable debug mode
  • retry (int): Number of retries
Returns:
  • Any: The result object (with output printed to console)

Execute a task asynchronously and print the result. Parameters:
  • task (Task): Task to execute
  • model (Optional[Union[str, Model]]): Override model for this execution
  • debug (bool): Enable debug mode
  • retry (int): Number of retries
Returns:
  • Any: The result object (with output printed to console)

Stream task execution synchronously and print output. Parameters:
  • task (Task): Task to execute
  • model (Optional[Union[str, Model]]): Override model for this execution
  • debug (bool): Enable debug mode
  • retry (int): Number of retries
Returns:
  • Any: The final output

Stream task execution asynchronously and print output. Parameters:
  • task (Task): Task to execute
  • model (Optional[Union[str, Model]]): Override model for this execution
  • debug (bool): Enable debug mode
  • retry (int): Number of retries
Returns:
  • Any: The final output

continue_run

Continue execution of a paused task. Parameters:
  • task (Task): Task to continue
  • model (Optional[Union[str, Model]]): Override model for this execution
  • debug (bool): Enable debug mode
  • retry (int): Number of retries
Returns:
  • Any: The task response

continue_async

Continue execution of a paused task asynchronously. Parameters:
  • task (Task): Task to continue
  • model (Optional[Union[str, Model]]): Override model for this execution
  • debug (bool): Enable debug mode
  • retry (int): Number of retries
  • state (Optional[State]): Graph execution state
  • graph_execution_id (Optional[str]): Graph execution identifier
Returns:
  • Any: The task response

recommend_model_for_task

Get a model recommendation for a specific task (synchronous). Parameters:
  • task (Union[Task, str]): Task object or task description string
  • criteria (Optional[Dict[str, Any]]): Optional criteria dictionary for model selection
  • use_llm (Optional[bool]): Optional flag to use LLM for selection
Returns:
  • ModelRecommendation: Object containing recommendation details

recommend_model_for_task_async

Get a model recommendation for a specific task (asynchronous). Parameters:
  • task (Union[Task, str]): Task object or task description string
  • criteria (Optional[Dict[str, Any]]): Optional criteria dictionary for model selection
  • use_llm (Optional[bool]): Optional flag to use LLM for selection
Returns:
  • ModelRecommendation: Object containing:
    • model_name: Recommended model identifier
    • reason: Explanation for the recommendation
    • confidence_score: Confidence level (0.0 to 1.0)
    • selection_method: "rule_based" or "llm_based"
    • estimated_cost_tier: Cost estimate (1-10)
    • estimated_speed_tier: Speed estimate (1-10)
    • alternative_models: List of alternative model names
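The return shape listed above can be modeled as a dataclass. This is an illustrative stand-in mirroring the documented fields, not the framework's actual ModelRecommendation definition.

```python
# Illustrative stand-in for ModelRecommendation; field names follow the
# documented return shape, but the real class definition is not shown here.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelRecommendation:
    model_name: str                  # recommended model identifier
    reason: str                      # explanation for the recommendation
    confidence_score: float          # confidence level, 0.0 to 1.0
    selection_method: str            # "rule_based" or "llm_based"
    estimated_cost_tier: int         # cost estimate, 1-10
    estimated_speed_tier: int        # speed estimate, 1-10
    alternative_models: List[str] = field(default_factory=list)

rec = ModelRecommendation(
    model_name="openai/gpt-4o-mini",
    reason="Simple extraction task; a small model suffices.",
    confidence_score=0.85,
    selection_method="rule_based",
    estimated_cost_tier=2,
    estimated_speed_tier=9,
    alternative_models=["openai/gpt-4o"],
)
```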

get_agent_id

Get display-friendly agent ID. Returns:
  • str: Agent name or formatted agent ID

get_cache_stats

Get cache statistics for this agent’s session. Returns:
  • Dict[str, Any]: Cache statistics

clear_cache

Clear the agent’s session cache.

get_run_result

Get the persistent RunResult that accumulates messages across all executions. Returns:
  • RunResult: The agent’s run result containing all messages and the last output

reset_run_result

Reset the RunResult to start fresh (clears all accumulated messages).
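Assuming the accessors behave as described, a session might inspect the accumulated history and then reset it between conversations. The import path and the messages attribute name are assumptions inferred from the descriptions above.

```python
# Hypothetical usage sketch: import path and attribute names are assumptions.
from upsonic import Agent, Task

agent = Agent(model="openai/gpt-4o")
agent.do(Task("First question"))
agent.do(Task("Follow-up question"))

run_result = agent.get_run_result()   # accumulates messages across both executions
print(len(run_result.messages))       # attribute name inferred from the description

agent.reset_run_result()              # start the next conversation with a clean history
```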

get_stream_run_result

Get the persistent StreamRunResult that accumulates messages across all streaming executions. Returns:
  • StreamRunResult: The agent’s stream run result containing all messages and the last output

reset_stream_run_result

Reset the StreamRunResult to start fresh (clears all accumulated messages).

get_last_model_recommendation

Get the last model recommendation made by the agent. Returns:
  • Optional[Any]: ModelRecommendation object or None if no recommendation was made