## Parameters
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| model | Union[str, Model] | "openai/gpt-4o" | Model identifier or Model instance |
| name | Optional[str] | None | Agent name for identification |
| memory | Optional[Memory] | None | Memory instance for conversation history |
| debug | bool | False | Enable debug logging |
| company_url | Optional[str] | None | Company URL for context |
| company_objective | Optional[str] | None | Company objective for context |
| company_description | Optional[str] | None | Company description for context |
| system_prompt | Optional[str] | None | Custom system prompt |
| reflection | Optional[bool] | False | Enable reflection capabilities |
| compression_strategy | Literal["none", "simple", "llmlingua"] | "none" | Method for context compression |
| compression_settings | Optional[Dict[str, Any]] | None | Settings for the chosen compression strategy. For "simple": max_length (default 2000). For "llmlingua": ratio (default 0.5), model_name, instruction |
| reliability_layer | Optional[Any] | None | Reliability layer for robustness |
| agent_id_ | Optional[str] | None | Specific agent ID |
| canvas | Optional[Canvas] | None | Canvas instance for visual interactions |
| retry | int | 1 | Number of retry attempts |
| mode | RetryMode | "raise" | Retry mode behavior ("raise" or "return_false") |
| role | Optional[str] | None | Agent role |
| goal | Optional[str] | None | Agent goal |
| instructions | Optional[str] | None | Specific instructions |
| education | Optional[str] | None | Agent education background |
| work_experience | Optional[str] | None | Agent work experience |
| feed_tool_call_results | bool | False | Include tool results in memory |
| show_tool_calls | bool | True | Display tool calls |
| tool_call_limit | int | 5 | Maximum tool calls per execution |
| enable_thinking_tool | bool | False | Enable orchestrated thinking |
| enable_reasoning_tool | bool | False | Enable reasoning capabilities |
| user_policy | Optional[Policy] | None | User input safety policy |
| agent_policy | Optional[Policy] | None | Agent output safety policy |
| settings | Optional[ModelSettings] | None | Model-specific settings |
| profile | Optional[ModelProfile] | None | Model profile configuration |
| reflection_config | Optional[ReflectionConfig] | None | Configuration for reflection and self-evaluation |
| model_selection_criteria | Optional[Dict[str, Any]] | None | Default criteria dictionary for recommend_model_for_task (see SelectionCriteria) |
| use_llm_for_selection | bool | False | Default for whether recommend_model_for_task uses the LLM |
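A minimal construction sketch, assuming the framework exposes `Agent` as a top-level import (adjust the import path to your installation). Every keyword mirrors a parameter from the table above, and all of them are optional:

```python
from upsonic import Agent  # assumed import path

# Unset parameters keep the defaults listed in the table.
agent = Agent(
    model="openai/gpt-4o",
    name="research-agent",
    debug=True,
    retry=3,
    mode="raise",                               # fail loudly after retries
    compression_strategy="simple",
    compression_settings={"max_length": 2000},  # per the table's "simple" defaults
    tool_call_limit=5,
)
```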
## Functions
### do

Execute a task synchronously.

Parameters:

- task (Task): Task to execute
- model (Optional[Union[str, Model]]): Override model for this execution
- debug (bool): Enable debug mode
- retry (int): Number of retries

Returns:

- Any: The task response
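A hedged usage sketch; `Agent` and `Task` are assumed to be top-level exports, and the per-call overrides shown are the parameters listed above:

```python
from upsonic import Agent, Task  # assumed import path

agent = Agent(model="openai/gpt-4o")
task = Task("Summarize this quarter's incident reports in three bullets.")

# Blocking call; model/debug/retry override the agent's defaults per call.
result = agent.do(task, retry=2)
print(result)
```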
### do_async

Execute a task asynchronously with complete framework integration.

Parameters:

- task (Task): Task to execute
- model (Optional[Union[str, Model]]): Override model for this execution
- debug (bool): Enable debug mode
- retry (int): Number of retries
- state (Optional[State]): Graph execution state
- graph_execution_id (Optional[str]): Graph execution identifier

Returns:

- Any: The task response
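An async sketch under the same import assumptions; `state` and `graph_execution_id` are only relevant when the agent runs inside a graph:

```python
import asyncio

from upsonic import Agent, Task  # assumed import path

async def main():
    agent = Agent(model="openai/gpt-4o")
    task = Task("Draft a two-sentence release note.")
    # state/graph_execution_id omitted: only needed inside graph execution.
    result = await agent.do_async(task)
    print(result)

asyncio.run(main())
```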
### stream

Stream task execution with a StreamRunResult wrapper.

Parameters:

- task (Task): Task to execute
- model (Optional[Union[str, Model]]): Override model for this execution
- debug (bool): Enable debug mode
- retry (int): Number of retries

Returns:

- StreamRunResult: Advanced streaming result wrapper
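A streaming sketch; it assumes the StreamRunResult wrapper can be iterated to yield incremental text chunks, which is not specified here and may differ in the actual API:

```python
from upsonic import Agent, Task  # assumed import path

agent = Agent(model="openai/gpt-4o")

stream = agent.stream(Task("Write a haiku about code review."))
# Assumption: StreamRunResult is iterable over incremental text chunks.
for chunk in stream:
    print(chunk, end="", flush=True)
```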
### stream_async

Stream task execution asynchronously with a StreamRunResult wrapper.

Parameters:

- task (Task): Task to execute
- model (Optional[Union[str, Model]]): Override model for this execution
- debug (bool): Enable debug mode
- retry (int): Number of retries
- state (Optional[State]): Graph execution state
- graph_execution_id (Optional[str]): Graph execution identifier

Returns:

- StreamRunResult: Advanced streaming result wrapper
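An async streaming sketch; both the `await` on `stream_async` and the `async for` over the wrapper are assumptions about the wrapper's protocol, not confirmed behavior:

```python
import asyncio

from upsonic import Agent, Task  # assumed import path

async def main():
    agent = Agent(model="openai/gpt-4o")
    stream = await agent.stream_async(Task("Explain backpressure briefly."))
    # Assumption: StreamRunResult supports `async for` over text chunks.
    async for chunk in stream:
        print(chunk, end="", flush=True)

asyncio.run(main())
```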
### print_do

Execute a task synchronously and print the result.

Parameters:

- task (Task): Task to execute
- model (Optional[Union[str, Model]]): Override model for this execution
- debug (bool): Enable debug mode
- retry (int): Number of retries

Returns:

- Any: The result object (with output printed to console)
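A one-line convenience sketch (same import assumptions as above); the output goes to the console and the result object is still returned:

```python
from upsonic import Agent, Task  # assumed import path

agent = Agent(model="openai/gpt-4o")
# Prints the output to the console and returns the result object as well.
result = agent.print_do(Task("List three uses for a paperclip."))
```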
### print_do_async

Execute a task asynchronously and print the result.

Parameters:

- task (Task): Task to execute
- model (Optional[Union[str, Model]]): Override model for this execution
- debug (bool): Enable debug mode
- retry (int): Number of retries

Returns:

- Any: The result object (with output printed to console)
### print_stream

Stream task execution synchronously and print output.

Parameters:

- task (Task): Task to execute
- model (Optional[Union[str, Model]]): Override model for this execution
- debug (bool): Enable debug mode
- retry (int): Number of retries

Returns:

- Any: The final output
### print_stream_async

Stream task execution asynchronously and print output.

Parameters:

- task (Task): Task to execute
- model (Optional[Union[str, Model]]): Override model for this execution
- debug (bool): Enable debug mode
- retry (int): Number of retries

Returns:

- Any: The final output
### continue_run

Continue execution of a paused task.

Parameters:

- task (Task): Task to continue
- model (Optional[Union[str, Model]]): Override model for this execution
- debug (bool): Enable debug mode
- retry (int): Number of retries

Returns:

- Any: The task response
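A resumption sketch. How a task signals that it is paused is framework-specific; the `is_paused` check below is illustrative only, not a confirmed attribute:

```python
from upsonic import Agent, Task  # assumed import path

agent = Agent(model="openai/gpt-4o")
task = Task("Get approval, then send the weekly summary email.")

result = agent.do(task)
# Illustrative only: detect a pause (e.g. human-in-the-loop) and resume
# by handing the same Task object back to the agent.
if getattr(task, "is_paused", False):
    result = agent.continue_run(task)
```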
### continue_async

Continue execution of a paused task asynchronously.

Parameters:

- task (Task): Task to continue
- model (Optional[Union[str, Model]]): Override model for this execution
- debug (bool): Enable debug mode
- retry (int): Number of retries
- state (Optional[State]): Graph execution state
- graph_execution_id (Optional[str]): Graph execution identifier

Returns:

- Any: The task response
### recommend_model_for_task

Get a model recommendation for a specific task (synchronous).

Parameters:

- task (Union[Task, str]): Task object or task description string
- criteria (Optional[Dict[str, Any]]): Optional criteria dictionary for model selection
- use_llm (Optional[bool]): Optional flag to use LLM for selection

Returns:

- ModelRecommendation: Object containing recommendation details
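A selection sketch; the criteria key shown is hypothetical (see SelectionCriteria for the real schema), while the attributes read from the result are those documented under recommend_model_for_task_async below:

```python
from upsonic import Agent  # assumed import path

agent = Agent(model="openai/gpt-4o")

rec = agent.recommend_model_for_task(
    "Classify 10,000 short support tickets by sentiment",
    criteria={"max_cost_tier": 3},  # hypothetical key; see SelectionCriteria
    use_llm=False,                  # stay with rule-based selection
)
print(rec.model_name, rec.confidence_score)
print(rec.reason)
```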
### recommend_model_for_task_async

Get a model recommendation for a specific task (asynchronous).

Parameters:

- task (Union[Task, str]): Task object or task description string
- criteria (Optional[Dict[str, Any]]): Optional criteria dictionary for model selection
- use_llm (Optional[bool]): Optional flag to use LLM for selection

Returns:

- ModelRecommendation: Object containing:
  - model_name: Recommended model identifier
  - reason: Explanation for the recommendation
  - confidence_score: Confidence level (0.0 to 1.0)
  - selection_method: "rule_based" or "llm_based"
  - estimated_cost_tier: Cost estimate (1-10)
  - estimated_speed_tier: Speed estimate (1-10)
  - alternative_models: List of alternative model names
### get_agent_id

Get a display-friendly agent ID.

Returns:

- str: Agent name or formatted agent ID
### get_cache_stats

Get cache statistics for this agent’s session.

Returns:

- Dict[str, Any]: Cache statistics
### clear_cache

Clear the agent’s session cache.
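A short sketch of inspecting and resetting the session cache; the statistics keys are framework-defined, so the example just prints the whole dictionary:

```python
from upsonic import Agent, Task  # assumed import path

agent = Agent(model="openai/gpt-4o")
agent.do(Task("What is the capital of France?"))

print(agent.get_cache_stats())  # Dict[str, Any]; keys are framework-defined
agent.clear_cache()             # drop the session cache entirely
```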
### get_run_result

Get the persistent RunResult that accumulates messages across all executions.

Returns:

- RunResult: The agent’s run result containing all messages and the last output
### reset_run_result

Reset the RunResult to start fresh (clears all accumulated messages).
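A sketch of reading and resetting the accumulated run state; the attribute access is illustrative, since nothing beyond "all messages and the last output" is specified above:

```python
from upsonic import Agent, Task  # assumed import path

agent = Agent(model="openai/gpt-4o")
agent.do(Task("First question"))
agent.do(Task("Follow-up question"))

run_result = agent.get_run_result()
# Illustrative attribute access; RunResult's exact surface is not
# specified here beyond "all messages and the last output".
print(getattr(run_result, "output", run_result))

agent.reset_run_result()  # clear accumulated messages for a fresh start
```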
### get_stream_run_result

Get the persistent StreamRunResult that accumulates messages across all streaming executions.

Returns:

- StreamRunResult: The agent’s stream run result containing all messages and the last output
### reset_stream_run_result

Reset the StreamRunResult to start fresh (clears all accumulated messages).
### get_last_model_recommendation

Get the last model recommendation made by the agent.

Returns:

- Optional[Any]: ModelRecommendation object, or None if no recommendation was made