## Overview
Every task execution tracks detailed metrics, including token counts, timing breakdowns, estimated cost, and tool calls. All timing and token data lives in `task.usage` (a `TaskUsage` object), the single source of truth for task-level metrics.

When printing is enabled (`print_do` / `print_do_async`), a Task Metrics panel is displayed after each task execution.
## Accessing Task Metrics
After executing a task, metrics are available directly on the `Task` instance.
### Token & Cost Properties
| Property | Type | Description |
|---|---|---|
| `total_input_token` | `int \| None` | Number of input/prompt tokens used |
| `total_output_token` | `int \| None` | Number of output/completion tokens generated |
| `total_cost` | `float \| None` | Estimated cost in USD |
| `price_id` | `str` | Unique identifier for tracking this task’s pricing |
### Timing Properties
| Property | Type | Description |
|---|---|---|
| `duration` | `float \| None` | Total wall-clock execution time (seconds) |
| `model_execution_time` | `float \| None` | Time spent inside LLM API calls (seconds) |
| `tool_execution_time` | `float \| None` | Time spent executing tools (seconds) |
| `upsonic_execution_time` | `float \| None` | Framework overhead: `duration - model_execution_time - tool_execution_time` (seconds) |
| `start_time` | `float \| None` | Unix timestamp when the task started |
| `end_time` | `float \| None` | Unix timestamp when the task finished |
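The framework-overhead relationship can be sketched with plain arithmetic (hypothetical timing values for illustration, not output from a real run):

```python
# Hypothetical timing values (seconds); names mirror the
# Task timing properties in the table above.
duration = 4.20                 # total wall-clock time
model_execution_time = 3.10     # time inside LLM API calls
tool_execution_time = 0.85      # time executing tools

# Framework overhead, as defined above: everything that is
# neither model time nor tool time.
upsonic_execution_time = duration - model_execution_time - tool_execution_time
print(round(upsonic_execution_time, 2))  # → 0.25
```

A large `upsonic_execution_time` relative to `model_execution_time` suggests the bottleneck is outside the LLM calls.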
### Response & Tool Properties
| Property | Type | Description |
|---|---|---|
| `response` | `str \| BaseModel \| None` | The task’s output after execution |
| `tool_calls` | `list[dict]` | List of tool calls made during execution |
### The `usage` Property (`TaskUsage`)
`task.usage` returns the underlying `TaskUsage` object — the single source of truth for all task-level metrics. The timing properties above (`duration`, `model_execution_time`, `tool_execution_time`, `upsonic_execution_time`) delegate directly to this object.
| Property | Type | Description |
|---|---|---|
| `requests` | `int` | Number of LLM API requests made for this task |
| `tool_calls` | `int` | Number of tool calls executed during this task |
| `input_tokens` | `int` | Input tokens (accumulated from model responses) |
| `output_tokens` | `int` | Output tokens (accumulated from model responses) |
| `total_tokens` | `int` | Sum of `input_tokens` + `output_tokens` |
| `cache_write_tokens` | `int` | Tokens written to cache |
| `cache_read_tokens` | `int` | Tokens read from cache |
| `reasoning_tokens` | `int` | Tokens used for reasoning |
| `duration` | `float \| None` | Total task execution time (seconds) |
| `model_execution_time` | `float \| None` | Time in LLM API calls (seconds) |
| `tool_execution_time` | `float \| None` | Time spent executing tools (seconds) |
| `upsonic_execution_time` | `float \| None` | Framework overhead time (seconds) |
| `cost` | `float \| None` | Estimated cost of this task (USD) |
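The shape of this relationship — a usage object holding the metrics, with the task delegating to it — can be sketched with minimal stand-in classes (these are illustrative, not the real Upsonic `Task`/`TaskUsage` implementations):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UsageSketch:
    """Hypothetical stand-in mirroring a few TaskUsage fields."""
    input_tokens: int = 0
    output_tokens: int = 0
    duration: Optional[float] = None

    @property
    def total_tokens(self) -> int:
        # total_tokens is always input_tokens + output_tokens
        return self.input_tokens + self.output_tokens

class TaskSketch:
    """Task-like wrapper whose timing properties delegate to usage."""
    def __init__(self, usage: UsageSketch):
        self.usage = usage  # single source of truth

    @property
    def duration(self) -> Optional[float]:
        return self.usage.duration  # delegation, no separate copy

task = TaskSketch(UsageSketch(input_tokens=1200, output_tokens=300, duration=2.5))
print(task.usage.total_tokens)  # → 1500
print(task.duration)            # → 2.5
```

Because the properties delegate rather than copy, `task.duration` and `task.usage.duration` can never disagree.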
## Basic Usage
### Printed Panel
When you use `print_do` or `print_do_async`, the Task Metrics panel is displayed after the task completes.
### Accessing Usage Directly
For programmatic access to the full usage data, use `task.usage`.
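The access pattern looks like this (sketched with a hypothetical already-completed task object; the field names match the `TaskUsage` table above, but the values are made up):

```python
from types import SimpleNamespace

# Hypothetical completed task; in real code this would be your Task
# instance after the agent has executed it.
task = SimpleNamespace(
    usage=SimpleNamespace(
        requests=2, tool_calls=1,
        input_tokens=850, output_tokens=120, total_tokens=970,
        cost=0.0031, duration=3.4,
    )
)

usage = task.usage  # single source of truth for task-level metrics
print(f"{usage.requests} request(s), {usage.tool_calls} tool call(s)")
print(f"{usage.total_tokens} tokens (~${usage.cost:.4f}) in {usage.duration}s")
```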
## Monitoring Multiple Tasks
### Independence Across Tasks
Each task has its own independent `TaskUsage` instance. Running multiple tasks on the same agent does not mix their metrics.
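A sketch of that independence with stand-in usage records (hypothetical values, not the real `TaskUsage` class): each task's record stays separate, and any agent-level total is just their sum:

```python
from dataclasses import dataclass

@dataclass
class UsageSketch:
    """Hypothetical stand-in for a per-task TaskUsage record."""
    input_tokens: int = 0
    output_tokens: int = 0

# Each task carries its own usage object.
task_a_usage = UsageSketch(input_tokens=500, output_tokens=80)
task_b_usage = UsageSketch(input_tokens=900, output_tokens=150)

# Running task B did not alter task A's metrics.
assert task_a_usage is not task_b_usage
assert task_a_usage.input_tokens == 500
assert task_b_usage.input_tokens == 900

# An agent-level total across tasks is simply the sum.
print(task_a_usage.input_tokens + task_b_usage.input_tokens)  # → 1400
```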
## Cost Optimization Tips
- Choose the right model — Use smaller models (e.g. `gpt-4o-mini`) for simple tasks
- Optimize prompts — Shorter, clearer prompts reduce input tokens
- Use caching — Enable task caching for repeated queries
- Monitor timing — Use `model_execution_time` vs `upsonic_execution_time` to identify bottlenecks
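The "choose the right model" point can be made concrete with a cost sketch. The per-million-token prices below are hypothetical placeholders, not real provider pricing; the shape of the calculation is what matters:

```python
# Illustrative (input $/1M tokens, output $/1M tokens) price table.
# These numbers are made up for the example.
PRICES = {
    "small-model": (0.15, 0.60),
    "large-model": (2.50, 10.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for one task, mirroring total_cost's role."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

print(estimate_cost("small-model", 1000, 200))  # → 0.00027
print(estimate_cost("large-model", 1000, 200))  # → 0.0045
```

For the same token counts, the smaller model here is more than an order of magnitude cheaper — which is why routing simple tasks to smaller models dominates most other optimizations.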
## Related Documentation
- Agent Metrics — Accumulated agent-level usage across all runs
- Task Caching — Reduce costs with caching
- Task Attributes — All available task attributes
- Task Results — Accessing task execution results

