
Overview

Task metrics provide detailed insights into the cost and token usage of your AI operations. Every task execution tracks input/output tokens, calculates costs based on the model used, and provides a unique identifier for tracking purposes.

Available Metrics

After executing a task, you can access the following metrics:
| Attribute | Type | Description |
|---|---|---|
| `total_cost` | `float` | Total cost in USD for the task execution |
| `total_input_token` | `int` | Number of input tokens processed |
| `total_output_token` | `int` | Number of output tokens generated |
| `price_id` | `str` | Unique identifier for the pricing model used |
| `get_total_cost()` | method | Returns the total cost of the task |
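Conceptually, the metrics attached to each executed task can be modeled as a small record. The sketch below is illustrative only, not the actual Upsonic `Task` class; it just mirrors the fields listed in the table above:

```python
from dataclasses import dataclass

# Illustrative model of per-task metrics -- NOT the real Upsonic Task class.
@dataclass
class TaskMetrics:
    total_cost: float          # total cost in USD
    total_input_token: int     # input tokens processed
    total_output_token: int    # output tokens generated
    price_id: str              # identifier of the pricing model used

    def get_total_cost(self) -> float:
        """Accessor mirroring the get_total_cost() method in the table."""
        return self.total_cost

metrics = TaskMetrics(0.000011, 68, 45, "939ab57e-6a77-45e2-b593-f65212056bcd")
print(f"Cost: ${metrics.get_total_cost():.6f}")  # Cost: $0.000011
```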

Basic Usage

Tracking Agent Costs

from upsonic import Agent, Task

# Create agent and task
agent = Agent(model="anthropic/claude-sonnet-4-5")
task = Task("Summarize the key benefits of AI agents in production systems")

# Execute task
result = agent.print_do(task)

# Access metrics
print(f"Cost: ${task.total_cost:.6f}")
print(f"Input tokens: {task.total_input_token}")
print(f"Output tokens: {task.total_output_token}")
print(f"Price ID: {task.price_id}")
Output:
Cost: $0.000011
Input tokens: 68
Output tokens: 45
Price ID: 939ab57e-6a77-45e2-b593-f65212056bcd

Tracking Direct LLM Call Costs

from upsonic import Task
from upsonic.client import LLMClient

# Create LLM client
client = LLMClient(model="anthropic/claude-sonnet-4-5")

# Create task
task = Task("What are the benefits of RAG systems?")

# Execute with client
response = client.chat([{"role": "user", "content": task.description}])

# Metrics are automatically tracked
print(f"Total cost: ${task.get_total_cost():.6f}")
print(f"Tokens used: {task.total_input_token + task.total_output_token}")

Cost Monitoring

Monitoring Multiple Tasks

from upsonic import Agent, Task

agent = Agent(model="anthropic/claude-sonnet-4-5")

tasks = [
    Task("Analyze sentiment of customer feedback"),
    Task("Extract key entities from the document"),
    Task("Generate a summary report")
]

total_cost = 0
total_tokens = 0

for task in tasks:
    result = agent.print_do(task)
    
    cost = task.total_cost
    tokens = task.total_input_token + task.total_output_token
    
    total_cost += cost
    total_tokens += tokens
    
    print(f"Task: {task.description[:40]}...")
    print(f"  Cost: ${cost:.6f}")
    print(f"  Tokens: {tokens}\n")

print(f"Total cost: ${total_cost:.6f}")
print(f"Total tokens: {total_tokens}")

Budget Alerting

from upsonic import Agent, Task

def execute_with_budget(agent, task, max_cost=0.01):
    """Execute a task and warn if it exceeds the cost budget."""
    result = agent.print_do(task)
    
    if task.total_cost > max_cost:
        print("⚠️  Warning: Task exceeded budget!")
        print(f"   Budget: ${max_cost}")
        print(f"   Actual: ${task.total_cost:.6f}")
    
    return result

agent = Agent(model="anthropic/claude-sonnet-4-5")
task = Task("Generate a detailed analysis report")

result = execute_with_budget(agent, task, max_cost=0.001)

Cost Comparison Across Models

from upsonic import Agent, Task

models = [
    "openai/gpt-4o",
    "openai/gpt-4o-mini",
    "anthropic/claude-sonnet-4-5"
]

task_description = "Explain quantum computing in simple terms"

print("Cost Comparison:\n")

for model in models:
    agent = Agent(model=model)
    task = Task(task_description)
    
    result = agent.print_do(task)
    
    print(f"{model}")
    print(f"  Cost: ${task.total_cost:.6f}")
    print(f"  Tokens: {task.total_input_token + task.total_output_token}\n")

Best Practices

1. Cost Tracking Helper

from upsonic import Agent, Task

class CostTracker:
    def __init__(self):
        self.total_cost = 0
        self.task_count = 0
    
    def track(self, task):
        self.total_cost += task.total_cost
        self.task_count += 1
    
    def summary(self):
        avg_cost = self.total_cost / self.task_count if self.task_count > 0 else 0
        return {
            "total_cost": self.total_cost,
            "task_count": self.task_count,
            "average_cost": avg_cost
        }

# Usage
tracker = CostTracker()
agent = Agent(model="anthropic/claude-sonnet-4-5")

for i in range(10):
    task = Task(f"Process item {i}")
    agent.print_do(task)
    tracker.track(task)

stats = tracker.summary()
print(f"Processed {stats['task_count']} tasks")
print(f"Total cost: ${stats['total_cost']:.6f}")
print(f"Average: ${stats['average_cost']:.6f}")

2. Logging for Analysis

import json
from datetime import datetime

from upsonic import Agent, Task

def log_task_metrics(task, filename="task_metrics.jsonl"):
    """Log task metrics for later analysis"""
    metrics = {
        "timestamp": datetime.now().isoformat(),
        "description": task.description[:100],
        "cost": task.total_cost,
        "input_tokens": task.total_input_token,
        "output_tokens": task.total_output_token,
        "price_id": task.price_id
    }
    
    with open(filename, "a") as f:
        f.write(json.dumps(metrics) + "\n")

# Usage
agent = Agent(model="anthropic/claude-sonnet-4-5")
task = Task("Analyze customer sentiment")
agent.print_do(task)
log_task_metrics(task)

Understanding Price IDs

Each model and pricing tier has a unique price_id. This identifier:
  • Links to the specific pricing model used
  • Remains consistent for the same model/tier combination
  • Helps track pricing changes over time
  • Can be used for detailed billing breakdowns

from upsonic import Agent, Task

agent = Agent(model="anthropic/claude-sonnet-4-5")
task = Task("Sample query")
agent.print_do(task)

print(f"Model: {agent.model}")
print(f"Price ID: {task.price_id}")

Cost Optimization Tips

  1. Choose the right model: Use smaller models (gpt-4o-mini) for simpler tasks
  2. Optimize prompts: Shorter, clearer prompts use fewer input tokens
  3. Cache results: Use task caching for repeated queries
  4. Batch operations: Process multiple items in a single task when possible
  5. Monitor trends: Track metrics over time to identify optimization opportunities
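Tip 1 can be sketched as a simple routing helper: send short, simple prompts to a cheaper model and reserve the larger model for complex work. The model names, the length threshold, and the keyword check below are illustrative assumptions, not Upsonic defaults:

```python
# Illustrative model-routing sketch; thresholds and model names are assumptions.
CHEAP_MODEL = "openai/gpt-4o-mini"
LARGE_MODEL = "openai/gpt-4o"

def choose_model(prompt: str, complexity_threshold: int = 200) -> str:
    """Pick a model using a crude proxy for task complexity:
    prompt length plus a keyword check for analysis-heavy work."""
    if len(prompt) < complexity_threshold and "analyze" not in prompt.lower():
        return CHEAP_MODEL
    return LARGE_MODEL

print(choose_model("Summarize this paragraph"))  # short task -> cheap model
print(choose_model("Analyze quarterly revenue trends and flag anomalies"))  # -> large model
```

In practice you would pass the chosen name to `Agent(model=...)`; a real router could also weigh conversation history length or expected output size.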

Troubleshooting

Metrics Not Available

If metrics are not populated, ensure:
  • The task has been executed
  • The model provider supports token counting
  • The task completed successfully

from upsonic import Agent, Task

agent = Agent(model="anthropic/claude-sonnet-4-5")
task = Task("Test query")
result = agent.print_do(task)

if task.total_cost is None:
    print("Metrics not available - check task execution")
else:
    print(f"Metrics available: ${task.total_cost:.6f}")
Cost Tracking Accuracy

Costs are calculated based on the model provider's pricing at execution time. Token counts are precise, and costs are calculated using the latest pricing information from each provider.
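The underlying arithmetic is simple: cost is input tokens times the input rate plus output tokens times the output rate, with provider rates usually quoted per million tokens. A minimal sketch, using illustrative rates rather than any provider's actual pricing:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_mtok: float,
                  output_price_per_mtok: float) -> float:
    """Estimate request cost in USD; prices are per million tokens."""
    return (input_tokens * input_price_per_mtok
            + output_tokens * output_price_per_mtok) / 1_000_000

# Illustrative rates only -- check your provider's current pricing page.
cost = estimate_cost(68, 45, input_price_per_mtok=3.0, output_price_per_mtok=15.0)
print(f"${cost:.6f}")  # $0.000879
```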