Tasks support various response formats to structure the output according to your needs. You can specify the expected format using the response_format parameter.

String Response (Default)

from upsonic import Agent, Task

# Create agent
agent = Agent(model="openai/gpt-4o")

# Default string response
task = Task(
    description="What is the capital of Japan?",
    response_format=str
)

# Execute and print result
result = agent.do(task)
print(result)

Pydantic Model Response

from upsonic import Agent, Task
from pydantic import BaseModel

class AnalysisResult(BaseModel):
    summary: str
    confidence: float
    recommendations: list[str]
    key_metrics: dict[str, float]

# Create agent
agent = Agent(model="openai/gpt-4o")

# Task with structured response
task = Task(
    description="Analyze the current state of electric vehicle market and provide structured results",
    response_format=AnalysisResult
)

# Execute and access structured result
result = agent.do(task)
print(f"Summary: {result.summary}")
print(f"Confidence: {result.confidence}")
print(f"Recommendations: {result.recommendations}")
print(f"Key Metrics: {result.key_metrics}")

Complex Nested Models

from upsonic import Agent, Task
from pydantic import BaseModel
from typing import List, Optional

class Metric(BaseModel):
    name: str
    value: float
    unit: str

class Recommendation(BaseModel):
    title: str
    description: str
    priority: str
    estimated_impact: float

class DetailedAnalysis(BaseModel):
    summary: str
    confidence: float
    metrics: List[Metric]
    recommendations: List[Recommendation]
    risk_factors: Optional[List[str]] = None

# Create agent
agent = Agent(model="openai/gpt-4o")

# Task with complex structured response
task = Task(
    description="Perform comprehensive analysis of the renewable energy sector with detailed metrics and recommendations",
    response_format=DetailedAnalysis
)

# Execute and access nested structured result
result = agent.do(task)
print(f"Summary: {result.summary}")
print(f"Confidence: {result.confidence}")
print(f"Metrics: {result.metrics}")
print(f"Recommendations: {result.recommendations}")

Response Format Types

Type      | Description               | Use Case
str       | Simple text response      | Basic questions, summaries
BaseModel | Structured Pydantic model | Complex data, analysis results
None      | No format constraint      | Flexible responses
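
When no format constraint is needed, response_format can be left as None; a minimal sketch, assuming the result then falls back to a plain string (as noted in the dynamic example below):

from upsonic import Agent, Task

# Create agent
agent = Agent(model="openai/gpt-4o")

# No explicit format constraint; the result is handled as plain text
task = Task(
    description="Give a short overview of solar panel technology",
    response_format=None
)

result = agent.do(task)
print(result)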

Dynamic Response Format

from upsonic import Agent, Task
from pydantic import BaseModel

class AnalysisResult(BaseModel):
    summary: str
    confidence: float
    recommendations: list[str]

# Response format can be set dynamically
def create_analysis_task(analysis_type: str):
    if analysis_type == "simple":
        return Task(description="Provide a brief summary of AI trends", response_format=str)
    elif analysis_type == "detailed":
        return Task(description="Provide detailed analysis of AI trends", response_format=AnalysisResult)
    else:
        return Task(description="Analyze AI trends", response_format=None)  # defaults to str

# Create agent
agent = Agent(model="openai/gpt-4o")

# Use dynamic task creation
task = create_analysis_task("detailed")
result = agent.do(task)
print(f"Summary: {result.summary}")
print(f"Confidence: {result.confidence}")

Best Practices

  • Structured Data: Use Pydantic models for complex, structured responses
  • Field Validation: Leverage Pydantic's built-in validation for data integrity (see the sketch after this list)
  • Optional Fields: Use Optional types for fields that might not always be present
  • Nested Models: Break down complex responses into smaller, reusable models
  • Type Hints: Always provide clear type hints for better IDE support and validation
  • Default Values: Set appropriate default values for optional fields
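
For example, field validation, optional fields, and default values can all be expressed on the response model itself. A minimal sketch using standard Pydantic features (the specific constraints shown are illustrative, not required by Upsonic):

from upsonic import Agent, Task
from pydantic import BaseModel, Field
from typing import Optional

class ValidatedAnalysis(BaseModel):
    summary: str = Field(..., min_length=10)                   # validated text field
    confidence: float = Field(..., ge=0.0, le=1.0)             # constrained to the 0-1 range
    recommendations: list[str] = Field(default_factory=list)   # default value
    risk_factors: Optional[list[str]] = None                   # optional field

# Create agent
agent = Agent(model="openai/gpt-4o")

# Task whose response is validated against the model's constraints
task = Task(
    description="Analyze current battery storage trends",
    response_format=ValidatedAnalysis
)

result = agent.do(task)
print(f"Confidence: {result.confidence}")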