What are LLM Models?
Large Language Models (LLMs) are the foundation of the Upsonic AI Agent Framework. The framework provides a unified interface to interact with various LLM providers, allowing you to build AI agents that can leverage different models without changing your code structure.
Model Architecture
In Upsonic, all model classes inherit from the base Model class, which provides:
- Unified Interface: Consistent API across all providers
- LCEL Integration: Models implement the Runnable interface for chain composition
- Streaming Support: Real-time response streaming for better UX
- Tool Calling: Native function calling capabilities
- Structured Output: Type-safe responses using Pydantic models
- Memory Management: Built-in conversation history support
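The unified-interface idea behind the base class can be illustrated with plain Python stand-ins (the class names below are purely illustrative, not Upsonic's actual implementation):

```python
from abc import ABC, abstractmethod

# Illustrative stand-ins only -- not Upsonic's actual classes.
class Model(ABC):
    """Base class: every provider exposes the same request() API."""

    @abstractmethod
    def request(self, prompt: str) -> str: ...

class OpenAIModel(Model):
    def request(self, prompt: str) -> str:
        return f"[openai] {prompt}"

class AnthropicModel(Model):
    def request(self, prompt: str) -> str:
        return f"[anthropic] {prompt}"

def run(model: Model, prompt: str) -> str:
    # Caller code is provider-agnostic: models can be swapped freely.
    return model.request(prompt)
```

Because callers depend only on the base interface, switching providers is a one-line change at construction time.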
Key Components
1. Model Settings
Model settings control the behavior of LLM requests. Provider-specific settings are namespaced with a provider prefix (openai_, anthropic_, google_) so that one settings object can carry options for multiple providers.
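As a sketch of this prefixing convention (the specific setting names below are assumptions for the example, not the framework's canonical set), a filter can keep common keys plus the keys intended for one provider:

```python
# Illustrative settings: common keys plus provider-prefixed ones.
# Setting names here are assumptions, not Upsonic's canonical list.
settings = {
    "temperature": 0.2,
    "max_tokens": 1024,
    "openai_reasoning_effort": "low",    # applies only to OpenAI models
    "anthropic_thinking_budget": 2048,   # applies only to Anthropic models
}

def settings_for(provider: str, settings: dict) -> dict:
    """Keep common keys plus the keys prefixed for this provider."""
    prefixes = ("openai_", "anthropic_", "google_")
    out = {}
    for key, value in settings.items():
        if key.startswith(prefixes):
            # Provider-specific: keep only if it matches, minus the prefix.
            if key.startswith(provider + "_"):
                out[key.removeprefix(provider + "_")] = value
        else:
            out[key] = value
    return out
```

This way, unrelated providers simply ignore each other's prefixed keys.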
2. Model Profiles
Profiles define model capabilities and behaviors.
3. Model Inference
Use infer_model() to automatically select the appropriate model class.
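Conceptually, inference maps a model identifier string to the matching provider class. Below is a simplified sketch of that dispatch; the "provider/model-id" string format and the class names are assumptions for illustration, not the framework's real registry:

```python
# Simplified sketch of model-string inference; names are illustrative.
_PROVIDER_CLASSES = {
    "openai": "OpenAIModel",
    "anthropic": "AnthropicModel",
    "google": "GoogleModel",
}

def infer_model(name: str) -> str:
    """Resolve a 'provider/model-id' string to a provider class name."""
    provider, sep, _model_id = name.partition("/")
    if not sep or provider not in _PROVIDER_CLASSES:
        raise ValueError(f"Cannot infer a model class from {name!r}")
    return _PROVIDER_CLASSES[provider]
```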
Usage Patterns
Basic Usage
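Assuming the package is installed and a provider API key is exported, basic usage looks roughly like the sketch below. The Agent and Task names follow Upsonic's public API, but the constructor arguments and model-string format are assumptions to verify against the API reference; this snippet calls a live model, so it is not runnable offline:

```python
from upsonic import Agent, Task  # requires the upsonic package

# Model string format assumed; needs OPENAI_API_KEY in the environment.
agent = Agent(model="openai/gpt-4o")
task = Task("Summarize what an LLM is in one sentence.")

result = agent.do(task)  # run the task and return the model's response
print(result)
```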
With Custom Settings
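Custom settings are typically supplied alongside the model choice. The fragment below is a configuration sketch; the parameter name and accepted keys are assumptions to check against the framework's settings documentation:

```python
from upsonic import Agent, Task  # requires the upsonic package

# Setting names are illustrative; verify against the ModelSettings docs.
agent = Agent(
    model="openai/gpt-4o",
    settings={"temperature": 0.1, "max_tokens": 512},
)
print(agent.do(Task("Reply with a one-word greeting.")))
```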
LCEL Chains
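The Runnable pipe pattern behind LCEL chains can be shown with a minimal stand-in. Real chains compose prompts, models, and output parsers the same way; this toy class is not the framework's implementation:

```python
# Toy Runnable showing how `|` composes steps left to right.
class Runnable:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # (a | b).invoke(x) == b.invoke(a.invoke(x))
        return Runnable(lambda value: other.invoke(self.invoke(value)))

prompt = Runnable(lambda topic: f"Explain {topic} briefly.")
fake_model = Runnable(lambda text: f"LLM response to: {text}")
chain = prompt | fake_model
```

Calling `chain.invoke("LCEL")` feeds the topic through the prompt step, then the model step, mirroring how a real prompt | model chain executes.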
Error Handling
The framework provides comprehensive error handling for LLM operations.
Common Exceptions
ModelHTTPError
Raised when an HTTP error occurs during model requests.
UserError
Raised for user-facing configuration or usage errors.
UnexpectedModelBehavior
Raised when a model responds in an unexpected way.
Handling Rate Limits
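A common pattern is to retry HTTP 429 responses with jittered exponential backoff. The sketch below uses a stand-in exception carrying a status_code attribute, mirroring what ModelHTTPError exposes; the helper name and defaults are our own:

```python
import random
import time

class ModelHTTPError(Exception):
    """Stand-in for the framework's HTTP error; carries the status code."""

    def __init__(self, status_code: int):
        super().__init__(f"HTTP {status_code}")
        self.status_code = status_code

def call_with_backoff(call, max_retries: int = 5, base_delay: float = 1.0):
    """Retry rate-limited calls (HTTP 429) with jittered exponential backoff."""
    for attempt in range(max_retries):
        try:
            return call()
        except ModelHTTPError as exc:
            if exc.status_code != 429 or attempt == max_retries - 1:
                raise  # not a rate limit, or out of retries
            time.sleep(base_delay * 2 ** attempt + random.random() * 0.1)
```

Any non-429 error is re-raised immediately so genuine failures are not masked by retries.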
Handling Token Limits
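One way to stay under a context limit is to trim older conversation turns before sending. The sketch below uses a rough four-characters-per-token estimate; production code should use the provider's actual tokenizer:

```python
def truncate_history(messages: list[str], max_tokens: int) -> list[str]:
    """Keep the most recent messages that fit an estimated token budget."""

    def estimate(text: str) -> int:
        return max(1, len(text) // 4)  # crude heuristic, not a real tokenizer

    kept, used = [], 0
    for message in reversed(messages):  # walk newest-first
        cost = estimate(message)
        if used + cost > max_tokens:
            break
        kept.append(message)
        used += cost
    return list(reversed(kept))  # restore chronological order
```

Dropping from the oldest end preserves the most recent context, which usually matters most for the next turn.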
Handling Invalid Responses
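When you expect structured output, validate the reply before trusting it. This standalone sketch checks for JSON with a required field (the field name is illustrative); inside the framework, malformed replies surface as UnexpectedModelBehavior:

```python
import json

def parse_reply(raw: str) -> dict:
    """Validate that a model reply is JSON containing a 'summary' field."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"model returned non-JSON output: {exc}") from exc
    if "summary" not in data:
        raise ValueError("model reply missing required field 'summary'")
    return data
```

Raising a clear error at the boundary lets the caller decide whether to retry the request or fail the task.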
Global Error Handling
Disable model requests globally for testing.
Best Practices
- Always Use Environment Variables: Store API keys in environment variables; never hardcode them
- Implement Retry Logic: Network errors and rate limits are common; use exponential backoff
- Monitor Token Usage: Track usage to avoid unexpected costs
- Handle Timeouts: Set appropriate timeouts based on your use case
- Validate Outputs: Use structured output with Pydantic models for type safety
- Log Errors: Implement comprehensive logging for debugging
- Use Streaming: For better UX, use streaming responses when available
- Test Error Paths: Write tests that cover error scenarios
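For the first practice above, a small helper that fails fast when a key is missing keeps secrets out of source code. The variable name below is OpenAI's convention; adapt it per provider:

```python
import os

def require_api_key(var: str = "OPENAI_API_KEY") -> str:
    """Fetch an API key from the environment, failing with a clear message."""
    key = os.environ.get(var, "").strip()
    if not key:
        raise RuntimeError(f"{var} is not set; export it instead of hardcoding keys")
    return key
```

Failing at startup with an explicit message is easier to debug than an authentication error deep inside a request.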
Next Steps
- Explore Compatibility Overview to see feature support across providers
- Learn about Native Model Providers for direct API access
- Check Model Gateways for unified access to multiple models

