Overview
The Upsonic AI Agent Framework provides comprehensive support for the latest language models from major providers. The framework offers a flexible model system where you can create model instances with custom settings and profiles, enabling fine-tuned control over model behavior and performance.

Supported Models
The framework supports the latest models from major providers:

OpenAI Models
- gpt-5, gpt-5-mini, gpt-5-nano
- gpt-4o, gpt-4o-mini
- o1-preview, o1-mini, o1-pro
- o3, o3-mini, o3-pro
Anthropic Models
- claude-3-5-sonnet-20241022, claude-3-5-haiku-20241022
- claude-3-7-sonnet-20250219
- claude-4-opus-20250514, claude-4-sonnet-20250514
- claude-opus-4-0, claude-sonnet-4-0
Google Models
- gemini-2.5-flash, gemini-2.5-flash-lite, gemini-2.5-pro
- gemini-2.0-flash, gemini-2.0-flash-lite
Other Major Providers
- Groq: llama-3.3-70b-versatile, llama-3.1-8b-instant
- Cohere: command-r-plus, command-r, c4ai-aya-expanse-32b
- Mistral: mistral-large-latest, codestral-latest
- Grok: grok-4, grok-3, grok-3-mini
- Bedrock: various Amazon, Anthropic, Meta, and Mistral models
Environment Variables
Each provider requires specific environment variables for authentication:

OpenAI
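A typical shell setup (OPENAI_API_KEY is OpenAI's standard variable name; confirm it is the one the framework reads):

```shell
export OPENAI_API_KEY="your-openai-api-key"
```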
Anthropic
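For example (ANTHROPIC_API_KEY is Anthropic's standard variable name):

```shell
export ANTHROPIC_API_KEY="your-anthropic-api-key"
```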
Groq
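For example (GROQ_API_KEY is Groq's standard variable name):

```shell
export GROQ_API_KEY="your-groq-api-key"
```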
Cohere
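Cohere clients conventionally read CO_API_KEY, though some integrations use COHERE_API_KEY instead; check which name the framework expects:

```shell
export CO_API_KEY="your-cohere-api-key"
```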
Mistral
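For example (MISTRAL_API_KEY is Mistral's standard variable name):

```shell
export MISTRAL_API_KEY="your-mistral-api-key"
```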
Hugging Face
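Hugging Face tooling conventionally reads HF_TOKEN:

```shell
export HF_TOKEN="your-hugging-face-token"
```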
Grok
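Grok models are served by xAI, whose convention is XAI_API_KEY; verify this is the name the framework reads:

```shell
export XAI_API_KEY="your-xai-api-key"
```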
Azure OpenAI
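Azure OpenAI typically needs both a key and the resource endpoint. The variable names below follow the common Azure SDK convention; exact names can vary by integration, so confirm against the framework's docs:

```shell
export AZURE_OPENAI_API_KEY="your-azure-openai-key"
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com"
```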
Other Providers
- DeepSeek: DEEPSEEK_API_KEY
- Fireworks: FIREWORKS_API_KEY
- OpenRouter: OPENROUTER_API_KEY
- GitHub: GITHUB_API_KEY
- Ollama: OLLAMA_BASE_URL (API key optional)
Overview
The Upsonic framework provides a flexible model system where you can create model instances with custom settings and profiles. All model classes inherit from a base Model class and support provider-specific settings that extend the base ModelSettings class.
Base Model Class
All model classes inherit from the base Model class:
Base ModelSettings
The base ModelSettings class provides common parameters supported across multiple providers:
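As an illustration of the kind of parameters the base class covers, the dict below uses field names that follow common LLM-API conventions; they are assumptions, so check the ModelSettings class in the Upsonic source for the authoritative list:

```python
# Illustrative base settings. Field names (max_tokens, temperature, top_p,
# timeout) are assumptions modeled on common conventions, not a verified
# copy of Upsonic's ModelSettings definition.
base_settings = {
    "max_tokens": 1024,   # cap on generated tokens
    "temperature": 0.7,   # sampling temperature
    "top_p": 0.95,        # nucleus sampling threshold
    "timeout": 60.0,      # request timeout in seconds
}

print(sorted(base_settings))
```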
Provider-Specific Settings
Each provider extends the base ModelSettings class with provider-specific parameters. All provider-specific parameters are prefixed with the provider name to avoid conflicts.
Anthropic Settings
OpenAI Settings
Google Settings
Groq Settings
Bedrock Settings
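The dicts below illustrate the provider-name prefix convention described above. The specific keys (anthropic_thinking, openai_reasoning_effort, google_thinking_config, groq_reasoning_format) are assumptions patterned on each provider's thinking/reasoning options; verify them against the corresponding settings classes in the framework:

```python
# Illustrative provider-specific settings. Every key carries its provider
# prefix, so settings from different providers cannot collide. Key names
# are assumptions; confirm against each settings class.
anthropic_settings = {"anthropic_thinking": {"type": "enabled", "budget_tokens": 4096}}
openai_settings = {"openai_reasoning_effort": "high"}
google_settings = {"google_thinking_config": {"thinking_budget": 2048}}
groq_settings = {"groq_reasoning_format": "parsed"}

for prefix, settings in [
    ("anthropic_", anthropic_settings),
    ("openai_", openai_settings),
    ("google_", google_settings),
    ("groq_", groq_settings),
]:
    # Demonstrate the prefixing convention programmatically.
    assert all(key.startswith(prefix) for key in settings)
```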
Model Creation Patterns
Pattern 1: Create Model with Settings and Profile
Pattern 2: Create Agent with Model, Settings, and Profile
Pattern 3: Create Model with Provider-Specific Settings Only
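Because the concrete constructor signatures vary by release, the sketch below uses schematic stand-in classes to show the shape of the three patterns; everything here is a stand-in, not the real Upsonic API:

```python
from dataclasses import dataclass, field

# Schematic stand-ins only: the real Model, Agent, and profile types live
# in the upsonic package and may take different arguments.
@dataclass
class Model:
    name: str
    settings: dict = field(default_factory=dict)
    profile: dict = field(default_factory=dict)

@dataclass
class Agent:
    model: Model
    settings: dict = field(default_factory=dict)

# Pattern 1: model created with settings and a profile attached.
model = Model(
    "claude-4-sonnet-20250514",
    settings={"temperature": 0.7},
    profile={"supports_tools": True},
)

# Pattern 2: agent created with a model plus agent-level settings; per the
# framework's notes, the two settings dicts are merged.
agent = Agent(model=model, settings={"max_tokens": 1024})
merged = {**agent.model.settings, **agent.settings}

# Pattern 3: model created with provider-specific settings only.
thinking_model = Model(
    "claude-4-opus-20250514",
    settings={"anthropic_thinking": {"type": "enabled", "budget_tokens": 4096}},
)

print(merged)
```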
Usage Examples
Anthropic with Thinking Enabled
OpenAI with High-Effort Reasoning
Google with Thinking Configuration
Groq with Reasoning Format
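As a concrete illustration of the "Anthropic with Thinking Enabled" case above, the settings dict below combines base parameters with an assumed anthropic_thinking option. The key name and its structure are assumptions patterned on Anthropic's extended-thinking API; check the framework's Anthropic settings class before relying on them:

```python
# Base parameters plus an Anthropic-specific thinking option.
# "anthropic_thinking" and its structure are assumptions, not a verified
# Upsonic API; verify against the framework's Anthropic settings class.
settings = {
    "max_tokens": 8192,          # leave room for both thinking and the answer
    "temperature": 1.0,          # sampling temperature
    "anthropic_thinking": {      # assumed provider-specific key
        "type": "enabled",
        "budget_tokens": 4096,   # thinking budget, kept below max_tokens
    },
}

# The thinking budget should stay below the overall token cap.
assert settings["anthropic_thinking"]["budget_tokens"] < settings["max_tokens"]
```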
Best Practices
- Use Provider-Specific Settings: Always use the appropriate provider-specific settings class for the model you're using.
- Combine Base and Provider Settings: You can combine base ModelSettings parameters with provider-specific ones.
- Set Appropriate Budgets: For thinking/reasoning features, set appropriate token budgets based on task complexity.
- Enable Framework Features: Use enable_thinking_tool=True on the Agent for orchestrated thinking capabilities.
- Test Different Configurations: Experiment with different settings to find the optimal configuration for your use case.
Notes
- All provider-specific parameters are prefixed with the provider name (e.g., anthropic_, openai_, google_)
- Settings are merged when passed to both the model and agent
- Profile settings control how the model handles tools and structured output
- Some features like thinking/reasoning are only available on specific model variants
- Always check the provider documentation for the latest supported models and features

