What is an LLM Model?

Large Language Models (LLMs) are the foundation of the Upsonic AI Agent Framework. The framework provides a unified interface to interact with various LLM providers, allowing you to build AI agents that can leverage different models without changing your code structure. In Upsonic, all model classes inherit from the base Model class, which provides:
  • Unified Interface: Consistent API across all providers
  • UEL Integration: Models implement the Runnable interface for chain composition
  • Streaming Support: Real-time response streaming for better UX
  • Tool Calling: Native function calling capabilities
  • Structured Output: Type-safe responses using Pydantic models
  • Memory Management: Built-in conversation history support
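The structured-output bullet above is the easiest to make concrete. A minimal sketch of the idea, using only Pydantic (the schema and field names here are illustrative, not part of the Upsonic API): the provider's raw reply is validated into a typed model, so downstream code works with attributes instead of free-form text.

```python
from pydantic import BaseModel

# Hypothetical response schema; the fields are illustrative.
class CityFact(BaseModel):
    city: str
    country: str
    population: int

# With structured output enabled, the framework parses the LLM's reply
# into this type. Simulated here with a plain dict standing in for the
# model's JSON response:
raw = {"city": "Paris", "country": "France", "population": 2102650}
fact = CityFact.model_validate(raw)
print(fact.city, fact.population)
```

If the reply does not match the schema, `model_validate` raises a `ValidationError`, which is what makes the responses type-safe rather than best-effort parsed.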

Examples

With Model Classes

from upsonic import Agent, Task
from upsonic.models.anthropic import AnthropicModel

model = AnthropicModel(model_name="claude-sonnet-4-5")
agent = Agent(model=model)

task = Task("Hello, how are you?")
result = agent.do(task)
print(result)

With Model as String

from upsonic import Agent, Task

agent = Agent(model="anthropic/claude-sonnet-4-5")

task = Task("Hello, how are you?")
result = agent.do(task)
print(result)
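The string form encodes the provider and the model name separated by a slash. A minimal sketch of how such an identifier decomposes (the helper name is ours, not part of Upsonic; this only illustrates the "provider/model_name" convention):

```python
def split_model_id(model_id: str) -> tuple[str, str]:
    """Split a 'provider/model_name' identifier on the first slash,
    so model names containing slashes are preserved intact."""
    provider, _, name = model_id.partition("/")
    return provider, name

print(split_model_id("anthropic/claude-sonnet-4-5"))
# → ('anthropic', 'claude-sonnet-4-5')
```

Splitting only on the first slash matters for gateway providers, where the model name itself may contain a slash (e.g. an OpenRouter-style path).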

Supported LLM Models

Upsonic supports a wide range of LLM providers, from native APIs to local deployments and cloud-based solutions.

Browse All LLM Provider Integrations

See all 27+ supported LLM providers — native, cloud, local, and model gateways — with setup guides and examples.
  • Native Providers: OpenAI, Anthropic, Google Gemini, Mistral, Cohere, Grok, xAI, Moonshot AI
  • Cloud Providers: Azure OpenAI, AWS Bedrock, HuggingFace, Heroku, OVHcloud
  • Local Providers: Ollama, vLLM, LM Studio
  • Model Gateways: Groq, OpenRouter, LiteLLM, NVIDIA NIM, Together AI, SambaNova, Cerebras, Vercel, and more