What is an LLM Connection?
LLM Connections allow your agents to communicate with AI models from various providers. AgentOS supports both cloud-based and local LLM providers, giving you flexibility in choosing the right models for your use case.
You can configure multiple LLM connections and select different models for each agent based on your requirements.
OpenAI
Connect to OpenAI models such as GPT-4 and GPT-3.5 through the OpenAI API.
Setup Steps
1. Navigate to Settings > LLM Connections
2. Click Add LLM Connection
3. Select OpenAI as the provider
4. Enter your OpenAI API Key
5. (Optional) Configure an organization ID
6. Save the connection
Your agents can now use OpenAI models by referencing this connection.
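To confirm the key is valid, you can send a one-off request with OpenAI's official Python client outside of AgentOS. A minimal sketch, assuming the same key is exported as the OPENAI_API_KEY environment variable:

```python
from openai import OpenAI

# The client reads OPENAI_API_KEY from the environment -- the same key
# you entered in Settings > LLM Connections.
client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Reply with the word 'ok'."}],
)
print(resp.choices[0].message.content)
```

If this prints a response, the key is valid and the connection should work in AgentOS.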
Anthropic
Configure Anthropic’s Claude models, including Claude 3 Opus, Sonnet, and Haiku, for your agents.
Setup Steps
1. Go to Settings > LLM Connections
2. Click Add LLM Connection
3. Select Anthropic as the provider
4. Enter your Anthropic API Key
5. Save the connection
Your agents can now leverage Claude models for natural language understanding and generation.
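As with OpenAI, you can sanity-check the key outside AgentOS with Anthropic's official Python client. A minimal sketch, assuming the key is exported as ANTHROPIC_API_KEY:

```python
import anthropic

# The client reads ANTHROPIC_API_KEY from the environment.
client = anthropic.Anthropic()

resp = client.messages.create(
    model="claude-3-haiku-20240307",  # any Claude model your key can access
    max_tokens=32,
    messages=[{"role": "user", "content": "Reply with the word 'ok'."}],
)
print(resp.content[0].text)
```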
Ollama
Connect your local Ollama instance to run open-source models on your own hardware, without external API calls.
Setup Steps
1. Navigate to Settings > LLM Connections
2. Click Add LLM Connection
3. Select Ollama as the provider
4. Enter your Ollama server URL (e.g., http://localhost:11434)
5. Save the connection
Your agents can now use locally hosted models through Ollama, ensuring data privacy and reducing API costs.
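The connection talks to Ollama's REST API, so you can verify the server URL independently with plain HTTP. A minimal sketch, assuming Ollama is running at the default address and has pulled a model named llama3:

```python
import requests

OLLAMA_URL = "http://localhost:11434"  # the server URL from step 4

# GET /api/tags lists the models your local Ollama instance has pulled.
tags = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5).json()
print([m["name"] for m in tags["models"]])

# POST /api/generate runs a one-off, non-streaming completion.
resp = requests.post(
    f"{OLLAMA_URL}/api/generate",
    json={"model": "llama3", "prompt": "Say hello.", "stream": False},
    timeout=120,
)
print(resp.json()["response"])
```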
Nvidia NIM
Integrate Nvidia NIM for optimized model inference with enterprise-grade performance.
Setup Steps
1. Go to Settings > LLM Connections
2. Click Add LLM Connection
3. Select Nvidia NIM as the provider
4. Enter your NIM endpoint URL
5. (Optional) Configure an API key if required
6. Save the connection
Your agents can now use Nvidia’s optimized model serving infrastructure for high-performance inference.
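NIM serves an OpenAI-compatible API, so you can smoke-test an endpoint with the standard OpenAI client pointed at your endpoint URL. The base URL, API key, and model name below are placeholders; substitute the values from your own deployment:

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # your NIM endpoint URL (step 4)
    api_key="not-needed-for-local-nim",   # or your real key if required (step 5)
)

resp = client.chat.completions.create(
    model="meta/llama3-8b-instruct",  # depends on which NIM container you run
    messages=[{"role": "user", "content": "Reply with the word 'ok'."}],
)
print(resp.choices[0].message.content)
```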
Multiple LLM Connections
You can configure multiple LLM connections in AgentOS. Each agent uses one LLM connection at a time, but you can:
- Update the connection - Switch an agent to a different LLM provider
- Test different models - Use the Benchmark feature to compare models
- Mix providers - Different agents can use different LLM connections (see the sketch below)
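To make the mix-providers idea concrete, here is a hypothetical sketch of the kind of agent-to-connection mapping this enables. The structure and names are illustrative only; in AgentOS the assignment is made per agent in the UI, not in code:

```python
# Hypothetical mapping of agents to LLM connections (illustration only).
AGENT_CONNECTIONS = {
    "support-bot":   {"provider": "openai",    "model": "gpt-4"},
    "summarizer":    {"provider": "anthropic", "model": "claude-3-haiku-20240307"},
    "local-drafter": {"provider": "ollama",    "model": "llama3"},
}
```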
Benchmark Feature
The Benchmark feature allows you to test your agent with different LLM connections and compare:
- Token Usage - Compare token consumption across models
- Response Time - Measure inference speed
- Output Quality - Evaluate response quality
- Cost Analysis - Compare costs between different providers
This helps you choose the most cost-effective and performant model for your specific use case.
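To reproduce a rough version of this comparison outside AgentOS, the sketch below times one request per provider and reads the token counts each API reports. The prompt and model choices are assumptions; both clients read their API keys from the environment:

```python
import time
import anthropic
from openai import OpenAI

PROMPT = "Summarize the benefits of local LLM inference in two sentences."

def bench_openai(model: str = "gpt-4") -> dict:
    client = OpenAI()  # reads OPENAI_API_KEY
    start = time.perf_counter()
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    return {"latency_s": round(time.perf_counter() - start, 2),
            "total_tokens": resp.usage.total_tokens}

def bench_anthropic(model: str = "claude-3-haiku-20240307") -> dict:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
    start = time.perf_counter()
    resp = client.messages.create(
        model=model,
        max_tokens=256,
        messages=[{"role": "user", "content": PROMPT}],
    )
    return {"latency_s": round(time.perf_counter() - start, 2),
            "total_tokens": resp.usage.input_tokens + resp.usage.output_tokens}

print("openai:   ", bench_openai())
print("anthropic:", bench_anthropic())
```

A single request is a noisy measurement; for a fair comparison, average over several runs with the same prompts.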
Using LLM Connections in Agents
When deploying an agent, you can select which LLM connection to use. The agent configuration allows you to:
- Select Provider - Choose from your configured LLM connections
- Select Model - Pick the specific model (e.g., gpt-4, claude-3-opus)
AgentOS handles all API communication, credential management, and model invocation automatically.
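Under the hood, that selection amounts to routing each request to the client for the chosen provider. The dispatch below is a hypothetical sketch of that routing, written against the public client libraries; it is not AgentOS's actual implementation:

```python
import requests
import anthropic
from openai import OpenAI

def invoke(provider: str, model: str, prompt: str) -> str:
    """Route a prompt to the selected LLM connection (illustrative only)."""
    if provider == "openai":
        resp = OpenAI().chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content
    if provider == "anthropic":
        resp = anthropic.Anthropic().messages.create(
            model=model,
            max_tokens=512,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.content[0].text
    if provider == "ollama":
        resp = requests.post(
            "http://localhost:11434/api/generate",  # assumed local server URL
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=120,
        )
        return resp.json()["response"]
    raise ValueError(f"Unknown provider: {provider}")

print(invoke("openai", "gpt-4", "Say hello."))
```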