This example demonstrates how to create and use an Upsonic Agent with Ollama models to run LLMs locally, configuring the agent for privacy and cost-efficiency.
Overview
The Upsonic framework provides seamless integration with local models via Ollama. This example showcases:
- Local Model Integration: Using OllamaModel to connect to local LLMs
- Privacy: Running inference entirely on your local machine
- Task Execution: Running simple QA tasks
- FastAPI Server: Running the agent as a production-ready API server
Project Structure
ollama_agent/
├── main.py               # Agent entry point
├── upsonic_configs.json  # Upsonic CLI configuration
└── README.md             # Quick start guide
Environment Variables
OLLAMA_BASE_URL="http://localhost:11434/v1/"
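The value shown is the default for a standard local Ollama install. If your instance listens on a different host or port, export the variable before starting the agent (or place it in a .env file, since python-dotenv is listed under the development dependencies):
export OLLAMA_BASE_URL="http://localhost:11434/v1/"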
Installation
# Install dependencies from upsonic_configs.json
upsonic install
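upsonic install resolves the packages listed under dependencies in upsonic_configs.json. Ollama itself must be installed separately, and the model used by main.py pulled beforehand (as the comment in main.py notes):
ollama pull gpt-oss:20b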
Managing Dependencies
# Add a package
upsonic add <package> <section>
# Remove a package
upsonic remove <package> <section>
Sections: api, streamlit, development
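For example, to add a package to the api section and remove it again (requests here is just an illustrative placeholder):
upsonic add requests api
upsonic remove requests api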
Option 1: Run Directly
Runs the agent with a default test query ("Hello, how are you?").
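As the docstring in main.py notes, direct execution is a plain script invocation:
python main.py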
Option 2: Run as API Server
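Start the agent with the Upsonic CLI (the upsonic run command referenced in main.py's docstring):
upsonic run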
The server starts at http://localhost:8000; interactive API documentation is available at /docs.
Example API call:
curl -X POST http://localhost:8000/call \
-H "Content-Type: application/json" \
-d '{"user_query": "Explain quantum computing in one sentence."}'
How It Works
| Component | Description |
|---|---|
| OllamaModel | Connects to the local Ollama instance (default port 11434) |
| Agent | Uses the local model for inference |
| Task | Encapsulates the user query |
| Execution | Runs the task synchronously or asynchronously |
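The complete implementation below wires these components into the server entry point. As a minimal standalone sketch (assuming agent.do is the synchronous counterpart of the do_async call used in main.py):
from upsonic import Agent, Task
from upsonic.models.ollama import OllamaModel

model = OllamaModel(model_name="gpt-oss:20b")   # requires `ollama pull gpt-oss:20b`
agent = Agent(model=model)                      # agent backed by the local model
task = Task(description="Hello, how are you?")  # wrap the user query in a Task

# Assumption: a synchronous `do` method; main.py awaits `agent.do_async(task)` instead.
result = agent.do(task)
print(result)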
Example Output
Query:
"Hello, how are you?"
Response:
"I'm just an AI, so I don't have feelings, but I'm functioning perfectly! How can I help you today?"
Complete Implementation
main.py
"""
Ollama Agent Example
This example demonstrates how to create and use an Upsonic Agent with local Ollama models.
This file contains:
- async main(inputs): For use with `upsonic run` CLI command (FastAPI server)
- direct execution: For running as a script `python main.py`
"""
from upsonic import Agent, Task
from upsonic.models.ollama import OllamaModel
# Initialize the model
# Ensure you have pulled the model: `ollama pull gpt-oss:20b`
model = OllamaModel(model_name="gpt-oss:20b")
async def main(inputs: dict) -> dict:
"""
Async main function for FastAPI server (used by `upsonic run` command).
"""
user_query = inputs.get("user_query", "Hello, how are you?")
agent = Agent(model=model)
task = Task(description=user_query)
result = await agent.do_async(task)
return {
"bot_response": result
}
if __name__ == "__main__":
import asyncio
print("π€ Running Ollama Agent directly...")
inputs = {"user_query": "Hello, how are you?"}
print(f"Task: {inputs['user_query']}")
result = asyncio.run(main(inputs))
print("-" * 50)
print("Result:")
print(result["bot_response"])
print("-" * 50)
upsonic_configs.json
{
"environment_variables": {
"#OLLAMA_BASE_URL": {
"type": "string",
"description": "Ollama Base URL",
"default": "http://localhost:11434/v1/"
}
},
"machine_spec": {
"cpu": 2,
"memory": 4096,
"storage": 1024
},
"agent_name": "Ollama Agent",
"description": "Simple Upsonic Agent using local Ollama models.",
"icon": "terminal",
"language": "python",
"streamlit": false,
"proxy_agent": false,
"dependencies": {
"api": [
"fastapi>=0.115.12",
"uvicorn>=0.34.2",
"upsonic",
"pip"
],
"development": [
"watchdog",
"python-dotenv",
"pytest"
]
},
"entrypoints": {
"api_file": "main.py"
},
"input_schema": {
"inputs": {
"user_query": {
"type": "string",
"description": "User's input question for the agent",
"required": true,
"default": null
}
}
},
"output_schema": {
"bot_response": {
"type": "string",
"description": "Agent's generated response"
}
}
}
Repository
View the complete example: Ollama Agent