Documentation Index
Fetch the complete documentation index at: https://docs.upsonic.ai/llms.txt
Use this file to discover all available pages before exploring further.
This example demonstrates how to create and use an Upsonic Agent with NVIDIA NIM models using the NvidiaModel class. The example shows how to leverage NVIDIA's powerful AI models through their NIM (NVIDIA Inference Microservice) API, including models like Llama 3.1 Nemotron 70B, GPT-OSS, Mistral, and many others.
Overview
The Upsonic framework provides seamless integration with NVIDIA's AI models through the NIM API. This example showcases:
- NvidiaModel Integration – Using the NVIDIA NIM API to access various AI models
- Agent Configuration – Creating an Upsonic Agent with NVIDIA models
- Task Execution – Running tasks with the configured agent
- FastAPI Server – Running the agent as a production-ready API server
The NvidiaModel class provides access to NVIDIA's curated collection of AI models, including:
- Llama 3.1 Nemotron 70B – High-performance instruction-tuned model
- GPT-OSS models – OpenAI's open-weight models
- Mistral models – Mistral AI's powerful language models
- And many more – Access to NVIDIA's full model catalog
Project Structure
examples/nvidia-agent/
├── main.py              # Agent with NVIDIA model
├── upsonic_configs.json # Upsonic CLI configuration
├── .env.example         # Example environment variables
└── README.md            # Quick start guide
Environment Variables
You can configure the model using environment variables:
# Set API key
export NVIDIA_API_KEY="your-api-key"
# Or use NGC_API_KEY (alternative)
export NGC_API_KEY="your-api-key"
# Optional: Set custom base URL
export NVIDIA_BASE_URL="https://your-custom-endpoint.com/v1"
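The variables above can be resolved in code before constructing the model. A minimal sketch follows; the precedence of NVIDIA_API_KEY over NGC_API_KEY and the default hosted endpoint are assumptions for illustration, not confirmed NvidiaModel behavior:

```python
import os

def resolve_nvidia_credentials():
    """Resolve the NVIDIA API key and base URL from the environment."""
    # NVIDIA_API_KEY is preferred; NGC_API_KEY is the fallback (assumed precedence).
    api_key = os.environ.get("NVIDIA_API_KEY") or os.environ.get("NGC_API_KEY")
    if not api_key:
        raise RuntimeError("Set NVIDIA_API_KEY or NGC_API_KEY before running the agent")
    # NVIDIA's hosted NIM endpoint is the assumed default when no override is set.
    base_url = os.environ.get("NVIDIA_BASE_URL", "https://integrate.api.nvidia.com/v1")
    return api_key, base_url
```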
Getting your NVIDIA API key:
- Visit https://build.nvidia.com/
- Sign up or log in to your NVIDIA account
- Navigate to the API Keys section
- Create a new API key
- Copy the key to your environment variables
Installation
# Install dependencies
upsonic install
Managing Dependencies
# Add a package
upsonic add <package> <section>
upsonic add requests api
# Remove a package
upsonic remove <package> <section>
upsonic remove requests api
Sections: api, streamlit, development
Option 1: Run Directly
python main.py
Runs the agent once with a default test query (via the `if __name__ == "__main__"` block in main.py).
Option 2: Run as API Server
upsonic run
The server starts at http://localhost:8000; interactive API documentation is available at /docs.
Example API call:
curl -X POST http://localhost:8000/call \
-H "Content-Type: application/json" \
-d '{"user_query": "What is artificial intelligence?"}'
How It Works
| Component | Description |
|---|---|
| NvidiaModel | Wraps NVIDIA NIM API for model access |
| Agent | Upsonic Agent configured with NvidiaModel |
| Task | Task object containing user query |
| Execution | Agent processes task and returns response |
Example Output
Query:
"What is artificial intelligence?"
Response:
"Artificial intelligence (AI) is a branch of computer science that aims to create
systems capable of performing tasks that typically require human intelligence..."
Complete Implementation
main.py
"""
NVIDIA Agent Example
This example demonstrates how to create and use an Agent with NVIDIA models.
The example shows:
1. Creating a NvidiaModel instance
2. Creating an Agent with the NvidiaModel
3. Creating a Task
4. Executing the task with the agent
This file contains:
- async main(inputs): For use with `upsonic run` CLI command (FastAPI server)
"""
from upsonic import Agent, Task
from upsonic.models.nvidia import NvidiaModel
async def main(inputs: dict) -> dict:
"""
Async main function for FastAPI server (used by `upsonic run` command).
This function is called by the Upsonic CLI when running the agent as a server.
It receives inputs from the API request and returns a response dictionary.
Args:
inputs: Dictionary containing input parameters as defined in upsonic_configs.json
Expected key: "user_query" (string)
Returns:
Dictionary with output schema as defined in upsonic_configs.json
Expected key: "bot_response" (string)
"""
user_query = inputs.get("user_query", "Hi, how are you?")
model = NvidiaModel(
        model_name="nvidia/llama-3.1-nemotron-70b-instruct"
)
agent = Agent(
model=model,
name="NVIDIA Agent"
)
answering_task = Task(
description=f"Answer the user question: {user_query}"
)
result = await agent.print_do_async(answering_task)
return {
"bot_response": result
}
if __name__ == "__main__":
import asyncio
asyncio.run(main({"user_query": "Hi, how are you?"}))
upsonic_configs.json
{
"environment_variables": {
"UPSONIC_WORKERS_AMOUNT": {
"type": "number",
"description": "The number of workers for the Upsonic API",
"default": 1
},
"API_WORKERS": {
"type": "number",
"description": "The number of workers for the Upsonic API",
"default": 1
},
"RUNNER_CONCURRENCY": {
"type": "number",
"description": "The number of runners for the Upsonic API",
"default": 1
}
},
"machine_spec": {
"cpu": 2,
"memory": 4096,
"storage": 1024
},
"agent_name": "NVIDIA Agent",
"description": "NVIDIA-powered AI Agent using Upsonic framework with NvidiaModel",
"icon": "book",
"language": "book",
"streamlit": false,
"proxy_agent": false,
"dependencies": {
"api": [
"fastapi>=0.115.12",
"uvicorn>=0.34.2",
"upsonic",
"pip"
],
"streamlit": [
"streamlit==1.32.2",
"pandas==2.2.1",
"numpy==1.26.4"
],
"development": [
"watchdog",
"python-dotenv",
"ipdb",
"pytest",
"streamlit-autorefresh"
]
},
"entrypoints": {
"api_file": "main.py",
"streamlit_file": "streamlit_app.py"
},
"input_schema": {
"inputs": {
"user_query": {
"type": "string",
"description": "User's question or query for the NVIDIA agent to answer",
"required": true,
"default": null
}
}
},
"output_schema": {
"bot_response": {
"type": "string",
"description": "NVIDIA agent's generated response to the user query"
}
}
}
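The input_schema above describes the body of a /call request. As an illustration of how such a schema can be enforced, here is a minimal validator; the actual server-side validation performed by `upsonic run` is assumed, not documented here:

```python
def validate_inputs(inputs, schema):
    """Check a request body against the input_schema from upsonic_configs.json.

    Illustrative sketch; the real validation in `upsonic run` may differ.
    """
    type_map = {"string": str, "number": (int, float)}
    validated = {}
    for name, spec in schema["inputs"].items():
        if name not in inputs:
            if spec.get("required"):
                raise ValueError(f"Missing required input: {name}")
            validated[name] = spec.get("default")
            continue
        if not isinstance(inputs[name], type_map[spec["type"]]):
            raise ValueError(f"Input {name!r} must be of type {spec['type']}")
        validated[name] = inputs[name]
    return validated
```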
For more information on NVIDIA NIM, visit https://build.nvidia.com/.
Repository
View the complete example: NVIDIA Agent Example