Overview

Hugging Face provides access to thousands of open-source models through its Inference API, making it well suited to experimenting with cutting-edge models.

Model Class: HuggingFaceModel

Authentication

Set your Hugging Face access token in the HF_TOKEN environment variable:

export HF_TOKEN="hf_..."
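The token is read from the environment at runtime, so a missing or empty HF_TOKEN typically only surfaces on the first request. A minimal sketch that fails fast instead (the require_hf_token helper is an illustration, not part of the Upsonic API):

```python
import os

# Hypothetical helper (not part of Upsonic): check that the HF_TOKEN
# environment variable is set before constructing a model, so a missing
# token fails immediately rather than on the first API call.
def require_hf_token() -> str:
    token = os.environ.get("HF_TOKEN")
    if not token:
        raise RuntimeError("HF_TOKEN is not set; export it before running.")
    return token
```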

Examples

from upsonic import Agent, Task
from upsonic.models.huggingface import HuggingFaceModel

model = HuggingFaceModel(model_name="meta-llama/Llama-3.3-70B-Instruct")
agent = Agent(model=model)

task = Task("Hello, how are you?")
result = agent.do(task)
print(result)

Model Settings

You can set model parameters in two ways: on the model or on the Agent.

On the model:

from upsonic import Agent, Task
from upsonic.models.huggingface import HuggingFaceModel, HuggingFaceModelSettings

model = HuggingFaceModel(
    model_name="meta-llama/Llama-3.3-70B-Instruct",
    settings=HuggingFaceModelSettings(max_tokens=1024, temperature=0.7)
)
agent = Agent(model=model)

On the Agent:

from upsonic import Agent, Task
from upsonic.models.huggingface import HuggingFaceModelSettings

agent = Agent(
    model="huggingface/meta-llama/Llama-3.3-70B-Instruct",
    settings=HuggingFaceModelSettings(max_tokens=1024)
)
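When a model is given to the Agent as a string, the provider comes first, followed by the full model name, as in "huggingface/meta-llama/Llama-3.3-70B-Instruct". A sketch of how such a string splits at the first slash (split_model_string is a hypothetical helper for illustration, not Upsonic's actual parser):

```python
# Hypothetical illustration of the "provider/model-name" convention used in
# Agent model strings; not part of the Upsonic API. Splits at the FIRST "/"
# only, since Hugging Face model names themselves contain a slash.
def split_model_string(model: str) -> tuple[str, str]:
    provider, _, model_name = model.partition("/")
    return provider, model_name
```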

Parameters

| Parameter | Type | Description | Default | Source |
|---|---|---|---|---|
| max_tokens | int | Maximum tokens to generate | 2048 | Base |
| temperature | float | Sampling temperature (0.0-1.0) | 0.7 | Base |
| top_p | float | Nucleus sampling | 0.95 | Base |
| seed | int | Random seed | None | Base |
| stop_sequences | list[str] | Stop sequences | None | Base |
| timeout | float | Request timeout (seconds) | 600 | Base |
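The defaults above can be captured in a plain dictionary for reference. This is a sketch of the documented defaults only (the dict and the merged_settings helper are illustrations, not the library's actual settings class):

```python
# Documented defaults from the parameters table; for reference only,
# not Upsonic's actual HuggingFaceModelSettings implementation.
HF_SETTING_DEFAULTS = {
    "max_tokens": 2048,      # maximum tokens to generate
    "temperature": 0.7,      # sampling temperature, 0.0-1.0
    "top_p": 0.95,           # nucleus sampling cutoff
    "seed": None,            # random seed (None = nondeterministic)
    "stop_sequences": None,  # list[str] of stop sequences
    "timeout": 600.0,        # request timeout in seconds
}

def merged_settings(**overrides):
    """Hypothetical helper: defaults with user overrides applied on top."""
    unknown = set(overrides) - set(HF_SETTING_DEFAULTS)
    if unknown:
        raise ValueError(f"Unknown settings: {sorted(unknown)}")
    return {**HF_SETTING_DEFAULTS, **overrides}
```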