Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| model_name | GoogleModelName | None | The name of the model to use |
| provider | Literal['google-gla', 'google-vertex'] \| Provider[Client] | 'google-gla' | The provider to use for authentication and API access. Can be either the string 'google-gla' or 'google-vertex' or an instance of Provider[httpx.AsyncClient]. If not provided, a new provider will be created using the other parameters |
| profile | ModelProfileSpec \| None | None | The model profile to use. Defaults to a profile picked by the provider based on the model name |
| settings | ModelSettings \| None | None | The model settings to use. Defaults to None |
Functions
__init__
Initialize a Gemini model.
Parameters:
model_name (GoogleModelName): The name of the model to use
provider (Literal['google-gla', 'google-vertex'] | Provider[Client]): The provider to use for authentication and API access. Can be either the string 'google-gla' or 'google-vertex' or an instance of Provider[httpx.AsyncClient]. If not provided, a new provider will be created using the other parameters
profile (ModelProfileSpec | None): The model profile to use. Defaults to a profile picked by the provider based on the model name
settings (ModelSettings | None): The model settings to use. Defaults to None
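Since `provider` accepts either a provider name string or a ready-made `Provider` instance, the normalization step can be sketched as below. This is an illustrative stand-in, not the library's actual implementation: the `Provider` dataclass and `resolve_provider` helper here are hypothetical simplifications of `Provider[httpx.AsyncClient]` and the constructor logic.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Provider:
    """Hypothetical stand-in for Provider[httpx.AsyncClient]."""
    name: str

def resolve_provider(provider: Union[str, Provider]) -> Provider:
    """Accept either a known provider name or an already-constructed
    Provider instance, and return a Provider either way."""
    if isinstance(provider, Provider):
        return provider
    if provider in ('google-gla', 'google-vertex'):
        # The real constructor would build a new provider here,
        # using the other parameters (credentials, client, etc.).
        return Provider(name=provider)
    raise ValueError(f'Unknown provider: {provider!r}')
```

Passing an existing `Provider` instance is useful when several models should share one authenticated HTTP client.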
base_url
Get the base URL for the provider API.
Returns:
str: The base URL for the provider API
model_name
Get the model name.
Returns:
GoogleModelName: The model name
system
Get the model provider.
Returns:
str: The model provider
request
Make a request to the model.
Parameters:
messages (list[ModelMessage]): The messages to send to the model
model_settings (ModelSettings | None): Model-specific settings
model_request_parameters (ModelRequestParameters): Request parameters
Returns:
ModelResponse: The model response
count_tokens
Make a request to the model for counting tokens.
Parameters:
messages (list[ModelMessage]): The messages to send to the model
model_settings (ModelSettings | None): Model-specific settings
model_request_parameters (ModelRequestParameters): Request parameters
Returns:
RequestUsage: The token usage information
request_stream
Make a request to the model and return a streaming response.
Parameters:
messages (list[ModelMessage]): The messages to send to the model
model_settings (ModelSettings | None): Model-specific settings
model_request_parameters (ModelRequestParameters): Request parameters
Returns:
AsyncIterator[StreamedResponse]: An async iterator of streamed responses
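Because request_stream returns an async iterator, callers consume it with `async for`. The sketch below shows that consumption pattern with a fake generator yielding plain strings; the real method yields StreamedResponse objects, so the chunk type here is a simplification.

```python
import asyncio
from typing import AsyncIterator

async def fake_request_stream() -> AsyncIterator[str]:
    # Stand-in for the real request_stream, which yields
    # StreamedResponse objects rather than strings.
    for chunk in ('Hello', ', ', 'world'):
        yield chunk

async def collect() -> str:
    """Accumulate streamed chunks into one string."""
    parts = []
    async for chunk in fake_request_stream():
        parts.append(chunk)
    return ''.join(parts)

result = asyncio.run(collect())
```

This incremental consumption is what lets a UI display partial output before the full response arrives.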
_get_tools
Get the tools for the model request.
Parameters:
model_request_parameters (ModelRequestParameters): Request parameters
Returns:
list[ToolDict] | None: The tools for the request
_get_tool_config
Get the tool configuration for the model request.
Parameters:
model_request_parameters (ModelRequestParameters): Request parameters
tools (list[ToolDict] | None): The tools to configure
Returns:
ToolConfigDict | None: The tool configuration
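A minimal sketch of what such a tool-config step might look like, assuming Gemini-style tool dicts with a `function_declarations` list (this is an illustration of the general pattern, not the library's actual logic):

```python
from typing import Optional

def get_tool_config(tools: Optional[list[dict]]) -> Optional[dict]:
    """If no tools are declared, no tool config is needed; otherwise
    build a config that names the declared functions."""
    if not tools:
        return None
    names = [
        decl['name']
        for tool in tools
        for decl in tool.get('function_declarations', [])
    ]
    return {
        'function_calling_config': {
            'mode': 'ANY',
            'allowed_function_names': names,
        }
    }
```

Restricting `allowed_function_names` keeps the model from calling functions the request never declared.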
_generate_content
Generate content using the Gemini API.
Parameters:
messages (list[ModelMessage]): The messages to send to the model
stream (bool): Whether to stream the response
model_settings (GoogleModelSettings): Google-specific model settings
model_request_parameters (ModelRequestParameters): Request parameters
Returns:
GenerateContentResponse | Awaitable[AsyncIterator[GenerateContentResponse]]: The response from Gemini
_build_content_and_config
Build content and configuration for the Gemini API.
Parameters:
messages (list[ModelMessage]): The messages to send to the model
model_settings (GoogleModelSettings): Google-specific model settings
model_request_parameters (ModelRequestParameters): Request parameters
Returns:
tuple[list[ContentUnionDict], GenerateContentConfigDict]: The content and configuration
_process_response
Process a non-streamed response, and prepare a message to return.
Parameters:
response
(GenerateContentResponse): The response from Gemini
ModelResponse
: The processed model response
_process_streamed_response
Process a streamed response, and prepare a streaming response to return.
Parameters:
response (AsyncIterator[GenerateContentResponse]): The streamed response from Gemini
model_request_parameters (ModelRequestParameters): Request parameters
Returns:
StreamedResponse: The processed streamed response
_map_messages
Map messages to Gemini format.
Parameters:
messages (list[ModelMessage]): The messages to map
Returns:
tuple[ContentDict | None, list[ContentUnionDict]]: The mapped messages
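The return type suggests a split between an optional system-instruction content and the remaining conversation contents, since the Gemini API takes the system instruction separately from the message list. A hedged sketch under that assumption, using (role, text) tuples as a simplified stand-in for ModelMessage:

```python
from typing import Optional

def map_messages(
    messages: list[tuple[str, str]],
) -> tuple[Optional[dict], list[dict]]:
    """Separate system messages from the conversation, returning
    (system_content | None, contents) in Gemini-style dicts."""
    system = None
    contents = []
    for role, text in messages:
        if role == 'system':
            # Gemini takes the system instruction outside `contents`.
            system = {'parts': [{'text': text}]}
        else:
            contents.append({'role': role, 'parts': [{'text': text}]})
    return system, contents
```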
_map_user_prompt
Map a user prompt to Gemini format.
Parameters:
part (UserPromptPart): The user prompt part to map
Returns:
list[PartDict]: The mapped user prompt
_map_response_schema
Map a response schema to Gemini format.
Parameters:
o (OutputObjectDefinition): The output object definition to map
Returns:
dict[str, Any]: The mapped response schema
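One plausible shape for this mapping, sketched under the assumption that an output object definition carries a JSON schema plus an optional description (the `json_schema` and `description` keys here are illustrative, not the library's actual field names):

```python
from typing import Any

def map_response_schema(o: dict[str, Any]) -> dict[str, Any]:
    """Merge an output definition's JSON schema with its description,
    producing a dict suitable for a response-schema setting."""
    schema = dict(o['json_schema'])  # copy so the input stays untouched
    if o.get('description'):
        schema['description'] = o['description']
    return schema
```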