Package generative_models (1.50.0)

API documentation for generative_models package.

Classes

Candidate

A response candidate generated by the model.

ChatSession

A chat session that holds the chat history.
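
Usage (a minimal sketch; the model name is illustrative):

```
model = GenerativeModel("gemini-pro")
chat = model.start_chat()
print(chat.send_message("Why is sky blue?"))
# The session keeps the history, so follow-up messages have context.
print(chat.send_message("Can you explain that more simply?"))
```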

Content

The multi-part content of a message.

Usage:

```
response = model.generate_content(contents=[
    Content(role="user", parts=[Part.from_text("Why is sky blue?")])
])
```

FinishReason

The reason why the model stopped generating tokens. If empty, the model has not stopped generating tokens.

Enum values:

FINISH_REASON_UNSPECIFIED (0):
    The finish reason is unspecified.
STOP (1):
    Natural stop point of the model or provided
    stop sequence.
MAX_TOKENS (2):
    The maximum number of tokens as specified in
    the request was reached.
SAFETY (3):
    The token generation was stopped as the
    response was flagged for safety reasons. NOTE:
    When streaming the Candidate.content will be
    empty if content filters blocked the output.
RECITATION (4):
    The token generation was stopped as the
    response was flagged for unauthorized citations.
OTHER (5):
    All other reasons that stopped the token
    generation.
BLOCKLIST (6):
    The token generation was stopped as the
    response was flagged for terms included in
    the terminology blocklist.
PROHIBITED_CONTENT (7):
    The token generation was stopped as the
    response was flagged for prohibited
    content.
SPII (8):
    The token generation was stopped as the
    response was flagged for Sensitive Personally
    Identifiable Information (SPII) contents.
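
For example, the finish reason can be inspected on a response candidate (a sketch, assuming a model created as in the `GenerativeModel` usage below):

```
response = model.generate_content("Tell me a story")
candidate = response.candidates[0]
if candidate.finish_reason == FinishReason.MAX_TOKENS:
    # The output was truncated; consider raising max_output_tokens.
    pass
```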

FunctionDeclaration

A representation of a function declaration.

Usage: Create function declaration and tool:

```
get_current_weather_func = generative_models.FunctionDeclaration(
    name="get_current_weather",
    description="Get the current weather in a given location",
    parameters={
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA"
            },
            "unit": {
                "type": "string",
                "enum": [
                    "celsius",
                    "fahrenheit",
                ]
            }
        },
        "required": [
            "location"
        ]
    },
)
weather_tool = generative_models.Tool(
    function_declarations=[get_current_weather_func],
)
```
Use tool in `GenerativeModel.generate_content`:
```
model = GenerativeModel("gemini-pro")
print(model.generate_content(
    "What is the weather like in Boston?",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
))
```
Use tool in chat:
```
model = GenerativeModel(
    "gemini-pro",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
)
chat = model.start_chat()
print(chat.send_message("What is the weather like in Boston?"))
print(chat.send_message(
    Part.from_function_response(
        name="get_current_weather",
        response={
            "content": {"weather_there": "super nice"},
        }
    ),
))
```

GenerationConfig

Parameters for the generation.
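
A sketch of configuring generation (the parameter values are illustrative):

```
generation_config = GenerationConfig(
    temperature=0.2,
    top_p=0.95,
    max_output_tokens=256,
)
model = GenerativeModel("gemini-pro", generation_config=generation_config)
# The config can also be passed per request:
print(model.generate_content("Hello", generation_config=generation_config))
```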

GenerationResponse

The response from the model.
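
A sketch of reading a response (assuming a model created as in the `GenerativeModel` usage below):

```
response = model.generate_content("Hello")
# Convenience accessor for the text of the first candidate:
print(response.text)
# Full candidate list, including finish reasons:
for candidate in response.candidates:
    print(candidate.finish_reason)
```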

GenerativeModel

Initializes GenerativeModel.

Usage:

```
model = GenerativeModel("gemini-pro")
print(model.generate_content("Hello"))
```

HarmBlockThreshold

Probability-based threshold levels for blocking.

Enum values:

HARM_BLOCK_THRESHOLD_UNSPECIFIED (0):
    Unspecified harm block threshold.
BLOCK_LOW_AND_ABOVE (1):
    Block low threshold and above (i.e. block
    more).
BLOCK_MEDIUM_AND_ABOVE (2):
    Block medium threshold and above.
BLOCK_ONLY_HIGH (3):
    Block only high threshold (i.e. block less).
BLOCK_NONE (4):
    Block none.
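
A sketch of mapping harm categories to block thresholds (the thresholds chosen here are illustrative):

```
safety_settings = {
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
}
model = GenerativeModel("gemini-pro", safety_settings=safety_settings)
```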

HarmCategory

Harm categories that will block the content.

Enum values:

HARM_CATEGORY_UNSPECIFIED (0):
    The harm category is unspecified.
HARM_CATEGORY_HATE_SPEECH (1):
    The harm category is hate speech.
HARM_CATEGORY_DANGEROUS_CONTENT (2):
    The harm category is dangerous content.
HARM_CATEGORY_HARASSMENT (3):
    The harm category is harassment.
HARM_CATEGORY_SEXUALLY_EXPLICIT (4):
    The harm category is sexually explicit
    content.

Image

The image that can be sent to a generative model.

Part

A part of a multi-part Content message.

Usage:

```
text_part = Part.from_text("Why is sky blue?")
image_part = Part.from_image(Image.load_from_file("image.jpg"))
video_part = Part.from_uri(uri="gs://.../video.mp4", mime_type="video/mp4")
function_response_part = Part.from_function_response(
    name="get_current_weather",
    response={
        "content": {"weather_there": "super nice"},
    }
)

response1 = model.generate_content([text_part, image_part])
response2 = model.generate_content(video_part)
response3 = chat.send_message(function_response_part)
```

ResponseValidationError

Exception raised when the model response fails validation.
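
A sketch of handling the error in a chat (assuming a model created as in the `GenerativeModel` usage above; `response_validation` is an assumed `start_chat` parameter for relaxing validation):

```
chat = model.start_chat()
try:
    response = chat.send_message("Why is sky blue?")
except ResponseValidationError:
    # The response was blocked or otherwise failed validation.
    ...
```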

SafetySetting

Safety settings that control how content is blocked.
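
Usage (a sketch; the category and threshold chosen here are illustrative):

```
safety_settings = [
    SafetySetting(
        category=HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
        threshold=HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    ),
]
model = GenerativeModel("gemini-pro")
print(model.generate_content("Hello", safety_settings=safety_settings))
```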

Tool

A collection of functions that the model may use to generate responses.

Usage: Create tool from function declarations:

```
get_current_weather_func = generative_models.FunctionDeclaration(...)
weather_tool = generative_models.Tool(
    function_declarations=[get_current_weather_func],
)
```
Use tool in `GenerativeModel.generate_content`:
```
model = GenerativeModel("gemini-pro")
print(model.generate_content(
    "What is the weather like in Boston?",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
))
```
Use tool in chat:
```
model = GenerativeModel(
    "gemini-pro",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
)
chat = model.start_chat()
print(chat.send_message("What is the weather like in Boston?"))
print(chat.send_message(
    Part.from_function_response(
        name="get_current_weather",
        response={
            "content": {"weather_there": "super nice"},
        }
    ),
))
```