Parameters(mapping=None, *, ignore_unknown_fields=False, **kwargs)
Generative model parameters that control the model's behavior.
Attributes

| Name | Description |
|---|---|
| `temperature` | `float`. The temperature used for sampling during response generation. Temperature controls the degree of randomness in token selection: lower temperature means less randomness, higher temperature means more. Valid range: [0.0, 1.0]. This field is a member of [oneof](https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields) `_temperature`. |
| `input_token_limit` | `google.cloud.dialogflowcx_v3beta1.types.LlmModelSettings.Parameters.InputTokenLimit`. The input token limit. This setting is currently only supported by playbooks. This field is a member of oneof `_input_token_limit`. |
| `output_token_limit` | `google.cloud.dialogflowcx_v3beta1.types.LlmModelSettings.Parameters.OutputTokenLimit`. The output token limit. This setting is currently only supported by playbooks. Only one of `output_token_limit` and `max_output_tokens` may be set. This field is a member of oneof `_output_token_limit`. |
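As an illustration (not part of the reference itself), the sketch below constructs a `Parameters` message with only the temperature set. It assumes a library version that ships `LlmModelSettings.Parameters`; wiring the message into a playbook's or agent's generative settings is omitted.

```python
from google.cloud.dialogflowcx_v3beta1 import types

# Minimal sketch: set only the temperature; both token-limit fields stay unset.
params = types.LlmModelSettings.Parameters(
    temperature=0.2,  # valid range [0.0, 1.0]; lower values mean less randomness
)

# temperature belongs to the oneof `_temperature`, so presence can be checked
# before reading it (proto-plus messages support `in` for optional fields).
if "temperature" in params:
    print(params.temperature)
```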
Classes

**InputTokenLimit**

`InputTokenLimit(value)`

The input token limit per LLM call. For each model's limit, see https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models.

**OutputTokenLimit**

`OutputTokenLimit(value)`

The output token limit per LLM call. The limits are subject to change. For each model's limit, see https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models.
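Because the concrete limit values are subject to change, one hedged way to see what the installed library supports is to enumerate the proto-plus enums at runtime rather than hard-coding member names (same version assumption as the sketch above):

```python
from google.cloud.dialogflowcx_v3beta1 import types

Parameters = types.LlmModelSettings.Parameters

# Proto-plus enums are IntEnum subclasses, so the supported input and output
# token limits can be listed at runtime instead of being hard-coded here.
for limit in Parameters.InputTokenLimit:
    print("input:", limit.name, limit.value)
for limit in Parameters.OutputTokenLimit:
    print("output:", limit.name, limit.value)
```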