SafetySetting(
    *,
    category: google.cloud.aiplatform_v1beta1.types.content.HarmCategory,
    threshold: google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockThreshold,
    method: typing.Optional[
        google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockMethod
    ] = None
)
Safety settings.
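
For orientation, a minimal construction sketch follows. It assumes the message and its enums are re-exported from google.cloud.aiplatform_v1beta1.types, which is the usual layout for this library; adjust the import to the content module if your version differs.

    from google.cloud.aiplatform_v1beta1.types import HarmCategory, SafetySetting

    # Block hate-speech content scored at medium probability or above.
    setting = SafetySetting(
        category=HarmCategory.HARM_CATEGORY_HATE_SPEECH,
        threshold=SafetySetting.HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    )
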
Classes
HarmBlockMethod
HarmBlockMethod(value)
Specifies whether blocking is based on the probability score alone or on both probability and severity scores.
Enum values:
    HARM_BLOCK_METHOD_UNSPECIFIED (0):
        The harm block method is unspecified.
    SEVERITY (1):
        The harm block method uses both probability and severity scores.
    PROBABILITY (2):
        The harm block method uses the probability score.
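
The method field is optional and may be left unset. A sketch of setting it explicitly so that severity is taken into account, using the same assumed import path as above:

    from google.cloud.aiplatform_v1beta1.types import HarmCategory, SafetySetting

    # Ask the service to weigh both probability and severity scores
    # when deciding whether to block dangerous content.
    setting = SafetySetting(
        category=HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
        threshold=SafetySetting.HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
        method=SafetySetting.HarmBlockMethod.SEVERITY,
    )
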
HarmBlockThreshold
HarmBlockThreshold(value)
Probability-based threshold levels for blocking.
Enum values:
    HARM_BLOCK_THRESHOLD_UNSPECIFIED (0):
        Unspecified harm block threshold.
    BLOCK_LOW_AND_ABOVE (1):
        Block low threshold and above (i.e. block more).
    BLOCK_MEDIUM_AND_ABOVE (2):
        Block medium threshold and above.
    BLOCK_ONLY_HIGH (3):
        Block only high threshold (i.e. block less).
    BLOCK_NONE (4):
        Block none.
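
Lower thresholds block more content; higher thresholds block less. A sketch contrasting a strict and a permissive setting for the same category (same assumed import path):

    from google.cloud.aiplatform_v1beta1.types import HarmCategory, SafetySetting

    # Strict: block anything scored at low probability or above.
    strict = SafetySetting(
        category=HarmCategory.HARM_CATEGORY_HARASSMENT,
        threshold=SafetySetting.HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    )

    # Permissive: block only content scored at high probability.
    permissive = SafetySetting(
        category=HarmCategory.HARM_CATEGORY_HARASSMENT,
        threshold=SafetySetting.HarmBlockThreshold.BLOCK_ONLY_HIGH,
    )
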
HarmCategory
HarmCategory(value)
Harm categories for which content can be blocked.
Enum values:
    HARM_CATEGORY_UNSPECIFIED (0):
        The harm category is unspecified.
    HARM_CATEGORY_HATE_SPEECH (1):
        The harm category is hate speech.
    HARM_CATEGORY_DANGEROUS_CONTENT (2):
        The harm category is dangerous content.
    HARM_CATEGORY_HARASSMENT (3):
        The harm category is harassment.
    HARM_CATEGORY_SEXUALLY_EXPLICIT (4):
        The harm category is sexually explicit content.
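
A request typically carries one SafetySetting per category of interest. A sketch that builds one setting for each concrete category (same assumed import path; HarmCategory is an IntEnum subclass, so it can be iterated):

    from google.cloud.aiplatform_v1beta1.types import HarmCategory, SafetySetting

    # One BLOCK_ONLY_HIGH setting for every category except UNSPECIFIED.
    safety_settings = [
        SafetySetting(
            category=category,
            threshold=SafetySetting.HarmBlockThreshold.BLOCK_ONLY_HIGH,
        )
        for category in HarmCategory
        if category != HarmCategory.HARM_CATEGORY_UNSPECIFIED
    ]
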
Methods
SafetySetting
SafetySetting(
    *,
    category: google.cloud.aiplatform_v1beta1.types.content.HarmCategory,
    threshold: google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockThreshold,
    method: typing.Optional[
        google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockMethod
    ] = None
)
Safety settings.
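
In practice, these messages are passed through the safety_settings field of a generate-content request. A sketch, assuming GenerateContentRequest, Content, and Part are available from the same types package as in current releases; the project and model resource name are placeholders:

    from google.cloud.aiplatform_v1beta1.types import (
        Content,
        GenerateContentRequest,
        HarmCategory,
        Part,
        SafetySetting,
    )

    request = GenerateContentRequest(
        # Hypothetical model resource name; substitute your own project and model.
        model="projects/my-project/locations/us-central1/publishers/google/models/gemini-pro",
        contents=[Content(role="user", parts=[Part(text="Hello")])],
        safety_settings=[
            SafetySetting(
                category=HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
                threshold=SafetySetting.HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
            )
        ],
    )
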