Harm categories that can be detected in user input and model responses.
| Enum | Description |
|---|---|
| HARM_CATEGORY_UNSPECIFIED | Default value. This value is unused. |
| HARM_CATEGORY_HATE_SPEECH | Content that promotes violence or incites hatred against individuals or groups based on certain attributes. |
| HARM_CATEGORY_DANGEROUS_CONTENT | Content that promotes, facilitates, or enables dangerous activities. |
| HARM_CATEGORY_HARASSMENT | Abusive or threatening content intended to bully, torment, or ridicule. |
| HARM_CATEGORY_SEXUALLY_EXPLICIT | Content that contains sexually explicit material. |
| HARM_CATEGORY_CIVIC_INTEGRITY | Deprecated: the election filter is no longer supported. This harm category covers civic integrity. |
| HARM_CATEGORY_IMAGE_HATE | Images that contain hate speech. |
| HARM_CATEGORY_IMAGE_DANGEROUS_CONTENT | Images that contain dangerous content. |
| HARM_CATEGORY_IMAGE_HARASSMENT | Images that contain harassment. |
| HARM_CATEGORY_IMAGE_SEXUALLY_EXPLICIT | Images that contain sexually explicit content. |
| HARM_CATEGORY_JAILBREAK | Prompts designed to bypass safety filters. |
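These enum values are used both in request-side safety settings and in the safety ratings returned with a response. The snippet below is a minimal sketch assuming the `google-generativeai` Python SDK; the model name, environment variable, prompt text, and chosen thresholds are illustrative assumptions, not part of this reference.

```python
# Minimal sketch: setting per-category safety thresholds and reading back
# the safety ratings on a response. Assumes the google-generativeai Python
# SDK; model name, env var, and thresholds are illustrative.
import os

import google.generativeai as genai
from google.generativeai.types import HarmBlockThreshold, HarmCategory

genai.configure(api_key=os.environ["GEMINI_API_KEY"])  # assumed env var name

model = genai.GenerativeModel(
    model_name="gemini-1.5-flash",  # illustrative model choice
    safety_settings={
        # Keys are harm categories from the table above; values control how
        # aggressively content in that category is blocked.
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    },
)

response = model.generate_content("Explain how safety filtering works.")

# The same category enum appears on the response side: each candidate carries
# safety ratings with the detected category and its probability.
for rating in response.candidates[0].safety_ratings:
    print(rating.category, rating.probability)
```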