Veo on Vertex AI is designed with Google's AI principles in mind. However, it is important for developers to understand how to test and deploy Google's models safely and responsibly. To aid developers, Veo on Vertex AI has built-in safety features to help customers block potentially harmful outputs within their use cases. For more information, see safety filters.
We encourage customers to use fairness, interpretability, privacy, and security best practices when developing AI applications. For more information, see The People + AI Guidebook.
## Safety filters
Veo on Vertex AI offers several ways to input prompts to generate videos, including text, video, and images. Prompts provided to Veo are assessed against a list of safety filters, which include harmful categories (for example, violence, sexual, derogatory, and toxic). These safety filters aim to filter out input images and videos that violate the Google Cloud Platform Acceptable Use Policy, the Generative AI Prohibited Use Policy, or Our AI Principles.
If the model responds to a request with an error message, such as "The prompt couldn't be submitted" or "it might violate our policies", then the input triggered a safety filter. If fewer videos than requested are returned, then some generated output was blocked for not meeting safety requirements.
## Safety filter code categories
Depending on the safety filters that you configure, your output may contain a safety code similar to: "Veo could not generate videos because the input image violates Vertex AI's usage guidelines. If you think this was an error, send feedback. Support codes: 15236754"
The code listed in the output corresponds to a specific harmful category. The following table shows the error code to safety category mappings:
| Error code | Safety category | Description |
|---|---|---|
| 58061214, 17301594 | Child | Detects child content where it isn't allowed due to the API request settings or allowlisting. |
| 29310472, 15236754 | Celebrity | Detects a photorealistic representation of a celebrity in the request. |
| 62263041 | Dangerous content | Detects content that's potentially dangerous in nature. |
| 57734940, 22137204 | Hate | Detects hate-related topics or content. |
| 74803281, 29578790, 42876398 | Other | Detects other miscellaneous safety issues with the request. |
| 92201652 | Personal information | Detects personally identifiable information (PII) in the text, such as a credit card number, home address, or other such information. |
| 89371032, 49114662, 72817394 | Prohibited content | Detects prohibited content in the request. |
| 90789179, 63429089, 43188360 | Sexual | Detects content that's sexual in nature. |
| 78610348 | Toxic | Detects toxic topics or content in the text. |
| 61493863, 56562880 | Violence | Detects violence-related content from the image or text. |
| 32635315 | Vulgar | Detects vulgar topics or content from the text. |
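As an illustration, client code can map the support codes from a blocked request back to their safety categories. This is a minimal sketch, not an official client: the `parse_support_codes` helper is hypothetical, and it assumes the error message follows the "Support codes: …" format quoted above; the code values themselves come from the table in this section.

```python
import re

# Support-code-to-category mapping, taken from the table in this section.
SAFETY_CATEGORIES = {
    58061214: "Child", 17301594: "Child",
    29310472: "Celebrity", 15236754: "Celebrity",
    62263041: "Dangerous content",
    57734940: "Hate", 22137204: "Hate",
    74803281: "Other", 29578790: "Other", 42876398: "Other",
    92201652: "Personal information",
    89371032: "Prohibited content", 49114662: "Prohibited content",
    72817394: "Prohibited content",
    90789179: "Sexual", 63429089: "Sexual", 43188360: "Sexual",
    78610348: "Toxic",
    61493863: "Violence", 56562880: "Violence",
    32635315: "Vulgar",
}

def parse_support_codes(message: str) -> list[str]:
    """Extract support codes from an error message and map them to categories.

    Hypothetical helper: assumes the message contains a
    "Support codes: <n>, <n>, ..." suffix as shown in the example above.
    """
    match = re.search(r"Support codes?:\s*([\d,\s]+)", message)
    if not match:
        return []
    codes = (int(c) for c in re.findall(r"\d+", match.group(1)))
    return [SAFETY_CATEGORIES.get(c, "Unknown") for c in codes]

msg = ("Veo could not generate videos because the input image violates "
       "Vertex AI's usage guidelines. Support codes: 15236754")
print(parse_support_codes(msg))  # ['Celebrity']
```

Logging the mapped category rather than the raw code can make it easier to decide whether to adjust the prompt, the input image, or the configured safety settings.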
## What's next

- Learn about Responsible AI for Large Language Models (LLMs).
- Learn more about Google's recommendations for Responsible AI practices.
- Read our blog, A shared agenda for responsible AI progress.