Performs asynchronous video annotation. Progress and results can be retrieved through the google.longrunning.Operations interface. Operation.metadata contains AnnotateVideoProgress (progress). Operation.response contains AnnotateVideoResponse (results).
HTTP request
POST https://videointelligence.googleapis.com/v1beta2/videos:annotate
The URL uses Google API HTTP annotation syntax.
Request body
The request body contains data with the following structure:
JSON representation

    {
      "inputUri": string,
      "inputContent": string,
      "features": [
        enum(Feature)
      ],
      "videoContext": {
        object(VideoContext)
      },
      "outputUri": string,
      "locationId": string
    }
| Fields | |
| --- | --- |
| inputUri | Input video location. Currently, only Google Cloud Storage URIs are supported, which must be specified in the following format: `gs://bucket-id/object-id`. If unset, the input video should be embedded in the request as `inputContent`. If set, `inputContent` should be unset. |
| inputContent | The video data bytes. If unset, the input video(s) should be specified via `inputUri`. If set, `inputUri` should be unset. A base64-encoded string. |
| features[] | Requested video annotation features. |
| videoContext | Additional video context and/or feature-specific parameters. |
| outputUri | Optional location where the output (in JSON format) should be stored. Currently, only Google Cloud Storage URIs are supported, which must be specified in the following format: `gs://bucket-id/object-id`. |
| locationId | Optional cloud region where annotation should take place. Supported cloud regions: `us-east1`, `us-west1`, `europe-west1`, `asia-east1`. If no region is specified, a region will be determined based on video file location. |
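For orientation, here is a minimal sketch of issuing this request with Python's `requests` library. The bucket and object names are placeholders, and an OAuth 2.0 access token (see Authorization Scopes below) is assumed to be available in an `ACCESS_TOKEN` environment variable.

```python
# Minimal videos:annotate request sketch. The inputUri below is a
# placeholder; ACCESS_TOKEN must hold a valid OAuth 2.0 access token.
import os
import requests

ENDPOINT = "https://videointelligence.googleapis.com/v1beta2/videos:annotate"

body = {
    "inputUri": "gs://example-bucket/example-video.mp4",  # placeholder
    "features": ["LABEL_DETECTION", "SHOT_CHANGE_DETECTION"],
}

resp = requests.post(
    ENDPOINT,
    json=body,
    headers={"Authorization": "Bearer " + os.environ["ACCESS_TOKEN"]},
)
resp.raise_for_status()
operation = resp.json()
print(operation["name"])  # operation name, used to poll for results
```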
Response body
If successful, the response body contains an instance of Operation.
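Because annotation is asynchronous, clients typically poll the returned Operation until its done field is true. The sketch below assumes the standard google.longrunning pattern of fetching the operation by name under the same service endpoint; consult the Operations documentation for the exact path.

```python
# Hedged sketch of polling a long-running operation. The URL pattern
# (service endpoint + "/v1beta2/" + operation name) is an assumption
# based on the google.longrunning.Operations interface.
import time
import requests

def wait_for_operation(name, token, interval=10.0):
    url = "https://videointelligence.googleapis.com/v1beta2/" + name
    headers = {"Authorization": "Bearer " + token}
    while True:
        op = requests.get(url, headers=headers).json()
        if op.get("done"):
            if "error" in op:           # google.rpc.Status on failure
                raise RuntimeError(op["error"])
            return op["response"]       # AnnotateVideoResponse on success
        time.sleep(interval)            # metadata holds AnnotateVideoProgress
```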
Authorization Scopes
Requires the following OAuth scope:
https://www.googleapis.com/auth/cloud-platform
For more information, see the Auth Guide.
Feature
Video annotation feature.
| Enums | |
| --- | --- |
| FEATURE_UNSPECIFIED | Unspecified. |
| LABEL_DETECTION | Label detection. Detect objects, such as dog or flower. |
| SHOT_CHANGE_DETECTION | Shot change detection. |
| EXPLICIT_CONTENT_DETECTION | Explicit content detection. |
| SPEECH_TRANSCRIPTION | Speech transcription. |
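All four features can be requested in a single call; the features field simply takes a list of these enum names as strings:

```python
# Requesting every listed feature in one annotate call.
features = [
    "LABEL_DETECTION",
    "SHOT_CHANGE_DETECTION",
    "EXPLICIT_CONTENT_DETECTION",
    "SPEECH_TRANSCRIPTION",
]
```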
VideoContext
Video context and/or feature-specific parameters.
JSON representation

    {
      "segments": [
        {
          object(VideoSegment)
        }
      ],
      "labelDetectionConfig": {
        object(LabelDetectionConfig)
      },
      "shotChangeDetectionConfig": {
        object(ShotChangeDetectionConfig)
      },
      "explicitContentDetectionConfig": {
        object(ExplicitContentDetectionConfig)
      },
      "speechTranscriptionConfig": {
        object(SpeechTranscriptionConfig)
      }
    }
| Fields | |
| --- | --- |
| segments[] | Video segments to annotate. The segments may overlap and are not required to be contiguous or to span the whole video. If unspecified, each video is treated as a single segment. |
| labelDetectionConfig | Config for LABEL_DETECTION. |
| shotChangeDetectionConfig | Config for SHOT_CHANGE_DETECTION. |
| explicitContentDetectionConfig | Config for EXPLICIT_CONTENT_DETECTION. |
| speechTranscriptionConfig | Config for SPEECH_TRANSCRIPTION. |
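As an illustration, a VideoContext that limits annotation to two segments and tunes two of the feature configs might look like the dict below. The startTimeOffset/endTimeOffset fields are assumed from the linked VideoSegment type, and the timestamps are made up.

```python
# Illustrative VideoContext: annotate only two segments, with per-feature
# configuration. Segment fields are assumed from the VideoSegment type.
video_context = {
    "segments": [
        {"startTimeOffset": "0s", "endTimeOffset": "30s"},
        {"startTimeOffset": "60s", "endTimeOffset": "90s"},
    ],
    "labelDetectionConfig": {"labelDetectionMode": "SHOT_MODE"},
    "shotChangeDetectionConfig": {"model": "builtin/stable"},
}
```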
LabelDetectionConfig
Config for LABEL_DETECTION.
JSON representation

    {
      "labelDetectionMode": enum(LabelDetectionMode),
      "stationaryCamera": boolean,
      "model": string
    }
| Fields | |
| --- | --- |
| labelDetectionMode | What labels should be detected with LABEL_DETECTION, in addition to video-level labels or segment-level labels. If unspecified, defaults to SHOT_MODE. |
| stationaryCamera | Whether the video has been shot from a stationary (i.e., non-moving) camera. When set to true, might improve detection accuracy for moving objects. Should be used with SHOT_AND_FRAME_MODE enabled. |
| model | Model to use for label detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest". |
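For example, footage from a fixed security camera could pair stationaryCamera with frame-level labels, as the field descriptions above suggest:

```python
# Label detection tuned for a fixed camera: per the stationaryCamera note,
# the flag is paired with SHOT_AND_FRAME_MODE.
label_detection_config = {
    "labelDetectionMode": "SHOT_AND_FRAME_MODE",
    "stationaryCamera": True,
    "model": "builtin/stable",  # default; "builtin/latest" is the alternative
}
```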
LabelDetectionMode
Label detection mode.
| Enums | |
| --- | --- |
| LABEL_DETECTION_MODE_UNSPECIFIED | Unspecified. |
| SHOT_MODE | Detect shot-level labels. |
| FRAME_MODE | Detect frame-level labels. |
| SHOT_AND_FRAME_MODE | Detect both shot-level and frame-level labels. |
ShotChangeDetectionConfig
Config for SHOT_CHANGE_DETECTION.
JSON representation

    { "model": string }

| Fields | |
| --- | --- |
| model | Model to use for shot change detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest". |
ExplicitContentDetectionConfig
Config for EXPLICIT_CONTENT_DETECTION.
JSON representation

    { "model": string }

| Fields | |
| --- | --- |
| model | Model to use for explicit content detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest". |
SpeechTranscriptionConfig
Config for SPEECH_TRANSCRIPTION.
JSON representation

    {
      "languageCode": string,
      "maxAlternatives": number,
      "filterProfanity": boolean,
      "speechContexts": [
        {
          object(SpeechContext)
        }
      ],
      "audioTracks": [
        number
      ]
    }
| Fields | |
| --- | --- |
| languageCode | Required The language of the supplied audio as a BCP-47 language tag. Example: "en-US". See Language Support for a list of the currently supported language codes. |
| maxAlternatives | Optional Maximum number of recognition hypotheses to be returned. Specifically, the maximum number of SpeechRecognitionAlternative messages within each SpeechTranscription. The server may return fewer than maxAlternatives. Valid values are 0-30. A value of 0 or 1 will return a maximum of one. If omitted, will return a maximum of one. |
| filterProfanity | Optional If set to true, the server will attempt to filter out profanities, replacing all but the initial character in each filtered word with asterisks, e.g. "f***". If set to false or omitted, profanities won't be filtered out. |
| speechContexts[] | Optional A means to provide context to assist the speech recognition. |
| audioTracks[] | Optional For file formats, such as MXF or MKV, supporting multiple audio tracks, specify up to two tracks. Default: track 0. |
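A filled-in config might look like this sketch: US English, two hypotheses per utterance, profanity filtering on, one phrase hint, and the default audio track.

```python
# Sketch of a SpeechTranscriptionConfig; values are illustrative only.
speech_transcription_config = {
    "languageCode": "en-US",          # required BCP-47 tag
    "maxAlternatives": 2,
    "filterProfanity": True,
    "speechContexts": [
        {"phrases": ["Cloud Video Intelligence"]}  # see SpeechContext below
    ],
    "audioTracks": [0],               # transcribe the default track
}
```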
SpeechContext
Provides "hints" to the speech recognizer to favor specific words and phrases in the results.
JSON representation

    { "phrases": [ string ] }
| Fields | |
| --- | --- |
| phrases[] | Optional A list of strings containing word and phrase "hints" so that the speech recognizer is more likely to recognize them. This can be used to improve the accuracy for specific words and phrases, for example, if specific commands are typically spoken by the user. This can also be used to add additional words to the vocabulary of the recognizer. See usage limits. |
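Putting the pieces together, a complete request body for a speech-transcription job that writes its JSON output to Cloud Storage could look like the sketch below (bucket and object names are placeholders).

```python
# End-to-end request body sketch combining the types documented above.
request_body = {
    "inputUri": "gs://example-bucket/interview.mp4",      # placeholder
    "features": ["SPEECH_TRANSCRIPTION"],
    "videoContext": {
        "speechTranscriptionConfig": {
            "languageCode": "en-US",
            "speechContexts": [{"phrases": ["Video Intelligence API"]}],
        }
    },
    "outputUri": "gs://example-bucket/results.json",      # placeholder
}
```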