Train a video action recognition model

This page shows you how to train an AutoML action recognition model from a video dataset using either the Google Cloud console or the Vertex AI API.

Train an AutoML model

Google Cloud console

  1. In the Google Cloud console, in the Vertex AI section, go to the Datasets page.

    Go to the Datasets page

  2. Click the name of the dataset you want to use to train your model to open its details page.

  3. Click Train new model.

  4. Enter the display name for your new model.

  5. If you want to manually set how your training data is split, expand Advanced options and select a data split option. Learn more.

  6. Click Continue.

  7. Select the model training method.

    • AutoML is a good choice for a wide range of use cases.
    • Seq2seq+ is a good choice for experimentation. The algorithm is likely to converge faster than AutoML because its architecture is simpler and it uses a smaller search space. Our experiments find that Seq2seq+ performs well with a small time budget and on datasets smaller than 1 GB in size.
    Click Continue.

  8. Click Start Training.

    Model training can take many hours, depending on the size and complexity of your data and your training budget, if you specified one. You can close this tab and return to it later. You will receive an email when your model has completed training.

    Several minutes after training starts, you can check the estimated number of training node hours on the model's properties page. If you cancel training, there is no charge.



REST

Before using any of the request data, make the following replacements:

  • PROJECT: Your project ID.
  • LOCATION: The region where the dataset is located and the Model is created. For example, us-central1.
  • TRAINING_PIPELINE_DISPLAY_NAME: Required. A display name for the TrainingPipeline.
  • DATASET_ID: ID for the training Dataset.
  • TRAINING_FRACTION, TEST_FRACTION: The fractionSplit object is optional; use it to control your data split. For more information about controlling the data split, see About data splits for AutoML models. For example:
    • {"trainingFraction": "0.8","validationFraction": "0","testFraction": "0.2"}
  • MODEL_DISPLAY_NAME: Display name of the trained Model.
  • MODEL_DESCRIPTION: A description for the Model.
  • MODEL_LABELS: Any set of key-value pairs to organize your models. For example:
    • "env": "prod"
    • "tier": "backend"
  • EDGE_MODEL_TYPE: The type of model to train. For example:
    • MOBILE_VERSATILE_1: general purpose usage
  • PROJECT_NUMBER: Your project's automatically generated project number
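The replacements above can also be assembled programmatically. The following sketch is illustrative only: it plugs the example placeholder values from this list into a request body, checks that the split fractions sum to 1.0, and writes request.json (the CLOUD model type shown is one possible value for EDGE_MODEL_TYPE):

```python
import json

# Example placeholder values from the list above; replace with your own.
replacements = {
    "displayName": "TRAINING_PIPELINE_DISPLAY_NAME",
    "datasetId": "DATASET_ID",
    "trainingFraction": 0.8,
    "testFraction": 0.2,
    "modelDisplayName": "MODEL_DISPLAY_NAME",
}

# The fractions must sum to 1.0; the validation set is not used for video data.
total = replacements["trainingFraction"] + replacements["testFraction"]
assert abs(total - 1.0) < 1e-9, f"fractions sum to {total}, expected 1.0"

body = {
    "displayName": replacements["displayName"],
    "inputDataConfig": {
        "datasetId": replacements["datasetId"],
        "fractionSplit": {
            "trainingFraction": replacements["trainingFraction"],
            "validationFraction": 0,
            "testFraction": replacements["testFraction"],
        },
    },
    "modelToUpload": {"displayName": replacements["modelDisplayName"]},
    "trainingTaskDefinition": (
        "gs://google-cloud-aiplatform/schema/trainingjob/definition/"
        "automl_video_action_recognition_1.0.0.yaml"
    ),
    # "CLOUD" is an illustrative model type; see EDGE_MODEL_TYPE above.
    "trainingTaskInputs": {"modelType": "CLOUD"},
}

# Write the request body to the file used by the curl/PowerShell commands below.
with open("request.json", "w") as f:
    json.dump(body, f, indent=2)
```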

HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/trainingPipelines

Request JSON body:

{
  "displayName": "TRAINING_PIPELINE_DISPLAY_NAME",
  "inputDataConfig": {
    "datasetId": "DATASET_ID",
    "fractionSplit": {
      "trainingFraction": "TRAINING_FRACTION",
      "validationFraction": "0",
      "testFraction": "TEST_FRACTION"
    }
  },
  "modelToUpload": {
    "displayName": "MODEL_DISPLAY_NAME",
    "description": "MODEL_DESCRIPTION",
    "labels": {
      "KEY": "VALUE"
    }
  },
  "trainingTaskDefinition": "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_video_action_recognition_1.0.0.yaml",
  "trainingTaskInputs": {
    "modelType": ["EDGE_MODEL_TYPE"]
  }
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/trainingPipelines"


PowerShell

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/trainingPipelines" | Select-Object -Expand Content

The response contains information about specifications as well as the TRAINING_PIPELINE_ID.
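The TRAINING_PIPELINE_ID is the final segment of the pipeline's name field in the response. A small sketch of extracting it, assuming the standard projects/.../trainingPipelines/ID resource-name format:

```python
# The response's "name" field has the form:
# projects/PROJECT_NUMBER/locations/LOCATION/trainingPipelines/TRAINING_PIPELINE_ID
def training_pipeline_id(resource_name: str) -> str:
    """Return the trailing TRAINING_PIPELINE_ID segment of a resource name."""
    _, sep, pipeline_id = resource_name.rpartition("/trainingPipelines/")
    if not sep or not pipeline_id:
        raise ValueError(f"not a trainingPipelines resource name: {resource_name!r}")
    return pipeline_id

# Usage with a made-up resource name:
print(training_pipeline_id(
    "projects/123456789/locations/us-central1/trainingPipelines/555"))  # prints 555
```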


Java

Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.


import com.google.cloud.aiplatform.util.ValueConverter;
import com.google.cloud.aiplatform.v1.InputDataConfig;
import com.google.cloud.aiplatform.v1.LocationName;
import com.google.cloud.aiplatform.v1.Model;
import com.google.cloud.aiplatform.v1.PipelineServiceClient;
import com.google.cloud.aiplatform.v1.PipelineServiceSettings;
import com.google.cloud.aiplatform.v1.TrainingPipeline;
import com.google.cloud.aiplatform.v1.schema.trainingjob.definition.AutoMlVideoActionRecognitionInputs;
import com.google.cloud.aiplatform.v1.schema.trainingjob.definition.AutoMlVideoActionRecognitionInputs.ModelType;
import java.io.IOException;

public class CreateTrainingPipelineVideoActionRecognitionSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String project = "PROJECT";
    String displayName = "DISPLAY_NAME";
    String datasetId = "DATASET_ID";
    String modelDisplayName = "MODEL_DISPLAY_NAME";
    createTrainingPipelineVideoActionRecognitionSample(
        project, displayName, datasetId, modelDisplayName);
  }

  static void createTrainingPipelineVideoActionRecognitionSample(
      String project, String displayName, String datasetId, String modelDisplayName)
      throws IOException {
    String location = "us-central1";
    PipelineServiceSettings settings =
        PipelineServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (PipelineServiceClient client = PipelineServiceClient.create(settings)) {
      AutoMlVideoActionRecognitionInputs trainingTaskInputs =
          AutoMlVideoActionRecognitionInputs.newBuilder().setModelType(ModelType.CLOUD).build();

      InputDataConfig inputDataConfig =
          InputDataConfig.newBuilder().setDatasetId(datasetId).build();
      Model modelToUpload = Model.newBuilder().setDisplayName(modelDisplayName).build();
      TrainingPipeline trainingPipeline =
          TrainingPipeline.newBuilder()
              .setDisplayName(displayName)
              .setTrainingTaskDefinition(
                  "gs://google-cloud-aiplatform/schema/trainingjob/definition/"
                      + "automl_video_action_recognition_1.0.0.yaml")
              .setTrainingTaskInputs(ValueConverter.toValue(trainingTaskInputs))
              .setInputDataConfig(inputDataConfig)
              .setModelToUpload(modelToUpload)
              .build();
      LocationName parent = LocationName.of(project, location);
      TrainingPipeline response = client.createTrainingPipeline(parent, trainingPipeline);
      System.out.format("response: %s\n", response);
      System.out.format("Name: %s\n", response.getName());
    }
  }
}


Python

To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.

from google.cloud import aiplatform
from google.cloud.aiplatform.gapic.schema import trainingjob

def create_training_pipeline_video_action_recognition_sample(
    project: str,
    display_name: str,
    dataset_id: str,
    model_display_name: str,
    model_type: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.PipelineServiceClient(client_options=client_options)
    training_task_inputs = trainingjob.definition.AutoMlVideoActionRecognitionInputs(
        # model_type can be either 'CLOUD' or 'MOBILE_VERSATILE_1'
        model_type=model_type,
    ).to_value()

    training_pipeline = {
        "display_name": display_name,
        "training_task_definition": "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_video_action_recognition_1.0.0.yaml",
        "training_task_inputs": training_task_inputs,
        "input_data_config": {"dataset_id": dataset_id},
        "model_to_upload": {"display_name": model_display_name},
    }
    parent = f"projects/{project}/locations/{location}"
    response = client.create_training_pipeline(
        parent=parent, training_pipeline=training_pipeline
    )
    print("response:", response)

Control the data split using REST

You can control how your training data is split between the training, validation, and test sets. When using the Vertex AI API, use the Split object to determine your data split. The Split object can be included in the InputDataConfig object as one of several object types, each of which provides a different way to split the training data. You can select one method only.

  • FractionSplit:
    • TRAINING_FRACTION: The fraction of the training data to be used for the training set.
    • VALIDATION_FRACTION: The fraction of the training data to be used for the validation set. Not used for video data.
    • TEST_FRACTION: The fraction of the training data to be used for the test set.

    If any of the fractions are specified, all must be specified. The fractions must add up to 1.0. The default values for the fractions differ depending on your data type. Learn more.

    "fractionSplit": {
      "trainingFraction": TRAINING_FRACTION,
      "validationFraction": VALIDATION_FRACTION,
      "testFraction": TEST_FRACTION
    }
  • FilterSplit:
    • TRAINING_FILTER: Data items that match this filter are used for the training set.
    • VALIDATION_FILTER: Data items that match this filter are used for the validation set. Must be "-" for video data.
    • TEST_FILTER: Data items that match this filter are used for the test set.

    These filters can be used with the ml_use label, or with any labels you apply to your data. Learn more about using the ml_use label and other labels to filter your data.

    The following example shows how to use the filterSplit object with the ml_use label. Because the validation set is not used for video data, the validation filter is set to "-":

    "filterSplit": {
      "trainingFilter": "labels.aiplatform.googleapis.com/ml_use=training",
      "validationFilter": "-",
      "testFilter": "labels.aiplatform.googleapis.com/ml_use=test"
    }
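The split constraints described above can be checked locally before sending a request. The following is a minimal sketch, not part of the Vertex AI API; it encodes the video-data rules from this section (fractions must sum to 1.0, and the validation filter must be "-"):

```python
def validate_split(split: dict) -> None:
    """Validate a fractionSplit or filterSplit dict against the video-data rules."""
    if "trainingFraction" in split:
        # All specified fractions must add up to 1.0.
        total = (
            float(split.get("trainingFraction", 0))
            + float(split.get("validationFraction", 0))
            + float(split.get("testFraction", 0))
        )
        if abs(total - 1.0) > 1e-9:
            raise ValueError(f"fractions must sum to 1.0, got {total}")
    elif "trainingFilter" in split:
        # For video data, the validation filter must be "-".
        if split.get("validationFilter") != "-":
            raise ValueError('validationFilter must be "-" for video data')
    else:
        raise ValueError("expected a fractionSplit or filterSplit dict")

# Both of these pass silently:
validate_split({"trainingFraction": "0.8", "validationFraction": "0", "testFraction": "0.2"})
validate_split({
    "trainingFilter": "labels.aiplatform.googleapis.com/ml_use=training",
    "validationFilter": "-",
    "testFilter": "labels.aiplatform.googleapis.com/ml_use=test",
})
```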