Annotate a video by using client libraries

This quickstart introduces you to the Video Intelligence API. In this quickstart, you set up your Google Cloud project and authorization, and then you make a request for Video Intelligence to annotate a video.

Before you begin

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Roles required to select or create a project

    • Select a project: Selecting a project doesn't require a specific IAM role—you can select any project that you've been granted a role on.
    • Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.

    Go to project selector

  3. Verify that billing is enabled for your Google Cloud project.

  4. Enable the Cloud Video Intelligence API.

    Roles required to enable APIs

    To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.

    Enable the API

  5. Install the Google Cloud CLI.

  6. If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.

  7. To initialize the gcloud CLI, run the following command:

    gcloud init
  8. Install the client library

    Go

    go get cloud.google.com/go/videointelligence/apiv1

    Java

    Node.js

    Before installing the library, make sure that you've prepared your environment for Node.js development.

    npm install @google-cloud/video-intelligence

    Python

    Before installing the library, make sure that you've prepared your environment for Python development.

    pip install --upgrade google-cloud-videointelligence
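
    As a hedged sketch of what preparing a Python environment typically involves (these commands are illustrative, assume a Unix-like shell with Python 3, and aren't part of the original instructions), you can create and activate a virtual environment before installing the library:

    python3 -m venv env
    source env/bin/activate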

    Additional languages

    C#: Follow the C# setup instructions on the client libraries page and then visit the Video Intelligence reference documentation for .NET.

    PHP: Follow the PHP setup instructions on the client libraries page and then visit the Video Intelligence reference documentation for PHP.

    Ruby: Follow the Ruby setup instructions on the client libraries page and then visit the Video Intelligence reference documentation for Ruby.

    Set up authentication

    1. Install the Google Cloud CLI. After installation, initialize the Google Cloud CLI by running the following command:

      gcloud init

      If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.

    2. If you're using a local shell, then create local authentication credentials for your user account:

      gcloud auth application-default login

      You don't need to do this if you're using Cloud Shell.

      If an authentication error is returned, and you are using an external identity provider (IdP), confirm that you have signed in to the gcloud CLI with your federated identity.

      A sign-in screen appears. After you sign in, your credentials are stored in the local credential file used by ADC.
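
      Optionally (a hedged check, not part of the original steps), you can confirm that ADC is working by asking the gcloud CLI to print an access token for your application default credentials:

      gcloud auth application-default print-access-token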

    Label detection

    You can now use the Video Intelligence API to request information from a video or a video segment, such as label detection. Run the following code to perform your first video label detection request:

    Go

    
    // Sample video_quickstart uses the Google Cloud Video Intelligence API to label a video.
    package main
    
    import (
    	"context"
    	"fmt"
    	"log"
    
    	"github.com/golang/protobuf/ptypes"
    
    	video "cloud.google.com/go/videointelligence/apiv1"
    	videopb "cloud.google.com/go/videointelligence/apiv1/videointelligencepb"
    )
    
    func main() {
    	ctx := context.Background()
    
    	// Creates a client.
    	client, err := video.NewClient(ctx)
    	if err != nil {
    		log.Fatalf("Failed to create client: %v", err)
    	}
    	defer client.Close()
    
    	op, err := client.AnnotateVideo(ctx, &videopb.AnnotateVideoRequest{
    		InputUri: "gs://cloud-samples-data/video/cat.mp4",
    		Features: []videopb.Feature{
    			videopb.Feature_LABEL_DETECTION,
    		},
    	})
    	if err != nil {
    		log.Fatalf("Failed to start annotation job: %v", err)
    	}
    
    	resp, err := op.Wait(ctx)
    	if err != nil {
    		log.Fatalf("Failed to annotate: %v", err)
    	}
    
    	// Only one video was processed, so get the first result.
    	result := resp.GetAnnotationResults()[0]
    
    	for _, annotation := range result.SegmentLabelAnnotations {
    		fmt.Printf("Description: %s\n", annotation.Entity.Description)
    
    		for _, category := range annotation.CategoryEntities {
    			fmt.Printf("\tCategory: %s\n", category.Description)
    		}
    
    		for _, segment := range annotation.Segments {
    			start, _ := ptypes.Duration(segment.Segment.StartTimeOffset)
    			end, _ := ptypes.Duration(segment.Segment.EndTimeOffset)
    			fmt.Printf("\tSegment: %s to %s\n", start, end)
    			fmt.Printf("\tConfidence: %v\n", segment.Confidence)
    		}
    	}
    }
    

    Java

    
    import com.google.api.gax.longrunning.OperationFuture;
    import com.google.cloud.videointelligence.v1.AnnotateVideoProgress;
    import com.google.cloud.videointelligence.v1.AnnotateVideoRequest;
    import com.google.cloud.videointelligence.v1.AnnotateVideoResponse;
    import com.google.cloud.videointelligence.v1.Entity;
    import com.google.cloud.videointelligence.v1.Feature;
    import com.google.cloud.videointelligence.v1.LabelAnnotation;
    import com.google.cloud.videointelligence.v1.LabelSegment;
    import com.google.cloud.videointelligence.v1.VideoAnnotationResults;
    import com.google.cloud.videointelligence.v1.VideoIntelligenceServiceClient;
    import java.util.List;
    
    public class QuickstartSample {
    
      /** Demonstrates using the video intelligence client to detect labels in a video file. */
      public static void main(String[] args) throws Exception {
        // Instantiate a video intelligence client
        try (VideoIntelligenceServiceClient client = VideoIntelligenceServiceClient.create()) {
          // The Google Cloud Storage path to the video to annotate.
          String gcsUri = "gs://cloud-samples-data/video/cat.mp4";
    
          // Create an operation that will contain the response when the operation completes.
          AnnotateVideoRequest request =
              AnnotateVideoRequest.newBuilder()
                  .setInputUri(gcsUri)
                  .addFeatures(Feature.LABEL_DETECTION)
                  .build();
    
          OperationFuture<AnnotateVideoResponse, AnnotateVideoProgress> response =
              client.annotateVideoAsync(request);
    
          System.out.println("Waiting for operation to complete...");
    
          List<VideoAnnotationResults> results = response.get().getAnnotationResultsList();
          if (results.isEmpty()) {
            System.out.println("No labels detected in " + gcsUri);
            return;
          }
          for (VideoAnnotationResults result : results) {
            System.out.println("Labels:");
            // get video segment label annotations
            for (LabelAnnotation annotation : result.getSegmentLabelAnnotationsList()) {
              System.out.println(
                  "Video label description : " + annotation.getEntity().getDescription());
              // categories
              for (Entity categoryEntity : annotation.getCategoryEntitiesList()) {
                System.out.println("Label Category description : " + categoryEntity.getDescription());
              }
              // segments
              for (LabelSegment segment : annotation.getSegmentsList()) {
                double startTime =
                    segment.getSegment().getStartTimeOffset().getSeconds()
                        + segment.getSegment().getStartTimeOffset().getNanos() / 1e9;
                double endTime =
                    segment.getSegment().getEndTimeOffset().getSeconds()
                        + segment.getSegment().getEndTimeOffset().getNanos() / 1e9;
                System.out.printf("Segment location : %.3f:%.3f\n", startTime, endTime);
                System.out.println("Confidence : " + segment.getConfidence());
              }
            }
          }
        }
      }
    }

    Node.js

    Before running the sample, make sure that you've prepared your environment for Node.js development.

    // Imports the Google Cloud Video Intelligence library
    const videoIntelligence = require('@google-cloud/video-intelligence');

    async function main() {
      // Creates a client
      const client = new videoIntelligence.VideoIntelligenceServiceClient();

      // The GCS uri of the video to analyze
      const gcsUri = 'gs://cloud-samples-data/video/cat.mp4';

      // Construct request
      const request = {
        inputUri: gcsUri,
        features: ['LABEL_DETECTION'],
      };

      // Execute request
      const [operation] = await client.annotateVideo(request);

      console.log(
        'Waiting for operation to complete... (this may take a few minutes)'
      );

      const [operationResult] = await operation.promise();

      // Gets annotations for video
      const annotations = operationResult.annotationResults[0];

      // Gets labels for video from its annotations
      const labels = annotations.segmentLabelAnnotations;
      labels.forEach(label => {
        console.log(`Label ${label.entity.description} occurs at:`);
        label.segments.forEach(segment => {
          segment = segment.segment;
          console.log(
            `\tStart: ${segment.startTimeOffset.seconds}` +
              `.${(segment.startTimeOffset.nanos / 1e6).toFixed(0)}s`
          );
          console.log(
            `\tEnd: ${segment.endTimeOffset.seconds}.` +
              `${(segment.endTimeOffset.nanos / 1e6).toFixed(0)}s`
          );
        });
      });
    }

    // Top-level await isn't available in a CommonJS script, so the sample runs
    // inside an async function.
    main().catch(console.error);

    Python

    Before running the sample, make sure that you've prepared your environment for Python development.

    from google.cloud import videointelligence
    
    video_client = videointelligence.VideoIntelligenceServiceClient()
    features = [videointelligence.Feature.LABEL_DETECTION]
    operation = video_client.annotate_video(
        request={
            "features": features,
            "input_uri": "gs://cloud-samples-data/video/cat.mp4",
        }
    )
    print("\nProcessing video for label annotations:")
    
    result = operation.result(timeout=180)
    print("\nFinished processing.")
    
    # first result is retrieved because a single video was processed
    segment_labels = result.annotation_results[0].segment_label_annotations
    for i, segment_label in enumerate(segment_labels):
        print("Video label description: {}".format(segment_label.entity.description))
        for category_entity in segment_label.category_entities:
            print(
                "\tLabel category description: {}".format(category_entity.description)
            )
    
        for i, segment in enumerate(segment_label.segments):
            start_time = (
                segment.segment.start_time_offset.seconds
                + segment.segment.start_time_offset.microseconds / 1e6
            )
            end_time = (
                segment.segment.end_time_offset.seconds
                + segment.segment.end_time_offset.microseconds / 1e6
            )
            positions = "{}s to {}s".format(start_time, end_time)
            confidence = segment.confidence
            print("\tSegment {}: {}".format(i, positions))
            print("\tConfidence: {}".format(confidence))
        print("\n")

    Additional languages

    C#: Follow the C# setup instructions on the client libraries page and then visit the Video Intelligence reference documentation for .NET.

    PHP: Follow the PHP setup instructions on the client libraries page and then visit the Video Intelligence reference documentation for PHP.

    Ruby: Follow the Ruby setup instructions on the client libraries page and then visit the Video Intelligence reference documentation for Ruby.

    Congratulations! You've sent your first request to the Video Intelligence API.


    Clean up

    To avoid incurring charges to your Google Cloud account for the resources used on this page, follow these steps.
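
    As a hedged sketch (the specific commands below aren't taken from the original steps), you can stop further usage with the gcloud CLI by disabling the API or deleting the project; PROJECT_ID is a placeholder for your project ID, and videointelligence.googleapis.com is the assumed service name for the Cloud Video Intelligence API:

      gcloud services disable videointelligence.googleapis.com --project=PROJECT_ID

      gcloud projects delete PROJECT_ID

    Deleting the project removes every resource in it, so only delete a project that you created specifically for this quickstart.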

    Next steps