Last updated (UTC): 2025-09-03.

# Analyze videos for labels

The Video Intelligence API can identify entities shown in video footage using the [LABEL_DETECTION](/video-intelligence/docs/reference/rest/v1/videos/annotate#feature) feature and annotate these entities with labels (tags). This feature identifies objects, locations, activities, animal species, products, and more.

Label detection differs from [Object tracking](/video-intelligence/docs/object-tracking). Unlike object tracking, label detection provides labels for the entire frame (without bounding boxes).

For example, for a video of a train at a crossing, the Video Intelligence API returns labels such as "train", "transportation", and "railroad crossing". Each label includes a time segment with the time offset (timestamp) of the entity's appearance from the beginning of the video. Each annotation also contains additional information, including an entity ID that you can use to find more information about the entity in the [Google Knowledge Graph Search API](https://developers.google.com/knowledge-graph/).

Each returned entity can also include associated category entities in the `categoryEntities` field. For example, the "Terrier" entity label has a category of "Dog". Category entities form a hierarchy; for example, the "Dog" category is a child of the "Mammal" category.
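As a concrete illustration, the entity, category entities, and per-segment time offsets described above can be read out of a parsed JSON label annotation roughly as follows. The field names (`entity`, `categoryEntities`, `segments`, `startTimeOffset`, `endTimeOffset`) follow the v1 REST reference, but the sample data and the `describe_label` helper are invented for illustration; treat this as a sketch, not canonical client code.

```python
# Sketch: reading one label annotation from a parsed videos:annotate
# JSON response. The sample data below is made up for illustration.

sample_annotation = {
    "entity": {
        "entityId": "/m/0306r",      # usable with the Knowledge Graph Search API
        "description": "terrier",
        "languageCode": "en-US",
    },
    "categoryEntities": [
        {"entityId": "/m/0bt9lr", "description": "dog"},
        {"entityId": "/m/04rky", "description": "mammal"},
    ],
    "segments": [
        {
            "segment": {"startTimeOffset": "12.500s", "endTimeOffset": "20.100s"},
            "confidence": 0.92,
        },
    ],
}

def describe_label(annotation):
    """Return human-readable lines for one label annotation."""
    lines = [f"label: {annotation['entity']['description']}"]
    for cat in annotation.get("categoryEntities", []):
        lines.append(f"  category: {cat['description']}")
    for seg in annotation.get("segments", []):
        s = seg["segment"]
        lines.append(
            f"  {s['startTimeOffset']} - {s['endTimeOffset']}"
            f" (confidence {seg['confidence']:.2f})"
        )
    return lines

print("\n".join(describe_label(sample_annotation)))
```

In a real response, the same shape appears under `segmentLabelAnnotations`, `shotLabelAnnotations`, or `frameLabelAnnotations`, depending on the analysis level.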
For a list of the common category entities that Video Intelligence uses, see [entry-level-categories.json](/static/video-intelligence/docs/entry-level-categories.json).

The analysis can be compartmentalized as follows:

- Segment level: User-selected segments of a video can be specified for analysis by stipulating beginning and ending timestamps (see [VideoSegment](/video-intelligence/docs/reference/rest/v1/videos/annotate#videosegment)). Entities are then identified and labeled within each segment. If no segments are specified, the whole video is treated as one segment.
- Shot level: Shots (also known as *scenes*) are automatically detected within every segment (or video). Entities are then identified and labeled within each shot. For details, see [Shot change detection](#shot-change).
- Frame level: Entities are identified and labeled within each frame, sampled at one frame per second.

To detect labels in a video, call the [`annotate`](/video-intelligence/docs/reference/rest/v1/videos/annotate) method and specify [`LABEL_DETECTION`](/video-intelligence/docs/reference/rest/v1/videos#Feature) in the `features` field.

See [Analyzing Videos for Labels](/video-intelligence/docs/analyze-labels) and the [Label Detection Tutorial](/video-intelligence/docs/label-tutorial).

Video Intelligence API Visualizer
=================================

Check out the [Video Intelligence API visualizer](https://zackakil.github.io/video-intelligence-api-visualiser/#Label%20Detection) to see this feature in action.
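The `annotate` call with `LABEL_DETECTION` described earlier can be sketched by constructing the JSON request body for the v1 `videos:annotate` method. The field names (`inputUri`, `features`, `videoContext.segments`) are taken from the REST reference; `build_request` is a hypothetical helper, not part of any client library, and the bucket path is a placeholder.

```python
# Sketch: building a videos:annotate request body with LABEL_DETECTION,
# optionally restricting analysis to user-selected segments.
# build_request is a hypothetical helper for illustration only.

def build_request(input_uri, segments=None):
    request = {
        "inputUri": input_uri,                 # e.g. a gs:// URI (placeholder)
        "features": ["LABEL_DETECTION"],
    }
    if segments:
        # Each (start, end) pair in seconds becomes a VideoSegment with
        # duration-style time offsets such as "10s".
        request["videoContext"] = {
            "segments": [
                {"startTimeOffset": f"{start}s", "endTimeOffset": f"{end}s"}
                for start, end in segments
            ]
        }
    return request

body = build_request("gs://my-bucket/my-video.mp4", segments=[(10, 45)])
```

Omitting `segments` leaves out `videoContext`, so the whole video is analyzed as one segment, matching the segment-level behavior described above.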