# Object tracking

*Object tracking* can track multiple objects detected in an input video or video segments and return labels (tags) associated with the detected entities, along with the location of each entity in the frame.

**Note:** There is a limit on the size of the detected objects. Very small objects in the video might not get detected.

Object tracking differs from [label detection](/video-intelligence/docs/analyze-labels). While label detection provides labels for the entire frame (without bounding boxes), object tracking detects individual objects and provides a label along with a bounding box that describes each object's location in the frame. For example, a video of vehicles crossing an intersection may produce labels such as "car", "truck", "bike", "tires", "lights", "window", and so on. Each label includes a series of bounding boxes showing the location of the object in the frame.

Each bounding box also has an associated time segment with a time offset (timestamp) that indicates the duration offset from the beginning of the video. The annotation also contains additional entity information, including an entity ID that you can use to find more information about that entity in the [Google Knowledge Graph Search API](https://developers.google.com/knowledge-graph/).

To make an object tracking request, call the [`annotate`](/video-intelligence/docs/reference/rest/v1p2beta1/videos/annotate) method and specify [`OBJECT_TRACKING`](/video-intelligence/docs/reference/rest/v1p2beta1/videos#Feature) in the `features` field.
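As an illustration of that request, here is a minimal sketch using the Python client library (`google-cloud-videointelligence`). It assumes version 2.x of the client, configured application default credentials, and a placeholder Cloud Storage URI and timeout.

```python
# A minimal sketch, assuming the google-cloud-videointelligence package (v2.x)
# is installed and application default credentials are configured.
# The Cloud Storage URI and timeout below are placeholders.
from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()

# Request OBJECT_TRACKING for the input video.
operation = client.annotate_video(
    request={
        "features": [videointelligence.Feature.OBJECT_TRACKING],
        "input_uri": "gs://YOUR_BUCKET/YOUR_VIDEO.mp4",
    }
)

# Object tracking runs as a long-running operation; wait for it to finish.
result = operation.result(timeout=600)
print("Object tracking finished.")
```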
[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-09-03。"],[],[],null,["# Object tracking\n\n*Object tracking* can track multiple objects detected in an input video or video\nsegments and return\nlabels (tags) associated with the detected entities along with the location of the entity in\nthe frame.\n| **Note:** There is a limit on the size of the detected objects. Very small objects in the video might not get detected.\n\nObject tracking differs from\n[label detection](/video-intelligence/docs/analyze-labels). While label\ndetection provides labels for the entire frame (without bounding boxes),\nobject tracking detects individual objects and provides a label along with\na bounding box that describes the location in the frame for each object. For example, a video\nof vehicles crossing an intersection may produce labels such as \"car\" , \"truck\",\n\"bike\", \"tires\", \"lights\", \"window\" and so on. Each label includes a series\nof bounding boxes showing the location of the object in the frame.\nEach bounding box also has an associated time segment\nwith a time offset (timestamp) that indicates the duration offset from\nthe beginning of the video. The annotation also contains additional entity\ninformation including an entity id that you can use to find more information\nabout that entity in the\n[Google Knowledge Graph Search API](https://developers.google.com/knowledge-graph/).\n\nTo make an object tracking request, call the\n[`annotate`](/video-intelligence/docs/reference/rest/v1p2beta1/videos/annotate)\nmethod and specify\n[`OBJECT_TRACKING`](/video-intelligence/docs/reference/rest/v1p2beta1/videos#Feature)\nin the `features` field.\n\nCheck out the [Video Intelligence API visualizer](https://zackakil.github.io/video-intelligence-api-visualiser/#Object%20Tracking) to see this feature in action.\n\nFor an example, see [Object Tracking](/video-intelligence/docs/object-tracking)\nand [Shot Change Detection](/video-intelligence/docs/shot-detection) tutorial."]]