Model evaluation metrics for video object tracking problems. Evaluates the prediction quality of both labeled bounding boxes and labeled tracks (i.e., series of bounding boxes that share the same label and instance ID).
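For orientation, here is a minimal sketch of the two objects being evaluated: individual labeled bounding boxes, and tracks formed by grouping boxes that share a label and instance ID across frames. The `BoundingBox` type and `group_into_tracks` helper are illustrative only, not part of the API.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass(frozen=True)
class BoundingBox:
    """One labeled box in one frame (illustrative, not an API type)."""
    frame: int          # frame index within the video
    label: str          # class label, e.g. "car"
    instance_id: str    # identifies the object instance this box belongs to
    x_min: float
    y_min: float
    x_max: float
    y_max: float


def group_into_tracks(boxes):
    """Group boxes into tracks: series of boxes sharing the same
    label and instance ID, ordered by frame."""
    tracks = defaultdict(list)
    for box in boxes:
        tracks[(box.label, box.instance_id)].append(box)
    return {key: sorted(t, key=lambda b: b.frame) for key, t in tracks.items()}
```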
evaluated_bounding_box_count: Output only. The total number of bounding boxes (summed over all frames) in the ground truth used to create this evaluation.
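As a sketch of what this count represents, assuming the illustrative `BoundingBox` type above and ground truth grouped per frame, it is simply the per-frame box counts summed across every frame:

```python
def evaluated_bounding_box_count(ground_truth_frames):
    """Total ground-truth boxes summed over all frames.
    `ground_truth_frames` maps frame index -> list of BoundingBox."""
    return sum(len(boxes) for boxes in ground_truth_frames.values())
```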
bounding_box_mean_average_precision: Output only. The single metric for bounding box evaluation, computed as the mean_average_precision averaged over all bounding_box_metrics_entries.
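A minimal sketch of that aggregation, assuming each entry in bounding_box_metrics_entries exposes a mean_average_precision value (e.g. one entry per matching IoU threshold):

```python
def bounding_box_mean_average_precision(entries):
    """Plain arithmetic mean of the per-entry mean_average_precision
    values; `entries` stands in for bounding_box_metrics_entries."""
    maps = [entry.mean_average_precision for entry in entries]
    return sum(maps) / len(maps) if maps else 0.0
```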
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-11-19 UTC."],[],[]]