- `DATASET`: the BigQuery dataset that contains the model.
- `MODEL_NAME`: the name of the model.
Output
------
`ML.TRIAL_INFO` returns one row per trial with the following columns:

- `trial_id`: an `INT64` value that contains the ID assigned to each trial in
  the approximate order of trial execution. `trial_id` values start from `1`.
- `hyperparameters`: a `STRUCT` value that contains the hyperparameters used in
  the trial.
- `hparam_tuning_evaluation_metrics`: a `STRUCT` value that contains the
  evaluation metrics appropriate to the hyperparameter tuning objective
  specified by the `hparam_tuning_objectives` argument in the `CREATE MODEL`
  statement. Metrics are calculated from the evaluation data. For more
  information about the datasets used in hyperparameter tuning, see Data split.
- `training_loss`: a `FLOAT64` value that contains the loss of the trial during
  the last iteration, calculated using the training data.
- `eval_loss`: a `FLOAT64` value that contains the loss of the trial during the
  last iteration, calculated using the evaluation data.
- `status`: a `STRING` value that contains the final status of the trial.
  Possible values include the following:

  - `SUCCEEDED`: the trial succeeded.
  - `FAILED`: the trial failed.
  - `INFEASIBLE`: the trial was not run due to an invalid combination of
    hyperparameters.
- `error_message`: a `STRING` value that contains the error message that is
  returned if the trial didn't succeed. For more information, see
  Error handling.
- `is_optimal`: a `BOOL` value that indicates whether the trial had the best
  objective value. If multiple trials are marked as optimal, then the trial
  with the smallest `trial_id` value is used as the default trial during model
  serving.
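Because `is_optimal` flags the winning trial, you can retrieve the best trial's hyperparameters directly. The following is a minimal sketch, assuming a tuned model named `mydataset.mymodel`:

```sql
-- Return the hyperparameters and evaluation loss of the optimal trial.
SELECT
  trial_id,
  hyperparameters,
  eval_loss
FROM
  ML.TRIAL_INFO(MODEL `mydataset.mymodel`)
WHERE
  is_optimal;
```

If more than one trial is flagged as optimal, this query returns all of them; the row with the smallest `trial_id` is the one used by default during model serving.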
Example
-------
The following query retrieves information about all trials for the model
`mydataset.mymodel` in your default project:
```sql
SELECT
  *
FROM
  ML.TRIAL_INFO(MODEL `mydataset.mymodel`)
```

What's next
-----------

- For information about hyperparameter tuning, see Hyperparameter tuning overview.
- For information about the supported SQL statements and functions for each model type, see End-to-end user journey for each model.
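You can also use the `status` and `error_message` columns to troubleshoot a tuning job. This sketch, again assuming a model named `mydataset.mymodel`, lists every trial that didn't succeed along with its error message:

```sql
-- List failed and infeasible trials with their error messages.
SELECT
  trial_id,
  status,
  error_message
FROM
  ML.TRIAL_INFO(MODEL `mydataset.mymodel`)
WHERE
  status != 'SUCCEEDED'
ORDER BY
  trial_id;
```

An `INFEASIBLE` status typically points to an invalid hyperparameter combination rather than a runtime failure, so inspecting the `hyperparameters` column for those trials can help narrow the search space.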