Last updated (UTC): 2025-08-17.

# End-to-end user journey for each model

BigQuery ML supports a variety of machine learning models and a complete machine learning flow for each model, such as feature preprocessing, model creation, hyperparameter tuning, inference, evaluation, and model export. The machine learning flow for each model is split across the following two tables:

- [Model creation phase](#model_creation_phase)
- [Model use phase](#model_use_phase)

Model creation phase
--------------------

^1^ See the [TRANSFORM clause for feature engineering](/bigquery/docs/bigqueryml-transform) tutorial. For more information about the preprocessing functions, see the [BQML - Feature Engineering Functions tutorial](https://github.com/GoogleCloudPlatform/bigquery-ml-utils/blob/master/notebooks/bqml-preprocessing-functions.ipynb).

^2^ See the [Use hyperparameter tuning to improve model performance](/bigquery/docs/hyperparameter-tuning-tutorial) tutorial.

^3^ Automatic feature engineering and hyperparameter tuning are embedded in AutoML model training by default.

^4^ The auto.ARIMA algorithm performs hyperparameter tuning for the trend module. Hyperparameter tuning is not supported for the entire modeling pipeline. For more details, see the [modeling pipeline](/bigquery/docs/reference/standard-sql/bigqueryml-syntax-create-time-series#time_series_modeling_pipeline).

^5^ BigQuery ML doesn't support functions that retrieve the weights of boosted tree, random forest, DNN, Wide-and-Deep, Autoencoder, or AutoML models.
To see the weights of those models, you can export the model from BigQuery ML to Cloud Storage, and then use the XGBoost or TensorFlow library to visualize the tree structure for tree models or the graph structure for neural networks. For more information, see the [EXPORT MODEL documentation](/bigquery/docs/exporting-models) and the [EXPORT MODEL tutorial](/bigquery/docs/export-model-tutorial).

^6^ Uses a [Vertex AI foundation model](/vertex-ai/docs/generative-ai/learn/models#foundation_models) directly, or customizes it by using supervised tuning.

^7^ This is not a typical ML model, but rather an artifact that transforms raw data into features.

Model use phase
---------------

^1^ `ml.confusion_matrix` is only applicable to classification models.

^2^ `ml.roc_curve` is only applicable to binary classification models.

^3^ `ml.explain_predict` is an extended version of `ml.predict`. For more information, see the [Explainable AI overview](/bigquery/docs/reference/standard-sql/bigqueryml-syntax-xai-overview). To learn how `ml.explain_predict` is used, see the [regression tutorial](/bigquery/docs/linear-regression-tutorial#explain_the_prediction_results) and the [classification tutorial](/bigquery/docs/logistic-regression-prediction#explain_the_prediction_results).

^4^ For the difference between `ml.global_explain` and `ml.feature_importance`, see the [Explainable AI overview](/bigquery/docs/reference/standard-sql/bigqueryml-syntax-xai-overview).

^5^ See the [Export a BigQuery ML model for online prediction](/bigquery/docs/export-model-tutorial) tutorial. For more information about online serving, see the [BQML - Create Model with Inline Transpose tutorial](https://github.com/GoogleCloudPlatform/bigquery-ml-utils/blob/master/notebooks/bqml-feature-engineering.ipynb).

^6^ For `ARIMA_PLUS` or `ARIMA_PLUS_XREG` models, `ml.evaluate` can take new data as input to compute forecasting metrics such as mean absolute percentage error (MAPE).
In the absence of new data, `ml.evaluate` has an extended version, `ml.arima_evaluate`, which outputs different evaluation information.

^7^ `ml.explain_forecast` is an extended version of `ml.forecast`. For more information, see the [Explainable AI overview](/bigquery/docs/reference/standard-sql/bigqueryml-syntax-xai-overview). To learn how `ml.explain_forecast` is used, see the visualize-results steps of the [single time series forecasting](/bigquery/docs/arima-single-time-series-forecasting-tutorial#explain_the_forecasting_results) and [multiple time series forecasting](/bigquery/docs/arima-multiple-time-series-forecasting-tutorial#explain_the_forecasting_results) tutorials.

^8^ `ml.advanced_weights` is an extended version of `ml.weights`. For more details, see [ml.advanced_weights](/bigquery/docs/reference/standard-sql/bigqueryml-syntax-advanced-weights).

^9^ Uses a [Vertex AI foundation model](/vertex-ai/docs/generative-ai/learn/models#foundation_models) directly, or customizes it by using supervised tuning.

^10^ This is not a typical ML model, but rather an artifact that transforms raw data into features.

^11^ Not supported for all Vertex AI LLMs. For more information, see [ml.evaluate](/bigquery/docs/reference/standard-sql/bigqueryml-syntax-evaluate).
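To make the model creation phase concrete, the following sketch combines the `TRANSFORM` clause for feature preprocessing with hyperparameter tuning in a single `CREATE MODEL` statement. The dataset, table, and column names (`mydataset.sample_model`, `mydataset.training_data`, `f1`, `label`) are hypothetical placeholders.

```sql
-- Create a boosted tree classifier with a preprocessed feature and
-- hyperparameter tuning (10 trials over the learn_rate search space).
CREATE OR REPLACE MODEL `mydataset.sample_model`
  TRANSFORM(
    ML.STANDARD_SCALER(f1) OVER () AS f1_scaled,  -- feature preprocessing
    label
  )
  OPTIONS(
    model_type = 'BOOSTED_TREE_CLASSIFIER',
    input_label_cols = ['label'],
    num_trials = 10,                              -- turns on hyperparameter tuning
    learn_rate = HPARAM_RANGE(0.01, 0.1)          -- search space for this trial
  )
AS
SELECT f1, label
FROM `mydataset.training_data`;
```

Setting `num_trials` enables hyperparameter tuning, and `HPARAM_RANGE` defines the search space for the tuned option.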
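Inspecting the weights of tree or neural network models, as described in footnote 5 of the model creation phase, starts with an `EXPORT MODEL` statement. A minimal sketch, assuming a hypothetical trained model `mydataset.sample_model` and a hypothetical bucket `gs://your-bucket`:

```sql
-- Export the trained model's artifacts to Cloud Storage. Tree models are
-- exported in XGBoost Booster format and neural networks in TensorFlow
-- SavedModel format, so the matching library can visualize them.
EXPORT MODEL `mydataset.sample_model`
  OPTIONS(URI = 'gs://your-bucket/sample_model/');
```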
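The use-phase functions above all follow the same table-valued pattern. A sketch of evaluation and explainable prediction, assuming a hypothetical classification model `mydataset.sample_model` and hypothetical input tables `mydataset.eval_data` and `mydataset.new_data`:

```sql
-- Standard evaluation metrics for the model on held-out data.
SELECT *
FROM ML.EVALUATE(MODEL `mydataset.sample_model`,
                 TABLE `mydataset.eval_data`);

-- ML.EXPLAIN_PREDICT extends ML.PREDICT: each output row also carries
-- the top feature attributions for that prediction.
SELECT *
FROM ML.EXPLAIN_PREDICT(MODEL `mydataset.sample_model`,
                        TABLE `mydataset.new_data`,
                        STRUCT(3 AS top_k_features));
```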
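For the time series footnotes, `ml.arima_evaluate` and `ml.explain_forecast` can be sketched as follows, assuming a hypothetical `ARIMA_PLUS` model `mydataset.arima_model`:

```sql
-- Without new input data, ML.ARIMA_EVALUATE reports ARIMA-specific
-- evaluation information for the fitted model.
SELECT *
FROM ML.ARIMA_EVALUATE(MODEL `mydataset.arima_model`);

-- ML.EXPLAIN_FORECAST extends ML.FORECAST with the decomposed
-- time series components underlying each forecast value.
SELECT *
FROM ML.EXPLAIN_FORECAST(MODEL `mydataset.arima_model`,
                         STRUCT(30 AS horizon, 0.9 AS confidence_level));
```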