Create a machine learning model in BigQuery ML by using the Google Cloud console
This document shows you how to use the Google Cloud console to create a BigQuery ML model.
Required roles
To create a model and run inference, you must be granted the following roles:
- BigQuery Data Editor (roles/bigquery.dataEditor)
- BigQuery User (roles/bigquery.user)
Before you begin
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Google Cloud project.
- Enable the BigQuery and BigQuery Connection APIs.
Model-specific prerequisites
Before you create a model, make sure that you have addressed any prerequisites for the type of model that you are creating:
If you want to use a query to select training data for a model, you must have that query available as a saved query.
The following remote models require a Cloud resource connection:
- Remote models over Vertex AI and partner models
- Remote models over open models
- Remote models over Cloud AI services
- Remote models over custom models in Vertex AI
The connection's service account must also be granted certain roles, depending on the type of remote model.
To import a model, you must have that model uploaded to a Cloud Storage bucket.
Create a dataset
Create a BigQuery dataset to contain your resources:
Console
In the Google Cloud console, go to the BigQuery page.
In the Explorer pane, click your project name.
Click View actions > Create dataset.
On the Create dataset page, do the following:
For Dataset ID, type a name for the dataset.
For Location type, select a location for the dataset.
Click Create dataset.
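The console steps above have a DDL equivalent: a CREATE SCHEMA statement whose location option matches the Location type field. A minimal sketch that assembles the statement, where the project, dataset, and location values are placeholders:

```python
def create_dataset_ddl(project: str, dataset_id: str, location: str) -> str:
    """Assemble the CREATE SCHEMA statement matching the console steps."""
    return (
        f"CREATE SCHEMA `{project}.{dataset_id}`\n"
        f"OPTIONS (location = '{location}');"
    )

print(create_dataset_ddl("my-project", "bqml_tutorial", "US"))
```

You can run the resulting statement in the BigQuery query editor or with the bq CLI's query command.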
Create a model
To create a model:
Go to the BigQuery page.
In the Explorer pane, click the dataset that you created.
Click View actions next to the dataset, and then click Create BQML Model.
The Create new model pane opens.
For Model name, type a name for the model.
Click Continue.
In the Creation method section, select a creation method for the model:
- Select Train a Model in BigQuery to create an internally trained or externally trained model.
- Select Connect to Vertex AI LLM service and Cloud AI services to create a remote model over a Vertex AI model, a partner model, or a Cloud AI service.
- Select Connect to user managed Vertex AI endpoints to create a remote model over a custom model in Vertex AI.
- Select Import model to import your own ONNX, TensorFlow, TensorFlow Lite, or XGBoost model into BigQuery.
Complete the model workflow:
If you chose Train a Model in BigQuery as the model creation method, do the following:
- In the Modeling objective section, select a modeling objective for the model.
- Click Continue.
On the Model options page, do the following:
- Select a model type. The model types that you can select from vary based on the modeling objective that you chose.
In the Training data section, do one of the following:
- Select Table/View to get training data from a table or view, and then select the project, dataset, and view or table name.
- Select Query to get training data from a saved query, and then select the saved query.
In Selected input label columns, choose the columns from the table, view, or query that you want to use as input to the model.
If there is a Required options section, specify the requested column information:
- For classification and regression models, for INPUT_LABEL_COLS, select the column that contains the label data.
For matrix factorization models, select the following:
- For RATING_COL, select the column that contains the rating data.
- For USER_COL, select the column that contains the user data.
- For ITEM_COL, select the column that contains the item data.
For time series forecasting models, select the following:
- For TIME_SERIES_TIMESTAMP_COL, select the column that contains the time points to use when training the model.
- For TIME_SERIES_DATA_COL, select the column that contains the data to forecast.
Optional: In the Optional section, specify values for additional model tuning arguments. The arguments available vary based on the type of model you are creating.
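Each choice in the workflow above maps onto a clause of the CREATE MODEL DDL statement: the model type, label columns, and tuning arguments become OPTIONS entries, and the training table, view, or query becomes the AS SELECT clause. A sketch of that mapping, where the model, table, and column names are placeholders:

```python
def create_model_ddl(model: str, training_table: str, **options) -> str:
    """Assemble a CREATE MODEL statement from console-style choices."""
    def fmt(value):
        # Lists (such as input_label_cols) become SQL arrays of strings.
        if isinstance(value, list):
            return "[" + ", ".join(f"'{v}'" for v in value) + "]"
        return f"'{value}'"

    opts = ", ".join(f"{name} = {fmt(value)}" for name, value in options.items())
    return (
        f"CREATE OR REPLACE MODEL `{model}`\n"
        f"OPTIONS ({opts}) AS\n"
        f"SELECT * FROM `{training_table}`;"
    )

# A classification model: the label column chosen for INPUT_LABEL_COLS.
print(create_model_ddl(
    "mydataset.churn_model", "mydataset.training_data",
    model_type="LOGISTIC_REG", input_label_cols=["churned"],
))
```

The same shape covers matrix factorization (pass user_col, item_col, and rating_col) and time series forecasting (pass time_series_timestamp_col and time_series_data_col) by supplying those options instead.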
If you chose Connect to Vertex AI LLM service and Cloud AI services as the model creation method, do the following on the Model options page:
- Select a model type. The model types that you can select from vary based on the modeling objective that you chose.
In the Remote connection section, do the following:
- For Project, select the project that contains the connection that you want to use.
- For Location, select the location used by the connection.
For Connection, select the connection to use for the remote model, or select Create new connection to create a new connection.
In the Required options section, do one of the following:
- For remote models over Google models and partner models, specify the endpoint to use. This is the name of the model, for example gemini-2.0-flash. For more information about supported models, see ENDPOINT.
- For remote models over open models, specify the endpoint to use. This is the shared public endpoint of a model deployed to Vertex AI, in the format https://location-aiplatform.googleapis.com/v1/projects/project/locations/location/endpoints/endpoint_id. For more information, see ENDPOINT.
- For remote models over Cloud AI services, select the remote service type to use.
In the Optional section, specify document processor information if you are using the CLOUD_AI_DOCUMENT_V1 service. Optionally, you can specify speech recognizer information if you are using the CLOUD_AI_SPEECH_TO_TEXT_V2 service.
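The connection and endpoint that you choose in this workflow end up in the REMOTE WITH CONNECTION and OPTIONS clauses of the CREATE MODEL statement. A sketch, where the dataset, connection, and model names are placeholders:

```python
def create_remote_model_ddl(model: str, connection: str, endpoint: str) -> str:
    """Assemble the DDL for a remote model that uses a Cloud resource connection."""
    return (
        f"CREATE OR REPLACE MODEL `{model}`\n"
        f"REMOTE WITH CONNECTION `{connection}`\n"
        f"OPTIONS (endpoint = '{endpoint}');"
    )

# A Google model: the endpoint is just the model name.
print(create_remote_model_ddl(
    "mydataset.gemini_model", "us.my_connection", "gemini-2.0-flash"))
```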
If you chose Connect to user managed Vertex AI endpoints as the model creation method, do the following on the Model options page:
In the Remote connection section, do the following:
- For Project, select the project that contains the connection that you want to use.
- For Location, select the location used by the connection.
For Connection, select the connection to use for the remote model, or select Create new connection to create a new connection.
In the Required options section, specify the endpoint to use. This is the shared public endpoint of a model deployed to Vertex AI, in the format https://location-aiplatform.googleapis.com/v1/projects/project/locations/location/endpoints/endpoint_id. For more information, see ENDPOINT.
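For a user-managed endpoint, the connection and endpoint URL that you choose above map onto the REMOTE WITH CONNECTION and OPTIONS clauses of the CREATE MODEL statement. A sketch in which every name and path component is a placeholder:

```python
def create_remote_model_ddl(model: str, connection: str, endpoint: str) -> str:
    """Assemble the DDL for a remote model over a user-managed Vertex AI endpoint."""
    return (
        f"CREATE OR REPLACE MODEL `{model}`\n"
        f"REMOTE WITH CONNECTION `{connection}`\n"
        f"OPTIONS (endpoint = '{endpoint}');"
    )

# The endpoint is the full URL of the deployed Vertex AI endpoint;
# every path component below is a placeholder.
endpoint_url = ("https://us-central1-aiplatform.googleapis.com/v1/"
                "projects/my-project/locations/us-central1/endpoints/1234567890")
print(create_remote_model_ddl(
    "mydataset.custom_model", "us.my_connection", endpoint_url))
```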
If you chose Import model as the model creation method, do the following on the Model options page:
- For Model type, select the type of model to import.
- For GCS path, select the Cloud Storage bucket that contains the model.
Click Create model.
When model creation is complete, click Go to model to view model details.
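If you chose Import model, the equivalent DDL is a CREATE MODEL statement whose model_path option points at the object in Cloud Storage. A sketch, where the bucket, object, and model names are placeholders:

```python
def import_model_ddl(model: str, model_type: str, gcs_path: str) -> str:
    """Assemble the DDL that imports a model file from Cloud Storage."""
    return (
        f"CREATE OR REPLACE MODEL `{model}`\n"
        f"OPTIONS (model_type = '{model_type}', model_path = '{gcs_path}');"
    )

# An ONNX model; the other importable types (TensorFlow, TensorFlow Lite,
# XGBoost) follow the same pattern with their own model_type values.
print(import_model_ddl("mydataset.imported_model", "ONNX",
                       "gs://my-bucket/models/model.onnx"))
```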