# Machine learning functions in GoogleSQL

GoogleSQL for Spanner supports the following machine learning (ML) functions.

`ML.PREDICT`
------------

    ML.PREDICT(input_model, input_relation[, model_parameters])

    input_model:
      MODEL model_name

    input_relation:
      { input_table | input_subquery }

    input_table:
      TABLE table_name

    model_parameters:
      STRUCT(parameter_value AS parameter_name[, ...])

**Description**

`ML.PREDICT` is a table-valued function that lets you access registered
machine learning (ML) models and use them to generate predictions. The
function applies the ML computations defined by a model to each row of an
input relation and returns the prediction results.

**Note:** Make sure that Spanner has access to the referenced Vertex AI
endpoint, as described in
[Model endpoint access control](/spanner/docs/reference/standard-sql/data-definition-language#create_model_permissions).
**Supported Argument Types**

- `input_model`: The model to use for predictions. Replace `model_name` with
  the name of the model. To create a model, see
  [CREATE MODEL](/spanner/docs/reference/standard-sql/data-definition-language#create_model).
- `input_relation`: A table or subquery to apply ML computations to. The
  columns of the input relation must include all input columns of the model;
  otherwise, the input doesn't have enough data to generate predictions and
  the query doesn't compile. The input relation can also include arbitrary
  pass-through columns, which are included in the output. The order of the
  columns in the input relation doesn't matter, but the types of the input
  relation columns must be coercible to the types of the corresponding model
  input columns.
- `input_table`: The table that contains the input data for predictions, for
  example, a set of features. Replace `table_name` with the name of the table.
- `input_subquery`: The subquery that generates the prediction input data.
- `model_parameters`: A `STRUCT` value that contains parameters supported by
  `model_name`. These parameters are passed to the model inference.
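For instance, the subquery form lets you rename or filter columns before
prediction. The following sketch uses the `DiamondAppraise` model and
`Diamonds` table described in the Examples section; the `WHERE` filter is
illustrative only:

    SELECT Id, value, upper_bound
    FROM ML.PREDICT(
      MODEL DiamondAppraise,
      (SELECT Id, Carat, Cut, Color FROM Diamonds WHERE Carat > 1.0));

Here `Id` isn't a model input column, so it's passed through to the output
alongside the model's output columns.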
**Return Type**

A table with the following columns:

- Model outputs
- Pass-through columns from the input relation

**Note:** If a column of the input relation has the same name as one of the
output columns, the value of the output column is returned.
**Examples**

The examples in this section reference a model called `DiamondAppraise` and
an input table called `Diamonds` with the following columns:

- `DiamondAppraise` model:

  | Input columns     | Output columns        |
  |-------------------|-----------------------|
  | `carat FLOAT64`   | `value FLOAT64`       |
  | `cut STRING`      | `lower_bound FLOAT64` |
  | `color STRING(1)` | `upper_bound FLOAT64` |

- `Diamonds` table:

  | Columns         |
  |-----------------|
  | `Id INT64`      |
  | `Carat FLOAT64` |
  | `Cut STRING`    |
  | `Color STRING`  |
The following query predicts the value of a diamond based on the diamond's
carat, cut, and color:

    SELECT id, color, value
    FROM ML.PREDICT(MODEL DiamondAppraise, TABLE Diamonds);

    +----+-------+-------+
    | id | color | value |
    +----+-------+-------+
    | 1  | I     | 280   |
    | 2  | G     | 447   |
    +----+-------+-------+
You can include model-specific parameters. For example, in the following query,
the `maxOutputTokens` parameter specifies that the model inference result,
returned in the `content` column, can contain at most 10 tokens. This query
succeeds because the `TextBison` model supports a parameter called
`maxOutputTokens`.
    SELECT prompt, content
    FROM ML.PREDICT(
      MODEL TextBison,
      (SELECT "Is 13 prime?" AS prompt),
      STRUCT(10 AS maxOutputTokens));

    +----------------+---------------------+
    | prompt         | content             |
    +----------------+---------------------+
    | "Is 13 prime?" | "Yes, 13 is prime." |
    +----------------+---------------------+
You can use `ML.PREDICT` in DQL and DML statements, such as `INSERT` or
`UPDATE`. For example:

    INSERT INTO AppraisedDiamond (id, color, carat, value)
    SELECT
      1 AS id,
      color,
      carat,
      value
    FROM
      ML.PREDICT(MODEL DiamondAppraise,
        (
          SELECT
            @carat AS carat,
            @cut AS cut,
            @color AS color
        ));
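An `UPDATE` can follow a similar shape by treating `ML.PREDICT` as a scalar
subquery. The following is a sketch only: `AppraisedDiamonds` is a
hypothetical table with an `Id` key and a `value` column, and the query
parameters mirror the `INSERT` pattern shown in this section:

    UPDATE AppraisedDiamonds
    SET value = (
      SELECT value
      FROM ML.PREDICT(
        MODEL DiamondAppraise,
        (SELECT @carat AS carat, @cut AS cut, @color AS color)))
    WHERE Id = @id;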
Last updated 2025-08-28 UTC.