This page describes model endpoint management, which lets you experiment with registering an AI model endpoint and invoking predictions. To use AI models in production environments, see Build generative AI applications using Cloud SQL and Invoke online predictions from Cloud SQL instances.
After the model endpoints are added and registered in model endpoint management, you can reference them using the model ID to invoke predictions.
Before you begin
Make sure that you complete the following actions:
- Register your model endpoint with model endpoint management. For more information, see Register and call remote AI models using model endpoint management; a minimal registration sketch follows this list.
- Create or update your Cloud SQL instance so that the instance can integrate with Vertex AI. For more information, see Enable database integration with Vertex AI.
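The exact registration arguments depend on the model provider and are covered in Register and call remote AI models using model endpoint management. As a minimal sketch only, a registration with the google_ml.create_model() function looks similar to the following; the REQUEST_URL placeholder and the provider value are illustrative, and your model might also need authentication or transform-function arguments.

-- Illustrative sketch only: replace REQUEST_URL with your model's prediction
-- endpoint, and add any authentication or transform-function arguments that
-- your provider requires.
CALL
  google_ml.create_model(
    model_id => 'MODEL_ID',
    model_request_url => 'REQUEST_URL',
    model_provider => 'google');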
Invoke predictions for generic models
Use the google_ml.predict_row() SQL function to call a registered generic model endpoint and invoke predictions. You can use the google_ml.predict_row() function with any model type.
SELECT
  google_ml.predict_row(
    model_id => 'MODEL_ID',
    request_body => 'REQUEST_BODY');
Replace the following:
- MODEL_ID: the model ID you defined when registering the model endpoint
- REQUEST_BODY: the parameters to the prediction function, in JSON format
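Because REQUEST_BODY is JSON, you can also assemble it from SQL values with PostgreSQL JSON functions instead of writing a string literal. The following is a minimal sketch; the my-generic-model model ID and the top-level prompt field are hypothetical, so use the request format that your registered endpoint actually expects.

-- Hypothetical endpoint and request format, shown for illustration only.
SELECT
  google_ml.predict_row(
    model_id => 'my-generic-model',
    request_body => json_build_object(
      'prompt', 'Summarize the benefits of connection pooling.'));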
Examples
To generate predictions for a registered gemini-pro
model endpoint, run the following statement:
SELECT
  json_array_elements(
    google_ml.predict_row(
      model_id => 'gemini-pro',
      request_body => '{
        "contents": [
          {
            "role": "user",
            "parts": [
              {
                "text": "For TPCH database schema as mentioned here https://www.tpc.org/TPC_Documents_Current_Versions/pdf/TPC-H_v3.0.1.pdf , generate a SQL query to find all supplier names which are located in the India nation."
              }
            ]
          }
        ]
      }')) -> 'candidates' -> 0 -> 'content' -> 'parts' -> 0 -> 'text';
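Because google_ml.predict_row() returns JSON, you can process the response with standard PostgreSQL JSON functions. The following sketch is not from the official documentation: it stores the prompt and the extracted response text in a hypothetical prompt_log table, using string_agg() to concatenate the elements that json_array_elements() produces.

-- Hypothetical table for storing prompts and extracted responses.
CREATE TABLE IF NOT EXISTS prompt_log (
  id BIGSERIAL PRIMARY KEY,
  prompt TEXT NOT NULL,
  response TEXT
);

-- Extract the generated text by using the same JSON path as the preceding
-- example, and concatenate the response parts into a single string.
INSERT INTO prompt_log (prompt, response)
SELECT
  'Write a haiku about PostgreSQL.',
  string_agg(part ->> 'text', '')
FROM (
  SELECT
    json_array_elements(
      google_ml.predict_row(
        model_id => 'gemini-pro',
        request_body => '{
          "contents": [
            {
              "role": "user",
              "parts": [ { "text": "Write a haiku about PostgreSQL." } ]
            }
          ]
        }')) -> 'candidates' -> 0 -> 'content' -> 'parts' -> 0 AS part
) AS response_parts;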