This page describes how to register an AI model endpoint and invoke predictions with model endpoint management in Cloud SQL. To use AI models in production environments, see Build generative AI applications using Cloud SQL and Work with vector embeddings.
Overview
Model endpoint management lets you register a model endpoint, manage model endpoint metadata in your
Cloud SQL instance, and then interact with the models using SQL queries. Cloud SQL provides the google_ml_integration
extension that
includes functions to add and register the model endpoint metadata related to the models. You can use these models to generate vector embeddings or invoke predictions.
You can register the following model types by using model endpoint management:
- Vertex AI text embedding models.
- Custom text embedding models hosted in networks within Google Cloud.
- Generic models with a JSON-based API. Examples of these models include the following:
  - The gemini-pro model from the Vertex AI Model Garden
  - The open_ai model for OpenAI models
  - Models hosted in networks within Google Cloud
How it works
You can use model endpoint management to register a model endpoint that complies with the following:
- The model input and output support the JSON format.
- You can use the REST protocol to call the model.
When you register a model endpoint with model endpoint management, model endpoint management registers each endpoint with a unique model ID as a reference to the model. You can use this model ID to query models, as follows:
- Generate embeddings to translate text prompts to numerical vectors. You can store generated embeddings as vector data when the pgvector extension is enabled in the database. For more information, see Query and index embeddings with pgvector.
- Invoke predictions to call a model using SQL within a transaction.
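For example, once a text embedding model endpoint is registered, its generated embeddings can be stored in a pgvector column. This is a sketch: the model ID, table, and column names are hypothetical, and the vector dimension depends on the model you registered.

```sql
-- Assumes a text embedding model was registered earlier under the
-- hypothetical model ID 'my-embedding-model'.
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE documents (
  id bigserial PRIMARY KEY,
  content text,
  embedding vector(768)  -- dimension must match the model's output
);

-- Generate an embedding and store it as vector data in one statement.
INSERT INTO documents (content, embedding)
VALUES (
  'example document text',
  google_ml.embedding('my-embedding-model', 'example document text')::vector
);
```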
Your applications can manage their model endpoints using the google_ml_integration
extension. This extension provides the following SQL functions:
- google_ml.create_model(): registers the model endpoint that's used in the prediction or embedding function
- google_ml.create_sm_secret(): uses secrets stored in Secret Manager, where the API keys are stored
- google_ml.embedding(): generates text embeddings
- google_ml.predict_row(): generates predictions when you call generic models that support the JSON input and output formats
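As a sketch of how these functions fit together, the following registers a Vertex AI text embedding model endpoint. The model ID is hypothetical, and the parameter names follow the google_ml.create_model() reference; check the model endpoint management reference for the exact signature your extension version supports.

```sql
-- Enable the extension that provides the google_ml functions.
CREATE EXTENSION IF NOT EXISTS google_ml_integration;

-- Register a Vertex AI text embedding endpoint under a unique model ID.
-- Parameter names may differ by extension version.
CALL google_ml.create_model(
  model_id => 'my-gecko-model',
  model_provider => 'google',
  model_qualified_name => 'textembedding-gecko',
  model_type => 'text-embedding',
  model_auth_type => 'cloudsql_service_agent_iam');
```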
Key concepts
Before you start using model endpoint management, understand the concepts required to connect to and use the models.
Model provider
The model provider indicates which supported service hosts your model. The following table shows the model provider value that you must set based on the provider you use:
Model provider | Set in function as… |
---|---|
Vertex AI (includes Gemini) | google |
OpenAI | open_ai |
Other models hosted outside of Vertex AI | custom |
The default model provider is custom.
Model types
Model types are the types of the AI model. When you register a model endpoint, you can set the text-embedding or generic model type for the endpoint.
- Text embedding models with built-in support: Model endpoint management provides built-in support for all versions of the textembedding-gecko model. To register these model endpoints, use the google_ml.create_model() function. Cloud SQL sets up default transform functions for these models automatically. The model type for these models is text-embedding.
- Other text embedding models: For other text embedding models, you need to create transform functions to handle the input and output formats that the model supports. Optionally, you can use the HTTP header generation function that generates the custom headers required by your model. The model type for these models is text-embedding.
- Generic models: Model endpoint management also supports registering all model types apart from text embedding models. To invoke predictions for generic models, use the google_ml.predict_row() function. You can set model endpoint metadata, such as a request endpoint and HTTP headers that are specific to your model. You cannot pass transform functions when you register a generic model endpoint. Ensure that when you invoke predictions, the input to the function is in the JSON format, and that you parse the JSON output to derive the final output. The model type for these models is generic. Because generic is the default model type, setting the model type is optional when you register endpoints of this type.
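As an illustration, a generic model endpoint such as gemini-pro could be registered as follows. The request URL, PROJECT_ID placeholder, and parameter names are illustrative; consult the model endpoint management reference for the exact signature.

```sql
-- Register a generic endpoint; model_type defaults to 'generic',
-- so setting it is optional. PROJECT_ID and the URL are placeholders.
CALL google_ml.create_model(
  model_id => 'gemini-model',
  model_provider => 'google',
  model_request_url => 'https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/publishers/google/models/gemini-pro:generateContent',
  model_auth_type => 'cloudsql_service_agent_iam');
```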
Authentication methods
You can use the google_ml_integration extension to specify different authentication methods to access your model. Setting an authentication method is optional and is required only if you need to authenticate to access your model.
For Vertex AI models, the Cloud SQL service account is used for authentication. For other models, you can use an API key or bearer token that's stored as a secret in Secret Manager with the google_ml.create_sm_secret() SQL function.
The following table shows the authentication methods that you can set:
Authentication method | Set in function as… | Model provider |
---|---|---|
Cloud SQL service agent | cloudsql_service_agent_iam | Vertex AI |
Secret Manager | secret_manager | Models hosted outside of Vertex AI |
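For models hosted outside of Vertex AI, the secret registration and its use during endpoint registration might look like the following sketch. The secret ID, secret path, model names, and parameter names are illustrative; replace the placeholders with your own values.

```sql
-- Register a Secret Manager secret that holds the API key.
-- PROJECT_ID and MY_SECRET are placeholders.
CALL google_ml.create_sm_secret(
  secret_id => 'my-api-key-secret',
  secret_path => 'projects/PROJECT_ID/secrets/MY_SECRET/versions/1');

-- Reference that secret when registering the model endpoint.
CALL google_ml.create_model(
  model_id => 'my-openai-model',
  model_provider => 'open_ai',
  model_type => 'text-embedding',
  model_qualified_name => 'text-embedding-ada-002',
  model_request_url => 'https://api.openai.com/v1/embeddings',
  model_auth_type => 'secret_manager',
  model_auth_id => 'my-api-key-secret');
```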
Prediction functions
The google_ml_integration
extension includes the following prediction functions:
- google_ml.embedding(): calls a registered text embedding model endpoint to generate embeddings. It includes built-in support for the textembedding-gecko model by Vertex AI. For text embedding models without built-in support, the input and output parameters are unique to each model and need to be transformed for the function to call the model. Create a transform input function to transform the prediction function input to the model-specific input, and a transform output function to transform the model-specific output to the prediction function output.
- google_ml.predict_row(): calls a registered generic model endpoint, if the endpoint supports JSON-based APIs, to invoke predictions.
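The two prediction functions can be invoked as in the following sketch. The model IDs are hypothetical and must match endpoints that you registered, and the JSON request body format depends on the model's API.

```sql
-- Generate an embedding (returns real[]) with a registered
-- text embedding endpoint.
SELECT google_ml.embedding('my-gecko-model',
                           'What is model endpoint management?');

-- Invoke a prediction on a registered generic endpoint by passing
-- a JSON request body; parse the JSON response in your query.
SELECT google_ml.predict_row(
  'gemini-model',
  '{"contents":[{"role":"user","parts":[{"text":"Hello"}]}]}');
```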
Transform functions
Transform functions modify the input to a format that the model understands, and
convert the model response to the format that the prediction function expects. The
transform functions are used when registering the text-embedding
model endpoint without
built-in support. The signature of the transform functions depends on the
prediction function for the model type.
You cannot use transform functions when registering a generic
model endpoint.
The following shows the signatures of the transform functions for text embedding models:
-- Define custom model-specific input and output transform functions.
CREATE OR REPLACE FUNCTION input_transform_function(model_id VARCHAR(100), input_text TEXT) RETURNS JSON;
CREATE OR REPLACE FUNCTION output_transform_function(model_id VARCHAR(100), response_json JSON) RETURNS real[];
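As a sketch, for a hypothetical model whose API accepts a body like {"prompt": "..."} and responds with {"embedding": [...]}, the transform functions could be implemented as follows:

```sql
-- Input transform: wrap the text in the JSON shape the model expects.
CREATE OR REPLACE FUNCTION input_transform_function(
  model_id VARCHAR(100), input_text TEXT)
RETURNS JSON
LANGUAGE plpgsql AS $$
BEGIN
  RETURN json_build_object('prompt', input_text);
END;
$$;

-- Output transform: extract the embedding array from the response.
CREATE OR REPLACE FUNCTION output_transform_function(
  model_id VARCHAR(100), response_json JSON)
RETURNS real[]
LANGUAGE plpgsql AS $$
DECLARE
  embedding real[];
BEGIN
  SELECT array_agg(elem::real)
    INTO embedding
    FROM json_array_elements_text(response_json -> 'embedding') AS elem;
  RETURN embedding;
END;
$$;
```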
For more information about how to create transform functions, see Transform functions example.
HTTP header generation function
The HTTP header generation function generates the output in JSON key-value pairs that are used as HTTP headers. The signature of the header generation function depends on the signature of the prediction function for the model type.
The following example shows the signature of the header generation function for the google_ml.embedding() prediction function:
CREATE OR REPLACE FUNCTION generate_headers(model_id VARCHAR(100), input TEXT) RETURNS JSON;
For the google_ml.predict_row() prediction function, the signature is as follows:
CREATE OR REPLACE FUNCTION generate_headers(model_id VARCHAR(100), input JSON) RETURNS JSON;
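For example, a header generation function for a hypothetical model that requires a version header could look like the following. The header names and values are illustrative.

```sql
-- Return the required HTTP headers as JSON key-value pairs.
CREATE OR REPLACE FUNCTION generate_headers(
  model_id VARCHAR(100), input TEXT)
RETURNS JSON
LANGUAGE plpgsql AS $$
BEGIN
  RETURN json_build_object(
    'Content-Type', 'application/json',
    'x-api-version', '2024-01-01');
END;
$$;
```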
For more information about how to create a header generation function, see Header generation function example.
Limitations
- To use AI models with your Cloud SQL instance, the maintenance version of your instance must be R20240910.01_02 or later. To upgrade your instance to this version, see Self-service maintenance.
What's next
- Set up authentication for model providers.
- Register a model endpoint with model endpoint management.
- Learn about the model endpoint management reference.