This page lists the functions and parameters that the `google_ml_integration` extension provides to register and manage model endpoints, and to manage secrets with model endpoint management.

Before you begin

You must set the `google_ml_integration.enable_model_support` database flag to `on` before you can start using the extension. For more information, see Register and call remote AI models in AlloyDB overview.
Schemas

The following lists the schemas and the functions available in each schema.

`public` schema
- The `embedding()` function in the `public` schema can be used with any Vertex AI embedding model without registering the endpoint. For more information, see Run similarity search.

`google_ml` schema
- The `google_ml` schema provides the following functions:
  - The `google_ml.create_model()` SQL function, which is used to register the model endpoint that is used in the prediction or embedding function.
  - The `google_ml.create_sm_secret()` SQL function, which uses secrets stored in Secret Manager, where the API keys are stored.
  - The `google_ml.embedding()` SQL function, which is a prediction function that generates text embeddings. The return type of the embedding function is `REAL[]`.
  - The `google_ml.text_embedding()` (Preview) SQL function in the AlloyDB AI query engine, which generates a single text embedding for a given query when using a multimodal endpoint.
  - The `google_ml.image_embedding()` (Preview) SQL function in the AlloyDB AI query engine, which generates a single embedding for an input image using a multimodal embedding model. The return type of the embedding function is `REAL[]`.
  - The `google_ml.rank()` (Preview) SQL function in the AlloyDB AI query engine, which improves the order of search results by ranking or ordering a collection of records in relation to a given query (a search string).
  - The `google_ml.predict_row()` SQL function, which generates predictions when you call generic models that support JSON input and output formats.
  - Other helper functions that handle generating custom URLs, generating HTTP headers, or passing transform functions.
  - Functions to manage the registered model endpoints and secrets.
  - The `ai.if()` (Preview) SQL function in the AlloyDB AI query engine, which supports filters and joins by evaluating conditions specified in natural language.
  - The `ai.rank()` (Preview) SQL function in the AlloyDB AI query engine, which orders a list of items in a query based on criteria stated in natural language.
  - The `ai.generate()` (Preview) SQL function in the AlloyDB AI query engine, which generates text based on prompts specified in natural language.

`ai` schema
- AlloyDB reserves the `ai` schema and tries to create this schema when you install the `google_ml_integration` extension. If the schema creation fails, use the functions with the same names in the `google_ml` schema. The `ai` schema provides the following functions to generate embeddings using the models registered under the `ai` schema:
  - The `ai.text_embedding()` (Preview) SQL function in the AlloyDB AI query engine, which generates a single text embedding for a given query when using a multimodal endpoint.
  - The `ai.image_embedding()` (Preview) SQL function, which generates a single embedding for an input image. The return type of the embedding function is `REAL[]`.
  - The `ai.video_embedding()` (Preview) SQL function in the AlloyDB AI query engine, which generates an array with one entry for each `interval_seconds` segment of the video, as defined when creating the multimodal endpoint.
  - The `ai.if()` (Preview) SQL function in the AlloyDB AI query engine, which supports filters and joins by evaluating conditions specified in natural language.
  - The `ai.rank()` (Preview) SQL function in the AlloyDB AI query engine, which orders a list of items in a query based on criteria stated in natural language.
  - The `ai.generate()` (Preview) SQL function in the AlloyDB AI query engine, which generates text based on prompts specified in natural language.
Model provider

The following table shows the model provider value that you can set based on the model provider you use:

Model provider | Set in function as… |
---|---|
Vertex AI | `google` |
Hugging Face models | `hugging_face` |
Anthropic models | `anthropic` |
OpenAI | `open_ai` |
Other models | `custom` |

The default model provider is `custom`.
Based on the provider type, the supported authentication method differs. The Vertex AI models use the AlloyDB service account to authenticate, while other providers can use the Secret Manager or pass authentication details through headers. For more information, see Set up authentication.
Model type

The following lists the model type values that you can set:

- Pre-registered Vertex AI models
  Model endpoint management supports some text embedding and generic Vertex AI models as pre-registered model IDs. You can use the model ID directly to generate embeddings or invoke predictions, based on the model type. For more information about supported pre-registered models, see Pre-registered Vertex AI models.
  For example, to call the pre-registered `textembedding-gecko` model, you can call the model directly using the embedding function:

  SELECT
    google_ml.embedding(
      model_id => 'textembedding-gecko',
      content => 'AlloyDB is a managed, cloud-hosted SQL database service');

- Models with built-in support
  Model endpoint management provides built-in support for some models by Vertex AI, Anthropic, and OpenAI. For text embedding models, multimodal models (Preview), and ranking models (Preview) with built-in support, AlloyDB automatically sets up default transform functions.
  When you register these model endpoints, set the qualified name explicitly. For a list of models with built-in support, see Models with built-in support.
  The model type for these models can be `text-embedding` or `generic`.

- Other text embedding models
  To register a text embedding model endpoint without built-in support, we recommend that you create transform functions to handle the input and output formats that the model supports. Optionally, depending on the model requirements, you might also need to create a custom header generation function to specify the headers.
  The model type for these models is `text-embedding`.

- Generic models
  Model endpoint management also supports registering all model types other than text embedding models. To invoke predictions for generic models, use the `google_ml.predict_row()` function. You can set model endpoint metadata, such as a request endpoint and HTTP headers, that are specific to your model.
  You can't pass transform functions when you register a generic model endpoint. Ensure that when you invoke predictions the input to the function is in JSON format, and that you parse the JSON output to derive the final output.
  The model type for these models is `generic`.

- Multimodal embedding models (Preview)
  Multimodal embeddings let you use vector representations of images, text, and videos interchangeably. Multimodal embedding models generate embeddings for different modalities in the same semantic space, which means that semantically similar information, regardless of its original form, is located close together in the vector space.
  You don't need to register the Vertex AI multimodal model; you can use the qualified name of the Vertex AI multimodal model as the model ID in your queries.
  The model type for these models is `multimodal_embedding`.

- Ranking models (Preview)
  Ranking models take a list of documents and rank them based on how relevant they are to a given query (a search string). The `ai.rank()` function returns scores for how well each document answers a given query, or ranks items in a query based on criteria specified in natural language.
  The model type for these models is `reranking`.
Authentication

For Vertex AI models, the AlloyDB service account is used for authentication. For other models, an API key or bearer token that is stored as a secret in Secret Manager can be used with the `google_ml.create_sm_secret()` SQL function. If you pass authentication through headers, you can skip setting the authentication method.

The following table shows the authentication types that you can set:

Authentication method | Set in function as… | Model provider |
---|---|---|
AlloyDB service agent | `alloydb_service_agent_iam` | Vertex AI provider |
Secret Manager | `secret_manager` | Third-party providers, such as Anthropic, Hugging Face, or OpenAI |
Models
Use this reference to understand parameters for functions that let you manage model endpoints.
google_ml.create_model()
function
The following shows how to call the google_ml.create_model()
SQL function used
to register model endpoint metadata:
CALL
google_ml.create_model(
model_id => 'MODEL_ID',
model_request_url => 'REQUEST_URL',
model_provider => 'PROVIDER_ID',
model_type => 'MODEL_TYPE',
model_qualified_name => 'MODEL_QUALIFIED_NAME',
model_auth_type => 'AUTH_TYPE',
model_auth_id => 'AUTH_ID',
generate_headers_fn => 'GENERATE_HEADER_FUNCTION',
model_in_transform_fn => 'INPUT_TRANSFORM_FUNCTION',
model_out_transform_fn => 'OUTPUT_TRANSFORM_FUNCTION');
Parameter | Required | Description |
---|---|---|
`MODEL_ID` | required for all model endpoints | A unique ID for the model endpoint that you define. |
`REQUEST_URL` | optional for text embedding model endpoints with built-in support | The model-specific endpoint when adding other text embedding and generic model endpoints. For AlloyDB for PostgreSQL, provide an `https` URL. The request URL that the function generates for built-in model endpoints refers to your cluster's project and region or location. If you want to refer to another project, ensure that you specify the `model_request_url` explicitly. For a list of request URLs for Vertex AI model endpoints, see Vertex AI model endpoints request URL. For custom hosted model endpoints, ensure that the model endpoint is accessible from the network where AlloyDB is located. |
`PROVIDER_ID` | required for text embedding model endpoints with built-in support | The provider of the model endpoint. The default value is `custom`. Set to one of the values listed in the Model provider table. |
`MODEL_TYPE` | optional for generic model endpoints | The model type. Set to one of the values listed in the Model type section. |
`MODEL_QUALIFIED_NAME` | required for text embedding models with built-in support; optional for other model endpoints | The fully qualified name for text embedding models with built-in support. For Vertex AI qualified names that you must use for pre-registered models, see Pre-registered Vertex AI models. For qualified names that you must use for OpenAI models with built-in support, see Models with built-in support. |
`AUTH_TYPE` | optional unless the model endpoint has a specific authentication requirement | The authentication type used by the model endpoint. You can set it to either `alloydb_service_agent_iam` for Vertex AI models or `secret_manager` for other providers, if they use Secret Manager for authentication. You don't need to set this value if you use authentication headers. |
`AUTH_ID` | don't set for Vertex AI model endpoints; required for all other model endpoints that store secrets in Secret Manager | The secret ID that you set and subsequently use when registering a model endpoint. |
`GENERATE_HEADER_FUNCTION` | optional | The name of the function that generates custom headers. For Anthropic models, model endpoint management provides a `google_ml.anthropic_claude_header_gen_fn` function that you can use for default versions. The signature of this function depends on the prediction function that you use. See Header generation function. |
`INPUT_TRANSFORM_FUNCTION` | optional for text embedding model endpoints with built-in support; don't set for generic model endpoints | The function to transform input of the corresponding prediction function to the model-specific input. See Transform functions. |
`OUTPUT_TRANSFORM_FUNCTION` | optional for text embedding model endpoints with built-in support; don't set for generic model endpoints | The function to transform model-specific output to the prediction function output. See Transform functions. |
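To make the parameters concrete, the following is a hedged sketch that registers a custom-hosted text embedding endpoint. The endpoint URL reuses the fictional `cymbal.com` endpoint from the transform functions example later on this page; the model ID, secret ID, and transform function names are illustrative placeholders, not values you must use:

```sql
CALL
google_ml.create_model(
  model_id => 'cymbal_text_embedding',             -- placeholder ID you choose
  model_request_url => 'https://cymbal.com/models/text/embeddings/v1',
  model_provider => 'custom',
  model_type => 'text-embedding',
  model_auth_type => 'secret_manager',
  model_auth_id => 'cymbal_secret',                -- secret registered with google_ml.create_sm_secret()
  model_in_transform_fn => 'cymbal_text_input_transform',
  model_out_transform_fn => 'cymbal_text_output_transform');
```

Because this is a `custom` provider without built-in support, the input and output transform functions are required to translate between the prediction function and the model's JSON format, as described in Transform functions.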
google_ml.alter_model()
function
The following shows how to call the google_ml.alter_model()
SQL function used
to update model endpoint metadata:
CALL
google_ml.alter_model(
model_id => 'MODEL_ID',
model_request_url => 'REQUEST_URL',
model_provider => 'PROVIDER_ID',
model_type => 'MODEL_TYPE',
model_qualified_name => 'MODEL_QUALIFIED_NAME',
model_auth_type => 'AUTH_TYPE',
model_auth_id => 'AUTH_ID',
generate_headers_fn => 'GENERATE_HEADER_FUNCTION',
model_in_transform_fn => 'INPUT_TRANSFORM_FUNCTION',
model_out_transform_fn => 'OUTPUT_TRANSFORM_FUNCTION');
For information about the values that you must set for each parameter, see Create a model.
google_ml.drop_model()
function
The following shows how to call the google_ml.drop_model()
SQL function used
to drop a model endpoint:
CALL google_ml.drop_model('MODEL_ID');
Parameter | Description |
---|---|
`MODEL_ID` | A unique ID for the model endpoint that you defined. |
google_ml.list_model()
function
The following shows how to call the google_ml.list_model()
SQL function used
to list model endpoint information:
SELECT google_ml.list_model('MODEL_ID');
Parameter | Description |
---|---|
`MODEL_ID` | A unique ID for the model endpoint that you defined. |
google_ml.model_info_view
view
The following shows how to call the google_ml.model_info_view
view that is
used to list model endpoint information for all model endpoints:
SELECT * FROM google_ml.model_info_view;
Secrets
Use this reference to understand parameters for functions that let you manage secrets.
google_ml.create_sm_secret()
function
The following shows how to call the google_ml.create_sm_secret()
SQL function
used to add the secret created in Secret Manager:
CALL
google_ml.create_sm_secret(
    secret_id => 'SECRET_ID',
    secret_path => 'projects/PROJECT_ID/secrets/SECRET_MANAGER_SECRET_ID/versions/VERSION_NUMBER');

Parameter | Description |
---|---|
`SECRET_ID` | The secret ID that you set and subsequently use when registering a model endpoint. |
`PROJECT_ID` | The ID of your Google Cloud project that contains the secret. This project can be different from the project that contains your AlloyDB for PostgreSQL cluster. |
`SECRET_MANAGER_SECRET_ID` | The secret ID set in Secret Manager when you created the secret. |
`VERSION_NUMBER` | The version number of the secret ID. |
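For instance, assuming a hypothetical project named `my-project` that holds a Secret Manager secret named `anthropic-api-key` (both names are placeholders for illustration), the call might look like the following:

```sql
CALL
google_ml.create_sm_secret(
    secret_id => 'anthropic_secret',   -- the ID you later pass as model_auth_id
    secret_path => 'projects/my-project/secrets/anthropic-api-key/versions/1');
```

The `secret_id` you choose here is the value you pass as `AUTH_ID` when you register the model endpoint with `google_ml.create_model()`.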
google_ml.alter_sm_secret()
function
The following shows how to call the google_ml.alter_sm_secret()
SQL function
used to update secret information:
CALL
google_ml.alter_sm_secret(
    secret_id => 'SECRET_ID',
    secret_path => 'projects/PROJECT_ID/secrets/SECRET_MANAGER_SECRET_ID/versions/VERSION_NUMBER');
For information about the values that you must set for each parameter, see Create a secret.
google_ml.drop_sm_secret()
function
The following shows how to call the google_ml.drop_sm_secret()
SQL function
used to drop a secret:
CALL google_ml.drop_sm_secret('SECRET_ID');
Parameter | Description |
---|---|
`SECRET_ID` | The secret ID that you set and subsequently used when registering a model endpoint. |
Prediction functions
Use this reference to understand parameters for functions that let you generate embeddings or invoke predictions.
google_ml.embedding()
function
For text embedding models without built-in support, the input and output parameters are unique to a model and must be transformed for the function to call the model. You must create a transform input function to transform the prediction function input to the model-specific input, and a transform output function to transform the model-specific output to the prediction function output. The function is also available in the `ai` schema.
The following shows how to generate embeddings:
SELECT
google_ml.embedding(
model_id => 'MODEL_ID',
content => 'CONTENT');
Parameter | Description |
---|---|
`MODEL_ID` | A unique ID for the model endpoint that you define. |
`CONTENT` | The text to translate into a vector embedding. |
For example SQL queries to generate text embeddings, see Examples.
google_ml.predict_row()
function
The following shows how to invoke predictions by calling a registered generic model endpoint, as long as the model supports a JSON-based API. The function is also available in the `ai` schema:
SELECT
google_ml.predict_row(
model_id => 'MODEL_ID',
request_body => 'REQUEST_BODY');
Parameter | Description |
---|---|
`MODEL_ID` | A unique ID for the model endpoint that you define. |
`REQUEST_BODY` | The parameters to the prediction function, in JSON format. |
For example SQL queries to invoke predictions, see Examples.
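As a non-authoritative sketch, a call against a registered generic endpoint might look like the following. The model ID is a placeholder, and the request body reuses the `prompt` shape from the custom-hosted endpoint in the transform functions example; the actual JSON structure depends entirely on the model you registered:

```sql
SELECT
google_ml.predict_row(
    model_id => 'cymbal_generic_model',              -- placeholder registered model ID
    request_body => '{"prompt": ["AlloyDB Embeddings"]}');
```

Because generic endpoints have no transform functions, the result is raw JSON that you parse yourself, for example with PostgreSQL's JSON operators such as `->` and `->>`.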
Multimodal prediction functions
Use this reference to understand parameters for functions that let you generate multimodal embeddings.
AlloyDB reserves the ai
schema, and tries to create
this schema when you install the google_ml_integration
extension.
If the schema creation fails, then use the functions in the google_ml
schema.
ai.text_embedding() / google_ml.text_embedding() function
The following shows how to generate text embeddings:
SELECT
ai.text_embedding(
model_id => 'MODEL_ID',
content => 'CONTENT');
Parameter | Description |
---|---|
`MODEL_ID` | A unique ID for the model endpoint that you define. |
`CONTENT` | The text to translate into a vector embedding. |
For example SQL queries to generate multimodal text embeddings, see Examples.
ai.image_embedding() / google_ml.image_embedding() function
The following shows how to generate multimodal image embeddings:
SELECT
ai.image_embedding(
model_id => 'MODEL_ID',
image => 'IMAGE_PATH_OR_TEXT',
mimetype => 'MIMETYPE');
Parameter | Description |
---|---|
`MODEL_ID` | A unique ID for the model endpoint that you define. |
`IMAGE_PATH_OR_TEXT` | The Cloud Storage path to the image to translate into a vector embedding, or a base64 string of the image. |
`MIMETYPE` | The MIME type of the image. |
For example SQL queries to generate multimodal image embeddings, see Examples.
ai.video_embedding() / google_ml.video_embedding() function
The following shows how to generate multimodal video embeddings:
SELECT
ai.video_embedding(
model_id => 'MODEL_ID',
video => 'VIDEO_URI',
start_offset_seconds => 'START_OFFSET_SEC',
end_offset_seconds => 'END_OFFSET_SEC',
interval_seconds => 'INTERVAL_SEC');
Parameter | Description |
---|---|
`MODEL_ID` | A unique ID for the model endpoint that you define. |
`VIDEO_URI` | The Cloud Storage URI of the target video to get embeddings for. For example, `gs://my-bucket/embeddings/supermarket-video.mp4`. |
`START_OFFSET_SEC` | The offset, in seconds, at which to start processing the video. |
`END_OFFSET_SEC` | The offset, in seconds, at which to stop processing the video. |
`INTERVAL_SEC` | The interval, in seconds, for which each embedding is generated. |
For example SQL queries to generate multimodal video embeddings, see Examples.
Ranking functions
Use this reference to understand parameters for functions that let you rank search results.
AlloyDB reserves the ai
schema, and tries to create
this schema when you install the google_ml_integration
extension.
If the schema creation fails, then use the functions in the google_ml
schema.
ai.rank() / google_ml.rank() function

The following shows how to rank your search results:
SELECT
ai.rank(
    model_id => 'MODEL_ID',
    search_string => 'SEARCH_STRING',
    documents => ARRAY['DOCUMENT_1', 'DOCUMENT_2', 'DOCUMENT_3'],
    top_n => TOP_N);
Parameter | Description |
---|---|
`MODEL_ID` | A unique ID for the model endpoint that you define. |
`SEARCH_STRING` | The search string against which the records are ranked and scored. |
`DOCUMENTS` | The array of documents to rank and score against the search string. |
`TOP_N` | The number of top results to return. The default value is `NULL`. |
For example SQL queries to generate ranked search results, see Ranking.
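As an illustrative sketch under stated assumptions (the model ID `my_ranking_model` is a placeholder for a ranking endpoint that you registered with the `reranking` model type), a concrete call might look like the following:

```sql
SELECT
ai.rank(
    model_id => 'my_ranking_model',                  -- placeholder registered ranking endpoint
    search_string => 'pet-friendly restaurants',
    documents => ARRAY[
      'The patio welcomes dogs and has water bowls.',
      'No animals are allowed inside the venue.'],
    top_n => 1);                                     -- return only the highest-scoring document
```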
Operator functions
Use this reference to understand parameters for functions provided by the AlloyDB AI query engine that let you interact with data in your database using natural language in SQL operators.
AlloyDB reserves the ai
schema, and tries to create
this schema when you install the google_ml_integration
extension.
If the schema creation fails, then use the functions in the
google_ml
schema.
ai.if() / google_ml.if() function

The following shows how to perform filters and joins to evaluate a condition:

SELECT
ai.if(
    model_id => 'MODEL_ID',
    prompt => 'PROMPT');

Parameter | Description |
---|---|
`MODEL_ID` (Optional) | A unique ID for the model endpoint that you define. If not set, the default Gemini model is used. |
`PROMPT` | The natural language phrase that specifies the condition based on which the function retrieves information. For example, `The following review talks about parking at the restaurant. review:` |

For example SQL queries to perform filters and joins to evaluate a condition, see Use natural language in SQL operators.
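The function above is typically embedded in a filter. The following is a hedged sketch, in which the `reviews` table and its `review_text` column are hypothetical and only serve to show the shape of such a query:

```sql
-- Keep only reviews that, per the model, mention parking.
SELECT review_text
FROM reviews
WHERE ai.if(
    prompt => 'The following review talks about parking at the restaurant. review: '
              || review_text);
```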
ai.rank() / google_ml.rank() function

The following shows how to get a score for items in the query based on semantic criteria specified in natural language:

SELECT
ai.rank(
    model_id => 'MODEL_ID',
    prompt => 'PROMPT');

Parameter | Description |
---|---|
`MODEL_ID` (Optional) | A unique ID for the model endpoint that you define. If not set, the default Gemini model is used. |
`PROMPT` | The natural language phrase that specifies the criteria based on which the function orders the documents. For example, `Score the following review according to these rules: score of 8 to 10 if the review says the food is excellent, 4 to 7 if the review says the food is average, and 1 to 3 if the review says the food is not good. Here is the review:` |
ai.generate() / google_ml.generate() function

The following shows how to generate text using the default Gemini model:

SELECT
ai.generate(
    model_id => 'MODEL_ID',
    prompt => 'PROMPT');

Parameter | Description |
---|---|
`MODEL_ID` (Optional) | A unique ID for the model endpoint that you define. If not set, the default Gemini model is used. |
`PROMPT` | The natural language phrase based on which the function generates text. For example, `Summarize the review in 20 words or less. Review:` |

For example SQL queries to generate text, see Use natural language in SQL operators.
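Applied per row, the call can combine the prompt with a column value. The following is a minimal sketch, assuming a hypothetical `reviews` table with a `review_text` column:

```sql
-- Produce a short model-generated summary for each review.
SELECT ai.generate(
    prompt => 'Summarize the review in 20 words or less. Review: ' || review_text)
FROM reviews;
```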
Transform functions
Use this reference to understand parameters for input and output transform functions.
Input transform function
The following shows the signature of the input transform function for text embedding model endpoints:
CREATE OR REPLACE FUNCTION INPUT_TRANSFORM_FUNCTION(model_id VARCHAR(100), input_text TEXT) RETURNS JSON;
Parameter | Description |
---|---|
`INPUT_TRANSFORM_FUNCTION` | The function to transform input of the corresponding prediction function to the model endpoint-specific input. |
Output transform function
The following shows the signature of the output transform function for text embedding model endpoints:
CREATE OR REPLACE FUNCTION OUTPUT_TRANSFORM_FUNCTION(model_id VARCHAR(100), response_json JSON) RETURNS real[];
Parameter | Description |
---|---|
`OUTPUT_TRANSFORM_FUNCTION` | The function to transform the model endpoint-specific output to the prediction function output. |
Transform functions example
To better understand how to create transform functions for your model endpoint, consider a custom-hosted text embedding model endpoint that requires JSON input and output.
The following example curl request creates embeddings based on the prompt and the model endpoint:
curl -m 100 -X POST https://cymbal.com/models/text/embeddings/v1 \
  -H "Content-Type: application/json" \
  -d '{"prompt": ["AlloyDB Embeddings"]}'
The following example response is returned:
[[ 0.3522231 -0.35932037 0.10156056 0.17734447 -0.11606089 -0.17266059
0.02509351 0.20305622 -0.09787305 -0.12154685 -0.17313677 -0.08075467
0.06821183 -0.06896557 0.1171584 -0.00931572 0.11875633 -0.00077482
0.25604948 0.0519384 0.2034983 -0.09952664 0.10347155 -0.11935943
-0.17872004 -0.08706985 -0.07056875 -0.05929353 0.4177883 -0.14381726
0.07934926 0.31368294 0.12543282 0.10758053 -0.30210832 -0.02951015
0.3908268 -0.03091059 0.05302926 -0.00114946 -0.16233777 0.1117468
-0.1315904 0.13947351 -0.29569918 -0.12330773 -0.04354299 -0.18068913
0.14445548 0.19481727]]
Based on this input and response, we can infer the following:

- The model expects JSON input through the `prompt` field. This field accepts an array of inputs. Because the `google_ml.embedding()` function is a row-level function, it expects one text input at a time. Thus, you need to create an input transform function that builds an array with a single element.
- The response from the model is an array of embeddings, one for each prompt input to the model. Because the `google_ml.embedding()` function is a row-level function, it returns a single output at a time. Thus, you need to create an output transform function that extracts the embedding from the array.

The following example shows the input and output transform functions that are used for this model endpoint when it is registered with model endpoint management:
input transform function
CREATE OR REPLACE FUNCTION cymbal_text_input_transform(model_id VARCHAR(100), input_text TEXT)
RETURNS JSON
LANGUAGE plpgsql
AS $$
DECLARE
transformed_input JSON;
model_qualified_name TEXT;
BEGIN
SELECT json_build_object('prompt', json_build_array(input_text))::JSON INTO transformed_input;
RETURN transformed_input;
END;
$$;
output transform function
CREATE OR REPLACE FUNCTION cymbal_text_output_transform(model_id VARCHAR(100), response_json JSON)
RETURNS REAL[]
LANGUAGE plpgsql
AS $$
DECLARE
transformed_output REAL[];
BEGIN
SELECT ARRAY(SELECT json_array_elements_text(response_json->0)) INTO transformed_output;
RETURN transformed_output;
END;
$$;
HTTP header generation function
The following shows the signature for the header generation function that can be
used with the google_ml.embedding()
prediction function when registering other
text embedding model endpoints.
CREATE OR REPLACE FUNCTION GENERATE_HEADERS(model_id VARCHAR(100), input_text TEXT) RETURNS JSON;
For the google_ml.predict_row()
prediction function, the signature is as
follows:
CREATE OR REPLACE FUNCTION GENERATE_HEADERS(model_id TEXT, input JSON) RETURNS JSON;
Parameter | Description |
---|---|
`GENERATE_HEADERS` | The function to generate custom headers. You can also pass the authorization header generated by the header generation function while registering the model endpoint. |
Header generation function example
To better understand how to create a function that generates output in JSON key value pairs that are used as HTTP headers, consider a custom-hosted text embedding model endpoint.
The following example curl request passes the version
HTTP header which is
used by the model endpoint:
curl -m 100 -X POST https://cymbal.com/models/text/embeddings/v1 \
-H "Content-Type: application/json" \
-H "version: 2024-01-01" \
-d '{"prompt": ["AlloyDB Embeddings"]}'
The model endpoint expects the `version` value in an HTTP header, and the header generation function returns it as a JSON key-value pair. The following example shows the header generation function that is used for this text embedding model endpoint when it is registered with model endpoint management:
CREATE OR REPLACE FUNCTION header_gen_fn(model_id VARCHAR(100), input_text TEXT)
RETURNS JSON
LANGUAGE plpgsql
AS $$
BEGIN
RETURN json_build_object('version', '2024-01-01')::JSON;
END;
$$;
Header generation function using API Key
The following examples show how to set up authentication using the API key.
embedding model
CREATE OR REPLACE FUNCTION header_gen_func(
model_id VARCHAR(100),
input_text TEXT
)
RETURNS JSON
LANGUAGE plpgsql
AS $$
#variable_conflict use_variable
BEGIN
RETURN json_build_object('Authorization', 'API_KEY')::JSON;
END;
$$;
Replace `API_KEY` with the API key of the model provider.
generic model
CREATE OR REPLACE FUNCTION header_gen_func(
model_id VARCHAR(100),
response_json JSON
)
RETURNS JSON
LANGUAGE plpgsql
AS $$
#variable_conflict use_variable
BEGIN
-- code to add Auth token to API request
RETURN json_build_object('x-api-key', 'API_KEY', 'anthropic-version', '2023-06-01')::JSON;
END;
$$;
Replace `API_KEY` with the API key of the model provider.
Request URL generation
Use the request URL generation function to infer the request URLs for the model endpoints with built-in support. The following shows the signature for this function:
CREATE OR REPLACE FUNCTION GENERATE_REQUEST_URL(provider google_ml.model_provider, model_type google_ml.MODEL_TYPE, model_qualified_name VARCHAR(100), model_region VARCHAR(100) DEFAULT NULL)
Parameter | Description |
---|---|
`GENERATE_REQUEST_URL` | The function that generates the request URL for model endpoints with built-in support. |
Supported models
You can use model endpoint management to register any text embedding or generic model endpoint. Model endpoint management also includes pre-registered Vertex AI models and models with built-in support. For more information about different model types, see Model type.
Pre-registered Vertex AI models
Model type | Model ID | Extension version |
---|---|---|
`generic` | | version 1.4.2 and later |
`text_embedding` | | version 1.3 and later |
Models with built-in support
Vertex AI
Qualified model name | Model type |
---|---|
`text-embedding-gecko@001` | `text-embedding` |
`text-embedding-gecko@003` | `text-embedding` |
`text-embedding-004` | `text-embedding` |
`text-embedding-005` | `text-embedding` |
`text-embedding-preview-0815` | `text-embedding` |
`text-multilingual-embedding-002` | `text-embedding` |
`text-embedding-large-exp-03-07` * | `text-embedding` |
`multimodalembedding@001` | `multimodal_embedding` |
* The `text-embedding-large-exp-03-07` model is only available in the `us-central1` region.
OpenAI

Qualified model name | Model type |
---|---|
`text-embedding-ada-002` | `text-embedding` |
`text-embedding-3-small` | `text-embedding` |
`text-embedding-3-large` | `text-embedding` |
Anthropic
Qualified model name | Model type |
---|---|
`claude-3-opus-20240229` | `generic` |
`claude-3-sonnet-20240229` | `generic` |
`claude-3-haiku-20240307` | `generic` |