API for using Large Models that generate multimodal content and have additional capabilities beyond text generation. (v1)
Package
@google-ai/generativelanguage
Constructors
(constructor)(opts, gaxInstance)
constructor(opts?: ClientOptions, gaxInstance?: typeof gax | typeof gax.fallback);
Construct an instance of GenerativeServiceClient.
Parameters

| Name | Description |
|---|---|
| opts | ClientOptions |
| gaxInstance | typeof gax \| typeof gax.fallback: loaded instance of `google-gax` |
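A minimal construction sketch, assuming an API key in the API_KEY environment variable (the authClient pattern follows the library README and is not shown on this page; Application Default Credentials work if you omit it):

const {GenerativeServiceClient} = require('@google-ai/generativelanguage').v1;
const {GoogleAuth} = require('google-auth-library');

// Assumption: a Generative Language API key is available in process.env.API_KEY.
const client = new GenerativeServiceClient({
  authClient: new GoogleAuth().fromAPIKey(process.env.API_KEY),
});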
Properties
apiEndpoint
get apiEndpoint(): string;
The DNS address for this API service.
apiEndpoint
static get apiEndpoint(): string;
The DNS address for this API service - same as servicePath.
auth
auth: gax.GoogleAuth;
descriptors
descriptors: Descriptors;
generativeServiceStub
generativeServiceStub?: Promise<{
[name: string]: Function;
}>;
innerApiCalls
innerApiCalls: {
[name: string]: Function;
};
pathTemplates
pathTemplates: {
[name: string]: gax.PathTemplate;
};
port
static get port(): number;
The port for this API service.
scopes
static get scopes(): never[];
The scopes needed to make gRPC calls for every method defined in this service.
servicePath
static get servicePath(): string;
The DNS address for this API service.
universeDomain
get universeDomain(): string;
warn
warn: (code: string, message: string, warnType?: string) => void;
Methods
batchEmbedContents(request, options)
batchEmbedContents(request?: protos.google.ai.generativelanguage.v1.IBatchEmbedContentsRequest, options?: CallOptions): Promise<[
protos.google.ai.generativelanguage.v1.IBatchEmbedContentsResponse,
(protos.google.ai.generativelanguage.v1.IBatchEmbedContentsRequest | undefined),
{} | undefined
]>;
Generates multiple embeddings from the model given input text in a synchronous call.
Parameters

| Name | Description |
|---|---|
| request | IBatchEmbedContentsRequest. The request object that will be sent. |
| options | CallOptions. Call options. See CallOptions for more details. |

Returns

| Type | Description |
|---|---|
| Promise<[protos.google.ai.generativelanguage.v1.IBatchEmbedContentsResponse, (protos.google.ai.generativelanguage.v1.IBatchEmbedContentsRequest \| undefined), {} \| undefined]> | {Promise} - The promise which resolves to an array. The first element of the array is an object representing BatchEmbedContentsResponse. Please see the documentation for more details and examples. |
/**
* This snippet has been automatically generated and should be regarded as a code template only.
* It will require modifications to work.
* It may require correct/in-range values for request initialization.
* TODO(developer): Uncomment these variables before running the sample.
*/
/**
* Required. The model's resource name. This serves as an ID for the Model to
* use.
* This name should match a model name returned by the `ListModels` method.
* Format: `models/{model}`
*/
// const model = 'abc123'
/**
* Required. Embed requests for the batch. The model in each of these requests
* must match the model specified `BatchEmbedContentsRequest.model`.
*/
// const requests = [1,2,3,4]
// Imports the Generativelanguage library
const {GenerativeServiceClient} = require('@google-ai/generativelanguage').v1;
// Instantiates a client
const generativelanguageClient = new GenerativeServiceClient();
async function callBatchEmbedContents() {
  // Construct request
  const request = {
    model,
    requests,
  };

  // Run request
  const response = await generativelanguageClient.batchEmbedContents(request);
  console.log(response);
}

callBatchEmbedContents();
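Note that the promise resolves to a tuple, so the sample's console.log prints the whole array; destructuring the first element gives the BatchEmbedContentsResponse directly. A hedged sketch of reading the result (the embeddings/values field names follow the v1 protos rather than this page; run inside an async function with the sample's request):

const [response] = await generativelanguageClient.batchEmbedContents(request);
// Each request in the batch yields one ContentEmbedding with a numeric `values` array.
for (const embedding of response.embeddings ?? []) {
  console.log(embedding.values.length);
}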
batchEmbedContents(request, options, callback)
batchEmbedContents(request: protos.google.ai.generativelanguage.v1.IBatchEmbedContentsRequest, options: CallOptions, callback: Callback<protos.google.ai.generativelanguage.v1.IBatchEmbedContentsResponse, protos.google.ai.generativelanguage.v1.IBatchEmbedContentsRequest | null | undefined, {} | null | undefined>): void;
Parameters

| Name | Description |
|---|---|
| request | IBatchEmbedContentsRequest |
| options | CallOptions |
| callback | Callback<protos.google.ai.generativelanguage.v1.IBatchEmbedContentsResponse, protos.google.ai.generativelanguage.v1.IBatchEmbedContentsRequest \| null \| undefined, {} \| null \| undefined> |

Returns

| Type | Description |
|---|---|
| void | |
batchEmbedContents(request, callback)
batchEmbedContents(request: protos.google.ai.generativelanguage.v1.IBatchEmbedContentsRequest, callback: Callback<protos.google.ai.generativelanguage.v1.IBatchEmbedContentsResponse, protos.google.ai.generativelanguage.v1.IBatchEmbedContentsRequest | null | undefined, {} | null | undefined>): void;
Parameters

| Name | Description |
|---|---|
| request | IBatchEmbedContentsRequest |
| callback | Callback<protos.google.ai.generativelanguage.v1.IBatchEmbedContentsResponse, protos.google.ai.generativelanguage.v1.IBatchEmbedContentsRequest \| null \| undefined, {} \| null \| undefined> |

Returns

| Type | Description |
|---|---|
| void | |
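The callback overloads use the usual Node-style error-first shape; a minimal sketch reusing the request object from the sample above:

generativelanguageClient.batchEmbedContents(request, (err, response) => {
  if (err) {
    console.error(err);
    return;
  }
  // Here `response` is the BatchEmbedContentsResponse itself, not the promise tuple.
  console.log(response);
});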
close()
close(): Promise<void>;
Terminate the gRPC channel and close the client.
The client will no longer be usable and all future behavior is undefined.
Returns

| Type | Description |
|---|---|
| Promise<void> | {Promise} A promise that resolves when the client is closed. |
countTokens(request, options)
countTokens(request?: protos.google.ai.generativelanguage.v1.ICountTokensRequest, options?: CallOptions): Promise<[
protos.google.ai.generativelanguage.v1.ICountTokensResponse,
protos.google.ai.generativelanguage.v1.ICountTokensRequest | undefined,
{} | undefined
]>;
Runs a model's tokenizer on input content and returns the token count.
Parameters

| Name | Description |
|---|---|
| request | ICountTokensRequest. The request object that will be sent. |
| options | CallOptions. Call options. See CallOptions for more details. |

Returns

| Type | Description |
|---|---|
| Promise<[protos.google.ai.generativelanguage.v1.ICountTokensResponse, protos.google.ai.generativelanguage.v1.ICountTokensRequest \| undefined, {} \| undefined]> | {Promise} - The promise which resolves to an array. The first element of the array is an object representing CountTokensResponse. Please see the documentation for more details and examples. |
/**
* This snippet has been automatically generated and should be regarded as a code template only.
* It will require modifications to work.
* It may require correct/in-range values for request initialization.
* TODO(developer): Uncomment these variables before running the sample.
*/
/**
* Required. The model's resource name. This serves as an ID for the Model to
* use.
* This name should match a model name returned by the `ListModels` method.
* Format: `models/{model}`
*/
// const model = 'abc123'
/**
* Optional. The input given to the model as a prompt. This field is ignored
* when `generate_content_request` is set.
*/
// const contents = [1,2,3,4]
/**
* Optional. The overall input given to the model. CountTokens will count
* prompt, function calling, etc.
*/
// const generateContentRequest = {}
// Imports the Generativelanguage library
const {GenerativeServiceClient} = require('@google-ai/generativelanguage').v1;
// Instantiates a client
const generativelanguageClient = new GenerativeServiceClient();
async function callCountTokens() {
  // Construct request
  const request = {
    model,
  };

  // Run request
  const response = await generativelanguageClient.countTokens(request);
  console.log(response);
}

callCountTokens();
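The generated sample sends only the required model field; in practice you usually pass contents as well and read totalTokens from the response. A standalone, hedged sketch (placeholder model name; field names per the v1 protos; run inside an async function):

const countRequest = {
  model: 'models/gemini-pro', // placeholder model name
  contents: [{role: 'user', parts: [{text: 'How many tokens is this?'}]}],
};
const [countResponse] = await generativelanguageClient.countTokens(countRequest);
console.log(`Total tokens: ${countResponse.totalTokens}`);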
countTokens(request, options, callback)
countTokens(request: protos.google.ai.generativelanguage.v1.ICountTokensRequest, options: CallOptions, callback: Callback<protos.google.ai.generativelanguage.v1.ICountTokensResponse, protos.google.ai.generativelanguage.v1.ICountTokensRequest | null | undefined, {} | null | undefined>): void;
Parameters

| Name | Description |
|---|---|
| request | ICountTokensRequest |
| options | CallOptions |
| callback | Callback<protos.google.ai.generativelanguage.v1.ICountTokensResponse, protos.google.ai.generativelanguage.v1.ICountTokensRequest \| null \| undefined, {} \| null \| undefined> |

Returns

| Type | Description |
|---|---|
| void | |
countTokens(request, callback)
countTokens(request: protos.google.ai.generativelanguage.v1.ICountTokensRequest, callback: Callback<protos.google.ai.generativelanguage.v1.ICountTokensResponse, protos.google.ai.generativelanguage.v1.ICountTokensRequest | null | undefined, {} | null | undefined>): void;
Parameters

| Name | Description |
|---|---|
| request | ICountTokensRequest |
| callback | Callback<protos.google.ai.generativelanguage.v1.ICountTokensResponse, protos.google.ai.generativelanguage.v1.ICountTokensRequest \| null \| undefined, {} \| null \| undefined> |

Returns

| Type | Description |
|---|---|
| void | |
embedContent(request, options)
embedContent(request?: protos.google.ai.generativelanguage.v1.IEmbedContentRequest, options?: CallOptions): Promise<[
protos.google.ai.generativelanguage.v1.IEmbedContentResponse,
protos.google.ai.generativelanguage.v1.IEmbedContentRequest | undefined,
{} | undefined
]>;
Generates an embedding from the model given an input Content.
Parameters

| Name | Description |
|---|---|
| request | IEmbedContentRequest. The request object that will be sent. |
| options | CallOptions. Call options. See CallOptions for more details. |

Returns

| Type | Description |
|---|---|
| Promise<[protos.google.ai.generativelanguage.v1.IEmbedContentResponse, protos.google.ai.generativelanguage.v1.IEmbedContentRequest \| undefined, {} \| undefined]> | {Promise} - The promise which resolves to an array. The first element of the array is an object representing EmbedContentResponse. Please see the documentation for more details and examples. |
/**
* This snippet has been automatically generated and should be regarded as a code template only.
* It will require modifications to work.
* It may require correct/in-range values for request initialization.
* TODO(developer): Uncomment these variables before running the sample.
*/
/**
* Required. The model's resource name. This serves as an ID for the Model to
* use.
* This name should match a model name returned by the `ListModels` method.
* Format: `models/{model}`
*/
// const model = 'abc123'
/**
* Required. The content to embed. Only the `parts.text` fields will be
* counted.
*/
// const content = {}
/**
* Optional. Optional task type for which the embeddings will be used. Can
* only be set for `models/embedding-001`.
*/
// const taskType = {}
/**
* Optional. An optional title for the text. Only applicable when TaskType is
* `RETRIEVAL_DOCUMENT`.
* Note: Specifying a `title` for `RETRIEVAL_DOCUMENT` provides better quality
* embeddings for retrieval.
*/
// const title = 'abc123'
/**
* Optional. Optional reduced dimension for the output embedding. If set,
* excessive values in the output embedding are truncated from the end.
* Supported by newer models since 2024, and the earlier model
* (`models/embedding-001`) cannot specify this value.
*/
// const outputDimensionality = 1234
// Imports the Generativelanguage library
const {GenerativeServiceClient} = require('@google-ai/generativelanguage').v1;
// Instantiates a client
const generativelanguageClient = new GenerativeServiceClient();
async function callEmbedContent() {
  // Construct request
  const request = {
    model,
    content,
  };

  // Run request
  const response = await generativelanguageClient.embedContent(request);
  console.log(response);
}

callEmbedContent();
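The response carries a single ContentEmbedding; a hedged sketch of reading its vector (the embedding.values path follows the v1 protos; run inside an async function with the sample's request):

const [response] = await generativelanguageClient.embedContent(request);
// `values` is the embedding vector as a plain array of numbers.
console.log(response.embedding.values.length);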
embedContent(request, options, callback)
embedContent(request: protos.google.ai.generativelanguage.v1.IEmbedContentRequest, options: CallOptions, callback: Callback<protos.google.ai.generativelanguage.v1.IEmbedContentResponse, protos.google.ai.generativelanguage.v1.IEmbedContentRequest | null | undefined, {} | null | undefined>): void;
Parameters

| Name | Description |
|---|---|
| request | IEmbedContentRequest |
| options | CallOptions |
| callback | Callback<protos.google.ai.generativelanguage.v1.IEmbedContentResponse, protos.google.ai.generativelanguage.v1.IEmbedContentRequest \| null \| undefined, {} \| null \| undefined> |

Returns

| Type | Description |
|---|---|
| void | |
embedContent(request, callback)
embedContent(request: protos.google.ai.generativelanguage.v1.IEmbedContentRequest, callback: Callback<protos.google.ai.generativelanguage.v1.IEmbedContentResponse, protos.google.ai.generativelanguage.v1.IEmbedContentRequest | null | undefined, {} | null | undefined>): void;
Parameters

| Name | Description |
|---|---|
| request | IEmbedContentRequest |
| callback | Callback<protos.google.ai.generativelanguage.v1.IEmbedContentResponse, protos.google.ai.generativelanguage.v1.IEmbedContentRequest \| null \| undefined, {} \| null \| undefined> |

Returns

| Type | Description |
|---|---|
| void | |
generateContent(request, options)
generateContent(request?: protos.google.ai.generativelanguage.v1.IGenerateContentRequest, options?: CallOptions): Promise<[
protos.google.ai.generativelanguage.v1.IGenerateContentResponse,
(protos.google.ai.generativelanguage.v1.IGenerateContentRequest | undefined),
{} | undefined
]>;
Generates a response from the model given an input GenerateContentRequest.
Input capabilities differ between models, including tuned models. See the [model guide](https://ai.google.dev/models/gemini) and [tuning guide](https://ai.google.dev/docs/model_tuning_guidance) for details.
Parameters

| Name | Description |
|---|---|
| request | IGenerateContentRequest. The request object that will be sent. |
| options | CallOptions. Call options. See CallOptions for more details. |

Returns

| Type | Description |
|---|---|
| Promise<[protos.google.ai.generativelanguage.v1.IGenerateContentResponse, (protos.google.ai.generativelanguage.v1.IGenerateContentRequest \| undefined), {} \| undefined]> | {Promise} - The promise which resolves to an array. The first element of the array is an object representing GenerateContentResponse. Please see the documentation for more details and examples. |
/**
* This snippet has been automatically generated and should be regarded as a code template only.
* It will require modifications to work.
* It may require correct/in-range values for request initialization.
* TODO(developer): Uncomment these variables before running the sample.
*/
/**
* Required. The name of the `Model` to use for generating the completion.
* Format: `name=models/{model}`.
*/
// const model = 'abc123'
/**
* Required. The content of the current conversation with the model.
* For single-turn queries, this is a single instance. For multi-turn queries,
* this is a repeated field that contains conversation history + latest
* request.
*/
// const contents = [1,2,3,4]
/**
* Optional. A list of unique `SafetySetting` instances for blocking unsafe
* content.
* This will be enforced on the `GenerateContentRequest.contents` and
* `GenerateContentResponse.candidates`. There should not be more than one
* setting for each `SafetyCategory` type. The API will block any contents and
* responses that fail to meet the thresholds set by these settings. This list
* overrides the default settings for each `SafetyCategory` specified in the
* safety_settings. If there is no `SafetySetting` for a given
* `SafetyCategory` provided in the list, the API will use the default safety
* setting for that category. Harm categories HARM_CATEGORY_HATE_SPEECH,
* HARM_CATEGORY_SEXUALLY_EXPLICIT, HARM_CATEGORY_DANGEROUS_CONTENT,
* HARM_CATEGORY_HARASSMENT are supported.
*/
// const safetySettings = [1,2,3,4]
/**
* Optional. Configuration options for model generation and outputs.
*/
// const generationConfig = {}
// Imports the Generativelanguage library
const {GenerativeServiceClient} = require('@google-ai/generativelanguage').v1;
// Instantiates a client
const generativelanguageClient = new GenerativeServiceClient();
async function callGenerateContent() {
  // Construct request
  const request = {
    model,
    contents,
  };

  // Run request
  const response = await generativelanguageClient.generateContent(request);
  console.log(response);
}

callGenerateContent();
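To get at the generated text rather than the raw tuple, destructure the response and walk candidates -> content -> parts. A hedged sketch (field names per the v1 protos; run inside an async function with the sample's request):

const [response] = await generativelanguageClient.generateContent(request);
// Concatenate the text parts of the first candidate's content.
const text = (response.candidates?.[0]?.content?.parts ?? [])
  .map((part) => part.text ?? '')
  .join('');
console.log(text);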
generateContent(request, options, callback)
generateContent(request: protos.google.ai.generativelanguage.v1.IGenerateContentRequest, options: CallOptions, callback: Callback<protos.google.ai.generativelanguage.v1.IGenerateContentResponse, protos.google.ai.generativelanguage.v1.IGenerateContentRequest | null | undefined, {} | null | undefined>): void;
Parameters

| Name | Description |
|---|---|
| request | IGenerateContentRequest |
| options | CallOptions |
| callback | Callback<protos.google.ai.generativelanguage.v1.IGenerateContentResponse, protos.google.ai.generativelanguage.v1.IGenerateContentRequest \| null \| undefined, {} \| null \| undefined> |

Returns

| Type | Description |
|---|---|
| void | |
generateContent(request, callback)
generateContent(request: protos.google.ai.generativelanguage.v1.IGenerateContentRequest, callback: Callback<protos.google.ai.generativelanguage.v1.IGenerateContentResponse, protos.google.ai.generativelanguage.v1.IGenerateContentRequest | null | undefined, {} | null | undefined>): void;
Parameters

| Name | Description |
|---|---|
| request | IGenerateContentRequest |
| callback | Callback<protos.google.ai.generativelanguage.v1.IGenerateContentResponse, protos.google.ai.generativelanguage.v1.IGenerateContentRequest \| null \| undefined, {} \| null \| undefined> |

Returns

| Type | Description |
|---|---|
| void | |
getProjectId()
getProjectId(): Promise<string>;
Returns

| Type | Description |
|---|---|
| Promise<string> | |
getProjectId(callback)
getProjectId(callback: Callback<string, undefined, undefined>): void;
Parameter

| Name | Description |
|---|---|
| callback | Callback<string, undefined, undefined> |

Returns

| Type | Description |
|---|---|
| void | |
initialize()
initialize(): Promise<{
[name: string]: Function;
}>;
Initialize the client. Performs asynchronous operations (such as authentication) and prepares the client. This function will be called automatically when any class method is called for the first time, but if you need to initialize it before calling an actual method, feel free to call initialize() directly.
You can await on this method if you want to make sure the client is initialized.
Returns

| Type | Description |
|---|---|
| Promise<{ [name: string]: Function; }> | {Promise} A promise that resolves to an authenticated service stub. |
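For example, to authenticate and build the service stub ahead of the first RPC (inside an async context):

// Optional warm-up; every RPC method calls this implicitly on first use.
await generativelanguageClient.initialize();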
matchModelFromModelName(modelName)
matchModelFromModelName(modelName: string): string | number;
Parse the model from Model resource.
Parameter

| Name | Description |
|---|---|
| modelName | string. A fully-qualified path representing Model resource. |

Returns

| Type | Description |
|---|---|
| string \| number | {string} A string representing the model. |
modelPath(model)
modelPath(model: string): string;
Return a fully-qualified model resource name string.
Parameter

| Name | Description |
|---|---|
| model | string |

Returns

| Type | Description |
|---|---|
| string | {string} Resource name string. |
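These helpers round-trip the models/{model} format used throughout this service; a short sketch with a placeholder model ID:

// Build a fully-qualified resource name from a bare model ID...
const name = generativelanguageClient.modelPath('gemini-pro'); // 'models/gemini-pro'
// ...and parse the ID back out of a resource name.
const modelId = generativelanguageClient.matchModelFromModelName(name); // 'gemini-pro'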
streamGenerateContent(request, options)
streamGenerateContent(request?: protos.google.ai.generativelanguage.v1.IGenerateContentRequest, options?: CallOptions): gax.CancellableStream;
Generates a streamed response from the model given an input GenerateContentRequest.
Parameters

| Name | Description |
|---|---|
| request | IGenerateContentRequest. The request object that will be sent. |
| options | CallOptions. Call options. See CallOptions for more details. |

Returns

| Type | Description |
|---|---|
| gax.CancellableStream | {Stream} An object stream which emits GenerateContentResponse objects on the 'data' event. Please see the documentation for more details and examples. |
/**
* This snippet has been automatically generated and should be regarded as a code template only.
* It will require modifications to work.
* It may require correct/in-range values for request initialization.
* TODO(developer): Uncomment these variables before running the sample.
*/
/**
* Required. The name of the `Model` to use for generating the completion.
* Format: `name=models/{model}`.
*/
// const model = 'abc123'
/**
* Required. The content of the current conversation with the model.
* For single-turn queries, this is a single instance. For multi-turn queries,
* this is a repeated field that contains conversation history + latest
* request.
*/
// const contents = [1,2,3,4]
/**
* Optional. A list of unique `SafetySetting` instances for blocking unsafe
* content.
* This will be enforced on the `GenerateContentRequest.contents` and
* `GenerateContentResponse.candidates`. There should not be more than one
* setting for each `SafetyCategory` type. The API will block any contents and
* responses that fail to meet the thresholds set by these settings. This list
* overrides the default settings for each `SafetyCategory` specified in the
* safety_settings. If there is no `SafetySetting` for a given
* `SafetyCategory` provided in the list, the API will use the default safety
* setting for that category. Harm categories HARM_CATEGORY_HATE_SPEECH,
* HARM_CATEGORY_SEXUALLY_EXPLICIT, HARM_CATEGORY_DANGEROUS_CONTENT,
* HARM_CATEGORY_HARASSMENT are supported.
*/
// const safetySettings = [1,2,3,4]
/**
* Optional. Configuration options for model generation and outputs.
*/
// const generationConfig = {}
// Imports the Generativelanguage library
const {GenerativeServiceClient} = require('@google-ai/generativelanguage').v1;
// Instantiates a client
const generativelanguageClient = new GenerativeServiceClient();
async function callStreamGenerateContent() {
  // Construct request
  const request = {
    model,
    contents,
  };

  // Run request
  const stream = await generativelanguageClient.streamGenerateContent(request);
  stream.on('data', (response) => { console.log(response) });
  stream.on('error', (err) => { throw(err) });
  stream.on('end', () => { /* API call completed */ });
}

callStreamGenerateContent();
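Each 'data' event carries a partial GenerateContentResponse; a hedged sketch of stitching the streamed text together (field names per the v1 protos; run with the sample's request):

let fullText = '';
const stream = generativelanguageClient.streamGenerateContent(request);
stream.on('data', (response) => {
  // Append the text parts of this chunk's first candidate.
  for (const part of response.candidates?.[0]?.content?.parts ?? []) {
    if (part.text) fullText += part.text;
  }
});
stream.on('error', (err) => { console.error(err); });
stream.on('end', () => { console.log(fullText); });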