An API for using Generative Language Models (GLMs) in dialog applications. GLMs, also known as large language models (LLMs), are models trained for multi-turn dialog. API version: v1beta2.
Package
@google-ai/generativelanguage
Constructors
(constructor)(opts, gaxInstance)
constructor(opts?: ClientOptions, gaxInstance?: typeof gax | typeof gax.fallback);
Construct an instance of DiscussServiceClient.
Parameters
| Name | Description |
| --- | --- |
| opts | ClientOptions |
| gaxInstance | typeof gax \| typeof fallback: loaded instance of `google-gax`. |
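As a usage sketch, the constructor accepts the standard gax ClientOptions. The example below assumes authentication with an API key via `google-auth-library`; the `API_KEY` environment variable and the `fromAPIKey` approach are illustrative assumptions, not requirements of this constructor.

```javascript
// Sketch: construct DiscussServiceClient with explicit client options.
// Assumes an API key in the API_KEY environment variable and that the
// installed google-gax/google-auth-library versions accept `authClient`.
const {DiscussServiceClient} = require('@google-ai/generativelanguage').v1beta2;
const {GoogleAuth} = require('google-auth-library');

const client = new DiscussServiceClient({
  // Authenticate with an API key instead of Application Default Credentials.
  authClient: new GoogleAuth().fromAPIKey(process.env.API_KEY),
});
```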
Properties
apiEndpoint
get apiEndpoint(): string;
The DNS address for this API service.
apiEndpoint
static get apiEndpoint(): string;
The DNS address for this API service - same as servicePath.
auth
auth: gax.GoogleAuth;
descriptors
descriptors: Descriptors;
discussServiceStub
discussServiceStub?: Promise<{
[name: string]: Function;
}>;
innerApiCalls
innerApiCalls: {
[name: string]: Function;
};
pathTemplates
pathTemplates: {
[name: string]: gax.PathTemplate;
};
port
static get port(): number;
The port for this API service.
scopes
static get scopes(): never[];
The scopes needed to make gRPC calls for every method defined in this service.
servicePath
static get servicePath(): string;
The DNS address for this API service.
universeDomain
get universeDomain(): string;
warn
warn: (code: string, message: string, warnType?: string) => void;
Methods
close()
close(): Promise<void>;
Terminate the gRPC channel and close the client.
The client will no longer be usable and all future behavior is undefined.
Returns
| Type | Description |
| --- | --- |
| Promise<void> | A promise that resolves when the client is closed. |
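A brief sketch of shutting the client down once all requests have completed:

```javascript
// Sketch: close the client after the last request; any call made on the
// client afterwards has undefined behavior.
async function shutdown(client) {
  await client.close();
}
```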
countMessageTokens(request, options)
countMessageTokens(request?: protos.google.ai.generativelanguage.v1beta2.ICountMessageTokensRequest, options?: CallOptions): Promise<[
protos.google.ai.generativelanguage.v1beta2.ICountMessageTokensResponse,
(protos.google.ai.generativelanguage.v1beta2.ICountMessageTokensRequest | undefined),
{} | undefined
]>;
Runs a model's tokenizer on a string and returns the token count.
Parameters
| Name | Description |
| --- | --- |
| request | ICountMessageTokensRequest. The request object that will be sent. |
| options | CallOptions. Call options. See CallOptions for more details. |
Returns
| Type | Description |
| --- | --- |
| Promise<[protos.google.ai.generativelanguage.v1beta2.ICountMessageTokensResponse, (protos.google.ai.generativelanguage.v1beta2.ICountMessageTokensRequest \| undefined), {} \| undefined]> | A promise that resolves to an array. The first element of the array is an object representing CountMessageTokensResponse. Please see the documentation for more details and examples. |
/**
* This snippet has been automatically generated and should be regarded as a code template only.
* It will require modifications to work.
* It may require correct/in-range values for request initialization.
* TODO(developer): Uncomment these variables before running the sample.
*/
/**
* Required. The model's resource name. This serves as an ID for the Model to
* use.
* This name should match a model name returned by the `ListModels` method.
* Format: `models/{model}`
*/
// const model = 'abc123'
/**
* Required. The prompt, whose token count is to be returned.
*/
// const prompt = {}
// Imports the Generativelanguage library
const {DiscussServiceClient} = require('@google-ai/generativelanguage').v1beta2;
// Instantiates a client
const generativelanguageClient = new DiscussServiceClient();
async function callCountMessageTokens() {
// Construct request
const request = {
model,
prompt,
};
// Run request
const response = await generativelanguageClient.countMessageTokens(request);
console.log(response);
}
callCountMessageTokens();
countMessageTokens(request, options, callback)
countMessageTokens(request: protos.google.ai.generativelanguage.v1beta2.ICountMessageTokensRequest, options: CallOptions, callback: Callback<protos.google.ai.generativelanguage.v1beta2.ICountMessageTokensResponse, protos.google.ai.generativelanguage.v1beta2.ICountMessageTokensRequest | null | undefined, {} | null | undefined>): void;
Parameters
| Name | Description |
| --- | --- |
| request | ICountMessageTokensRequest |
| options | CallOptions |
| callback | Callback<protos.google.ai.generativelanguage.v1beta2.ICountMessageTokensResponse, protos.google.ai.generativelanguage.v1beta2.ICountMessageTokensRequest \| null \| undefined, {} \| null \| undefined> |
Returns
| Type | Description |
| --- | --- |
| void | |
countMessageTokens(request, callback)
countMessageTokens(request: protos.google.ai.generativelanguage.v1beta2.ICountMessageTokensRequest, callback: Callback<protos.google.ai.generativelanguage.v1beta2.ICountMessageTokensResponse, protos.google.ai.generativelanguage.v1beta2.ICountMessageTokensRequest | null | undefined, {} | null | undefined>): void;
Parameters
| Name | Description |
| --- | --- |
| request | ICountMessageTokensRequest |
| callback | Callback<protos.google.ai.generativelanguage.v1beta2.ICountMessageTokensResponse, protos.google.ai.generativelanguage.v1beta2.ICountMessageTokensRequest \| null \| undefined, {} \| null \| undefined> |
Returns
| Type | Description |
| --- | --- |
| void | |
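Both callback overloads follow the Node.js error-first convention. A minimal sketch, where the model name and prompt are placeholder values:

```javascript
// Sketch of the callback-style overload of countMessageTokens. The model
// name and prompt content below are placeholders.
const {DiscussServiceClient} = require('@google-ai/generativelanguage').v1beta2;

const client = new DiscussServiceClient();

const request = {
  model: 'models/chat-bison-001', // placeholder model name
  prompt: {messages: [{content: 'How many tokens is this?'}]},
};

client.countMessageTokens(request, (err, response) => {
  if (err) {
    console.error(err);
    return;
  }
  // response is an ICountMessageTokensResponse, e.g. {tokenCount: ...}.
  console.log(response.tokenCount);
});
```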
generateMessage(request, options)
generateMessage(request?: protos.google.ai.generativelanguage.v1beta2.IGenerateMessageRequest, options?: CallOptions): Promise<[
protos.google.ai.generativelanguage.v1beta2.IGenerateMessageResponse,
(protos.google.ai.generativelanguage.v1beta2.IGenerateMessageRequest | undefined),
{} | undefined
]>;
Generates a response from the model given an input MessagePrompt.
Parameters
| Name | Description |
| --- | --- |
| request | IGenerateMessageRequest. The request object that will be sent. |
| options | CallOptions. Call options. See CallOptions for more details. |
Returns
| Type | Description |
| --- | --- |
| Promise<[protos.google.ai.generativelanguage.v1beta2.IGenerateMessageResponse, (protos.google.ai.generativelanguage.v1beta2.IGenerateMessageRequest \| undefined), {} \| undefined]> | A promise that resolves to an array. The first element of the array is an object representing GenerateMessageResponse. Please see the documentation for more details and examples. |
/**
* This snippet has been automatically generated and should be regarded as a code template only.
* It will require modifications to work.
* It may require correct/in-range values for request initialization.
* TODO(developer): Uncomment these variables before running the sample.
*/
/**
* Required. The name of the model to use.
* Format: `name=models/{model}`.
*/
// const model = 'abc123'
/**
* Required. The structured textual input given to the model as a prompt.
* Given a
* prompt, the model will return what it predicts is the next message in the
* discussion.
*/
// const prompt = {}
/**
* Optional. Controls the randomness of the output.
* Values can range over `[0.0,1.0]`,
* inclusive. A value closer to `1.0` will produce responses that are more
* varied, while a value closer to `0.0` will typically result in
* less surprising responses from the model.
*/
// const temperature = 1234
/**
* Optional. The number of generated response messages to return.
* This value must be between
* `[1, 8]`, inclusive. If unset, this will default to `1`.
*/
// const candidateCount = 1234
/**
* Optional. The maximum cumulative probability of tokens to consider when
* sampling.
* The model uses combined Top-k and nucleus sampling.
* Nucleus sampling considers the smallest set of tokens whose probability
* sum is at least `top_p`.
*/
// const topP = 1234
/**
* Optional. The maximum number of tokens to consider when sampling.
* The model uses combined Top-k and nucleus sampling.
* Top-k sampling considers the set of `top_k` most probable tokens.
*/
// const topK = 1234
// Imports the Generativelanguage library
const {DiscussServiceClient} = require('@google-ai/generativelanguage').v1beta2;
// Instantiates a client
const generativelanguageClient = new DiscussServiceClient();
async function callGenerateMessage() {
// Construct request
const request = {
model,
prompt,
};
// Run request
const response = await generativelanguageClient.generateMessage(request);
console.log(response);
}
callGenerateMessage();
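The generated template above leaves the request fields as placeholders. As a fuller sketch, a MessagePrompt can also carry grounding context and earlier turns of the conversation; the model name, context, and message contents below are illustrative assumptions.

```javascript
// Sketch of a multi-turn generateMessage call. The model name and message
// contents are placeholders.
const {DiscussServiceClient} = require('@google-ai/generativelanguage').v1beta2;

const client = new DiscussServiceClient();

async function chat() {
  const [response] = await client.generateMessage({
    model: 'models/chat-bison-001', // placeholder model name
    temperature: 0.5,               // lower values give less varied replies
    candidateCount: 1,
    prompt: {
      // Optional grounding text prepended to the conversation.
      context: 'You are a concise assistant that answers in one sentence.',
      // Prior turns plus the newest user message, oldest first.
      messages: [
        {author: 'user', content: 'What is the capital of France?'},
        {author: 'model', content: 'Paris.'},
        {author: 'user', content: 'And of Italy?'},
      ],
    },
  });
  // response.candidates holds the generated reply messages.
  console.log(response.candidates?.[0]?.content);
}

chat();
```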
generateMessage(request, options, callback)
generateMessage(request: protos.google.ai.generativelanguage.v1beta2.IGenerateMessageRequest, options: CallOptions, callback: Callback<protos.google.ai.generativelanguage.v1beta2.IGenerateMessageResponse, protos.google.ai.generativelanguage.v1beta2.IGenerateMessageRequest | null | undefined, {} | null | undefined>): void;
Parameters
| Name | Description |
| --- | --- |
| request | IGenerateMessageRequest |
| options | CallOptions |
| callback | Callback<protos.google.ai.generativelanguage.v1beta2.IGenerateMessageResponse, protos.google.ai.generativelanguage.v1beta2.IGenerateMessageRequest \| null \| undefined, {} \| null \| undefined> |
Returns
| Type | Description |
| --- | --- |
| void | |
generateMessage(request, callback)
generateMessage(request: protos.google.ai.generativelanguage.v1beta2.IGenerateMessageRequest, callback: Callback<protos.google.ai.generativelanguage.v1beta2.IGenerateMessageResponse, protos.google.ai.generativelanguage.v1beta2.IGenerateMessageRequest | null | undefined, {} | null | undefined>): void;
Parameters
| Name | Description |
| --- | --- |
| request | IGenerateMessageRequest |
| callback | Callback<protos.google.ai.generativelanguage.v1beta2.IGenerateMessageResponse, protos.google.ai.generativelanguage.v1beta2.IGenerateMessageRequest \| null \| undefined, {} \| null \| undefined> |
Returns
| Type | Description |
| --- | --- |
| void | |
getProjectId()
getProjectId(): Promise<string>;
Returns
| Type | Description |
| --- | --- |
| Promise<string> | |
getProjectId(callback)
getProjectId(callback: Callback<string, undefined, undefined>): void;
Parameter
| Name | Description |
| --- | --- |
| callback | Callback<string, undefined, undefined> |
Returns
| Type | Description |
| --- | --- |
| void | |
initialize()
initialize(): Promise<{
[name: string]: Function;
}>;
Initialize the client. Performs asynchronous operations (such as authentication) and prepares the client. This function will be called automatically when any class method is called for the first time, but if you need to initialize it before calling an actual method, feel free to call initialize() directly.
You can await on this method if you want to make sure the client is initialized.
Returns
| Type | Description |
| --- | --- |
| Promise<{ [name: string]: Function; }> | A promise that resolves to an authenticated service stub. |
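A short sketch of initializing the client eagerly rather than on the first method call:

```javascript
// Sketch: explicitly initialize (and authenticate) the client up front
// instead of lazily on the first method call.
const {DiscussServiceClient} = require('@google-ai/generativelanguage').v1beta2;

const client = new DiscussServiceClient();

async function main() {
  await client.initialize(); // resolves to the authenticated service stub
  // The client is now ready; subsequent calls skip lazy initialization.
}

main();
```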
matchModelFromModelName(modelName)
matchModelFromModelName(modelName: string): string | number;
Parse the model from Model resource.
Parameter
| Name | Description |
| --- | --- |
| modelName | string. A fully-qualified path representing a Model resource. |
Returns
| Type | Description |
| --- | --- |
| string \| number | A string representing the model. |
modelPath(model)
modelPath(model: string): string;
Return a fully-qualified model resource name string.
Parameter
| Name | Description |
| --- | --- |
| model | string |
Returns
| Type | Description |
| --- | --- |
| string | Resource name string. |
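Together, modelPath and matchModelFromModelName convert between a bare model ID and its fully-qualified resource name, as in this sketch (the model ID is a placeholder):

```javascript
// Sketch: round-trip a model ID through the path template helpers.
// 'chat-bison-001' is a placeholder model ID.
const {DiscussServiceClient} = require('@google-ai/generativelanguage').v1beta2;

const client = new DiscussServiceClient();

const name = client.modelPath('chat-bison-001');
console.log(name); // 'models/chat-bison-001'

const model = client.matchModelFromModelName(name);
console.log(model); // 'chat-bison-001'
```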