Return a list of tokens based on the input text.
HTTP request
POST https://{service-endpoint}/v1beta1/{endpoint}:computeTokens
Where {service-endpoint} is one of the supported service endpoints.
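As an illustration, the request URL can be assembled from the path template above. This is a minimal sketch; the service endpoint region, project, and model path shown are placeholder assumptions, not values from this reference.

```python
# Sketch: assemble the computeTokens request URL from the v1beta1 path template.
def compute_tokens_url(service_endpoint: str, endpoint: str) -> str:
    """Build the POST URL for the computeTokens call."""
    return f"https://{service_endpoint}/v1beta1/{endpoint}:computeTokens"

# Hypothetical placeholder values (assumed region, project, and model path):
url = compute_tokens_url(
    "us-central1-aiplatform.googleapis.com",
    "projects/my-project/locations/us-central1/publishers/google/models/gemini-pro",
)
print(url)
```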
Path parameters
Parameters |
---|---
endpoint | Required. The name of the Endpoint requested to get lists of tokens and token IDs.
Request body
The request body contains data with the following structure:
JSON representation
---
{
  "instances": [
    value
  ],
  "model": string,
  "contents": [
    {
      object (Content)
    }
  ]
}
Fields |
---|---
instances[] | Optional. The instances that are the input to the token-computing API call. The schema is identical to the prediction schema of the text model, even for non-text models such as chat models or Codey models.
model | Optional. The name of the publisher model requested to serve the prediction. Format: projects/{project}/locations/{location}/publishers/*/models/*
contents[] | Optional. Input content.
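A minimal sketch of a request body matching the schema above, using only the standard library. The instance shape, prompt text, and model path are assumed placeholders; the actual instance schema follows the prediction schema of the model you target.

```python
import json

# Sketch: build a computeTokens request body matching the documented schema.
# The prompt field and model path below are hypothetical placeholders.
body = {
    "instances": [
        {"prompt": "Why is the sky blue?"}  # assumed text-model prediction schema
    ],
    "model": "projects/my-project/locations/us-central1/publishers/google/models/gemini-pro",
}

payload = json.dumps(body, indent=2)
print(payload)
```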
Response body
If successful, the response body contains an instance of ComputeTokensResponse.
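Putting the pieces together, a hedged sketch of the full call using only the standard library. The bearer-token workflow (for example, obtaining one via `gcloud auth print-access-token`) is an assumption, and the request is wrapped in a function so nothing is sent until you call it.

```python
import json
import urllib.request

def compute_tokens(service_endpoint: str, endpoint: str,
                   access_token: str, body: dict) -> dict:
    """POST a computeTokens request and return the parsed ComputeTokensResponse.

    Authorization uses a bearer token carrying the cloud-platform OAuth
    scope (assumed workflow: `gcloud auth print-access-token`).
    """
    url = f"https://{service_endpoint}/v1beta1/{endpoint}:computeTokens"
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```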
Authorization scopes
Requires the following OAuth scope:
https://www.googleapis.com/auth/cloud-platform
For more information, see the Authentication Overview.
IAM Permissions
Requires the following IAM permission on the endpoint resource:
aiplatform.endpoints.predict
For more information, see the IAM documentation.