REST Resource: projects.locations.endpoints

Resource: Endpoint

Models are deployed into an Endpoint; the Endpoint is then called to obtain predictions and explanations.

This type has no fields.
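
For orientation, each method below is invoked by POSTing to the regional Vertex AI host with the Endpoint's resource name in the path. A minimal calling-convention sketch follows; the project, location, and endpoint ID are placeholder values, and Gemini publisher models substitute publishers/google/models/{model} for endpoints/{endpoint}.

    # Endpoint resource name (placeholder project, location, and endpoint ID):
    #   projects/example-project/locations/us-central1/endpoints/1234567890
    # Every method below is invoked by POSTing to the regional host:
    #   POST https://us-central1-aiplatform.googleapis.com/v1/projects/example-project/locations/us-central1/endpoints/1234567890:{method}
    # with an OAuth bearer token, for example one obtained via: gcloud auth print-access-token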

Methods

countTokens

Perform token counting for the given input.
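
A minimal countTokens sketch, assuming a Gemini-style request body (deployed custom models may take an instances list instead); the project, location, and endpoint ID are placeholders.

    # Count the tokens a prompt would consume before sending the full request.
    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-Type: application/json" \
      "https://us-central1-aiplatform.googleapis.com/v1/projects/example-project/locations/us-central1/endpoints/1234567890:countTokens" \
      -d '{"contents": [{"role": "user", "parts": [{"text": "How many tokens is this sentence?"}]}]}'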

fetchPredictOperation

Fetch an asynchronous online prediction operation.
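
A sketch of polling a prediction started with predictLongRunning; the operationName value is a placeholder for the operation name returned by that call.

    # Poll a long-running prediction by its operation name.
    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-Type: application/json" \
      "https://us-central1-aiplatform.googleapis.com/v1/projects/example-project/locations/us-central1/endpoints/1234567890:fetchPredictOperation" \
      -d '{"operationName": "projects/example-project/locations/us-central1/endpoints/1234567890/operations/9876543210"}'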

generateContent

Generate content with multimodal inputs.
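
A minimal generateContent sketch; the contents/role/parts body is the standard Gemini request shape, while the endpoint path is a placeholder.

    # Single-shot multimodal generation; the full response is returned in one piece.
    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-Type: application/json" \
      "https://us-central1-aiplatform.googleapis.com/v1/projects/example-project/locations/us-central1/endpoints/1234567890:generateContent" \
      -d '{"contents": [{"role": "user", "parts": [{"text": "Describe a REST endpoint in one sentence."}]}]}'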

getIamPolicy

Gets the access control policy for a resource.
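
A sketch of reading the endpoint's IAM policy; the call takes no request body, and the path values are placeholders.

    # Read the current IAM policy bindings on the endpoint.
    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      "https://us-central1-aiplatform.googleapis.com/v1/projects/example-project/locations/us-central1/endpoints/1234567890:getIamPolicy"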

predict

Perform an online prediction.
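
A minimal predict sketch; the instances payload shown is hypothetical, since its schema is defined by the deployed model.

    # Synchronous online prediction against the deployed model.
    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-Type: application/json" \
      "https://us-central1-aiplatform.googleapis.com/v1/projects/example-project/locations/us-central1/endpoints/1234567890:predict" \
      -d '{"instances": [{"feature_a": 1.5, "feature_b": "blue"}]}'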

predictLongRunning

Perform an online prediction request that returns a long-running operation.
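
A sketch of starting a long-running prediction; the instances payload is hypothetical, and the returned operation can be polled with fetchPredictOperation above.

    # Start an asynchronous prediction; the response is a long-running operation.
    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-Type: application/json" \
      "https://us-central1-aiplatform.googleapis.com/v1/projects/example-project/locations/us-central1/endpoints/1234567890:predictLongRunning" \
      -d '{"instances": [{"prompt": "a timelapse of a city at night"}]}'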

rawPredict

Perform an online prediction with an arbitrary HTTP payload.
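
A rawPredict sketch; the JSON shown is a hypothetical payload for a custom serving container, since rawPredict forwards the request body to the model server as-is.

    # The body is passed through unmodified; its shape is whatever the serving container expects.
    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-Type: application/json" \
      "https://us-central1-aiplatform.googleapis.com/v1/projects/example-project/locations/us-central1/endpoints/1234567890:rawPredict" \
      -d '{"instances": [[1.0, 2.0, 3.0]]}'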

serverStreamingPredict

Perform a server-side streaming online prediction request for Vertex LLM streaming.
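
A serverStreamingPredict sketch, assuming the Tensor-based request shape (inputs with stringVal); the exact tensor encoding depends on the model, so treat the body as illustrative only.

    # Streaming prediction; partial responses are returned as they are produced.
    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-Type: application/json" \
      "https://us-central1-aiplatform.googleapis.com/v1/projects/example-project/locations/us-central1/endpoints/1234567890:serverStreamingPredict" \
      -d '{"inputs": [{"stringVal": ["Write a short poem about latency."]}]}'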

setIamPolicy

Sets the access control policy on the specified resource.
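
A setIamPolicy sketch; the role and member are example values, and the supplied policy replaces the existing one rather than merging with it.

    # Replace the endpoint's IAM policy with a single binding (example role and member).
    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-Type: application/json" \
      "https://us-central1-aiplatform.googleapis.com/v1/projects/example-project/locations/us-central1/endpoints/1234567890:setIamPolicy" \
      -d '{"policy": {"bindings": [{"role": "roles/aiplatform.user", "members": ["user:alice@example.com"]}]}}'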

streamGenerateContent

Generate content with multimodal inputs, streaming the response.
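
A streamGenerateContent sketch; the body matches generateContent, but the response arrives as a stream of partial GenerateContentResponse chunks. The path values are placeholders.

    # Same request shape as generateContent; the response is streamed back in chunks.
    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-Type: application/json" \
      "https://us-central1-aiplatform.googleapis.com/v1/projects/example-project/locations/us-central1/endpoints/1234567890:streamGenerateContent" \
      -d '{"contents": [{"role": "user", "parts": [{"text": "Stream a haiku about queues."}]}]}'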

streamRawPredict

Perform a streaming online prediction with an arbitrary HTTP payload.
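
A streamRawPredict sketch; as with rawPredict the payload is hypothetical and container-defined, but the response is streamed rather than returned at once.

    # Arbitrary payload in, streamed response out.
    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-Type: application/json" \
      "https://us-central1-aiplatform.googleapis.com/v1/projects/example-project/locations/us-central1/endpoints/1234567890:streamRawPredict" \
      -d '{"instances": [[1.0, 2.0, 3.0]]}'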

testIamPermissions

Returns permissions that a caller has on the specified resource.
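
A testIamPermissions sketch, assuming the permissions are passed as repeated query parameters; the permission name is an example value.

    # Check which of the listed permissions the caller holds on this endpoint.
    curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      "https://us-central1-aiplatform.googleapis.com/v1/projects/example-project/locations/us-central1/endpoints/1234567890:testIamPermissions?permissions=aiplatform.endpoints.predict"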