Starting April 29, 2025, Gemini 1.5 Pro and Gemini 1.5 Flash models are not available in projects that have no prior usage of these models, including new projects. For details, see Model versions and lifecycle.
# Method: endpoints.serverStreamingPredict

**Full name**: `projects.locations.endpoints.serverStreamingPredict`

Performs a server-side streaming online prediction request for Vertex LLM streaming.

### Endpoint

`POST https://aiplatform.googleapis.com/v1/{endpoint}:serverStreamingPredict`

### Path parameters

`endpoint` (`string`)

Required. The name of the Endpoint requested to serve the prediction. Format: `projects/{project}/locations/{location}/endpoints/{endpoint}`

### Request body

The request body contains data with the following structure:

`inputs[]` (`object (`[Tensor](/vertex-ai/generative-ai/docs/reference/rest/v1/Tensor)`)`)

The prediction input.

`parameters` (`object (`[Tensor](/vertex-ai/generative-ai/docs/reference/rest/v1/Tensor)`)`)

The parameters that govern the prediction.

### Response body

If successful, the response body contains a stream of [StreamingPredictResponse](/vertex-ai/generative-ai/docs/reference/rest/v1/StreamingPredictResponse) instances.

Last updated 2025-06-27 UTC.
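As a sketch of how a request to this method might be assembled, the snippet below builds the URL and a request body in which `inputs[]` and `parameters` are `Tensor` objects. The project, location, endpoint ID, and the `prompt`/`temperature` field names inside the tensors are placeholder assumptions — the exact tensor keys a deployed model expects depend on that model, and sending the request also requires an OAuth access token, which is omitted here.

```python
import json

# Hypothetical identifiers -- substitute your own project, region, and endpoint ID.
PROJECT = "my-project"
LOCATION = "us-central1"
ENDPOINT_ID = "1234567890"

# URL follows the documented pattern:
#   https://aiplatform.googleapis.com/v1/{endpoint}:serverStreamingPredict
url = (
    f"https://aiplatform.googleapis.com/v1/projects/{PROJECT}"
    f"/locations/{LOCATION}/endpoints/{ENDPOINT_ID}:serverStreamingPredict"
)

# Request body: `inputs` is a list of Tensor objects and `parameters` is a
# single Tensor. A Tensor's JSON form uses typed value fields such as
# `stringVal`, `floatVal`, and `structVal` (a map of string -> Tensor).
# The `prompt` and `temperature` keys below are illustrative, not required.
body = {
    "inputs": [
        {"structVal": {"prompt": {"stringVal": ["Hello"]}}}
    ],
    "parameters": {
        "structVal": {"temperature": {"floatVal": [0.2]}}
    },
}

payload = json.dumps(body)
```

The resulting `payload` would be POSTed to `url` with a `Content-Type: application/json` header and a bearer token; because this is a server-streaming method, the response arrives as a stream of `StreamingPredictResponse` messages rather than a single JSON object.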