# Method: models.serverStreamingPredict

**Full name**: projects.locations.publishers.models.serverStreamingPredict

Perform a server-side streaming online prediction request for Vertex LLM streaming.

### Endpoint

post `https://{service-endpoint}/v1beta1/{endpoint}:serverStreamingPredict`

Where `{service-endpoint}` is one of the [supported service endpoints](/vertex-ai/docs/reference/rest#rest_endpoints).

### Path parameters

`endpoint` `string`
Required. The name of the Endpoint requested to serve the prediction. Format: `projects/{project}/locations/{location}/endpoints/{endpoint}`

### Request body

The request body contains data with the following structure:

Fields

`inputs[]` `object (`[Tensor](/vertex-ai/docs/reference/rest/v1beta1/Tensor)`)`
The prediction input.

`parameters` `object (`[Tensor](/vertex-ai/docs/reference/rest/v1beta1/Tensor)`)`
The parameters that govern the prediction.

### Response body

If successful, the response body contains a stream of [StreamingPredictResponse](/vertex-ai/docs/reference/rest/v1beta1/StreamingPredictResponse) instances.

Last updated 2025-06-27 UTC.
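As a minimal sketch of how such a request could be assembled, the helper below builds the POST URL and JSON body for this method. The project, location, and endpoint IDs are placeholders, and the Tensor field names (`stringVal`, `structVal`, `floatVal`) follow the Tensor schema linked above; treat the exact payload shape as an assumption to verify against your model's documentation.

```python
import json

# Hypothetical helper: build (url, body) for a v1beta1
# serverStreamingPredict POST. All resource names below are
# placeholders, not real project or endpoint IDs.
def build_streaming_predict_request(service_endpoint: str, endpoint: str,
                                    inputs: list, parameters: dict):
    """Return the request URL and a JSON-encoded body.

    `inputs` is a list of Tensor objects (the prediction input);
    `parameters` is a single Tensor governing the prediction.
    """
    url = f"https://{service_endpoint}/v1beta1/{endpoint}:serverStreamingPredict"
    body = {"inputs": inputs, "parameters": parameters}
    return url, json.dumps(body)

url, body = build_streaming_predict_request(
    "us-central1-aiplatform.googleapis.com",
    "projects/my-project/locations/us-central1/endpoints/my-endpoint",
    inputs=[{"stringVal": ["Tell me about Vertex AI."]}],
    parameters={"structVal": {"temperature": {"floatVal": [0.2]}}},
)
print(url)
```

Sending the request (e.g. with an authorized HTTP client) would then yield a stream of `StreamingPredictResponse` messages rather than a single response, so the client should read the body incrementally instead of waiting for completion.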