Class ChatSession (1.95.1)
    ChatSession(
        model: vertexai.language_models.ChatModel,
        context: typing.Optional[str] = None,
        examples: typing.Optional[
            typing.List[vertexai.language_models.InputOutputTextPair]
        ] = None,
        max_output_tokens: typing.Optional[int] = None,
        temperature: typing.Optional[float] = None,
        top_k: typing.Optional[int] = None,
        top_p: typing.Optional[float] = None,
        message_history: typing.Optional[
            typing.List[vertexai.language_models.ChatMessage]
        ] = None,
        stop_sequences: typing.Optional[typing.List[str]] = None,
    )
ChatSession represents a chat session with a language model.
Within a chat session, the model keeps context and remembers the previous conversation.
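In practice, a ChatSession is usually obtained from ChatModel.start_chat rather than constructed directly. A minimal sketch, assuming vertexai.init can authenticate and that the project ID, region, and model name ("chat-bison@002") are placeholders:

    import vertexai
    from vertexai.language_models import ChatModel, InputOutputTextPair

    # Placeholder project and region; replace with your own.
    vertexai.init(project="my-project", location="us-central1")

    chat_model = ChatModel.from_pretrained("chat-bison@002")

    # start_chat returns a ChatSession configured with these defaults.
    chat = chat_model.start_chat(
        context="You are a concise assistant for travel questions.",
        examples=[
            InputOutputTextPair(
                input_text="Where can I find good beaches in Europe?",
                output_text="The Algarve in Portugal and Sardinia in Italy are popular choices.",
            )
        ],
        temperature=0.2,
        max_output_tokens=256,
    )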
Properties
message_history
List of previous messages.
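A short sketch of inspecting the history, reusing the chat session from the example above and assuming each ChatMessage exposes author and content fields:

    # Each entry is a vertexai.language_models.ChatMessage.
    for message in chat.message_history:
        print(f"{message.author}: {message.content}")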
Methods
send_message
    send_message(
        message: str,
        *,
        max_output_tokens: typing.Optional[int] = None,
        temperature: typing.Optional[float] = None,
        top_k: typing.Optional[int] = None,
        top_p: typing.Optional[float] = None,
        stop_sequences: typing.Optional[typing.List[str]] = None,
        candidate_count: typing.Optional[int] = None,
        grounding_source: typing.Optional[
            typing.Union[
                vertexai.language_models._language_models.WebSearch,
                vertexai.language_models._language_models.VertexAISearch,
                vertexai.language_models._language_models.InlineContext,
            ]
        ] = None
    ) -> vertexai.language_models.MultiCandidateTextGenerationResponse
Sends a message to the language model and gets a response.
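A minimal sketch, reusing the chat session created earlier; per-call arguments such as temperature override the session defaults for that call only:

    # The user message and the model's reply are appended to
    # chat.message_history after the call returns.
    response = chat.send_message("What about mountain destinations?", temperature=0.2)
    print(response.text)

    # With candidate_count set, additional candidates are available on the response.
    multi = chat.send_message("Suggest three trip ideas.", candidate_count=3)
    for candidate in multi.candidates:
        print(candidate.text)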
send_message_async
    send_message_async(
        message: str,
        *,
        max_output_tokens: typing.Optional[int] = None,
        temperature: typing.Optional[float] = None,
        top_k: typing.Optional[int] = None,
        top_p: typing.Optional[float] = None,
        stop_sequences: typing.Optional[typing.List[str]] = None,
        candidate_count: typing.Optional[int] = None,
        grounding_source: typing.Optional[
            typing.Union[
                vertexai.language_models._language_models.WebSearch,
                vertexai.language_models._language_models.VertexAISearch,
                vertexai.language_models._language_models.InlineContext,
            ]
        ] = None
    ) -> vertexai.language_models.MultiCandidateTextGenerationResponse
Asynchronously sends a message to the language model and gets a response.
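A minimal async sketch under the same assumptions, reusing the chat session from the first example:

    import asyncio

    async def ask(prompt: str) -> str:
        # Awaitable variant of send_message; same arguments and return type.
        response = await chat.send_message_async(prompt, temperature=0.2)
        return response.text

    print(asyncio.run(ask("Summarize our conversation so far.")))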
send_message_streaming
    send_message_streaming(
        message: str,
        *,
        max_output_tokens: typing.Optional[int] = None,
        temperature: typing.Optional[float] = None,
        top_k: typing.Optional[int] = None,
        top_p: typing.Optional[float] = None,
        stop_sequences: typing.Optional[typing.List[str]] = None
    ) -> typing.Iterator[vertexai.language_models.TextGenerationResponse]
Sends a message to the language model and gets a streamed response.
The response is only added to the history once it's fully read.
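A minimal streaming sketch, again reusing the chat session from the first example; note that message_history is updated only after the iterator is exhausted:

    # Print chunks as they arrive; each chunk is a TextGenerationResponse.
    for chunk in chat.send_message_streaming(
        "Tell me a short story about a lighthouse.", max_output_tokens=256
    ):
        print(chunk.text, end="", flush=True)
    print()
    # Only now is the full response appended to chat.message_history.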
send_message_streaming_async
    send_message_streaming_async(
        message: str,
        *,
        max_output_tokens: typing.Optional[int] = None,
        temperature: typing.Optional[float] = None,
        top_k: typing.Optional[int] = None,
        top_p: typing.Optional[float] = None,
        stop_sequences: typing.Optional[typing.List[str]] = None
    ) -> typing.AsyncIterator[vertexai.language_models.TextGenerationResponse]
Asynchronously sends a message to the language model and gets a streamed response.
The response is only added to the history once it's fully read.
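A minimal async streaming sketch under the same assumptions:

    import asyncio

    async def stream_reply(prompt: str) -> None:
        # Async iterator of TextGenerationResponse chunks.
        async for chunk in chat.send_message_streaming_async(prompt, temperature=0.2):
            print(chunk.text, end="", flush=True)
        print()

    asyncio.run(stream_reply("Draft a haiku about mountains."))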
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-07 UTC."],[],[],null,["# Class ChatSession (1.95.1)\n\nVersion latestkeyboard_arrow_down\n\n- [1.95.1 (latest)](/python/docs/reference/vertexai/latest/vertexai.language_models.ChatSession)\n- [1.94.0](/python/docs/reference/vertexai/1.94.0/vertexai.language_models.ChatSession)\n- [1.93.1](/python/docs/reference/vertexai/1.93.1/vertexai.language_models.ChatSession)\n- [1.92.0](/python/docs/reference/vertexai/1.92.0/vertexai.language_models.ChatSession)\n- [1.91.0](/python/docs/reference/vertexai/1.91.0/vertexai.language_models.ChatSession)\n- [1.90.0](/python/docs/reference/vertexai/1.90.0/vertexai.language_models.ChatSession)\n- [1.89.0](/python/docs/reference/vertexai/1.89.0/vertexai.language_models.ChatSession)\n- [1.88.0](/python/docs/reference/vertexai/1.88.0/vertexai.language_models.ChatSession)\n- [1.87.0](/python/docs/reference/vertexai/1.87.0/vertexai.language_models.ChatSession)\n- [1.86.0](/python/docs/reference/vertexai/1.86.0/vertexai.language_models.ChatSession)\n- [1.85.0](/python/docs/reference/vertexai/1.85.0/vertexai.language_models.ChatSession)\n- [1.84.0](/python/docs/reference/vertexai/1.84.0/vertexai.language_models.ChatSession)\n- [1.83.0](/python/docs/reference/vertexai/1.83.0/vertexai.language_models.ChatSession)\n- [1.82.0](/python/docs/reference/vertexai/1.82.0/vertexai.language_models.ChatSession)\n- [1.81.0](/python/docs/reference/vertexai/1.81.0/vertexai.language_models.ChatSession)\n- [1.80.0](/python/docs/reference/vertexai/1.80.0/vertexai.language_models.ChatSession)\n- [1.79.0](/python/docs/reference/vertexai/1.79.0/vertexai.language_models.ChatSession)\n- [1.78.0](/python/docs/reference/vertexai/1.78.0/vertexai.language_models.ChatSession)\n- [1.77.0](/python/docs/reference/vertexai/1.77.0/vertexai.language_models.ChatSession)\n- [1.76.0](/python/docs/reference/vertexai/1.76.0/vertexai.language_models.ChatSession)\n- [1.75.0](/python/docs/reference/vertexai/1.75.0/vertexai.language_models.ChatSession)\n- [1.74.0](/python/docs/reference/vertexai/1.74.0/vertexai.language_models.ChatSession)\n- [1.73.0](/python/docs/reference/vertexai/1.73.0/vertexai.language_models.ChatSession)\n- [1.72.0](/python/docs/reference/vertexai/1.72.0/vertexai.language_models.ChatSession)\n- [1.71.1](/python/docs/reference/vertexai/1.71.1/vertexai.language_models.ChatSession)\n- [1.70.0](/python/docs/reference/vertexai/1.70.0/vertexai.language_models.ChatSession)\n- [1.69.0](/python/docs/reference/vertexai/1.69.0/vertexai.language_models.ChatSession)\n- [1.68.0](/python/docs/reference/vertexai/1.68.0/vertexai.language_models.ChatSession)\n- [1.67.1](/python/docs/reference/vertexai/1.67.1/vertexai.language_models.ChatSession)\n- [1.66.0](/python/docs/reference/vertexai/1.66.0/vertexai.language_models.ChatSession)\n- [1.65.0](/python/docs/reference/vertexai/1.65.0/vertexai.language_models.ChatSession)\n- [1.63.0](/python/docs/reference/vertexai/1.63.0/vertexai.language_models.ChatSession)\n- [1.62.0](/python/docs/reference/vertexai/1.62.0/vertexai.language_models.ChatSession)\n- 
[1.60.0](/python/docs/reference/vertexai/1.60.0/vertexai.language_models.ChatSession)\n- [1.59.0](/python/docs/reference/vertexai/1.59.0/vertexai.language_models.ChatSession) \n\n ChatSession(\n model: vertexai.language_models.ChatModel,\n context: typing.Optional[str] = None,\n examples: typing.Optional[\n typing.List[vertexai.language_models.InputOutputTextPair]\n ] = None,\n max_output_tokens: typing.Optional[int] = None,\n temperature: typing.Optional[float] = None,\n top_k: typing.Optional[int] = None,\n top_p: typing.Optional[float] = None,\n message_history: typing.Optional[\n typing.List[vertexai.language_models.ChatMessage]\n ] = None,\n stop_sequences: typing.Optional[typing.List[str]] = None,\n )\n\nChatSession represents a chat session with a language model.\n\nWithin a chat session, the model keeps context and remembers the previous conversation.\n\nProperties\n----------\n\n### message_history\n\nList of previous messages.\n\nMethods\n-------\n\n### send_message\n\n send_message(\n message: str,\n *,\n max_output_tokens: typing.Optional[int] = None,\n temperature: typing.Optional[float] = None,\n top_k: typing.Optional[int] = None,\n top_p: typing.Optional[float] = None,\n stop_sequences: typing.Optional[typing.List[str]] = None,\n candidate_count: typing.Optional[int] = None,\n grounding_source: typing.Optional[\n typing.Union[\n vertexai.language_models._language_models.WebSearch,\n vertexai.language_models._language_models.VertexAISearch,\n vertexai.language_models._language_models.InlineContext,\n ]\n ] = None\n ) -\u003e vertexai.language_models.MultiCandidateTextGenerationResponse\n\nSends message to the language model and gets a response.\n\n### send_message_async\n\n send_message_async(\n message: str,\n *,\n max_output_tokens: typing.Optional[int] = None,\n temperature: typing.Optional[float] = None,\n top_k: typing.Optional[int] = None,\n top_p: typing.Optional[float] = None,\n stop_sequences: typing.Optional[typing.List[str]] = None,\n candidate_count: typing.Optional[int] = None,\n grounding_source: typing.Optional[\n typing.Union[\n vertexai.language_models._language_models.WebSearch,\n vertexai.language_models._language_models.VertexAISearch,\n vertexai.language_models._language_models.InlineContext,\n ]\n ] = None\n ) -\u003e vertexai.language_models.MultiCandidateTextGenerationResponse\n\nAsynchronously sends message to the language model and gets a response.\n\n### send_message_streaming\n\n send_message_streaming(\n message: str,\n *,\n max_output_tokens: typing.Optional[int] = None,\n temperature: typing.Optional[float] = None,\n top_k: typing.Optional[int] = None,\n top_p: typing.Optional[float] = None,\n stop_sequences: typing.Optional[typing.List[str]] = None\n ) -\u003e typing.Iterator[vertexai.language_models.TextGenerationResponse]\n\nSends message to the language model and gets a streamed response.\n\nThe response is only added to the history once it's fully read.\n\n### send_message_streaming_async\n\n send_message_streaming_async(\n message: str,\n *,\n max_output_tokens: typing.Optional[int] = None,\n temperature: typing.Optional[float] = None,\n top_k: typing.Optional[int] = None,\n top_p: typing.Optional[float] = None,\n stop_sequences: typing.Optional[typing.List[str]] = None\n ) -\u003e typing.AsyncIterator[vertexai.language_models.TextGenerationResponse]\n\nAsynchronously sends message to the language model and gets a streamed response.\n\nThe response is only added to the history once it's fully read."]]