Reference documentation and code samples for the Google Cloud AI Platform V1 Client class LlmParser.
Specifies the advanced parsing for RagFiles.
Generated from protobuf message `google.cloud.aiplatform.v1.RagFileParsingConfig.LlmParser`
Namespace
Google \ Cloud \ AIPlatform \ V1 \ RagFileParsingConfig

Methods
__construct
Constructor.
Parameters

Name | Type | Description
---|---|---
data | array | Optional. Data for populating the Message object.
↳ model_name | string | The name of the LLM model used for parsing. Format: `projects/{project_id}/locations/{location}/publishers/{publisher}/models/{model}`
↳ max_parsing_requests_per_min | int | The maximum number of requests per minute the job is allowed to make to the LLM model. Consult https://cloud.google.com/vertex-ai/generative-ai/docs/quotas and your document size to set an appropriate value. If unspecified, a default of 5000 QPM is used.
↳ custom_parsing_prompt | string | The prompt to use for parsing. If not specified, a default prompt is used.
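As a minimal sketch of populating this message through the constructor's `data` array (assuming the `google/cloud-aiplatform` package is installed; the project, location, and model values below are hypothetical placeholders):

```php
<?php
require 'vendor/autoload.php';

use Google\Cloud\AIPlatform\V1\RagFileParsingConfig\LlmParser;

// Populate the generated message via the constructor's `data` array.
// All field values here are illustrative, not defaults.
$parser = new LlmParser([
    'model_name' => 'projects/my-project/locations/us-central1/'
        . 'publishers/google/models/my-model',
    'max_parsing_requests_per_min' => 1000,
    'custom_parsing_prompt' => 'Extract the key facts from each page.',
]);

echo $parser->getModelName();
```

The keys of the `data` array match the protobuf field names listed in the table above.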
getModelName
The name of the LLM model used for parsing.
Format:
`projects/{project_id}/locations/{location}/publishers/{publisher}/models/{model}`

Returns

Type | Description
---|---
string |
setModelName
The name of the LLM model used for parsing.
Format:
`projects/{project_id}/locations/{location}/publishers/{publisher}/models/{model}`

Parameter

Name | Type | Description
---|---|---
var | string |

Returns

Type | Description
---|---
$this |
getMaxParsingRequestsPerMin
The maximum number of requests per minute the job is allowed to make to the LLM model. Consult https://cloud.google.com/vertex-ai/generative-ai/docs/quotas and your document size to set an appropriate value. If unspecified, a default of 5000 QPM is used.

Returns

Type | Description
---|---
int |
setMaxParsingRequestsPerMin
The maximum number of requests per minute the job is allowed to make to the LLM model. Consult https://cloud.google.com/vertex-ai/generative-ai/docs/quotas and your document size to set an appropriate value. If unspecified, a default of 5000 QPM is used.

Parameter

Name | Type | Description
---|---|---
var | int |

Returns

Type | Description
---|---
$this |
getCustomParsingPrompt
The prompt to use for parsing. If not specified, a default prompt is used.

Returns

Type | Description
---|---
string |
setCustomParsingPrompt
The prompt to use for parsing. If not specified, a default prompt is used.

Parameter

Name | Type | Description
---|---|---
var | string |

Returns

Type | Description
---|---
$this |
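Because each setter documented above returns `$this`, configuration can be sketched fluently. The resource values below are hypothetical placeholders, and the example assumes the `google/cloud-aiplatform` package is installed:

```php
<?php
require 'vendor/autoload.php';

use Google\Cloud\AIPlatform\V1\RagFileParsingConfig\LlmParser;

// Each setter returns $this, so calls can be chained.
$parser = (new LlmParser())
    ->setModelName('projects/my-project/locations/us-central1/'
        . 'publishers/google/models/my-model')
    ->setMaxParsingRequestsPerMin(1000)
    ->setCustomParsingPrompt('Summarize each section in plain language.');

// Read values back with the corresponding getters.
assert($parser->getMaxParsingRequestsPerMin() === 1000);
```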