Vertex AI V1 API - Class Google::Cloud::AIPlatform::V1::ImportRagFilesConfig (v0.61.0)

Reference documentation and code samples for the Vertex AI V1 API class Google::Cloud::AIPlatform::V1::ImportRagFilesConfig.

Config for importing RagFiles.
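For orientation, here is a minimal sketch of building this config in Ruby; the bucket path and rate limit below are placeholder values, not defaults. In the underlying proto the import sources (gcs_source, google_drive_source, slack_source, jira_source, share_point_sources) form a oneof, so setting one clears the others.

  require "google/cloud/ai_platform/v1"

  # Build an import config reading from Cloud Storage ("gs://my-bucket/docs"
  # is a placeholder path) and capping embedding traffic at 500 QPM.
  config = Google::Cloud::AIPlatform::V1::ImportRagFilesConfig.new(
    gcs_source: Google::Cloud::AIPlatform::V1::GcsSource.new(
      uris: ["gs://my-bucket/docs"]
    ),
    max_embedding_requests_per_min: 500
  )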

Inherits

  • Object

Extended By

  • Google::Protobuf::MessageExts::ClassMethods

Includes

  • Google::Protobuf::MessageExts

Methods

#gcs_source

def gcs_source() -> ::Google::Cloud::AIPlatform::V1::GcsSource
Returns
  • (::Google::Cloud::AIPlatform::V1::GcsSource) —

    Google Cloud Storage location. Supports importing individual files as well as entire Google Cloud Storage directories. Sample formats:

    • gs://bucket_name/my_directory/object_name/my_file.txt
    • gs://bucket_name/my_directory

#gcs_source=

def gcs_source=(value) -> ::Google::Cloud::AIPlatform::V1::GcsSource
Parameter
  • value (::Google::Cloud::AIPlatform::V1::GcsSource) —

    Google Cloud Storage location. Supports importing individual files as well as entire Google Cloud Storage directories. Sample formats:

    • gs://bucket_name/my_directory/object_name/my_file.txt
    • gs://bucket_name/my_directory
Returns
  • (::Google::Cloud::AIPlatform::V1::GcsSource) —

    Google Cloud Storage location. Supports importing individual files as well as entire Google Cloud Storage directories. Sample formats:

    • gs://bucket_name/my_directory/object_name/my_file.txt
    • gs://bucket_name/my_directory
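
As a sketch, the setter takes a GcsSource whose uris list may mix individual files and entire directories, using the sample formats above:

  config.gcs_source = Google::Cloud::AIPlatform::V1::GcsSource.new(
    uris: [
      "gs://bucket_name/my_directory/object_name/my_file.txt",
      "gs://bucket_name/my_directory"
    ]
  )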

#google_drive_source

def google_drive_source() -> ::Google::Cloud::AIPlatform::V1::GoogleDriveSource
Returns
  • (::Google::Cloud::AIPlatform::V1::GoogleDriveSource) — Google Drive location. Supports importing individual files as well as Google Drive folders.

#google_drive_source=

def google_drive_source=(value) -> ::Google::Cloud::AIPlatform::V1::GoogleDriveSource
Parameter
  • value (::Google::Cloud::AIPlatform::V1::GoogleDriveSource) — Google Drive location. Supports importing individual files as well as Google Drive folders.
Returns
  • (::Google::Cloud::AIPlatform::V1::GoogleDriveSource) — Google Drive location. Supports importing individual files as well as Google Drive folders.
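
As a hedged sketch, a GoogleDriveSource identifies Drive files or folders by resource ID; the class and field names below follow the V1 proto, and the ID itself is a placeholder:

  config.google_drive_source = Google::Cloud::AIPlatform::V1::GoogleDriveSource.new(
    resource_ids: [
      Google::Cloud::AIPlatform::V1::GoogleDriveSource::ResourceId.new(
        resource_type: :RESOURCE_TYPE_FOLDER,
        resource_id: "my-drive-folder-id" # placeholder
      )
    ]
  )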

#jira_source

def jira_source() -> ::Google::Cloud::AIPlatform::V1::JiraSource
Returns
  • (::Google::Cloud::AIPlatform::V1::JiraSource) — The Jira queries with their corresponding authentication.

#jira_source=

def jira_source=(value) -> ::Google::Cloud::AIPlatform::V1::JiraSource
Parameter
  • value (::Google::Cloud::AIPlatform::V1::JiraSource) — The Jira queries with their corresponding authentication.
Returns
  • (::Google::Cloud::AIPlatform::V1::JiraSource) — The Jira queries with their corresponding authentication.

#max_embedding_requests_per_min

def max_embedding_requests_per_min() -> ::Integer
Returns
  • (::Integer) — Optional. The maximum number of queries per minute that this job is allowed to make to the embedding model specified on the corpus. This value is specific to this job and is not shared across other import jobs. Consult the project's Quotas page to set an appropriate value. If unspecified, a default value of 1,000 QPM is used.

#max_embedding_requests_per_min=

def max_embedding_requests_per_min=(value) -> ::Integer
Parameter
  • value (::Integer) — Optional. The maximum number of queries per minute that this job is allowed to make to the embedding model specified on the corpus. This value is specific to this job and is not shared across other import jobs. Consult the project's Quotas page to set an appropriate value. If unspecified, a default value of 1,000 QPM is used.
Returns
  • (::Integer) — Optional. The maximum number of queries per minute that this job is allowed to make to the embedding model specified on the corpus. This value is specific to this job and is not shared across other import jobs. Consult the project's Quotas page to set an appropriate value. If unspecified, a default value of 1,000 QPM is used.
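
For example, to cap this job at 500 embedding queries per minute (a placeholder value; pick one that fits your project's quota):

  config.max_embedding_requests_per_min = 500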

#partial_failure_bigquery_sink

def partial_failure_bigquery_sink() -> ::Google::Cloud::AIPlatform::V1::BigQueryDestination
Returns
  • (::Google::Cloud::AIPlatform::V1::BigQueryDestination) — The BigQuery destination to write partial failures to. It should be a BigQuery table resource name (e.g. "bq://projectId.bqDatasetId.bqTableId"). The dataset must exist. If the table does not exist, it will be created with the expected schema. If the table exists, the schema will be validated and data will be appended to the existing table. Deprecated. Prefer to use import_result_bq_sink.

#partial_failure_bigquery_sink=

def partial_failure_bigquery_sink=(value) -> ::Google::Cloud::AIPlatform::V1::BigQueryDestination
Parameter
  • value (::Google::Cloud::AIPlatform::V1::BigQueryDestination) — The BigQuery destination to write partial failures to. It should be a BigQuery table resource name (e.g. "bq://projectId.bqDatasetId.bqTableId"). The dataset must exist. If the table does not exist, it will be created with the expected schema. If the table exists, the schema will be validated and data will be appended to the existing table. Deprecated. Prefer to use import_result_bq_sink.
Returns
  • (::Google::Cloud::AIPlatform::V1::BigQueryDestination) — The BigQuery destination to write partial failures to. It should be a BigQuery table resource name (e.g. "bq://projectId.bqDatasetId.bqTableId"). The dataset must exist. If the table does not exist, it will be created with the expected schema. If the table exists, the schema will be validated and data will be appended to the existing table. Deprecated. Prefer to use import_result_bq_sink.
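
As a sketch, the sink is a BigQueryDestination whose output_uri uses the bq:// table format shown above; the project, dataset, and table IDs are placeholders:

  config.partial_failure_bigquery_sink =
    Google::Cloud::AIPlatform::V1::BigQueryDestination.new(
      output_uri: "bq://my-project.my_dataset.import_failures"
    )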

#partial_failure_gcs_sink

def partial_failure_gcs_sink() -> ::Google::Cloud::AIPlatform::V1::GcsDestination
Returns
  • (::Google::Cloud::AIPlatform::V1::GcsDestination) — The Cloud Storage path to write partial failures to. Deprecated. Prefer to use import_result_gcs_sink.

#partial_failure_gcs_sink=

def partial_failure_gcs_sink=(value) -> ::Google::Cloud::AIPlatform::V1::GcsDestination
Parameter
  • value (::Google::Cloud::AIPlatform::V1::GcsDestination) — The Cloud Storage path to write partial failures to. Deprecated. Prefer to use import_result_gcs_sink.
Returns
  • (::Google::Cloud::AIPlatform::V1::GcsDestination) — The Cloud Storage path to write partial failures to. Deprecated. Prefer to use import_result_gcs_sink.

#rag_file_transformation_config

def rag_file_transformation_config() -> ::Google::Cloud::AIPlatform::V1::RagFileTransformationConfig
Returns
  • (::Google::Cloud::AIPlatform::V1::RagFileTransformationConfig) — Specifies the transformation config for RagFiles.

#rag_file_transformation_config=

def rag_file_transformation_config=(value) -> ::Google::Cloud::AIPlatform::V1::RagFileTransformationConfig
Parameter
  • value (::Google::Cloud::AIPlatform::V1::RagFileTransformationConfig) — Specifies the transformation config for RagFiles.
Returns
  • (::Google::Cloud::AIPlatform::V1::RagFileTransformationConfig) — Specifies the transformation config for RagFiles.
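
As a hedged sketch, assuming the fixed-length chunking options exposed by the V1 API, the transformation config controls how imported files are split into chunks; the sizes below are placeholders:

  config.rag_file_transformation_config =
    Google::Cloud::AIPlatform::V1::RagFileTransformationConfig.new(
      rag_file_chunking_config: Google::Cloud::AIPlatform::V1::RagFileChunkingConfig.new(
        fixed_length_chunking:
          Google::Cloud::AIPlatform::V1::RagFileChunkingConfig::FixedLengthChunking.new(
            chunk_size: 512,    # placeholder: tokens per chunk
            chunk_overlap: 100  # placeholder: overlapping tokens between chunks
          )
      )
    )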

#share_point_sources

def share_point_sources() -> ::Google::Cloud::AIPlatform::V1::SharePointSources
Returns
  • (::Google::Cloud::AIPlatform::V1::SharePointSources) — The SharePoint sources to import from.

#share_point_sources=

def share_point_sources=(value) -> ::Google::Cloud::AIPlatform::V1::SharePointSources
Parameter
  • value (::Google::Cloud::AIPlatform::V1::SharePointSources) — The SharePoint sources to import from.
Returns
  • (::Google::Cloud::AIPlatform::V1::SharePointSources) — The SharePoint sources to import from.

#slack_source

def slack_source() -> ::Google::Cloud::AIPlatform::V1::SlackSource
Returns
  • (::Google::Cloud::AIPlatform::V1::SlackSource) — The Slack channels with their corresponding access tokens.

#slack_source=

def slack_source=(value) -> ::Google::Cloud::AIPlatform::V1::SlackSource
Parameter
  • value (::Google::Cloud::AIPlatform::V1::SlackSource) — The Slack channels with their corresponding access tokens.
Returns
  • (::Google::Cloud::AIPlatform::V1::SlackSource) — The Slack channels with their corresponding access tokens.
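
Finally, as an end-to-end sketch, a populated config is typically passed to VertexRagDataService#import_rag_files together with the target corpus resource name; the project, location, and corpus ID below are placeholders:

  client = Google::Cloud::AIPlatform::V1::VertexRagDataService::Client.new

  operation = client.import_rag_files(
    parent: "projects/my-project/locations/us-central1/ragCorpora/123", # placeholder
    import_rag_files_config: config
  )
  operation.wait_until_done! # the import runs as a long-running operation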