Index
DlpService (interface)
Action (message)
Action.Deidentify (message)
Action.JobNotificationEmails (message)
Action.PublishFindingsToCloudDataCatalog (message)
Action.PublishSummaryToCscc (message)
Action.PublishToPubSub (message)
Action.PublishToStackdriver (message)
Action.SaveFindings (message)
ActionDetails (message)
ActivateJobTriggerRequest (message)
AllOtherDatabaseResources (message)
AllOtherResources (message)
AmazonS3Bucket (message)
AmazonS3BucketConditions (message)
AmazonS3BucketConditions.BucketType (enum)
AmazonS3BucketConditions.ObjectStorageClass (enum)
AmazonS3BucketRegex (message)
AnalyzeDataSourceRiskDetails (message)
AnalyzeDataSourceRiskDetails.CategoricalStatsResult (message)
AnalyzeDataSourceRiskDetails.CategoricalStatsResult.CategoricalStatsHistogramBucket (message)
AnalyzeDataSourceRiskDetails.DeltaPresenceEstimationResult (message)
AnalyzeDataSourceRiskDetails.DeltaPresenceEstimationResult.DeltaPresenceEstimationHistogramBucket (message)
AnalyzeDataSourceRiskDetails.DeltaPresenceEstimationResult.DeltaPresenceEstimationQuasiIdValues (message)
AnalyzeDataSourceRiskDetails.KAnonymityResult (message)
AnalyzeDataSourceRiskDetails.KAnonymityResult.KAnonymityEquivalenceClass (message)
AnalyzeDataSourceRiskDetails.KAnonymityResult.KAnonymityHistogramBucket (message)
AnalyzeDataSourceRiskDetails.KMapEstimationResult (message)
AnalyzeDataSourceRiskDetails.KMapEstimationResult.KMapEstimationHistogramBucket (message)
AnalyzeDataSourceRiskDetails.KMapEstimationResult.KMapEstimationQuasiIdValues (message)
AnalyzeDataSourceRiskDetails.LDiversityResult (message)
AnalyzeDataSourceRiskDetails.LDiversityResult.LDiversityEquivalenceClass (message)
AnalyzeDataSourceRiskDetails.LDiversityResult.LDiversityHistogramBucket (message)
AnalyzeDataSourceRiskDetails.NumericalStatsResult (message)
AnalyzeDataSourceRiskDetails.RequestedRiskAnalysisOptions (message)
AwsAccount (message)
AwsAccountRegex (message)
BigQueryDiscoveryTarget (message)
BigQueryField (message)
BigQueryKey (message)
BigQueryOptions (message)
BigQueryOptions.SampleMethod (enum)
BigQueryRegex (message)
BigQueryRegexes (message)
BigQuerySchemaModification (enum)
BigQueryTable (message)
BigQueryTableCollection (message)
BigQueryTableModification (enum)
BigQueryTableType (enum)
BigQueryTableTypeCollection (enum)
BigQueryTableTypes (message)
BoundingBox (message)
BucketingConfig (message)
BucketingConfig.Bucket (message)
ByteContentItem (message)
ByteContentItem.BytesType (enum)
CancelDlpJobRequest (message)
CharacterMaskConfig (message)
CharsToIgnore (message)
CharsToIgnore.CommonCharsToIgnore (enum)
CloudSqlDiscoveryTarget (message)
CloudSqlIamCredential (message)
CloudSqlProperties (message)
CloudSqlProperties.DatabaseEngine (enum)
CloudStorageDiscoveryTarget (message)
CloudStorageFileSet (message)
CloudStorageOptions (message)
CloudStorageOptions.FileSet (message)
CloudStorageOptions.SampleMethod (enum)
CloudStoragePath (message)
CloudStorageRegex (message)
CloudStorageRegexFileSet (message)
CloudStorageResourceReference (message)
Color (message)
ColumnDataProfile (message)
ColumnDataProfile.ColumnDataType (enum)
ColumnDataProfile.ColumnPolicyState (enum)
ColumnDataProfile.State (enum)
Connection (message)
ConnectionState (enum)
Container (message)
ContentItem (message)
ContentLocation (message)
ContentOption (enum)
CreateConnectionRequest (message)
CreateDeidentifyTemplateRequest (message)
CreateDiscoveryConfigRequest (message)
CreateDlpJobRequest (message)
CreateInspectTemplateRequest (message)
CreateJobTriggerRequest (message)
CreateStoredInfoTypeRequest (message)
CryptoDeterministicConfig (message)
CryptoHashConfig (message)
CryptoKey (message)
CryptoReplaceFfxFpeConfig (message)
CryptoReplaceFfxFpeConfig.FfxCommonNativeAlphabet (enum)
CustomInfoType (message)
CustomInfoType.DetectionRule (message)
CustomInfoType.DetectionRule.HotwordRule (message)
CustomInfoType.DetectionRule.LikelihoodAdjustment (message)
CustomInfoType.DetectionRule.Proximity (message)
CustomInfoType.Dictionary (message)
CustomInfoType.Dictionary.WordList (message)
CustomInfoType.ExclusionType (enum)
CustomInfoType.Regex (message)
CustomInfoType.SurrogateType (message)
DataProfileAction (message)
DataProfileAction.EventType (enum)
DataProfileAction.Export (message)
DataProfileAction.PubSubNotification (message)
DataProfileAction.PubSubNotification.DetailLevel (enum)
DataProfileAction.PublishToChronicle (message)
DataProfileAction.PublishToSecurityCommandCenter (message)
DataProfileAction.TagResources (message)
DataProfileAction.TagResources.TagCondition (message)
DataProfileAction.TagResources.TagValue (message)
DataProfileBigQueryRowSchema (message)
DataProfileConfigSnapshot (message)
DataProfileFinding (message)
DataProfileFindingLocation (message)
DataProfileFindingRecordLocation (message)
DataProfileJobConfig (message)
DataProfileLocation (message)
DataProfilePubSubCondition (message)
DataProfilePubSubCondition.ProfileScoreBucket (enum)
DataProfilePubSubCondition.PubSubCondition (message)
DataProfilePubSubCondition.PubSubExpressions (message)
DataProfilePubSubCondition.PubSubExpressions.PubSubLogicalOperator (enum)
DataProfilePubSubMessage (message)
DataProfileUpdateFrequency (enum)
DataRiskLevel (message)
DataRiskLevel.DataRiskLevelScore (enum)
DataSourceType (message)
DatabaseResourceCollection (message)
DatabaseResourceReference (message)
DatabaseResourceRegex (message)
DatabaseResourceRegexes (message)
DatastoreKey (message)
DatastoreOptions (message)
DateShiftConfig (message)
DateTime (message)
DateTime.TimeZone (message)
DeidentifyConfig (message)
DeidentifyContentRequest (message)
DeidentifyContentResponse (message)
DeidentifyDataSourceDetails (message)
DeidentifyDataSourceDetails.RequestedDeidentifyOptions (message)
DeidentifyDataSourceStats (message)
DeidentifyTemplate (message)
DeleteConnectionRequest (message)
DeleteDeidentifyTemplateRequest (message)
DeleteDiscoveryConfigRequest (message)
DeleteDlpJobRequest (message)
DeleteFileStoreDataProfileRequest (message)
DeleteInspectTemplateRequest (message)
DeleteJobTriggerRequest (message)
DeleteStoredInfoTypeRequest (message)
DeleteTableDataProfileRequest (message)
Disabled (message)
DiscoveryBigQueryConditions (message)
DiscoveryBigQueryConditions.OrConditions (message)
DiscoveryBigQueryFilter (message)
DiscoveryBigQueryFilter.AllOtherBigQueryTables (message)
DiscoveryCloudSqlConditions (message)
DiscoveryCloudSqlConditions.DatabaseEngine (enum)
DiscoveryCloudSqlConditions.DatabaseResourceType (enum)
DiscoveryCloudSqlFilter (message)
DiscoveryCloudSqlGenerationCadence (message)
DiscoveryCloudSqlGenerationCadence.SchemaModifiedCadence (message)
DiscoveryCloudSqlGenerationCadence.SchemaModifiedCadence.CloudSqlSchemaModification (enum)
DiscoveryCloudStorageConditions (message)
DiscoveryCloudStorageConditions.CloudStorageBucketAttribute (enum)
DiscoveryCloudStorageConditions.CloudStorageObjectAttribute (enum)
DiscoveryCloudStorageFilter (message)
DiscoveryCloudStorageGenerationCadence (message)
DiscoveryConfig (message)
DiscoveryConfig.OrgConfig (message)
DiscoveryConfig.Status (enum)
DiscoveryFileStoreConditions (message)
DiscoveryGenerationCadence (message)
DiscoveryInspectTemplateModifiedCadence (message)
DiscoveryOtherCloudConditions (message)
DiscoveryOtherCloudFilter (message)
DiscoveryOtherCloudGenerationCadence (message)
DiscoverySchemaModifiedCadence (message)
DiscoveryStartingLocation (message)
DiscoveryTableModifiedCadence (message)
DiscoveryTarget (message)
DlpJob (message)
DlpJob.JobState (enum)
DlpJobType (enum)
DocumentLocation (message)
EncryptionStatus (enum)
EntityId (message)
Error (message)
Error.ErrorExtraInfo (enum)
ExcludeByHotword (message)
ExcludeInfoTypes (message)
ExclusionRule (message)
FieldId (message)
FieldTransformation (message)
FileClusterSummary (message)
FileClusterType (message)
FileClusterType.Cluster (enum)
FileExtensionInfo (message)
FileStoreCollection (message)
FileStoreDataProfile (message)
FileStoreDataProfile.State (enum)
FileStoreInfoTypeSummary (message)
FileStoreRegex (message)
FileStoreRegexes (message)
FileType (enum)
Finding (message)
FinishDlpJobRequest (message)
FixedSizeBucketingConfig (message)
GetColumnDataProfileRequest (message)
GetConnectionRequest (message)
GetDeidentifyTemplateRequest (message)
GetDiscoveryConfigRequest (message)
GetDlpJobRequest (message)
GetFileStoreDataProfileRequest (message)
GetInspectTemplateRequest (message)
GetJobTriggerRequest (message)
GetProjectDataProfileRequest (message)
GetStoredInfoTypeRequest (message)
GetTableDataProfileRequest (message)
HybridContentItem (message)
HybridFindingDetails (message)
HybridInspectDlpJobRequest (message)
HybridInspectJobTriggerRequest (message)
HybridInspectResponse (message)
HybridInspectStatistics (message)
HybridOptions (message)
ImageLocation (message)
ImageTransformations (message)
ImageTransformations.ImageTransformation (message)
ImageTransformations.ImageTransformation.AllInfoTypes (message)
ImageTransformations.ImageTransformation.AllText (message)
ImageTransformations.ImageTransformation.SelectedInfoTypes (message)
InfoType (message)
InfoTypeCategory (message)
InfoTypeCategory.IndustryCategory (enum)
InfoTypeCategory.LocationCategory (enum)
InfoTypeCategory.TypeCategory (enum)
InfoTypeDescription (message)
InfoTypeStats (message)
InfoTypeSummary (message)
InfoTypeSupportedBy (enum)
InfoTypeTransformations (message)
InfoTypeTransformations.InfoTypeTransformation (message)
InspectConfig (message)
InspectConfig.FindingLimits (message)
InspectConfig.FindingLimits.InfoTypeLimit (message)
InspectConfig.InfoTypeLikelihood (message)
InspectContentRequest (message)
InspectContentResponse (message)
InspectDataSourceDetails (message)
InspectDataSourceDetails.RequestedOptions (message)
InspectDataSourceDetails.Result (message)
InspectJobConfig (message)
InspectResult (message)
InspectTemplate (message)
InspectionRule (message)
InspectionRuleSet (message)
JobTrigger (message)
JobTrigger.Status (enum)
JobTrigger.Trigger (message)
Key (message)
Key.PathElement (message)
KindExpression (message)
KmsWrappedCryptoKey (message)
LargeCustomDictionaryConfig (message)
LargeCustomDictionaryStats (message)
Likelihood (enum)
ListColumnDataProfilesRequest (message)
ListColumnDataProfilesResponse (message)
ListConnectionsRequest (message)
ListConnectionsResponse (message)
ListDeidentifyTemplatesRequest (message)
ListDeidentifyTemplatesResponse (message)
ListDiscoveryConfigsRequest (message)
ListDiscoveryConfigsResponse (message)
ListDlpJobsRequest (message)
ListDlpJobsResponse (message)
ListFileStoreDataProfilesRequest (message)
ListFileStoreDataProfilesResponse (message)
ListInfoTypesRequest (message)
ListInfoTypesResponse (message)
ListInspectTemplatesRequest (message)
ListInspectTemplatesResponse (message)
ListJobTriggersRequest (message)
ListJobTriggersResponse (message)
ListProjectDataProfilesRequest (message)
ListProjectDataProfilesResponse (message)
ListStoredInfoTypesRequest (message)
ListStoredInfoTypesResponse (message)
ListTableDataProfilesRequest (message)
ListTableDataProfilesResponse (message)
Location (message)
Manual (message)
MatchingType (enum)
MetadataLocation (message)
MetadataType (enum)
NullPercentageLevel (enum)
OtherCloudDiscoveryStartingLocation (message)
OtherCloudDiscoveryStartingLocation.AwsDiscoveryStartingLocation (message)
OtherCloudDiscoveryTarget (message)
OtherCloudResourceCollection (message)
OtherCloudResourceRegex (message)
OtherCloudResourceRegexes (message)
OtherCloudSingleResourceReference (message)
OtherInfoTypeSummary (message)
OutputStorageConfig (message)
OutputStorageConfig.OutputSchema (enum)
PartitionId (message)
PrimitiveTransformation (message)
PrivacyMetric (message)
PrivacyMetric.CategoricalStatsConfig (message)
PrivacyMetric.DeltaPresenceEstimationConfig (message)
PrivacyMetric.KAnonymityConfig (message)
PrivacyMetric.KMapEstimationConfig (message)
PrivacyMetric.KMapEstimationConfig.AuxiliaryTable (message)
PrivacyMetric.KMapEstimationConfig.AuxiliaryTable.QuasiIdField (message)
PrivacyMetric.KMapEstimationConfig.TaggedField (message)
PrivacyMetric.LDiversityConfig (message)
PrivacyMetric.NumericalStatsConfig (message)
ProfileGeneration (enum)
ProfileStatus (message)
ProjectDataProfile (message)
QuasiId (message)
QuoteInfo (message)
Range (message)
RecordCondition (message)
RecordCondition.Condition (message)
RecordCondition.Conditions (message)
RecordCondition.Expressions (message)
RecordCondition.Expressions.LogicalOperator (enum)
RecordKey (message)
RecordLocation (message)
RecordSuppression (message)
RecordTransformation (message)
RecordTransformations (message)
RedactConfig (message)
RedactImageRequest (message)
RedactImageRequest.ImageRedactionConfig (message)
RedactImageResponse (message)
ReidentifyContentRequest (message)
ReidentifyContentResponse (message)
RelationalOperator (enum)
ReplaceDictionaryConfig (message)
ReplaceValueConfig (message)
ReplaceWithInfoTypeConfig (message)
ResourceVisibility (enum)
RiskAnalysisJobConfig (message)
Schedule (message)
SearchConnectionsRequest (message)
SearchConnectionsResponse (message)
SecretManagerCredential (message)
SecretsDiscoveryTarget (message)
SensitivityScore (message)
SensitivityScore.SensitivityScoreLevel (enum)
StatisticalTable (message)
StatisticalTable.QuasiIdentifierField (message)
StorageConfig (message)
StorageConfig.TimespanConfig (message)
StorageMetadataLabel (message)
StoredInfoType (message)
StoredInfoTypeConfig (message)
StoredInfoTypeState (enum)
StoredInfoTypeStats (message)
StoredInfoTypeVersion (message)
StoredType (message)
Table (message)
Table.Row (message)
TableDataProfile (message)
TableDataProfile.State (enum)
TableLocation (message)
TableOptions (message)
TableReference (message)
TimePartConfig (message)
TimePartConfig.TimePart (enum)
TransformationConfig (message)
TransformationContainerType (enum)
TransformationDescription (message)
TransformationDetails (message)
TransformationDetailsStorageConfig (message)
TransformationErrorHandling (message)
TransformationErrorHandling.LeaveUntransformed (message)
TransformationErrorHandling.ThrowError (message)
TransformationLocation (message)
TransformationOverview (message)
TransformationResultStatus (message)
TransformationResultStatusType (enum)
TransformationSummary (message)
TransformationSummary.SummaryResult (message)
TransformationSummary.TransformationResultCode (enum)
TransformationType (enum)
TransientCryptoKey (message)
UniquenessScoreLevel (enum)
UnwrappedCryptoKey (message)
UpdateConnectionRequest (message)
UpdateDeidentifyTemplateRequest (message)
UpdateDiscoveryConfigRequest (message)
UpdateInspectTemplateRequest (message)
UpdateJobTriggerRequest (message)
UpdateStoredInfoTypeRequest (message)
Value (message)
ValueFrequency (message)
VersionDescription (message)
DlpService
Sensitive Data Protection provides access to a powerful sensitive data inspection, classification, and de-identification platform that works on text, images, and Google Cloud storage repositories. To learn more about concepts and find how-to guides, see https://cloud.google.com/sensitive-data-protection/docs/.
ActivateJobTrigger |
---|
Activate a job trigger. Causes the immediate execution of a trigger instead of waiting on the trigger event to occur.
|
CancelDlpJob |
---|
Starts asynchronous cancellation on a long-running DlpJob. The server makes a best effort to cancel the DlpJob, but success is not guaranteed. See https://cloud.google.com/sensitive-data-protection/docs/inspecting-storage and https://cloud.google.com/sensitive-data-protection/docs/compute-risk-analysis to learn more.
|
CreateConnection |
---|
Create a Connection to an external data source.
|
CreateDeidentifyTemplate |
---|
Creates a DeidentifyTemplate for reusing frequently used configuration for de-identifying content, images, and storage. See https://cloud.google.com/sensitive-data-protection/docs/creating-templates-deid to learn more.
|
CreateDiscoveryConfig |
---|
Creates a config for discovery to scan and profile storage.
|
CreateDlpJob |
---|
Creates a new job to inspect storage or calculate risk metrics. See https://cloud.google.com/sensitive-data-protection/docs/inspecting-storage and https://cloud.google.com/sensitive-data-protection/docs/compute-risk-analysis to learn more. When no InfoTypes or CustomInfoTypes are specified in inspect jobs, the system will automatically choose what detectors to run. By default this may be all types, but may change over time as detectors are updated.
|
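The CreateDlpJob request can be pictured as a nested structure: a parent resource, a storage config pointing at the data, an inspect config choosing detectors, and a list of actions. The sketch below builds that shape as plain Python dicts; the project, bucket, and topic names are hypothetical placeholders, and field layout follows the InspectJobConfig message described later in this page.

```python
# Sketch of a CreateDlpJob request body for a storage inspection job,
# expressed as plain dicts. Project, bucket, and topic names are
# illustrative placeholders, not real resources.
def build_inspect_job_request(project_id, bucket_url, topic):
    return {
        "parent": f"projects/{project_id}/locations/global",
        "inspect_job": {
            # Where to scan: a Cloud Storage file set.
            "storage_config": {
                "cloud_storage_options": {"file_set": {"url": bucket_url}}
            },
            # Which detectors to run. If info_types were left empty, the
            # service would choose detectors automatically (possibly all).
            "inspect_config": {
                "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}],
                "min_likelihood": "POSSIBLE",
            },
            # What to do on completion: notify a Pub/Sub topic.
            "actions": [{"pub_sub": {"topic": topic}}],
        },
    }

req = build_inspect_job_request(
    "my-project", "gs://my-bucket/**", "projects/my-project/topics/dlp-done"
)
```

The same dict shape can be passed to the REST API or to a client library that accepts request dicts.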
CreateInspectTemplate |
---|
Creates an InspectTemplate for reusing frequently used configuration for inspecting content, images, and storage. See https://cloud.google.com/sensitive-data-protection/docs/creating-templates to learn more.
|
CreateJobTrigger |
---|
Creates a job trigger to run DLP actions such as scanning storage for sensitive information on a set schedule. See https://cloud.google.com/sensitive-data-protection/docs/creating-job-triggers to learn more.
|
CreateStoredInfoType |
---|
Creates a pre-built stored infoType to be used for inspection. See https://cloud.google.com/sensitive-data-protection/docs/creating-stored-infotypes to learn more.
|
DeidentifyContent |
---|
De-identifies potentially sensitive info from a ContentItem. This method has limits on input size and output size. See https://cloud.google.com/sensitive-data-protection/docs/deidentify-sensitive-data to learn more. When no InfoTypes or CustomInfoTypes are specified in this request, the system will automatically choose what detectors to run. By default this may be all types, but may change over time as detectors are updated.
|
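A DeidentifyContent request pairs an inspect config (what to find) with a deidentify config (how to transform what was found). A minimal sketch, again as plain dicts with a placeholder project ID, using the replace-with-infoType primitive transformation:

```python
# Sketch of a DeidentifyContent request body. An empty info_types list
# inside a transformation applies that primitive transformation to all
# findings. Values are illustrative.
def build_deidentify_request(project_id, text):
    return {
        "parent": f"projects/{project_id}/locations/global",
        "item": {"value": text},
        # What to look for.
        "inspect_config": {"info_types": [{"name": "EMAIL_ADDRESS"}]},
        # How to transform each finding: replace it with its infoType name.
        "deidentify_config": {
            "info_type_transformations": {
                "transformations": [
                    {
                        "info_types": [],
                        "primitive_transformation": {
                            "replace_with_info_type_config": {}
                        },
                    }
                ]
            }
        },
    }

req = build_deidentify_request("my-project", "Contact me at jane@example.com")
```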
DeleteConnection |
---|
Delete a Connection.
|
DeleteDeidentifyTemplate |
---|
Deletes a DeidentifyTemplate. See https://cloud.google.com/sensitive-data-protection/docs/creating-templates-deid to learn more.
|
DeleteDiscoveryConfig |
---|
Deletes a discovery configuration.
|
DeleteDlpJob |
---|
Deletes a long-running DlpJob. This method indicates that the client is no longer interested in the DlpJob result. The job will be canceled if possible. See https://cloud.google.com/sensitive-data-protection/docs/inspecting-storage and https://cloud.google.com/sensitive-data-protection/docs/compute-risk-analysis to learn more.
|
DeleteFileStoreDataProfile |
---|
Delete a FileStoreDataProfile. Will not prevent the profile from being regenerated if the resource is still included in a discovery configuration.
|
DeleteInspectTemplate |
---|
Deletes an InspectTemplate. See https://cloud.google.com/sensitive-data-protection/docs/creating-templates to learn more.
|
DeleteJobTrigger |
---|
Deletes a job trigger. See https://cloud.google.com/sensitive-data-protection/docs/creating-job-triggers to learn more.
|
DeleteStoredInfoType |
---|
Deletes a stored infoType. See https://cloud.google.com/sensitive-data-protection/docs/creating-stored-infotypes to learn more.
|
DeleteTableDataProfile |
---|
Delete a TableDataProfile. Will not prevent the profile from being regenerated if the table is still included in a discovery configuration.
|
FinishDlpJob |
---|
Finish a running hybrid DlpJob. Triggers the finalization steps and running of any enabled actions that have not yet run.
|
GetColumnDataProfile |
---|
Gets a column data profile.
|
GetConnection |
---|
Get a Connection by name.
|
GetDeidentifyTemplate |
---|
Gets a DeidentifyTemplate. See https://cloud.google.com/sensitive-data-protection/docs/creating-templates-deid to learn more.
|
GetDiscoveryConfig |
---|
Gets a discovery configuration.
|
GetDlpJob |
---|
Gets the latest state of a long-running DlpJob. See https://cloud.google.com/sensitive-data-protection/docs/inspecting-storage and https://cloud.google.com/sensitive-data-protection/docs/compute-risk-analysis to learn more.
|
GetFileStoreDataProfile |
---|
Gets a file store data profile.
|
GetInspectTemplate |
---|
Gets an InspectTemplate. See https://cloud.google.com/sensitive-data-protection/docs/creating-templates to learn more.
|
GetJobTrigger |
---|
Gets a job trigger. See https://cloud.google.com/sensitive-data-protection/docs/creating-job-triggers to learn more.
|
GetProjectDataProfile |
---|
Gets a project data profile.
|
GetStoredInfoType |
---|
Gets a stored infoType. See https://cloud.google.com/sensitive-data-protection/docs/creating-stored-infotypes to learn more.
|
GetTableDataProfile |
---|
Gets a table data profile.
|
HybridInspectDlpJob |
---|
Inspect hybrid content and store findings to a job. To review the findings, inspect the job. Inspection will occur asynchronously.
|
HybridInspectJobTrigger |
---|
Inspect hybrid content and store findings to a trigger. The inspection will be processed asynchronously. To review the findings, monitor the jobs within the trigger.
|
InspectContent |
---|
Finds potentially sensitive info in content. This method has limits on input size, processing time, and output size. When no InfoTypes or CustomInfoTypes are specified in this request, the system will automatically choose what detectors to run. By default this may be all types, but may change over time as detectors are updated. For how-to guides, see https://cloud.google.com/sensitive-data-protection/docs/inspecting-images and https://cloud.google.com/sensitive-data-protection/docs/inspecting-text.
|
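An InspectContent request can mix built-in detectors with custom ones and cap the volume of results. The sketch below shows a request shape combining a built-in infoType, a custom regex detector, and per-request finding limits; the custom infoType name `EMPLOYEE_ID` and its pattern are hypothetical examples.

```python
# Sketch of an InspectContent request body. EMPLOYEE_ID and its regex
# are made-up examples of a custom detector; the rest follows the
# InspectConfig message fields.
def build_inspect_request(project_id, text):
    return {
        "parent": f"projects/{project_id}/locations/global",
        "item": {"value": text},
        "inspect_config": {
            # A built-in detector...
            "info_types": [{"name": "EMAIL_ADDRESS"}],
            # ...plus a custom regex detector.
            "custom_info_types": [
                {
                    "info_type": {"name": "EMPLOYEE_ID"},
                    "regex": {"pattern": r"E-\d{6}"},
                }
            ],
            # Cap output size and include the matched text in findings.
            "limits": {"max_findings_per_request": 100},
            "include_quote": True,
        },
    }

req = build_inspect_request("my-project", "Badge E-123456, mail jane@example.com")
```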
ListColumnDataProfiles |
---|
Lists column data profiles for an organization.
|
ListConnections |
---|
Lists Connections in a parent. Use SearchConnections to see all connections within an organization.
|
ListDeidentifyTemplates |
---|
Lists DeidentifyTemplates. See https://cloud.google.com/sensitive-data-protection/docs/creating-templates-deid to learn more.
|
ListDiscoveryConfigs |
---|
Lists discovery configurations.
|
ListDlpJobs |
---|
Lists DlpJobs that match the specified filter in the request. See https://cloud.google.com/sensitive-data-protection/docs/inspecting-storage and https://cloud.google.com/sensitive-data-protection/docs/compute-risk-analysis to learn more.
|
ListFileStoreDataProfiles |
---|
Lists file store data profiles for an organization.
|
ListInfoTypes |
---|
Returns a list of the sensitive information types that the DLP API supports. See https://cloud.google.com/sensitive-data-protection/docs/infotypes-reference to learn more.
|
ListInspectTemplates |
---|
Lists InspectTemplates. See https://cloud.google.com/sensitive-data-protection/docs/creating-templates to learn more.
|
ListJobTriggers |
---|
Lists job triggers. See https://cloud.google.com/sensitive-data-protection/docs/creating-job-triggers to learn more.
|
ListProjectDataProfiles |
---|
Lists project data profiles for an organization.
|
ListStoredInfoTypes |
---|
Lists stored infoTypes. See https://cloud.google.com/sensitive-data-protection/docs/creating-stored-infotypes to learn more.
|
ListTableDataProfiles |
---|
Lists table data profiles for an organization.
|
RedactImage |
---|
Redacts potentially sensitive info from an image. This method has limits on input size, processing time, and output size. See https://cloud.google.com/sensitive-data-protection/docs/redacting-sensitive-data-images to learn more. When no InfoTypes or CustomInfoTypes are specified in this request, the system will automatically choose what detectors to run. By default this may be all types, but may change over time as detectors are updated.
|
ReidentifyContent |
---|
Re-identifies content that has been de-identified. See https://cloud.google.com/sensitive-data-protection/docs/pseudonymization#re-identification_in_free_text_code_example to learn more.
|
SearchConnections |
---|
Searches for Connections in a parent.
|
UpdateConnection |
---|
Update a Connection.
|
UpdateDeidentifyTemplate |
---|
Updates the DeidentifyTemplate. See https://cloud.google.com/sensitive-data-protection/docs/creating-templates-deid to learn more.
|
UpdateDiscoveryConfig |
---|
Updates a discovery configuration.
|
UpdateInspectTemplate |
---|
Updates the InspectTemplate. See https://cloud.google.com/sensitive-data-protection/docs/creating-templates to learn more.
|
UpdateJobTrigger |
---|
Updates a job trigger. See https://cloud.google.com/sensitive-data-protection/docs/creating-job-triggers to learn more.
|
UpdateStoredInfoType |
---|
Updates the stored infoType by creating a new version. The existing version will continue to be used until the new version is ready. See https://cloud.google.com/sensitive-data-protection/docs/creating-stored-infotypes to learn more.
|
Action
A task to execute on the completion of a job. See https://cloud.google.com/sensitive-data-protection/docs/concepts-actions to learn more.
Fields | |
---|---|
Union field action . Extra events to execute after the job has finished. action can be only one of the following: |
|
save_ |
Save resulting findings in a provided location. |
pub_ |
Publish a notification to a Pub/Sub topic. |
publish_ |
Publish summary to Cloud Security Command Center (Alpha). |
publish_ |
Publish findings to Data Catalog. |
deidentify |
Create a de-identified copy of the input data. |
job_ |
Sends an email when the job completes. The email goes to IAM project owners and technical Essential Contacts. |
publish_ |
Enable Stackdriver metric dlp.googleapis.com/finding_count. |
Deidentify
Create a de-identified copy of the requested table or files.
A TransformationDetail will be created for each transformation.
If any rows in BigQuery are skipped during de-identification (for example, because of transformation errors or because the row size exceeds BigQuery insert API limits), they are placed in the failure output table. If the original row exceeds the BigQuery insert API limit, it is truncated when written to the failure output table. The failure output table can be set in the action.deidentify.output.big_query_output.deidentified_failure_output_table field; if no table is set, a table will be automatically created in the same project and dataset as the original table.
Compatible with: Inspect
Fields | |
---|---|
transformation_ |
User specified deidentify templates and configs for structured, unstructured, and image files. |
transformation_ |
Config for storing transformation details. This is separate from the de-identified content, and contains metadata about the successful transformations and/or failures that occurred while de-identifying. This needs to be set in order for users to access information about the status of each transformation (see TransformationDetails). |
file_ |
List of user-specified file type groups to transform. If specified, only the files with these file types will be transformed. If empty, all supported files will be transformed. Supported types may be automatically added over time. If a file type is set in this field that isn't supported by the Deidentify action then the job will fail and will not be successfully created/started. Currently the only file types supported are: IMAGES, TEXT_FILES, CSV, TSV. |
Union field output . Where to store the output. output can be only one of the following: |
|
cloud_ |
Required. User settable Cloud Storage bucket and folders to store de-identified files. This field must be set for Cloud Storage deidentification. The output Cloud Storage bucket must be different from the input bucket. De-identified files will overwrite files in the output path. Form of: gs://bucket/folder/ or gs://bucket |
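Putting the fields above together, a Deidentify action names a transformation config, optionally restricts the file type groups, and sets the output location. A sketch of that shape as a plain dict; the project, template, and bucket names are placeholders:

```python
# Sketch of a Deidentify action for an inspect job. Template and bucket
# names are hypothetical. cloud_storage_output belongs to the "output"
# union field and must point at a bucket different from the input one;
# existing files at the output path are overwritten.
deidentify_action = {
    "deidentify": {
        "transformation_config": {
            "deidentify_template": (
                "projects/my-project/deidentifyTemplates/my-template"
            ),
        },
        # Only these groups are transformed. Listing an unsupported type
        # here would cause the job to fail to be created/started.
        "file_types_to_transform": ["TEXT_FILES", "CSV", "IMAGES"],
        "cloud_storage_output": "gs://my-deid-output/",
    }
}
```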
JobNotificationEmails
This type has no fields.
Sends an email when the job completes. The email goes to IAM project owners and technical Essential Contacts.
PublishFindingsToCloudDataCatalog
This type has no fields.
Publish findings of a DlpJob to Data Catalog. In Data Catalog, tag templates are applied to the resource that Cloud DLP scanned. Data Catalog tag templates are stored in the same project and region where the BigQuery table exists. For Cloud DLP to create and apply the tag template, the Cloud DLP service agent must have the roles/datacatalog.tagTemplateOwner
permission on the project. The tag template contains fields summarizing the results of the DlpJob. Any field values previously written by another DlpJob are deleted. InfoType naming patterns
are strictly enforced when using this feature.
Findings are persisted in Data Catalog storage and are governed by service-specific policies for Data Catalog. For more information, see Service Specific Terms.
Only a single instance of this action can be specified. This action is allowed only if all resources being scanned are BigQuery tables. Compatible with: Inspect
PublishSummaryToCscc
This type has no fields.
Publish the result summary of a DlpJob to Security Command Center. This action is available only for projects that belong to an organization. This action publishes the count of finding instances and their infoTypes. The summary of findings is persisted in Security Command Center and is governed by service-specific policies for Security Command Center. Only a single instance of this action can be specified. Compatible with: Inspect
PublishToPubSub
Publish a message into a given Pub/Sub topic when DlpJob has completed. The message contains a single field, DlpJobName
, which is equal to the finished job's DlpJob.name
. Compatible with: Inspect, Risk
Fields | |
---|---|
topic |
Cloud Pub/Sub topic to send notifications to. The topic must have given publishing access rights to the DLP API service account executing the long running DlpJob sending the notifications. Format is projects/{project}/topics/{topic}. |
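Since the topic must follow the projects/{project}/topics/{topic} format, a small client-side check can catch malformed names before the job is created. This is a sketch of such a check, not part of the API itself:

```python
import re

# The PublishToPubSub action expects topics in the form
# projects/{project}/topics/{topic}. This helper (not an API call)
# validates that shape before a job request is submitted.
TOPIC_RE = re.compile(r"^projects/[^/]+/topics/[^/]+$")

def is_valid_topic(topic: str) -> bool:
    return TOPIC_RE.match(topic) is not None
```

Note this only checks the resource-name shape; the DLP service account still needs publishing rights on the topic for notifications to be delivered.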
PublishToStackdriver
This type has no fields.
Enable Stackdriver metric dlp.googleapis.com/finding_count. This publishes a metric to Stackdriver for each requested infoType, recording how many findings were found for it. Custom detectors will be bucketed as 'Custom' under the Stackdriver label 'info_type'.
SaveFindings
If set, the detailed findings will be persisted to the specified OutputStorageConfig. Only a single instance of this action can be specified. Compatible with: Inspect, Risk
Fields | |
---|---|
output_ |
Location to store findings outside of DLP. |
ActionDetails
The results of an Action
.
Fields | |
---|---|
Union field details . Summary of what occurred in the actions. details can be only one of the following: |
|
deidentify_ |
Outcome of a de-identification action. |
ActivateJobTriggerRequest
Request message for ActivateJobTrigger.
Fields | |
---|---|
name |
Required. Resource name of the trigger to activate. Authorization requires one or more of the following IAM permissions on the specified resource
|
AllOtherDatabaseResources
This type has no fields.
Match database resources not covered by any other filter.
AllOtherResources
This type has no fields.
Match discovery resources not covered by any other filter.
AmazonS3Bucket
Amazon S3 bucket.
Fields | |
---|---|
aws_ |
The AWS account. |
bucket_ |
Required. The bucket name. |
AmazonS3BucketConditions
Amazon S3 bucket conditions.
Fields | |
---|---|
bucket_ |
Optional. Bucket types that should be profiled. Defaults to TYPE_ALL_SUPPORTED if unspecified. |
object_ |
Optional. Object storage classes that should be profiled. Defaults to ALL_SUPPORTED_CLASSES if unspecified. |
BucketType
Supported Amazon S3 bucket types. Defaults to TYPE_ALL_SUPPORTED.
Enums | |
---|---|
TYPE_UNSPECIFIED |
Unused. |
TYPE_ALL_SUPPORTED |
All supported bucket types. |
TYPE_GENERAL_PURPOSE |
A general purpose Amazon S3 bucket. |
ObjectStorageClass
Supported Amazon S3 object storage classes. Defaults to ALL_SUPPORTED_CLASSES.
Enums | |
---|---|
UNSPECIFIED |
Unused. |
ALL_SUPPORTED_CLASSES |
All supported classes. |
STANDARD |
Standard object class. |
STANDARD_INFREQUENT_ACCESS |
Standard - infrequent access object class. |
GLACIER_INSTANT_RETRIEVAL |
Glacier - instant retrieval object class. |
INTELLIGENT_TIERING |
Objects in the S3 Intelligent-Tiering access tiers. |
AmazonS3BucketRegex
Amazon S3 bucket regex.
Fields | |
---|---|
aws_ |
The AWS account regex. |
bucket_ |
Optional. Regex to test the bucket name against. If empty, all buckets match. |
AnalyzeDataSourceRiskDetails
Result of a risk analysis operation request.
Fields | |
---|---|
requested_ |
Privacy metric to compute. |
requested_ |
Input dataset to compute metrics over. |
requested_ |
The configuration used for this job. |
Union field result . Values associated with this metric. result can be only one of the following: |
|
numerical_ |
Numerical stats result |
categorical_ |
Categorical stats result |
k_ |
K-anonymity result |
l_ |
L-diversity result |
k_ |
K-map result |
delta_ |
Delta-presence result |
CategoricalStatsResult
Result of the categorical stats computation.
Fields | |
---|---|
value_ |
Histogram of value frequencies in the column. |
CategoricalStatsHistogramBucket
Histogram of value frequencies in the column.
Fields | |
---|---|
value_ |
Lower bound on the value frequency of the values in this bucket. |
value_ |
Upper bound on the value frequency of the values in this bucket. |
bucket_ |
Total number of values in this bucket. |
bucket_ |
Sample of value frequencies in this bucket. The total number of values returned per bucket is capped at 20. |
bucket_ |
Total number of distinct values in this bucket. |
DeltaPresenceEstimationResult
Result of the δ-presence computation. Note that these results are an estimation, not exact values.
Fields | |
---|---|
delta_ |
The intervals [min_probability, max_probability) do not overlap. If a value doesn't correspond to any such interval, the associated frequency is zero. For example, the following records: {min_probability: 0, max_probability: 0.1, frequency: 17} {min_probability: 0.2, max_probability: 0.3, frequency: 42} {min_probability: 0.3, max_probability: 0.4, frequency: 99} mean that there are no records with an estimated probability in [0.1, 0.2) or greater than or equal to 0.4. |
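Because the intervals are half-open and non-overlapping, looking up the frequency for a given probability is a simple scan. A minimal Python sketch (the helper name and dict layout are illustrative, not part of the API):

```python
def frequency_for(probability, buckets):
    # Buckets use half-open intervals [min_probability, max_probability),
    # so any probability outside every interval has frequency zero.
    for b in buckets:
        if b["min_probability"] <= probability < b["max_probability"]:
            return b["frequency"]
    return 0

# The example records from the description above:
buckets = [
    {"min_probability": 0.0, "max_probability": 0.1, "frequency": 17},
    {"min_probability": 0.2, "max_probability": 0.3, "frequency": 42},
    {"min_probability": 0.3, "max_probability": 0.4, "frequency": 99},
]
```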
DeltaPresenceEstimationHistogramBucket
A DeltaPresenceEstimationHistogramBucket message with the following values: min_probability: 0.1 max_probability: 0.2 frequency: 42 means that there are 42 records for which δ is in [0.1, 0.2). An important particular case is when min_probability = max_probability = 1: then, every individual who shares this quasi-identifier combination is in the dataset.
Fields | |
---|---|
min_ |
Between 0 and 1. |
max_ |
Always greater than or equal to min_probability. |
bucket_ |
Number of records within these probability bounds. |
bucket_ |
Sample of quasi-identifier tuple values in this bucket. The total number of classes returned per bucket is capped at 20. |
bucket_ |
Total number of distinct quasi-identifier tuple values in this bucket. |
DeltaPresenceEstimationQuasiIdValues
A tuple of values for the quasi-identifier columns.
Fields | |
---|---|
quasi_ |
The quasi-identifier values. |
estimated_ |
The estimated probability that a given individual sharing these quasi-identifier values is in the dataset. This value, typically called δ, is the ratio between the number of records in the dataset with these quasi-identifier values, and the total number of individuals (inside and outside the dataset) with these quasi-identifier values. For example, if there are 15 individuals in the dataset who share the same quasi-identifier values, and an estimated 100 people in the entire population with these values, then δ is 0.15. |
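The worked example above (15 matching records in the dataset, an estimated 100 matching individuals in the population) reduces to a single ratio. A sketch, with an illustrative function name:

```python
def estimated_delta(records_in_dataset, population_with_values):
    # δ = records in the dataset with these quasi-identifier values,
    # divided by all individuals (inside and outside the dataset) with them.
    return records_in_dataset / population_with_values
```

With the numbers from the example, estimated_delta(15, 100) gives 0.15.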
KAnonymityResult
Result of the k-anonymity computation.
Fields | |
---|---|
equivalence_ |
Histogram of k-anonymity equivalence classes. |
KAnonymityEquivalenceClass
The set of columns' values that define a k-anonymity equivalence class.
Fields | |
---|---|
quasi_ |
Set of values defining the equivalence class. One value per quasi-identifier column in the original KAnonymity metric message. The order is always the same as the original request. |
equivalence_ |
Size of the equivalence class, for example number of rows with the above set of values. |
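Conceptually, an equivalence class groups rows that share the same quasi-identifier tuple, and its size is the row count of that group. A minimal sketch (the column and row data are made up for illustration):

```python
from collections import Counter

def equivalence_classes(rows, quasi_id_columns):
    # Group rows by their quasi-identifier tuple; each group is one
    # equivalence class, and the count is its size (the "k" of that class).
    return Counter(tuple(row[c] for c in quasi_id_columns) for row in rows)

rows = [
    {"zip": "94110", "age": 30, "name": "a"},
    {"zip": "94110", "age": 30, "name": "b"},
    {"zip": "94103", "age": 41, "name": "c"},
]
classes = equivalence_classes(rows, ["zip", "age"])
# The smallest class size is the dataset's k-anonymity.
```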
KAnonymityHistogramBucket
Histogram of k-anonymity equivalence classes.
Fields | |
---|---|
equivalence_ |
Lower bound on the size of the equivalence classes in this bucket. |
equivalence_ |
Upper bound on the size of the equivalence classes in this bucket. |
bucket_ |
Total number of equivalence classes in this bucket. |
bucket_ |
Sample of equivalence classes in this bucket. The total number of classes returned per bucket is capped at 20. |
bucket_ |
Total number of distinct equivalence classes in this bucket. |
KMapEstimationResult
Result of the reidentifiability analysis. Note that these results are an estimation, not exact values.
Fields | |
---|---|
k_ |
The intervals [min_anonymity, max_anonymity] do not overlap. If a value doesn't correspond to any such interval, the associated frequency is zero. For example, the following records: {min_anonymity: 1, max_anonymity: 1, frequency: 17} {min_anonymity: 2, max_anonymity: 3, frequency: 42} {min_anonymity: 5, max_anonymity: 10, frequency: 99} mean that there are no records with an estimated anonymity of 4, or larger than 10. |
KMapEstimationHistogramBucket
A KMapEstimationHistogramBucket message with the following values: min_anonymity: 3 max_anonymity: 5 frequency: 42 means that there are 42 records whose quasi-identifier values correspond to 3, 4, or 5 people in the underlying population. An important particular case is when min_anonymity = max_anonymity = 1: the frequency field then corresponds to the number of uniquely identifiable records.
Fields | |
---|---|
min_ |
Always positive. |
max_ |
Always greater than or equal to min_anonymity. |
bucket_ |
Number of records within these anonymity bounds. |
bucket_ |
Sample of quasi-identifier tuple values in this bucket. The total number of classes returned per bucket is capped at 20. |
bucket_ |
Total number of distinct quasi-identifier tuple values in this bucket. |
KMapEstimationQuasiIdValues
A tuple of values for the quasi-identifier columns.
Fields | |
---|---|
quasi_ |
The quasi-identifier values. |
estimated_ |
The estimated anonymity for these quasi-identifier values. |
LDiversityResult
Result of the l-diversity computation.
Fields | |
---|---|
sensitive_ |
Histogram of l-diversity equivalence class sensitive value frequencies. |
LDiversityEquivalenceClass
The set of columns' values that share the same l-diversity value.
Fields | |
---|---|
quasi_ |
Quasi-identifier values defining the k-anonymity equivalence class. The order is always the same as the original request. |
equivalence_ |
Size of the k-anonymity equivalence class. |
num_ |
Number of distinct sensitive values in this equivalence class. |
top_ |
Estimated frequencies of top sensitive values. |
LDiversityHistogramBucket
Histogram of l-diversity equivalence class sensitive value frequencies.
Fields | |
---|---|
sensitive_ |
Lower bound on the sensitive value frequencies of the equivalence classes in this bucket. |
sensitive_ |
Upper bound on the sensitive value frequencies of the equivalence classes in this bucket. |
bucket_ |
Total number of equivalence classes in this bucket. |
bucket_ |
Sample of equivalence classes in this bucket. The total number of classes returned per bucket is capped at 20. |
bucket_ |
Total number of distinct equivalence classes in this bucket. |
NumericalStatsResult
Result of the numerical stats computation.
Fields | |
---|---|
min_ |
Minimum value appearing in the column. |
max_ |
Maximum value appearing in the column. |
quantile_ |
List of 99 values that partition the set of field values into 100 equal sized buckets. |
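One way such partition boundaries can be approximated is the nearest-rank method on sorted values; the actual computation Cloud DLP performs may differ, so treat this as an illustration only:

```python
def quantile_boundaries(values, buckets=100):
    # Return buckets - 1 boundary values that split the sorted values
    # into `buckets` roughly equal-sized groups (99 values for 100 buckets).
    s = sorted(values)
    n = len(s)
    return [s[(n * i) // buckets] for i in range(1, buckets)]
```

For instance, quantile_boundaries(range(100), buckets=4) returns [25, 50, 75], splitting 100 values into 4 equal-sized groups.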
RequestedRiskAnalysisOptions
Risk analysis options.
Fields | |
---|---|
job_ |
The job config for the risk job. |
AwsAccount
AWS account.
Fields | |
---|---|
account_ |
Required. AWS account ID. |
AwsAccountRegex
AWS account regex.
Fields | |
---|---|
account_ |
Optional. Regex to test the AWS account ID against. If empty, all accounts match. |
BigQueryDiscoveryTarget
Target used to match against for discovery with BigQuery tables.
Fields | |
---|---|
filter |
Required. The tables the discovery cadence applies to. The first target with a matching filter will be the one to apply to a table. |
conditions |
In addition to matching the filter, these conditions must be true before a profile is generated. |
Union field frequency . The generation rule includes the logic on how frequently to update the data profiles. If not specified, discovery will re-run and update no more than once a month if new columns appear in the table. frequency can be only one of the following: |
|
cadence |
How often and when to update profiles. New tables that match both the filter and conditions are scanned as quickly as possible depending on system capacity. |
disabled |
Tables that match this filter will not have profiles created. |
BigQueryField
Message defining a field of a BigQuery table.
Fields | |
---|---|
table |
Source table of the field. |
field |
Designated field in the BigQuery table. |
BigQueryKey
Row key for identifying a record in BigQuery table.
Fields | |
---|---|
table_ |
Complete BigQuery table reference. |
row_ |
Row number inferred at the time the table was scanned. This value is nondeterministic, cannot be queried, and may be null for inspection jobs. To locate findings within a table, specify |
BigQueryOptions
Options defining BigQuery table and row identifiers.
Fields | |
---|---|
table_ |
Complete BigQuery table reference. |
identifying_ |
Table fields that may uniquely identify a row within the table. When |
rows_ |
Max number of rows to scan. If the table has more rows than this value, the rest of the rows are omitted. If not set, or if set to 0, all rows will be scanned. Only one of rows_limit and rows_limit_percent can be specified. Cannot be used in conjunction with TimespanConfig. |
rows_ |
Max percentage of rows to scan. The rest are omitted. The number of rows scanned is rounded down. Must be between 0 and 100, inclusively. Both 0 and 100 means no limit. Defaults to 0. Only one of rows_limit and rows_limit_percent can be specified. Cannot be used in conjunction with TimespanConfig. Caution: A known issue is causing the |
sample_ |
How to sample the data. |
excluded_ |
References to fields excluded from scanning. This allows you to skip inspection of entire columns which you know have no findings. When inspecting a table, we recommend that you inspect all columns. Otherwise, findings might be affected because hints from excluded columns will not be used. |
included_ |
Limit scanning only to these fields. When inspecting a table, we recommend that you inspect all columns. Otherwise, findings might be affected because hints from excluded columns will not be used. |
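As a sketch, the sampling constraints described above might look like this in a job configuration (the project, dataset, and table names are placeholders; rows_limit and rows_limit_percent are mutually exclusive):

```python
# Illustrative BigQueryOptions fragment, expressed as a plain dict.
big_query_options = {
    "table_reference": {
        "project_id": "my-project",   # placeholder
        "dataset_id": "my_dataset",   # placeholder
        "table_id": "my_table",       # placeholder
    },
    # Scan at most 10,000 rows; do not also set rows_limit_percent,
    # and do not combine either limit with a TimespanConfig.
    "rows_limit": 10000,
    # The sampling method is only meaningful when a rows limit is set.
    "sample_method": "RANDOM_START",
}
```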
SampleMethod
How to sample rows if not all rows are scanned. Meaningful only when used in conjunction with either rows_limit or rows_limit_percent. If not specified, rows are scanned in the order BigQuery reads them.
Enums | |
---|---|
SAMPLE_METHOD_UNSPECIFIED |
No sampling. |
TOP |
Scan groups of rows in the order BigQuery provides (default). Multiple groups of rows may be scanned in parallel, so results may not appear in the same order the rows are read. |
RANDOM_START |
Randomly pick groups of rows to scan. |
BigQueryRegex
A pattern to match against one or more tables, datasets, or projects that contain BigQuery tables. At least one pattern must be specified. Regular expressions use RE2 syntax; a guide can be found under the google/re2 repository on GitHub.
Fields | |
---|---|
project_ |
For organizations, if unset, will match all projects. Has no effect for data profile configurations created within a project. |
dataset_ |
If unset, this property matches all datasets. |
table_ |
If unset, this property matches all tables. |
BigQueryRegexes
A collection of regular expressions to determine what tables to match against.
Fields | |
---|---|
patterns[] |
A single BigQuery regular expression pattern to match against one or more tables, datasets, or projects that contain BigQuery tables. |
BigQuerySchemaModification
Attributes evaluated to determine if a schema has been modified. New values may be added at a later time.
Enums | |
---|---|
SCHEMA_MODIFICATION_UNSPECIFIED |
Unused |
SCHEMA_NEW_COLUMNS |
Profiles should be regenerated when new columns are added to the table. Default. |
SCHEMA_REMOVED_COLUMNS |
Profiles should be regenerated when columns are removed from the table. |
BigQueryTable
Message defining the location of a BigQuery table. A table is uniquely identified by its project_id, dataset_id, and table_name. Within a query a table is often referenced with a string in the format of: <project_id>:<dataset_id>.<table_id> or <project_id>.<dataset_id>.<table_id>.
Fields | |
---|---|
project_ |
The Google Cloud project ID of the project containing the table. If omitted, project ID is inferred from the API call. |
dataset_ |
Dataset ID of the table. |
table_ |
Name of the table. |
BigQueryTableCollection
Specifies a collection of BigQuery tables. Used for Discovery.
Fields | |
---|---|
Union field pattern . Maximum of 100 entries. The first filter containing a pattern that matches a table will be used. pattern can be only one of the following: |
|
include_ |
A collection of regular expressions to match a BigQuery table against. |
BigQueryTableModification
Attributes evaluated to determine if a table has been modified. New values may be added at a later time.
Enums | |
---|---|
TABLE_MODIFICATION_UNSPECIFIED |
Unused. |
TABLE_MODIFIED_TIMESTAMP |
A table will be considered modified when the last_modified_time from BigQuery has been updated. |
BigQueryTableType
Over time new types may be added. Currently VIEW, MATERIALIZED_VIEW, and non-BigLake external tables are not supported.
Enums | |
---|---|
BIG_QUERY_TABLE_TYPE_UNSPECIFIED |
Unused. |
BIG_QUERY_TABLE_TYPE_TABLE |
A normal BigQuery table. |
BIG_QUERY_TABLE_TYPE_EXTERNAL_BIG_LAKE |
A table that references data stored in Cloud Storage. |
BIG_QUERY_TABLE_TYPE_SNAPSHOT |
A snapshot of a BigQuery table. |
BigQueryTableTypeCollection
Over time new types may be added. Currently VIEW, MATERIALIZED_VIEW, and non-BigLake external tables are not supported.
Enums | |
---|---|
BIG_QUERY_COLLECTION_UNSPECIFIED |
Unused. |
BIG_QUERY_COLLECTION_ALL_TYPES |
Automatically generate profiles for all tables, even if the table type is not yet fully supported for analysis. Profiles for unsupported tables will be generated with errors to indicate their partial support. When full support is added, the tables will automatically be profiled during the next scheduled run. |
BIG_QUERY_COLLECTION_ONLY_SUPPORTED_TYPES |
Only those types fully supported will be profiled. Will expand automatically as Cloud DLP adds support for new table types. Unsupported table types will not have partial profiles generated. |
BigQueryTableTypes
The types of BigQuery tables supported by Cloud DLP.
Fields | |
---|---|
types[] |
A set of BigQuery table types. |
BoundingBox
Bounding box encompassing detected text within an image.
Fields | |
---|---|
top |
Top coordinate of the bounding box. (0,0) is upper left. |
left |
Left coordinate of the bounding box. (0,0) is upper left. |
width |
Width of the bounding box in pixels. |
height |
Height of the bounding box in pixels. |
BucketingConfig
Generalization function that buckets values based on ranges. The ranges and replacement values are dynamically provided by the user for custom behavior, such as 1-30 -> LOW, 31-65 -> MEDIUM, 66-100 -> HIGH.
This can be used on data of type: number, long, string, timestamp.
If the bound Value type differs from the type of data being transformed, we will first attempt to convert the data to the type of the bound before comparing. See https://cloud.google.com/sensitive-data-protection/docs/concepts-bucketing to learn more.
Fields | |
---|---|
buckets[] |
Set of buckets. Ranges must be non-overlapping. |
Bucket
Bucket is represented as a range, along with replacement values.
Fields | |
---|---|
min |
Lower bound of the range, inclusive. Type should be the same as max if used. |
max |
Upper bound of the range, exclusive; type must match min. |
replacement_ |
Required. Replacement value for this bucket. |
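The generalization described above (for example 1-30 -> LOW, 31-65 -> MEDIUM, 66-100 -> HIGH) can be sketched as follows; note the inclusive-min / exclusive-max semantics of each bucket, and that the function name is illustrative:

```python
def bucketize(value, buckets):
    # Each bucket is (min_inclusive, max_exclusive, replacement_value),
    # mirroring Bucket's inclusive lower and exclusive upper bound.
    for lo, hi, replacement in buckets:
        if lo <= value < hi:
            return replacement
    return value  # values outside every range pass through unchanged

# 1-30 -> LOW, 31-65 -> MEDIUM, 66-100 -> HIGH (upper bounds are exclusive,
# so 31 and 66 start the next bucket).
risk_buckets = [(1, 31, "LOW"), (31, 66, "MEDIUM"), (66, 101, "HIGH")]
```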
ByteContentItem
Container for bytes to inspect or redact.
Fields | |
---|---|
type |
The type of data stored in the bytes string. Default will be TEXT_UTF8. |
data |
Content data to inspect or redact. |
BytesType
The type of data being sent for inspection. To learn more, see Supported file types.
Enums | |
---|---|
BYTES_TYPE_UNSPECIFIED |
Unused |
IMAGE |
Any image type. |
IMAGE_JPEG |
jpeg |
IMAGE_BMP |
bmp |
IMAGE_PNG |
png |
IMAGE_SVG |
svg |
TEXT_UTF8 |
plain text |
WORD_DOCUMENT |
docx, docm, dotx, dotm |
PDF |
|
POWERPOINT_DOCUMENT |
pptx, pptm, potx, potm, pot |
EXCEL_DOCUMENT |
xlsx, xlsm, xltx, xltm |
AVRO |
avro |
CSV |
csv |
TSV |
tsv |
AUDIO |
Audio file types. Only used for profiling. |
VIDEO |
Video file types. Only used for profiling. |
EXECUTABLE |
Executable file types. Only used for profiling. |
CancelDlpJobRequest
The request message for canceling a DLP job.
Fields | |
---|---|
name |
Required. The name of the DlpJob resource to be cancelled. Authorization requires the following IAM permission on the specified resource
|
CharacterMaskConfig
Partially mask a string by replacing a given number of characters with a fixed character. Masking can start from the beginning or end of the string. This can be used on data of any type (numbers, longs, and so on), and when de-identifying structured data we'll attempt to preserve the original data's type. (This allows you to take a long like 123 and modify it to a string like **3.)
Fields | |
---|---|
masking_ |
Character to use to mask the sensitive values—for example, |
number_ |
Number of characters to mask. If not set, all matching chars will be masked. Skipped characters do not count towards this tally. If
The resulting de-identified string is |
reverse_ |
Mask characters in reverse order. For example, if |
characters_ |
When masking a string, items in this list will be skipped when replacing characters. For example, if the input string is |
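The masking options above interact (skipped characters don't count toward the number to mask, and reverse ordering masks from the end). A pure-Python sketch of those semantics, with illustrative parameter names:

```python
def mask(value, masking_char="*", number_to_mask=0, reverse=False, ignore=""):
    # Replace characters with masking_char, skipping any character found in
    # `ignore`; skipped characters do not count toward number_to_mask.
    chars = list(value)
    order = range(len(chars) - 1, -1, -1) if reverse else range(len(chars))
    masked = 0
    for i in order:
        if chars[i] in ignore:
            continue
        if number_to_mask and masked >= number_to_mask:
            break
        chars[i] = masking_char
        masked += 1
    return "".join(chars)
```

For example, mask("1234-5678", number_to_mask=4, reverse=True, ignore="-") masks the last four digits while leaving the hyphen in place, yielding "1234-****".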
CharsToIgnore
Characters to skip when doing deidentification of a value. These will be left alone and skipped.
Fields | |
---|---|
Union field characters . Type of characters to skip. characters can be only one of the following: |
|
characters_ |
Characters to not transform when masking. |
common_ |
Common characters to not transform when masking. Useful to avoid removing punctuation. |
CommonCharsToIgnore
Convenience enum for indicating common characters to not transform.
Enums | |
---|---|
COMMON_CHARS_TO_IGNORE_UNSPECIFIED |
Unused. |
NUMERIC |
0-9 |
ALPHA_UPPER_CASE |
A-Z |
ALPHA_LOWER_CASE |
a-z |
PUNCTUATION |
US Punctuation, one of !"#$%&'()*+,-./:;<=>?@[]^_`{|}~ |
WHITESPACE |
Whitespace character, one of [ \t\n\x0B\f\r] |
CloudSqlDiscoveryTarget
Target used to match against for discovery with Cloud SQL tables.
Fields | |
---|---|
filter |
Required. The tables the discovery cadence applies to. The first target with a matching filter will be the one to apply to a table. |
conditions |
In addition to matching the filter, these conditions must be true before a profile is generated. |
Union field cadence . Type of schedule. cadence can be only one of the following: |
|
generation_ |
How often and when to update profiles. New tables that match both the filter and conditions are scanned as quickly as possible depending on system capacity. |
disabled |
Disable profiling for database resources that match this filter. |
CloudSqlIamCredential
This type has no fields.
Use IAM authentication to connect. This requires the Cloud SQL IAM feature to be enabled on the instance, which is not the default for Cloud SQL. See https://cloud.google.com/sql/docs/postgres/authentication and https://cloud.google.com/sql/docs/mysql/authentication.
CloudSqlProperties
Cloud SQL connection properties.
Fields | |
---|---|
connection_ |
Optional. Immutable. The Cloud SQL instance for which the connection is defined. Only one connection per instance is allowed. This can only be set at creation time, and cannot be updated. It is an error to use a connection_name from different project or region than the one that holds the connection. For example, a Connection resource for Cloud SQL connection_name |
max_ |
Required. The DLP API will limit its connections to max_connections. Must be 2 or greater. |
database_ |
Required. The database engine used by the Cloud SQL instance that this connection configures. |
Union field credential . How to authenticate to the instance. credential can be only one of the following: |
|
username_ |
A username and password stored in Secret Manager. |
cloud_ |
Built-in IAM authentication (must be configured in Cloud SQL). |
DatabaseEngine
Database engine of a Cloud SQL instance. New values may be added over time.
Enums | |
---|---|
DATABASE_ENGINE_UNKNOWN |
An engine that is not currently supported by Sensitive Data Protection. |
DATABASE_ENGINE_MYSQL |
Cloud SQL for MySQL instance. |
DATABASE_ENGINE_POSTGRES |
Cloud SQL for PostgreSQL instance. |
CloudStorageDiscoveryTarget
Target used to match against for discovery with Cloud Storage buckets.
Fields | |
---|---|
filter |
Required. The buckets the generation_cadence applies to. The first target with a matching filter will be the one to apply to a bucket. |
conditions |
Optional. In addition to matching the filter, these conditions must be true before a profile is generated. |
Union field cadence . How often and when to update profiles. cadence can be only one of the following: |
|
generation_ |
Optional. How often and when to update profiles. New buckets that match both the filter and conditions are scanned as quickly as possible depending on system capacity. |
disabled |
Optional. Disable profiling for buckets that match this filter. |
CloudStorageFileSet
Message representing a set of files in Cloud Storage.
Fields | |
---|---|
url |
The url, in the format |
CloudStorageOptions
Options defining a file or a set of files within a Cloud Storage bucket.
Fields | |
---|---|
file_ |
The set of one or more files to scan. |
bytes_ |
Max number of bytes to scan from a file. If a scanned file's size is bigger than this value then the rest of the bytes are omitted. Only one of |
bytes_ |
Max percentage of bytes to scan from a file. The rest are omitted. The number of bytes scanned is rounded down. Must be between 0 and 100, inclusively. Both 0 and 100 means no limit. Defaults to 0. Only one of bytes_limit_per_file and bytes_limit_per_file_percent can be specified. This field can't be set if de-identification is requested. For certain file types, setting this field has no effect. For more information, see Limits on bytes scanned per file. |
file_ |
List of file type groups to include in the scan. If empty, all files are scanned and available data format processors are applied. In addition, the binary content of the selected files is always scanned as well. Images are scanned only as binary if the specified region does not support image inspection and no file_types were specified. Image inspection is restricted to 'global', 'us', 'asia', and 'europe'. |
sample_ |
How to sample the data. |
files_ |
Limits the number of files to scan to this percentage of the input FileSet. Number of files scanned is rounded down. Must be between 0 and 100, inclusively. Both 0 and 100 means no limit. Defaults to 0. |
FileSet
Set of files to scan.
Fields | |
---|---|
url |
The Cloud Storage url of the file(s) to scan, in the format If the url ends in a trailing slash, the bucket or directory represented by the url will be scanned non-recursively (content in sub-directories will not be scanned). This means that Exactly one of |
regex_ |
The regex-filtered set of files to scan. Exactly one of |
SampleMethod
How to sample bytes if not all bytes are scanned. Meaningful only when used in conjunction with bytes_limit_per_file. If not specified, scanning would start from the top.
Enums | |
---|---|
SAMPLE_METHOD_UNSPECIFIED |
No sampling. |
TOP |
Scan from the top (default). |
RANDOM_START |
For each file larger than bytes_limit_per_file, randomly pick the offset to start scanning. The scanned bytes are contiguous. |
CloudStoragePath
Message representing a single file or path in Cloud Storage.
Fields | |
---|---|
path |
A URL representing a file or path (no wildcards) in Cloud Storage. Example: |
CloudStorageRegex
A pattern to match against one or more file stores. At least one pattern must be specified. Regular expressions use RE2 syntax; a guide can be found under the google/re2 repository on GitHub.
Fields | |
---|---|
project_ |
Optional. For organizations, if unset, will match all projects. |
bucket_ |
Optional. Regex to test the bucket name against. If empty, all buckets match. Example: "marketing2021" or "(marketing)\d{4}" will both match the bucket gs://marketing2021 |
CloudStorageRegexFileSet
Message representing a set of files in a Cloud Storage bucket. Regular expressions are used to allow fine-grained control over which files in the bucket to include.
Included files are those that match at least one item in include_regex and do not match any items in exclude_regex. Note that a file that matches items from both lists will not be included. For a match to occur, the entire file path (i.e., everything in the url after the bucket name) must match the regular expression.
For example, given the input {bucket_name: "mybucket", include_regex: ["directory1/.*"], exclude_regex: ["directory1/excluded.*"]}:

- gs://mybucket/directory1/myfile will be included
- gs://mybucket/directory1/directory2/myfile will be included (.* matches across /)
- gs://mybucket/directory0/directory1/myfile will not be included (the full path doesn't match any items in include_regex)
- gs://mybucket/directory1/excludedfile will not be included (the path matches an item in exclude_regex)

If include_regex is left empty, it will match all files by default (this is equivalent to setting include_regex: [".*"]).
Some other common use cases:
- {bucket_name: "mybucket", exclude_regex: [".*\.pdf"]} will include all files in mybucket except for .pdf files
- {bucket_name: "mybucket", include_regex: ["directory/[^/]+"]} will include all files directly under gs://mybucket/directory/, without matching across /
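A sketch of those inclusion rules in Python, using the stdlib re module as a stand-in for RE2 (close but not identical in syntax) and an illustrative helper name:

```python
import re

def included(path, include_regex, exclude_regex):
    # `path` is everything in the url after the bucket name; the entire
    # path must match (hence fullmatch). Exclusions win over inclusions.
    include_regex = include_regex or [".*"]  # empty include matches all files
    if any(re.fullmatch(p, path) for p in exclude_regex):
        return False
    return any(re.fullmatch(p, path) for p in include_regex)
```

With include_regex ["directory1/.*"] and exclude_regex ["directory1/excluded.*"], this reproduces the four example outcomes above.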
Fields | |
---|---|
bucket_ |
Required. The name of a Cloud Storage bucket. |
include_ |
A list of regular expressions matching file paths to include. All files in the bucket that match at least one of these regular expressions will be included in the set of files, except for those that also match an item in Regular expressions use RE2 syntax; a guide can be found under the google/re2 repository on GitHub. |
exclude_ |
A list of regular expressions matching file paths to exclude. All files in the bucket that match at least one of these regular expressions will be excluded from the scan. Regular expressions use RE2 syntax; a guide can be found under the google/re2 repository on GitHub. |
CloudStorageResourceReference
Identifies a single Cloud Storage bucket.
Fields | |
---|---|
bucket_ |
Required. The bucket to scan. |
project_ |
Required. If within a project-level config, then this must match the config's project id. |
Color
Represents a color in the RGB color space.
Fields | |
---|---|
red |
The amount of red in the color as a value in the interval [0, 1]. |
green |
The amount of green in the color as a value in the interval [0, 1]. |
blue |
The amount of blue in the color as a value in the interval [0, 1]. |
ColumnDataProfile
The profile for a scanned column within a table.
Fields | |
---|---|
name |
The name of the profile. |
profile_ |
Success or error status from the most recent profile generation attempt. May be empty if the profile is still being generated. |
state |
State of a profile. |
profile_ |
The last time the profile was generated. |
table_ |
The resource name of the table data profile. |
table_ |
The resource name of the resource this column is within. |
dataset_ |
The Google Cloud project ID that owns the profiled resource. |
dataset_ |
If supported, the location where the dataset's data is stored. See https://cloud.google.com/bigquery/docs/locations for supported BigQuery locations. |
dataset_ |
The BigQuery dataset ID, if the resource profiled is a BigQuery table. |
table_ |
The table ID. |
column |
The name of the column. |
sensitivity_ |
The sensitivity of this column. |
data_ |
The data risk level for this column. |
column_ |
If it's been determined this column can be identified as a single type, this will be set. Otherwise the column either has unidentifiable content or mixed types. |
other_ |
Other types found within this column. List will be unordered. |
estimated_ |
Approximate percentage of entries being null in the column. |
estimated_ |
Approximate uniqueness of the column. |
free_ |
The likelihood that this column contains free-form text. A value close to 1 may indicate the column is likely to contain free-form or natural language text. Range in 0-1. |
column_ |
The data type of a given column. |
policy_ |
Indicates if a policy tag has been applied to the column. |
ColumnDataType
Data types of the data in a column. Types may be added over time.
Enums | |
---|---|
COLUMN_DATA_TYPE_UNSPECIFIED |
Invalid type. |
TYPE_INT64 |
Encoded as a string in decimal format. |
TYPE_BOOL |
Encoded as a boolean "false" or "true". |
TYPE_FLOAT64 |
Encoded as a number, or string "NaN", "Infinity" or "-Infinity". |
TYPE_STRING |
Encoded as a string value. |
TYPE_BYTES |
Encoded as a base64 string per RFC 4648, section 4. |
TYPE_TIMESTAMP |
Encoded as an RFC 3339 timestamp with mandatory "Z" time zone string: 1985-04-12T23:20:50.52Z |
TYPE_DATE |
Encoded as RFC 3339 full-date format string: 1985-04-12 |
TYPE_TIME |
Encoded as RFC 3339 partial-time format string: 23:20:50.52 |
TYPE_DATETIME |
Encoded as RFC 3339 full-date "T" partial-time: 1985-04-12T23:20:50.52 |
TYPE_GEOGRAPHY |
Encoded as WKT |
TYPE_NUMERIC |
Encoded as a decimal string. |
TYPE_RECORD |
Container of ordered fields, each with a type and field name. |
TYPE_BIGNUMERIC |
Decimal type. |
TYPE_JSON |
Json type. |
TYPE_INTERVAL |
Interval type. |
TYPE_RANGE_DATE |
Range<Date> type. |
TYPE_RANGE_DATETIME |
Range<Datetime> type. |
TYPE_RANGE_TIMESTAMP |
Range<Timestamp> type. |
ColumnPolicyState
The possible policy states for a column.
Enums | |
---|---|
COLUMN_POLICY_STATE_UNSPECIFIED |
No policy tags. |
COLUMN_POLICY_TAGGED |
Column has policy tag applied. |
State
Possible states of a profile. New items may be added.
Enums | |
---|---|
STATE_UNSPECIFIED |
Unused. |
RUNNING |
The profile is currently running. Once a profile has finished it will transition to DONE. |
DONE |
The profile is no longer generating. If profile_status.status.code is 0, the profile succeeded, otherwise, it failed. |
Connection
A data connection to allow the DLP API to profile data in locations that require additional configuration.
Fields | |
---|---|
name |
Output only. Name of the connection: |
state |
Required. The connection's state in its lifecycle. |
errors[] |
Output only. Set if status == ERROR, to provide additional details. Will store the last 10 errors sorted with the most recent first. |
Union field properties . Type of connection. properties can be only one of the following: |
|
cloud_ |
Connect to a Cloud SQL instance. |
ConnectionState
State of the connection. New values may be added over time.
Enums | |
---|---|
CONNECTION_STATE_UNSPECIFIED |
Unused |
MISSING_CREDENTIALS |
The DLP API automatically created this connection during an initial scan, and it is awaiting full configuration by a user. |
AVAILABLE |
A configured connection that has not encountered any errors. |
ERROR |
A configured connection that encountered errors during its last use. It will not be used again until it is set to AVAILABLE. If the resolution requires external action, then the client must send a request to set the status to AVAILABLE when the connection is ready for use. If the resolution doesn't require external action, then any changes to the connection properties will automatically mark it as AVAILABLE. |
Container
Represents a container that may contain DLP findings. Examples of a container include a file, table, or database record.
Fields | |
---|---|
type |
Container type, for example BigQuery or Cloud Storage. |
project_ |
Project where the finding was found. Can be different from the project that owns the finding. |
full_ |
A string representation of the full container name. Examples: - BigQuery: 'Project:DataSetId.TableId' - Cloud Storage: 'gs://Bucket/folders/filename.txt' |
root_ |
The root of the container. Examples:
|
relative_ |
The rest of the path after the root. Examples:
|
update_ |
Findings container modification timestamp, if applicable. For Cloud Storage, this field contains the last file modification timestamp. For a BigQuery table, this field contains the last_modified_time property. For Datastore, this field isn't populated. |
version |
Findings container version, if available ("generation" for Cloud Storage). |
ContentItem
Type of content to inspect.
Fields | |
---|---|
Union field data_item . Data of the item either in the byte array or UTF-8 string form, or table. data_item can be only one of the following: |
|
value |
String data to inspect or redact. |
table |
Structured content for inspection. See https://cloud.google.com/sensitive-data-protection/docs/inspecting-text#inspecting_a_table to learn more. |
byte_ |
Content data to inspect or redact. Replaces |
ContentLocation
Precise location of the finding within a document, record, image, or metadata container.
Fields | |
---|---|
container_ |
Name of the container where the finding is located. The top level name is the source file name or table name. Names of some common storage containers are formatted as follows:
Nested names could be absent if the embedded object has no string identifier (for example, an image contained within a document). |
container_ |
Finding container modification timestamp, if applicable. For Cloud Storage, this field contains the last file modification timestamp. For a BigQuery table, this field contains the last_modified_time property. For Datastore, this field isn't populated. |
container_ |
Finding container version, if available ("generation" for Cloud Storage). |
Union field location . Type of the container within the file with location of the finding. location can be only one of the following: |
|
record_ |
Location within a row or record of a database table. |
image_ |
Location within an image's pixels. |
document_ |
Location data for document files. |
metadata_ |
Location within the metadata for inspected content. |
ContentOption
Deprecated and unused.
Enums | |
---|---|
CONTENT_UNSPECIFIED |
Includes entire content of a file or a data stream. |
CONTENT_TEXT |
Text content within the data, excluding any metadata. |
CONTENT_IMAGE |
Images found in the data. |
CreateConnectionRequest
Request message for CreateConnection.
Fields | |
---|---|
parent |
Required. Parent resource name. The format of this value varies depending on the scope of the request (project or organization):
Authorization requires the following IAM permission on the specified resource
|
connection |
Required. The connection resource. |
CreateDeidentifyTemplateRequest
Request message for CreateDeidentifyTemplate.
Fields | |
---|---|
parent |
Required. Parent resource name. The format of this value varies depending on the scope of the request (project or organization) and whether you have specified a processing location:
Authorization requires the following IAM permission on the specified resource
|
deidentify_ |
Required. The DeidentifyTemplate to create. |
template_ |
The template id can contain uppercase and lowercase letters, numbers, and hyphens; that is, it must match the regular expression: |
location_ |
Deprecated. This field has no effect. |
CreateDiscoveryConfigRequest
Request message for CreateDiscoveryConfig.
Fields | |
---|---|
parent |
Required. Parent resource name. The format of this value varies depending on the scope of the request (project or organization):
Authorization requires the following IAM permission on the specified resource
|
discovery_ |
Required. The DiscoveryConfig to create. |
config_ |
The config ID can contain uppercase and lowercase letters, numbers, and hyphens; that is, it must match the regular expression: |
CreateDlpJobRequest
Request message for CreateDlpJobRequest. Used to initiate long-running jobs such as calculating risk metrics or inspecting Google Cloud Storage.
Fields | |
---|---|
parent |
Required. Parent resource name. The format of this value varies depending on whether you have specified a processing location:
Authorization requires the following IAM permission on the specified resource
|
job_ |
The job id can contain uppercase and lowercase letters, numbers, and hyphens; that is, it must match the regular expression: |
location_ |
Deprecated. This field has no effect. |
Union field job . The configuration details for the specific type of job to run. job can be only one of the following: |
|
inspect_ |
An inspection job scans a storage repository for InfoTypes. |
risk_ |
A risk analysis job calculates re-identification risk metrics for a BigQuery table. |
CreateInspectTemplateRequest
Request message for CreateInspectTemplate.
Fields | |
---|---|
parent |
Required. Parent resource name. The format of this value varies depending on the scope of the request (project or organization) and whether you have specified a processing location:
Authorization requires the following IAM permission on the specified resource
|
inspect_ |
Required. The InspectTemplate to create. |
template_ |
The template id can contain uppercase and lowercase letters, numbers, and hyphens; that is, it must match the regular expression: |
location_ |
Deprecated. This field has no effect. |
CreateJobTriggerRequest
Request message for CreateJobTrigger.
Fields | |
---|---|
parent |
Required. Parent resource name. The format of this value varies depending on whether you have specified a processing location:
Authorization requires one or more of the following IAM permissions on the specified resource
|
job_ |
Required. The JobTrigger to create. |
trigger_ |
The trigger id can contain uppercase and lowercase letters, numbers, and hyphens; that is, it must match the regular expression: |
location_ |
Deprecated. This field has no effect. |
CreateStoredInfoTypeRequest
Request message for CreateStoredInfoType.
Fields | |
---|---|
parent |
Required. Parent resource name. The format of this value varies depending on the scope of the request (project or organization) and whether you have specified a processing location:
Authorization requires the following IAM permission on the specified resource
|
config |
Required. Configuration of the storedInfoType to create. |
stored_ |
The storedInfoType ID can contain uppercase and lowercase letters, numbers, and hyphens; that is, it must match the regular expression: |
location_ |
Deprecated. This field has no effect. |
CryptoDeterministicConfig
Pseudonymization method that generates deterministic encryption for the given input. Outputs a base64 encoded representation of the encrypted output. Uses AES-SIV based on the RFC https://tools.ietf.org/html/rfc5297.
Fields | |
---|---|
crypto_ |
The key used by the encryption function. For deterministic encryption using AES-SIV, the provided key is internally expanded to 64 bytes prior to use. |
surrogate_ |
The custom info type to annotate the surrogate with. This annotation will be applied to the surrogate by prefixing it with the name of the custom info type followed by the number of characters comprising the surrogate. The following scheme defines the format: {info type name}({surrogate character count}):{surrogate} For example, if the name of the custom info type is 'MY_TOKEN_INFO_TYPE' and the surrogate is 'abc', the full replacement value will be: 'MY_TOKEN_INFO_TYPE(3):abc' This annotation identifies the surrogate when inspecting content using the custom info type 'Surrogate'. This facilitates reversal of the surrogate when it occurs in free text. Note: For record transformations where the entire cell in a table is being transformed, surrogates are not mandatory. Surrogates are used to denote the location of the token and are necessary for re-identification in free-form text. In order for inspection to work properly, the name of this info type must not occur naturally anywhere in your data; otherwise, inspection may find a surrogate that does not correspond to an actual identifier.
Therefore, choose your custom info type name carefully after considering what your data looks like. One way to select a name that has a high chance of yielding reliable detection is to include one or more Unicode characters that are highly improbable to exist in your data. For example, assuming your data is entered from a regular ASCII keyboard, the symbol with the hex code point 29DD might be used like so: ⧝MY_TOKEN_TYPE. |
context |
A context may be used for higher security and to maintain referential integrity, such that the same identifier in two different contexts will be given a distinct surrogate. The context is appended to the plaintext value being encrypted. On decryption, the provided context is validated against the value used during encryption. If a context was provided during encryption, the same context must be provided during decryption as well. If the context is not set, the plaintext would be used as-is for encryption. If the context is set but:
plaintext would be used as is for encryption. Note that case (1) is expected when an |
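The surrogate annotation scheme described above ({info type name}({surrogate character count}):{surrogate}) can be sketched directly; `annotate_surrogate` and `parse_surrogate` below are hypothetical helpers, not part of the DLP client library:

```python
def annotate_surrogate(info_type_name: str, surrogate: str) -> str:
    """Prefix a surrogate with its custom info type name and character count.

    Format per the reference: {info type name}({surrogate character count}):{surrogate}
    """
    return f"{info_type_name}({len(surrogate)}):{surrogate}"


def parse_surrogate(annotated: str) -> tuple[str, str]:
    """Split an annotated surrogate back into (info type name, surrogate)."""
    prefix, _, surrogate = annotated.partition("):")
    name, _, count = prefix.partition("(")
    if len(surrogate) != int(count):
        raise ValueError("length annotation does not match surrogate")
    return name, surrogate
```

This mirrors the documented example: 'MY_TOKEN_INFO_TYPE' with surrogate 'abc' yields 'MY_TOKEN_INFO_TYPE(3):abc'.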
CryptoHashConfig
Pseudonymization method that generates surrogates via cryptographic hashing. Uses SHA-256. The key size must be either 32 or 64 bytes. Outputs a base64 encoded representation of the hashed output (for example, L7k0BHmF1ha5U3NfGykjro4xWi1MPVQPjhMAZbSV9mM=). Currently, only string and integer values can be hashed. See https://cloud.google.com/sensitive-data-protection/docs/pseudonymization to learn more.
Fields | |
---|---|
crypto_ |
The key used by the hash function. |
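The reference specifies SHA-256, a 32- or 64-byte key, and base64 output; whether the key is applied via an HMAC construction is an assumption in this sketch, which only illustrates the shape of the output:

```python
import base64
import hashlib
import hmac


def crypto_hash_surrogate(key: bytes, value: str) -> str:
    """Keyed SHA-256 hash of a string value, base64-encoded.

    Assumes an HMAC construction; the key must be 32 or 64 bytes per the docs.
    """
    if len(key) not in (32, 64):
        raise ValueError("key size must be 32 or 64 bytes")
    digest = hmac.new(key, value.encode("utf-8"), hashlib.sha256).digest()
    # A 32-byte digest base64-encodes to a 44-character string, matching the
    # example format shown above (e.g. L7k0BHmF...V9mM=).
    return base64.b64encode(digest).decode("ascii")
```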
CryptoKey
This is a data encryption key (DEK) (as opposed to a key encryption key (KEK) stored by Cloud Key Management Service (Cloud KMS)). When using Cloud KMS to wrap or unwrap a DEK, be sure to set an appropriate IAM policy on the KEK to ensure an attacker cannot unwrap the DEK.
Fields | |
---|---|
Union field source . Sources of crypto keys. source can be only one of the following: |
|
transient |
Transient crypto key |
unwrapped |
Unwrapped crypto key |
kms_ |
Key wrapped using Cloud KMS |
CryptoReplaceFfxFpeConfig
Replaces an identifier with a surrogate using Format Preserving Encryption (FPE) with the FFX mode of operation; however, when used in the ReidentifyContent
API method, it serves the opposite function by reversing the surrogate back into the original identifier. The identifier must be encoded as ASCII. For a given crypto key and context, the same identifier will be replaced with the same surrogate. Identifiers must be at least two characters long. In the case that the identifier is the empty string, it will be skipped. See https://cloud.google.com/sensitive-data-protection/docs/pseudonymization to learn more.
Note: We recommend using CryptoDeterministicConfig for all use cases which do not require preserving the input alphabet space and size, plus warrant referential integrity.
Fields | |
---|---|
crypto_ |
Required. The key used by the encryption algorithm. |
context |
The 'tweak', a context may be used for higher security since the same identifier in two different contexts won't be given the same surrogate. If the context is not set, a default tweak will be used. If the context is set but:
a default tweak will be used. Note that case (1) is expected when an
The tweak is constructed as a sequence of bytes in big endian byte order such that:
|
surrogate_ |
The custom infoType to annotate the surrogate with. This annotation will be applied to the surrogate by prefixing it with the name of the custom infoType followed by the number of characters comprising the surrogate. The following scheme defines the format: info_type_name(surrogate_character_count):surrogate For example, if the name of the custom infoType is 'MY_TOKEN_INFO_TYPE' and the surrogate is 'abc', the full replacement value will be: 'MY_TOKEN_INFO_TYPE(3):abc' This annotation identifies the surrogate when inspecting content using the custom infoType. In order for inspection to work properly, the name of this infoType must not occur naturally anywhere in your data; otherwise, inspection may find a surrogate that does not correspond to an actual identifier. Therefore, choose your custom infoType name carefully after considering what your data looks like. One way to select a name that has a high chance of yielding reliable detection is to include one or more Unicode characters that are highly improbable to exist in your data. For example, assuming your data is entered from a regular ASCII keyboard, the symbol with the hex code point 29DD might be used like so: ⧝MY_TOKEN_TYPE |
Union field alphabet . Choose an alphabet which the data being transformed will be made up of. alphabet can be only one of the following: |
|
common_ |
Common alphabets. |
custom_ |
This is supported by mapping these to the alphanumeric characters that the FFX mode natively supports. This happens before/after encryption/decryption. Each character listed must appear only once. Number of characters must be in the range [2, 95]. This must be encoded as ASCII. The order of characters does not matter. The full list of allowed characters is: |
radix |
The native way to select the alphabet. Must be in the range [2, 95]. |
FfxCommonNativeAlphabet
These are commonly used subsets of the alphabet that the FFX mode natively supports. In the algorithm, the alphabet is selected using the "radix". Therefore, each alphabet corresponds to a particular radix.
Enums | |
---|---|
FFX_COMMON_NATIVE_ALPHABET_UNSPECIFIED |
Unused. |
NUMERIC |
[0-9] (radix of 10) |
HEXADECIMAL |
[0-9A-F] (radix of 16) |
UPPER_CASE_ALPHA_NUMERIC |
[0-9A-Z] (radix of 36) |
ALPHA_NUMERIC |
[0-9A-Za-z] (radix of 62) |
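The correspondence between each common alphabet and its radix can be checked directly; the character sets below are spelled out from the ranges listed in the enum above:

```python
import string

# Character sets for the common FFX alphabets; the radix is simply the
# number of characters in the chosen alphabet.
FFX_COMMON_ALPHABETS = {
    "NUMERIC": string.digits,                                              # radix 10
    "HEXADECIMAL": string.digits + "ABCDEF",                               # radix 16
    "UPPER_CASE_ALPHA_NUMERIC": string.digits + string.ascii_uppercase,    # radix 36
    "ALPHA_NUMERIC": string.digits + string.ascii_uppercase
                     + string.ascii_lowercase,                             # radix 62
}


def radix_of(alphabet_name: str) -> int:
    """Radix implied by a common alphabet (must lie in [2, 95] for FFX)."""
    return len(FFX_COMMON_ALPHABETS[alphabet_name])
```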
CustomInfoType
Custom information type provided by the user. Used to find domain-specific sensitive information configurable to the data in question.
Fields | |
---|---|
info_ |
CustomInfoType can either be a new infoType, or an extension of built-in infoType, when the name matches one of existing infoTypes and that infoType is specified in |
likelihood |
Likelihood to return for this CustomInfoType. This base value can be altered by a detection rule if the finding meets the criteria specified by the rule. Defaults to |
detection_ |
Set of detection rules to apply to all findings of this CustomInfoType. Rules are applied in order that they are specified. Not supported for the |
exclusion_ |
If set to EXCLUSION_TYPE_EXCLUDE this infoType will not cause a finding to be returned. It still can be used for rules matching. |
sensitivity_ |
Sensitivity for this CustomInfoType. If this CustomInfoType extends an existing InfoType, the sensitivity here will take precedence over that of the original InfoType. If unset for a CustomInfoType, it will default to HIGH. This only applies to data profiling. |
Union field type . Type of custom detector. type can be only one of the following: |
|
dictionary |
A list of phrases to detect as a CustomInfoType. |
regex |
Regular expression based CustomInfoType. |
surrogate_ |
Message for detecting output from deidentification transformations that support reversing. |
stored_ |
Load an existing |
DetectionRule
Deprecated; use InspectionRuleSet
instead. Rule for modifying a CustomInfoType
to alter behavior under certain circumstances, depending on the specific details of the rule. Not supported for the surrogate_type
custom infoType.
Fields | |
---|---|
Union field type . Type of hotword rule. type can be only one of the following: |
|
hotword_ |
Hotword-based detection rule. |
HotwordRule
The rule that adjusts the likelihood of findings within a certain proximity of hotwords.
Fields | |
---|---|
hotword_ |
Regular expression pattern defining what qualifies as a hotword. |
proximity |
Range of characters within which the entire hotword must reside. The total length of the window cannot exceed 1000 characters. The finding itself will be included in the window, so that hotwords can be used to match substrings of the finding itself. Suppose you want Cloud DLP to promote the likelihood of the phone number regex "(\d{3}) \d{3}-\d{4}" if the area code is known to be the area code of a company's office. In this case, use the hotword regex "(xxx)", where "xxx" is the area code in question. For tabular data, if you want to modify the likelihood of an entire column of findings, see Hotword example: Set the match likelihood of a table column. |
likelihood_ |
Likelihood adjustment to apply to all matching findings. |
LikelihoodAdjustment
Message for specifying an adjustment to the likelihood of a finding as part of a detection rule.
Fields | |
---|---|
Union field adjustment . How the likelihood will be modified. adjustment can be only one of the following: |
|
fixed_ |
Set the likelihood of a finding to a fixed value. |
relative_ |
Increase or decrease the likelihood by the specified number of levels. For example, if a finding would be |
Proximity
Message for specifying a window around a finding to apply a detection rule.
Fields | |
---|---|
window_ |
Number of characters before the finding to consider. For tabular data, if you want to modify the likelihood of an entire column of findings, set this to 1. For more information, see Hotword example: Set the match likelihood of a table column. |
window_ |
Number of characters after the finding to consider. |
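Putting the rule, proximity, and adjustment together, the area-code scenario described above might look like the following JSON-style configuration sketch (field names follow this reference; the area code 415 and window size are illustrative, not prescribed values):

```python
# Sketch of a hotword rule: boost the likelihood of phone-number findings
# when the company's area code appears shortly before the finding.
hotword_rule = {
    "hotwordRegex": {"pattern": r"\(415\)"},   # illustrative area code
    "proximity": {"windowBefore": 50},         # chars before the finding; total window <= 1000
    "likelihoodAdjustment": {"fixedLikelihood": "VERY_LIKELY"},
}
```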
Dictionary
Custom information type based on a dictionary of words or phrases. This can be used to match sensitive information specific to the data, such as a list of employee IDs or job titles.
Dictionary words are case-insensitive and all characters other than letters and digits in the Unicode Basic Multilingual Plane will be replaced with whitespace when scanning for matches, so the dictionary phrase "Sam Johnson" will match all three phrases "sam johnson", "Sam, Johnson", and "Sam (Johnson)". Additionally, the characters surrounding any match must be of a different type than the adjacent characters within the word, so letters must be next to non-letters and digits next to non-digits. For example, the dictionary word "jen" will match the first three letters of the text "jen123" but will return no matches for "jennifer".
Dictionary words containing a large number of characters that are not letters or digits may result in unexpected findings because such characters are treated as whitespace. The limits page contains details about the size limits of dictionaries. For dictionaries that do not fit within these constraints, consider using LargeCustomDictionaryConfig
in the StoredInfoType
API.
Fields | |
---|---|
Union field source . The potential places the data can be read from. source can be only one of the following: |
|
word_ |
List of words or phrases to search for. |
cloud_ |
Newline-delimited file of words in Cloud Storage. Only a single file is accepted. |
WordList
Message defining a list of words or phrases to search for in the data.
Fields | |
---|---|
words[] |
Words or phrases defining the dictionary. The dictionary must contain at least one phrase and every phrase must contain at least 2 characters that are letters or digits. [required] |
ExclusionType
Type of exclusion rule.
Enums | |
---|---|
EXCLUSION_TYPE_UNSPECIFIED |
A finding of this custom info type will not be excluded from results. |
EXCLUSION_TYPE_EXCLUDE |
A finding of this custom info type will be excluded from final results, but can still affect rule execution. |
Regex
Message defining a custom regular expression.
Fields | |
---|---|
pattern |
Pattern defining the regular expression. Its syntax (https://github.com/google/re2/wiki/Syntax) can be found under the google/re2 repository on GitHub. |
group_ |
The index of the submatch to extract as findings. When not specified, the entire match is returned. No more than 3 may be included. |
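The effect of group_index can be seen with an ordinary regex engine; Python's re module stands in for RE2 here, and the phone-number pattern is illustrative:

```python
import re

# group_index unset  -> the entire match is the finding
# group_index = 1    -> only the first submatch is extracted
pattern = r"(\d{3})-(\d{3})-(\d{4})"
m = re.search(pattern, "call 555-123-4567 today")
full_match = m.group(0)  # what DLP returns with no group_index
submatch_1 = m.group(1)  # what DLP returns with group_index = 1
```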
SurrogateType
This type has no fields.
Message for detecting output from deidentification transformations such as CryptoReplaceFfxFpeConfig
. These types of transformations are those that perform pseudonymization, thereby producing a "surrogate" as output. This should be used in conjunction with a field on the transformation such as surrogate_info_type
. This CustomInfoType does not support the use of detection_rules
.
DataProfileAction
A task to execute when a data profile has been generated.
Fields | |
---|---|
Union field action . Type of action to execute when a profile is generated. action can be only one of the following: |
|
export_ |
Export data profiles into a provided location. |
pub_ |
Publish a message into the Pub/Sub topic. |
publish_ |
Publishes generated data profiles to Google Security Operations. For more information, see Use Sensitive Data Protection data in context-aware analytics. |
publish_ |
Publishes findings to Security Command Center for each data profile. |
tag_ |
Tags the profiled resources with the specified tag values. |
EventType
Types of event that can trigger an action.
Enums | |
---|---|
EVENT_TYPE_UNSPECIFIED |
Unused. |
NEW_PROFILE |
New profile (not a re-profile). |
CHANGED_PROFILE |
One of the following profile metrics changed: Data risk score, Sensitivity score, Resource visibility, Encryption type, Predicted infoTypes, Other infoTypes |
SCORE_INCREASED |
Table data risk score or sensitivity score increased. |
ERROR_CHANGED |
A user (non-internal) error occurred. |
Export
If set, the detailed data profiles will be persisted to the location of your choice whenever updated.
Fields | |
---|---|
profile_ |
Store all table and column profiles in an existing table or a new table in an existing dataset. Each re-generation will result in new rows in BigQuery. Data is inserted using streaming insert and so data may be in the buffer for a period of time after the profile has finished. The Pub/Sub notification is sent before the streaming buffer is guaranteed to be written, so data may not be instantly visible to queries by the time your topic receives the Pub/Sub notification. |
sample_ |
Store sample |
PubSubNotification
Send a Pub/Sub message into the given Pub/Sub topic to connect other systems to data profile generation. The message payload data will be the byte serialization of DataProfilePubSubMessage
.
Fields | |
---|---|
topic |
Cloud Pub/Sub topic to send notifications to. Format is projects/{project}/topics/{topic}. |
event |
The type of event that triggers a Pub/Sub. At most one |
pubsub_ |
Conditions (e.g., data risk or sensitivity level) for triggering a Pub/Sub. |
detail_ |
How much data to include in the Pub/Sub message. If the user wishes to limit the size of the message, they can use resource_name and fetch only the profile fields they need. This setting applies per table profile (not per column). |
DetailLevel
The levels of detail that can be included in the Pub/Sub message.
Enums | |
---|---|
DETAIL_LEVEL_UNSPECIFIED |
Unused. |
TABLE_PROFILE |
The full table data profile. |
RESOURCE_NAME |
The name of the profiled resource. |
FILE_STORE_PROFILE |
The full file store data profile. |
PublishToChronicle
This type has no fields.
Message expressing intention to publish to Google Security Operations.
PublishToSecurityCommandCenter
This type has no fields.
If set, a summary finding will be created or updated in Security Command Center for each profile.
TagResources
If set, attaches the tags provided to profiled resources. Tags support access control. You can conditionally grant or deny access to a resource based on whether the resource has a specific tag.
Fields | |
---|---|
tag_ |
The tags to associate with different conditions. |
profile_ |
The profile generations for which the tag should be attached to resources. If you attach a tag to only new profiles, then if the sensitivity score of a profile subsequently changes, its tag doesn't change. By default, this field includes only new profiles. To include both new and updated profiles for tagging, this field should explicitly include both |
lower_ |
Whether applying a tag to a resource should lower the risk of the profile for that resource. For example, in conjunction with an IAM deny policy, you can deny all principals a permission if a tag value is present, mitigating the risk of the resource. This also lowers the data risk of resources at the lower levels of the resource hierarchy. For example, reducing the data risk of a table data profile also reduces the data risk of the constituent column data profiles. |
TagCondition
The tag to attach to profiles matching the condition. At most one TagCondition
can be specified per sensitivity level.
Fields | |
---|---|
tag |
The tag value to attach to resources. |
Union field type . The type of condition on which attaching the tag will be predicated. type can be only one of the following: |
|
sensitivity_ |
Conditions attaching the tag to a resource on its profile having this sensitivity score. |
TagValue
A value of a tag.
Fields | |
---|---|
Union field format . The format of the tag value. format can be only one of the following: |
|
namespaced_ |
The namespaced name for the tag value to attach to resources. Must be in the format |
DataProfileBigQueryRowSchema
The schema of data to be saved to the BigQuery table when the DataProfileAction
is enabled.
Fields | |
---|---|
Union field data_profile . Data profile type. data_profile can be only one of the following: |
|
table_ |
Table data profile column |
column_ |
Column data profile column |
file_ |
File store data profile column. |
DataProfileConfigSnapshot
Snapshot of the configurations used to generate the profile.
Fields | |
---|---|
inspect_ |
A copy of the inspection config used to generate this profile. This is a copy of the inspect_template specified in |
data_profile_job |
A copy of the configuration used to generate this profile. This is deprecated, and the DiscoveryConfig field is preferred moving forward. DataProfileJobConfig will still be written here for Discovery in BigQuery for backwards compatibility, but will not be updated with new fields, while DiscoveryConfig will. |
discovery_ |
A copy of the configuration used to generate this profile. |
inspect_ |
Name of the inspection template used to generate this profile |
inspect_ |
Timestamp when the template was modified |
DataProfileFinding
Details about a piece of potentially sensitive information that was detected when the data resource was profiled.
Fields | |
---|---|
quote |
The content that was found. Even if the content is not textual, it may be converted to a textual representation here. If the finding exceeds 4096 bytes in length, the quote may be omitted. |
infotype |
The type of content that might have been found. |
quote_ |
Contains data parsed from quotes. Currently supported infoTypes: DATE, DATE_OF_BIRTH, and TIME. |
data_ |
Resource name of the data profile associated with the finding. |
finding_ |
A unique identifier for the finding. |
timestamp |
Timestamp when the finding was detected. |
location |
Where the content was found. |
resource_ |
How broadly a resource has been shared. |
DataProfileFindingLocation
Location of a data profile finding within a resource.
Fields | |
---|---|
container_ |
Name of the container where the finding is located. The top-level name is the source file name or table name. Names of some common storage containers are formatted as follows:
|
Union field location_extra_details . Additional location details that may be provided for some types of profiles. At this time, only findings for table data profiles include such details. location_extra_details can be only one of the following: |
|
data_ |
Location of a finding within a resource that produces a table data profile. |
DataProfileFindingRecordLocation
Location of a finding within a resource that produces a table data profile.
Fields | |
---|---|
field |
Field ID of the column containing the finding. |
DataProfileJobConfig
Configuration for setting up a job to scan resources for profile generation. Only one data profile configuration may exist per organization, folder, or project.
The generated data profiles are retained according to the data retention policy.
Fields | |
---|---|
location |
The data to scan. |
project_ |
The project that will run the scan. The DLP service account that exists within this project must have access to all resources that are profiled, and the DLP API must be enabled. |
other_ |
Must be set only when scanning other clouds. |
inspect_ |
Detection logic for profile generation. Not all template features are used by profiles. FindingLimits, include_quote and exclude_info_types have no impact on data profiling. Multiple templates may be provided if there is data in multiple regions. At most one template must be specified per-region (including "global"). Each region is scanned using the applicable template. If no region-specific template is specified, but a "global" template is specified, it will be copied to that region and used instead. If no global or region-specific template is provided for a region with data, that region's data will not be scanned. For more information, see https://cloud.google.com/sensitive-data-protection/docs/data-profiles#data-residency. |
data_ |
Actions to execute at the completion of the job. |
DataProfileLocation
The data that will be profiled.
Fields | |
---|---|
Union field location . The location to be scanned. location can be only one of the following: |
|
organization_ |
The ID of an organization to scan. |
folder_ |
The ID of the folder within an organization to scan. |
DataProfilePubSubCondition
A condition for determining whether a Pub/Sub should be triggered.
Fields | |
---|---|
expressions |
An expression. |
ProfileScoreBucket
Various score levels for resources.
Enums | |
---|---|
PROFILE_SCORE_BUCKET_UNSPECIFIED |
Unused. |
HIGH |
High risk/sensitivity detected. |
MEDIUM_OR_HIGH |
Medium or high risk/sensitivity detected. |
PubSubCondition
A condition consisting of a value.
Fields | |
---|---|
Union field value . The value for the condition to trigger. value can be only one of the following: |
|
minimum_ |
The minimum data risk score that triggers the condition. |
minimum_ |
The minimum sensitivity level that triggers the condition. |
PubSubExpressions
An expression, consisting of an operator and conditions.
Fields | |
---|---|
logical_ |
The operator to apply to the collection of conditions. |
conditions[] |
Conditions to apply to the expression. |
PubSubLogicalOperator
Logical operators for conditional checks.
Enums | |
---|---|
LOGICAL_OPERATOR_UNSPECIFIED |
Unused. |
OR |
Conditional OR. |
AND |
Conditional AND. |
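Evaluating an expression reduces to applying the operator across its conditions; `evaluate_expressions` below is a hypothetical helper operating on already-evaluated per-condition results, not part of the DLP service:

```python
def evaluate_expressions(logical_operator: str, condition_results: list[bool]) -> bool:
    """Combine per-condition results using the expression's logical operator."""
    if logical_operator == "AND":
        return all(condition_results)
    if logical_operator == "OR":
        return any(condition_results)
    raise ValueError(f"unsupported operator: {logical_operator}")
```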
DataProfilePubSubMessage
Pub/Sub topic message for a DataProfileAction.PubSubNotification event. To receive a message of protocol buffer schema type, convert the message data to an object of this proto class.
Fields | |
---|---|
profile |
If |
file_ |
If |
event |
The event that caused the Pub/Sub message to be sent. |
DataProfileUpdateFrequency
How frequently data profiles can be updated. New options can be added at a later time.
Enums | |
---|---|
UPDATE_FREQUENCY_UNSPECIFIED |
Unspecified. |
UPDATE_FREQUENCY_NEVER |
After the data profile is created, it will never be updated. |
UPDATE_FREQUENCY_DAILY |
The data profile can be updated up to once every 24 hours. |
UPDATE_FREQUENCY_MONTHLY |
The data profile can be updated up to once every 30 days. Default. |
DataRiskLevel
Score is a summary of all elements in the data profile. A higher number means more risk.
Fields | |
---|---|
score |
The score applied to the resource. |
DataRiskLevelScore
Various score levels for resources.
Enums | |
---|---|
RISK_SCORE_UNSPECIFIED |
Unused. |
RISK_LOW |
Low risk - Either no indication of sensitive data was found, or the sensitive data found appears to have additional access restrictions in place. |
RISK_UNKNOWN |
Unable to determine risk. |
RISK_MODERATE |
Medium risk - Sensitive data may be present, but additional access restrictions or fine-grained access controls appear to be in place. Consider limiting access even further or transforming the data to mask it. |
RISK_HIGH |
High risk - SPII may be present. Access controls may include public ACLs. Exfiltration of data may lead to user data loss. Re-identification of users may be possible. Consider limiting usage and/or removing SPII. |
DataSourceType
Message used to identify the type of resource being profiled.
Fields | |
---|---|
data_ |
Output only. An identifying string for the type of resource being profiled. Current values:
|
DatabaseResourceCollection
Match database resources using regex filters. Examples of database resources are tables, views, and stored procedures.
Fields | |
---|---|
Union field pattern . The first filter containing a pattern that matches a database resource will be used. pattern can be only one of the following: |
|
include_ |
A collection of regular expressions to match a database resource against. |
DatabaseResourceReference
Identifies a single database resource, like a table within a database.
Fields | |
---|---|
project_ |
Required. If within a project-level config, then this must match the config's project ID. |
instance |
Required. The instance where this resource is located. For example: Cloud SQL instance ID. |
database |
Required. Name of a database within the instance. |
database_ |
Required. Name of a database resource, for example, a table within the database. |
DatabaseResourceRegex
A pattern to match against one or more database resources. At least one pattern must be specified. Regular expressions use RE2 syntax; a guide can be found under the google/re2 repository on GitHub.
Fields | |
---|---|
project_ |
For organizations, if unset, will match all projects. Has no effect for configurations created within a project. |
instance_ |
Regex to test the instance name against. If empty, all instances match. |
database_ |
Regex to test the database name against. If empty, all databases match. |
database_ |
Regex to test the database resource's name against. An example of a database resource name is a table's name. Other database resource names like view names could be included in the future. If empty, all database resources match. |
DatabaseResourceRegexes
A collection of regular expressions to determine what database resources to match against.
Fields | |
---|---|
patterns[] |
A group of regular expression patterns to match against one or more database resources. Maximum of 100 entries. The sum of all regular expression's length can't exceed 10 KiB. |
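As a sketch of how such a collection behaves, the following models matching a database resource against `DatabaseResourceRegexes` patterns client-side. The pattern values and resource names are illustrative; the service evaluates these with RE2, though the simple patterns below behave identically under Python's `re`.

```python
import re

# Hypothetical patterns: match tables named "users_*" in "sql-*" instances
# of projects whose IDs begin with "prod-". An empty regex matches everything.
patterns = [
    {"project_id_regex": r"prod-.*", "instance_regex": r"sql-.*",
     "database_regex": "", "database_resource_name_regex": r"users_.*"},
]

def matches(resource, pattern):
    """A resource matches when every non-empty regex in the pattern
    fully matches the corresponding resource attribute."""
    for key, regex in pattern.items():
        attr = key.removesuffix("_regex")
        if regex and not re.fullmatch(regex, resource[attr]):
            return False
    return True

resource = {"project_id": "prod-analytics", "instance": "sql-main",
            "database": "crm", "database_resource_name": "users_2024"}
assert any(matches(resource, p) for p in patterns)
```

The first pattern in the collection that matches determines inclusion, consistent with the "first filter to match" rule described for filters above.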
DatastoreKey
Record key for a finding in Cloud Datastore.
Fields | |
---|---|
entity_ |
Datastore entity key. |
DatastoreOptions
Options defining a data set within Google Cloud Datastore.
Fields | |
---|---|
partition_ |
A partition ID identifies a grouping of entities. The grouping is always by project and namespace, however the namespace ID may be empty. |
kind |
The kind to process. |
DateShiftConfig
Shifts dates by random number of days, with option to be consistent for the same context. See https://cloud.google.com/sensitive-data-protection/docs/concepts-date-shifting to learn more.
Fields | |
---|---|
upper_ |
Required. Range of shift in days. Actual shift will be selected at random within this range (inclusive ends). Negative means shift to earlier in time. Must not be more than 365250 days (1000 years) each direction. For example, 3 means shift date to at most 3 days into the future. |
lower_ |
Required. For example, -5 means shift date to at most 5 days back in the past. |
context |
Points to the field that contains the context, for example, an entity id. If set, must also set cryptoKey. If set, shift will be consistent for the given context. |
Union field method . Method for calculating shift that takes context into consideration. If set, must also set context. Can only be applied to table items. method can be only one of the following: |
|
crypto_ |
Causes the shift to be computed based on this key and the context. This results in the same shift for the same context and crypto_key. If set, must also set context. Can only be applied to table items. |
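The consistency property described above can be sketched as follows. This is an illustrative model, not the service's actual algorithm: deriving the shift deterministically from a key and a context value means every record sharing the same context is shifted by the same number of days.

```python
import hashlib
from datetime import date, timedelta

def date_shift(d, lower_bound_days, upper_bound_days, key, context):
    # Derive a shift in [lower_bound_days, upper_bound_days] from a hash
    # of the key and context, so the same (key, context) pair always
    # yields the same shift.
    span = upper_bound_days - lower_bound_days + 1
    digest = hashlib.sha256(key + context.encode()).digest()
    offset = lower_bound_days + int.from_bytes(digest[:8], "big") % span
    return d + timedelta(days=offset)

key = b"example-key"  # stand-in for a real crypto key
a = date_shift(date(2018, 1, 1), -5, 3, key, "user-42")
b = date_shift(date(2020, 6, 15), -5, 3, key, "user-42")
# Same context, same key: both dates move by the same number of days.
assert (a - date(2018, 1, 1)) == (b - date(2020, 6, 15))
```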
DateTime
Message for a date time object. e.g. 2018-01-01, 5th August.
Fields | |
---|---|
date |
One or more of the following must be set. Must be a valid date or time value. |
day_ |
Day of week |
time |
Time of day |
time_ |
Time zone |
TimeZone
Time zone of the date time object.
Fields | |
---|---|
offset_ |
Set only if the offset can be determined. Positive for time ahead of UTC. For example, for "UTC-9", this value is -540. |
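A small sketch of the offset arithmetic, assuming the minutes component carries the same sign as the hours:

```python
def offset_minutes(hours, minutes=0):
    # UTC offset in minutes, positive when ahead of UTC:
    # "UTC-9" -> -540, "UTC+05:30" -> 330.
    sign = -1 if hours < 0 else 1
    return hours * 60 + sign * minutes

assert offset_minutes(-9) == -540
assert offset_minutes(5, 30) == 330
```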
DeidentifyConfig
The configuration that controls how the data will change.
Fields | |
---|---|
transformation_ |
Mode for handling transformation errors. If left unspecified, the default mode is |
Union field transformation . Type of transformation transformation can be only one of the following: |
|
info_ |
Treat the dataset as free-form text and apply the same free text transformation everywhere. |
record_ |
Treat the dataset as structured. Transformations can be applied to specific locations within structured datasets, such as transforming a column within a table. |
image_ |
Treat the dataset as an image and redact. |
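A minimal sketch of a `DeidentifyConfig` in the JSON/dict shape accepted by the REST API and client libraries, using the free-text branch of the union: every `EMAIL_ADDRESS` finding is replaced with its infoType name, and transformation errors leave the value untransformed. The specific infoType chosen here is illustrative.

```python
# Sketch only: one infoType transformation over free-form text.
deidentify_config = {
    "transformation_error_handling": {"leave_untransformed": {}},
    "info_type_transformations": {
        "transformations": [
            {
                "info_types": [{"name": "EMAIL_ADDRESS"}],
                "primitive_transformation": {"replace_with_info_type_config": {}},
            }
        ]
    },
}
```

Only one of `info_type_transformations`, `record_transformations`, or `image_transformations` may be set, per the union described above.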
DeidentifyContentRequest
Request to de-identify a ContentItem.
Fields | |
---|---|
parent |
Parent resource name. The format of this value varies depending on whether you have specified a processing location:
The following example
Authorization requires the following IAM permission on the specified resource
|
deidentify_ |
Configuration for the de-identification of the content item. Items specified here will override the template referenced by the deidentify_template_name argument. |
inspect_ |
Configuration for the inspector. Items specified here will override the template referenced by the inspect_template_name argument. |
item |
The item to de-identify. Will be treated as text. This value must be of type |
inspect_ |
Template to use. Any configuration directly specified in inspect_config will override those set in the template. Singular fields that are set in this request will replace their corresponding fields in the template. Repeated fields are appended. Singular sub-messages and groups are recursively merged. |
deidentify_ |
Template to use. Any configuration directly specified in deidentify_config will override those set in the template. Singular fields that are set in this request will replace their corresponding fields in the template. Repeated fields are appended. Singular sub-messages and groups are recursively merged. |
location_ |
Deprecated. This field has no effect. |
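Putting the fields above together, a request body might look like the following sketch. `my-project` is a placeholder; an inline `inspect_config` or `deidentify_config` overrides the corresponding fields of any referenced template under the merge rules described above.

```python
# Sketch of a DeidentifyContent request body (JSON/dict form).
request = {
    "parent": "projects/my-project/locations/global",
    "item": {"value": "Contact me at jane@example.com"},
    "inspect_config": {"info_types": [{"name": "EMAIL_ADDRESS"}]},
    "deidentify_config": {
        "info_type_transformations": {
            "transformations": [
                {"primitive_transformation": {"replace_with_info_type_config": {}}}
            ]
        }
    },
}
```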
DeidentifyContentResponse
Results of de-identifying a ContentItem.
Fields | |
---|---|
item |
The de-identified item. |
overview |
An overview of the changes that were made on the |
DeidentifyDataSourceDetails
The results of a Deidentify action from an inspect job.
Fields | |
---|---|
requested_ |
De-identification config used for the request. |
deidentify_ |
Stats about the de-identification operation. |
RequestedDeidentifyOptions
De-identification options.
Fields | |
---|---|
snapshot_ |
Snapshot of the state of the |
snapshot_ |
Snapshot of the state of the structured |
snapshot_ |
Snapshot of the state of the image transformation |
DeidentifyDataSourceStats
Summary of what was modified during a transformation.
Fields | |
---|---|
transformed_ |
Total size in bytes that were transformed in some way. |
transformation_ |
Number of successfully applied transformations. |
transformation_ |
Number of errors encountered while trying to apply transformations. |
DeidentifyTemplate
DeidentifyTemplates contains instructions on how to de-identify content. See https://cloud.google.com/sensitive-data-protection/docs/concepts-templates to learn more.
Fields | |
---|---|
name |
Output only. The template name. The template will have one of the following formats: |
display_ |
Display name (max 256 chars). |
description |
Short description (max 256 chars). |
create_ |
Output only. The creation timestamp of an inspectTemplate. |
update_ |
Output only. The last update timestamp of an inspectTemplate. |
deidentify_ |
The core content of the template. |
DeleteConnectionRequest
Request message for DeleteConnection.
Fields | |
---|---|
name |
Required. Resource name of the Connection to be deleted, in the format: Authorization requires the following IAM permission on the specified resource
|
DeleteDeidentifyTemplateRequest
Request message for DeleteDeidentifyTemplate.
Fields | |
---|---|
name |
Required. Resource name of the organization and deidentify template to be deleted, for example Authorization requires the following IAM permission on the specified resource
|
DeleteDiscoveryConfigRequest
Request message for DeleteDiscoveryConfig.
Fields | |
---|---|
name |
Required. Resource name of the project and the config, for example Authorization requires the following IAM permission on the specified resource
|
DeleteDlpJobRequest
The request message for deleting a DLP job.
Fields | |
---|---|
name |
Required. The name of the DlpJob resource to be deleted. Authorization requires the following IAM permission on the specified resource
|
DeleteFileStoreDataProfileRequest
Request message for DeleteFileStoreProfile.
Fields | |
---|---|
name |
Required. Resource name of the file store data profile. Authorization requires the following IAM permission on the specified resource
|
DeleteInspectTemplateRequest
Request message for DeleteInspectTemplate.
Fields | |
---|---|
name |
Required. Resource name of the organization and inspectTemplate to be deleted, for example Authorization requires the following IAM permission on the specified resource
|
DeleteJobTriggerRequest
Request message for DeleteJobTrigger.
Fields | |
---|---|
name |
Required. Resource name of the project and the triggeredJob, for example Authorization requires the following IAM permission on the specified resource
|
DeleteStoredInfoTypeRequest
Request message for DeleteStoredInfoType.
Fields | |
---|---|
name |
Required. Resource name of the organization and storedInfoType to be deleted, for example Authorization requires the following IAM permission on the specified resource
|
DeleteTableDataProfileRequest
Request message for DeleteTableProfile.
Fields | |
---|---|
name |
Required. Resource name of the table data profile. Authorization requires the following IAM permission on the specified resource
|
Disabled
This type has no fields.
Do not profile the tables.
DiscoveryBigQueryConditions
Requirements that must be true before a table is scanned in discovery for the first time. There is an AND relationship between the top-level attributes. Additionally, minimum conditions with an OR relationship that must be met before Cloud DLP scans a table can be set (like a minimum row count or a minimum table age).
Fields | |
---|---|
created_ |
BigQuery table must have been created after this date. Used to avoid backfilling. |
or_ |
At least one of the conditions must be true for a table to be scanned. |
Union field included_types . The type of BigQuery tables to scan. If nothing is set the default behavior is to scan only tables of type TABLE and to give errors for all unsupported tables. included_types can be only one of the following: |
|
types |
Restrict discovery to specific table types. |
type_ |
Restrict discovery to categories of table types. |
OrConditions
There is an OR relationship between these attributes. They are used to determine if a table should be scanned or not in Discovery.
Fields | |
---|---|
min_ |
Minimum number of rows that should be present before Cloud DLP profiles a table. |
min_ |
Minimum age a table must have before Cloud DLP can profile it. Value must be 1 hour or greater. |
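The AND/OR relationship described above can be sketched as a conditions dict: the top-level attributes are ANDed, while `or_conditions` lets a table qualify once it has either enough rows or enough age. The values are illustrative.

```python
# Sketch of DiscoveryBigQueryConditions in JSON/dict form.
conditions = {
    "created_after": "2024-01-01T00:00:00Z",  # skip backfilling older tables
    "or_conditions": {
        "min_row_count": 1000,  # scan once the table has 1000+ rows...
        "min_age": "86400s",    # ...or once it is at least one day old
    },
}
```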
DiscoveryBigQueryFilter
Determines what tables will have profiles generated within an organization or project. Includes the ability to filter by regular expression patterns on project ID, dataset ID, and table ID.
Fields | |
---|---|
Union field filter . Whether the filter applies to a specific set of tables or all other tables within the location being profiled. The first filter to match will be applied, regardless of the condition. If none is set, will default to other_tables . filter can be only one of the following: |
|
tables |
A specific set of tables for this filter to apply to. A table collection must be specified in only one filter per config. If a table id or dataset is empty, Cloud DLP assumes all tables in that collection must be profiled. Must specify a project ID. |
other_ |
Catch-all. This should always be the last filter in the list because anything above it will apply first. Should only appear once in a configuration. If none is specified, a default one will be added automatically. |
table_ |
The table to scan. Discovery configurations including this can only include one DiscoveryTarget (the DiscoveryTarget with this TableReference). |
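As a sketch of the `tables` branch of the filter union, the following profiles every table in datasets beginning with `prod_` in one project. The project and dataset names are placeholders; a project ID must be specified, and an empty table regex matches all tables in the matched datasets.

```python
# Sketch of a DiscoveryBigQueryFilter with an include-regexes collection.
bigquery_filter = {
    "tables": {
        "include_regexes": {
            "patterns": [
                {
                    "project_id_regex": "my-project",
                    "dataset_id_regex": "prod_.*",
                    "table_id_regex": "",  # empty: all tables in the dataset
                }
            ]
        }
    }
}
```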
AllOtherBigQueryTables
This type has no fields.
Catch-all for all other tables not specified by other filters. Should always be last, except for single-table configurations, which will only have a TableReference target.
DiscoveryCloudSqlConditions
Requirements that must be true before a table is profiled for the first time.
Fields | |
---|---|
database_ |
Optional. Database engines that should be profiled. Defaults to ALL_SUPPORTED_DATABASE_ENGINES if unspecified. |
types[] |
Data profiles will only be generated for the database resource types specified in this field. If not specified, defaults to [DATABASE_RESOURCE_TYPE_ALL_SUPPORTED_TYPES]. |
DatabaseEngine
The database engines that should be profiled.
Enums | |
---|---|
DATABASE_ENGINE_UNSPECIFIED |
Unused. |
ALL_SUPPORTED_DATABASE_ENGINES |
Include all supported database engines. |
MYSQL |
MySQL database. |
POSTGRES |
PostgreSQL database. |
DatabaseResourceType
Cloud SQL database resource types. New values can be added at a later time.
Enums | |
---|---|
DATABASE_RESOURCE_TYPE_UNSPECIFIED |
Unused. |
DATABASE_RESOURCE_TYPE_ALL_SUPPORTED_TYPES |
Includes database resource types that become supported at a later time. |
DATABASE_RESOURCE_TYPE_TABLE |
Tables. |
DiscoveryCloudSqlFilter
Determines what tables will have profiles generated within an organization or project. Includes the ability to filter by regular expression patterns on project ID, location, instance, database, and database resource name.
Fields | |
---|---|
Union field filter . Whether the filter applies to a specific set of database resources or all other database resources within the location being profiled. The first filter to match will be applied, regardless of the condition. If none is set, will default to others . filter can be only one of the following: |
|
collection |
A specific set of database resources for this filter to apply to. |
others |
Catch-all. This should always be the last target in the list because anything above it will apply first. Should only appear once in a configuration. If none is specified, a default one will be added automatically. |
database_ |
The database resource to scan. Targets including this can only include one target (the target with this database resource reference). |
DiscoveryCloudSqlGenerationCadence
How often existing tables should have their profiles refreshed. New tables are scanned as quickly as possible depending on system capacity.
Fields | |
---|---|
schema_ |
When to reprofile if the schema has changed. |
refresh_ |
Data changes (non-schema changes) in Cloud SQL tables can't trigger reprofiling. If you set this field, profiles are refreshed at this frequency regardless of whether the underlying tables have changed. Defaults to never. |
inspect_ |
Governs when to update data profiles when the inspection rules defined by the |
SchemaModifiedCadence
How frequently to modify the profile when the table's schema is modified.
Fields | |
---|---|
types[] |
The types of schema modifications to consider. Defaults to NEW_COLUMNS. |
frequency |
Frequency to regenerate data profiles when the schema is modified. Defaults to monthly. |
CloudSqlSchemaModification
The type of modification that causes a profile update.
Enums | |
---|---|
SQL_SCHEMA_MODIFICATION_UNSPECIFIED |
Unused. |
NEW_COLUMNS |
New columns have appeared. |
REMOVED_COLUMNS |
Columns have been removed from the table. |
DiscoveryCloudStorageConditions
Requirements that must be true before a Cloud Storage bucket or object is scanned in discovery for the first time. There is an AND relationship between the top-level attributes.
Fields | |
---|---|
included_ |
Required. Only objects with the specified attributes will be scanned. If an object has one of the specified attributes but is inside an excluded bucket, it will not be scanned. Defaults to [ALL_SUPPORTED_OBJECTS]. A profile will be created even if no objects match the included_object_attributes. |
included_ |
Required. Only objects with the specified attributes will be scanned. Defaults to [ALL_SUPPORTED_BUCKETS] if unset. |
CloudStorageBucketAttribute
The attribute of a bucket.
Enums | |
---|---|
CLOUD_STORAGE_BUCKET_ATTRIBUTE_UNSPECIFIED |
Unused. |
ALL_SUPPORTED_BUCKETS |
Scan buckets regardless of the attribute. |
AUTOCLASS_DISABLED |
Buckets with autoclass disabled (https://cloud.google.com/storage/docs/autoclass). Only one of AUTOCLASS_DISABLED or AUTOCLASS_ENABLED should be set. |
AUTOCLASS_ENABLED |
Buckets with autoclass enabled (https://cloud.google.com/storage/docs/autoclass). Only one of AUTOCLASS_DISABLED or AUTOCLASS_ENABLED should be set. Scanning Autoclass-enabled buckets can affect object storage classes. |
CloudStorageObjectAttribute
The attribute of an object. See https://cloud.google.com/storage/docs/storage-classes for more information on storage classes.
Enums | |
---|---|
CLOUD_STORAGE_OBJECT_ATTRIBUTE_UNSPECIFIED |
Unused. |
ALL_SUPPORTED_OBJECTS |
Scan objects regardless of the attribute. |
STANDARD |
Scan objects with the standard storage class. |
NEARLINE |
Scan objects with the nearline storage class. This will incur retrieval fees. |
COLDLINE |
Scan objects with the coldline storage class. This will incur retrieval fees. |
ARCHIVE |
Scan objects with the archive storage class. This will incur retrieval fees. |
REGIONAL |
Scan objects with the regional storage class. |
MULTI_REGIONAL |
Scan objects with the multi-regional storage class. |
DURABLE_REDUCED_AVAILABILITY |
Scan objects with the dual-regional storage class. This will incur retrieval fees. |
DiscoveryCloudStorageFilter
Determines which buckets will have profiles generated within an organization or project. Includes the ability to filter by regular expression patterns on project ID and bucket name.
Fields | |
---|---|
Union field filter . Whether the filter applies to a specific set of buckets or all other buckets within the location being profiled. The first filter to match will be applied, regardless of the condition. If none is set, will default to others . filter can be only one of the following: |
|
collection |
Optional. A specific set of buckets for this filter to apply to. |
cloud_ |
Optional. The bucket to scan. Targets including this can only include one target (the target with this bucket). This enables profiling the contents of a single bucket, while the other options allow for easy profiling of many buckets within a project or an organization. |
others |
Optional. Catch-all. This should always be the last target in the list because anything above it will apply first. Should only appear once in a configuration. If none is specified, a default one will be added automatically. |
DiscoveryCloudStorageGenerationCadence
How often existing buckets should have their profiles refreshed. New buckets are scanned as quickly as possible depending on system capacity.
Fields | |
---|---|
refresh_ |
Optional. Data changes in Cloud Storage can't trigger reprofiling. If you set this field, profiles are refreshed at this frequency regardless of whether the underlying buckets have changed. Defaults to never. |
inspect_ |
Optional. Governs when to update data profiles when the inspection rules defined by the |
DiscoveryConfig
Configuration for discovery to scan resources for profile generation. Only one discovery configuration may exist per organization, folder, or project.
The generated data profiles are retained according to the data retention policy.
Fields | |
---|---|
name |
Unique resource name for the DiscoveryConfig, assigned by the service when the DiscoveryConfig is created, for example |
display_ |
Display name (max 100 chars) |
org_ |
Only set when the parent is an org. |
other_ |
Must be set only when scanning other clouds. |
inspect_ |
Detection logic for profile generation. Not all template features are used by Discovery. FindingLimits, include_quote and exclude_info_types have no impact on Discovery. Multiple templates may be provided if there is data in multiple regions. At most one template must be specified per-region (including "global"). Each region is scanned using the applicable template. If no region-specific template is specified, but a "global" template is specified, it will be copied to that region and used instead. If no global or region-specific template is provided for a region with data, that region's data will not be scanned. For more information, see https://cloud.google.com/sensitive-data-protection/docs/data-profiles#data-residency. |
actions[] |
Actions to execute at the completion of scanning. |
targets[] |
Target to match against for determining what to scan and how frequently. |
errors[] |
Output only. A stream of errors encountered when the config was activated. Repeated errors may result in the config automatically being paused. Output only field. Will return the last 100 errors. Whenever the config is modified this list will be cleared. |
create_ |
Output only. The creation timestamp of a DiscoveryConfig. |
update_ |
Output only. The last update timestamp of a DiscoveryConfig. |
last_ |
Output only. The timestamp of the last time this config was executed. |
status |
Required. A status for this configuration. |
OrgConfig
Project and scan location information. Only set when the parent is an org.
Fields | |
---|---|
location |
The data to scan: folder, org, or project |
project_ |
The project that will run the scan. The DLP service account that exists within this project must have access to all resources that are profiled, and the DLP API must be enabled. |
Status
Whether the discovery config is currently active. New options may be added at a later time.
Enums | |
---|---|
STATUS_UNSPECIFIED |
Unused. |
RUNNING |
The discovery config is currently active. |
PAUSED |
The discovery config is paused temporarily. |
DiscoveryFileStoreConditions
Requirements that must be true before a file store is scanned in discovery for the first time. There is an AND relationship between the top-level attributes.
Fields | |
---|---|
created_ |
Optional. File store must have been created after this date. Used to avoid backfilling. |
min_ |
Optional. Minimum age a file store must have. If set, the value must be 1 hour or greater. |
Union field conditions . File store specific conditions. conditions can be only one of the following: |
|
cloud_ |
Optional. Cloud Storage conditions. |
DiscoveryGenerationCadence
What must take place for a profile to be updated and how frequently it should occur. New tables are scanned as quickly as possible depending on system capacity.
Fields | |
---|---|
schema_ |
Governs when to update data profiles when a schema is modified. |
table_ |
Governs when to update data profiles when a table is modified. |
inspect_ |
Governs when to update data profiles when the inspection rules defined by the |
refresh_ |
Frequency at which profiles should be updated, regardless of whether the underlying resource has changed. Defaults to never. |
DiscoveryInspectTemplateModifiedCadence
The cadence at which to update data profiles when the inspection rules defined by the InspectTemplate change.
Fields | |
---|---|
frequency |
How frequently data profiles can be updated when the template is modified. Defaults to never. |
DiscoveryOtherCloudConditions
Requirements that must be true before a resource is profiled for the first time.
Fields | |
---|---|
min_ |
Minimum age a resource must be before Cloud DLP can profile it. Value must be 1 hour or greater. |
Union field conditions . The conditions to apply. conditions can be only one of the following: |
|
amazon_ |
Amazon S3 bucket conditions. |
DiscoveryOtherCloudFilter
Determines which resources from the other cloud will have profiles generated. Includes the ability to filter by resource names.
Fields | |
---|---|
Union field filter . Whether the filter applies to a specific set of resources or all other resources. The first filter to match will be applied, regardless of the condition. Defaults to others if none is set. filter can be only one of the following: |
|
collection |
A collection of resources for this filter to apply to. |
single_ |
The resource to scan. Configs using this filter can only have one target (the target with this single resource reference). |
others |
Optional. Catch-all. This should always be the last target in the list because anything above it will apply first. Should only appear once in a configuration. If none is specified, a default one will be added automatically. |
DiscoveryOtherCloudGenerationCadence
How often existing resources should have their profiles refreshed. New resources are scanned as quickly as possible depending on system capacity.
Fields | |
---|---|
refresh_ |
Optional. Frequency to update profiles regardless of whether the underlying resource has changes. Defaults to never. |
inspect_ |
Optional. Governs when to update data profiles when the inspection rules defined by the |
DiscoverySchemaModifiedCadence
The cadence at which to update data profiles when a schema is modified.
Fields | |
---|---|
types[] |
The type of events to consider when deciding if the table's schema has been modified and should have the profile updated. Defaults to NEW_COLUMNS. |
frequency |
How frequently profiles may be updated when schemas are modified. Defaults to monthly. |
DiscoveryStartingLocation
The location to begin a discovery scan. Denotes an organization ID or folder ID within an organization.
Fields | |
---|---|
Union field location . The location to be scanned. location can be only one of the following: |
|
organization_ |
The ID of an organization to scan. |
folder_ |
The ID of the folder within an organization to be scanned. |
DiscoveryTableModifiedCadence
The cadence at which to update data profiles when a table is modified.
Fields | |
---|---|
types[] |
The type of events to consider when deciding if the table has been modified and should have the profile updated. Defaults to MODIFIED_TIMESTAMP. |
frequency |
How frequently data profiles can be updated when tables are modified. Defaults to never. |
DiscoveryTarget
Target used to match against for Discovery.
Fields | |
---|---|
Union field target . A target to match against for Discovery. target can be only one of the following: |
|
big_ |
BigQuery target for Discovery. The first target to match a table will be the one applied. |
cloud_ |
Cloud SQL target for Discovery. The first target to match a table will be the one applied. |
secrets_ |
Discovery target that looks for credentials and secrets stored in cloud resource metadata and reports them as vulnerabilities to Security Command Center. Only one target of this type is allowed. |
cloud_ |
Cloud Storage target for Discovery. The first target to match a table will be the one applied. |
other_ |
Other clouds target for discovery. The first target to match a resource will be the one applied. |
DlpJob
Combines all of the information about a DLP job.
Fields | |
---|---|
name |
The server-assigned name. |
type |
The type of job. |
state |
State of a job. |
create_ |
Time when the job was created. |
start_ |
Time when the job started. |
end_ |
Time when the job finished. |
last_ |
Time when the job was last modified by the system. |
job_ |
If created by a job trigger, the resource name of the trigger that instantiated the job. |
errors[] |
A stream of errors encountered running the job. |
action_ |
Events that should occur after the job has completed. |
Union field details . Job details. details can be only one of the following: |
|
risk_ |
Results from analyzing risk of a data source. |
inspect_ |
Results from inspecting a data source. |
JobState
Possible states of a job. New items may be added.
Enums | |
---|---|
JOB_STATE_UNSPECIFIED |
Unused. |
PENDING |
The job has not yet started. |
RUNNING |
The job is currently running. Once a job has finished it will transition to FAILED or DONE. |
DONE |
The job is no longer running. |
CANCELED |
The job was canceled before it could be completed. |
FAILED |
The job had an error and did not complete. |
ACTIVE |
The job is currently accepting findings via hybridInspect. A hybrid job in ACTIVE state may continue to have findings added to it through the calling of hybridInspect. After the job has finished no more calls to hybridInspect may be made. ACTIVE jobs can transition to DONE. |
DlpJobType
An enum to represent the various types of DLP jobs.
Enums | |
---|---|
DLP_JOB_TYPE_UNSPECIFIED |
Defaults to INSPECT_JOB. |
INSPECT_JOB |
The job inspected Google Cloud for sensitive data. |
RISK_ANALYSIS_JOB |
The job executed a Risk Analysis computation. |
DocumentLocation
Location of a finding within a document.
Fields | |
---|---|
file_ |
Offset of the line, from the beginning of the file, where the finding is located. |
EncryptionStatus
How a resource is encrypted.
Enums | |
---|---|
ENCRYPTION_STATUS_UNSPECIFIED |
Unused. |
ENCRYPTION_GOOGLE_MANAGED |
Google manages server-side encryption keys on your behalf. |
ENCRYPTION_CUSTOMER_MANAGED |
Customer provides the key. |
EntityId
An entity in a dataset is a field or set of fields that correspond to a single person. For example, in medical records the EntityId might be a patient identifier, or for financial records it might be an account identifier. This message is used when generalizations or analysis must take into account that multiple rows correspond to the same entity.
Fields | |
---|---|
field |
Composite key indicating which field contains the entity identifier. |
Error
Details about an error encountered during job execution or the results of an unsuccessful activation of the JobTrigger.
Fields | |
---|---|
details |
Detailed error codes and messages. |
timestamps[] |
The times the error occurred. List includes the oldest timestamp and the last 9 timestamps. |
extra_ |
Additional information about the error. |
ErrorExtraInfo
Additional information about the error.
Enums | |
---|---|
ERROR_INFO_UNSPECIFIED |
Unused. |
IMAGE_SCAN_UNAVAILABLE_IN_REGION |
Image scan is not available in the region. |
FILE_STORE_CLUSTER_UNSUPPORTED |
File store cluster is not supported for profile generation. |
ExcludeByHotword
The rule to exclude findings based on a hotword. For record inspection of tables, column names are considered hotwords. An example of this is to exclude a finding if it belongs to a BigQuery column that matches a specific pattern.
Fields | |
---|---|
hotword_ |
Regular expression pattern defining what qualifies as a hotword. |
proximity |
Range of characters within which the entire hotword must reside. The total length of the window cannot exceed 1000 characters. The windowBefore property in proximity should be set to 1 if the hotword needs to be included in a column header. |
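The proximity semantics can be modeled client-side as a character window checked before a finding. This is an illustrative sketch, not the service's implementation; the text, spans, and hotword patterns are made up for the demo.

```python
import re

def excluded(text, finding_span, hotword_regex, window_before):
    # Drop the finding if the hotword regex matches anywhere in the
    # window_before characters immediately preceding it.
    start, _ = finding_span
    context = text[max(0, start - window_before):start]
    return re.search(hotword_regex, context) is not None

text = "test_email: jane@example.com"
span = (12, 28)  # character range of the email finding
assert excluded(text, span, r"test", window_before=20)
assert not excluded(text, span, r"prod", window_before=20)
```

For record inspection the column name plays the role of the preceding context, which is why `windowBefore` of 1 suffices to match a column header.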
ExcludeInfoTypes
List of excluded infoTypes.
Fields | |
---|---|
info_ |
InfoType list in an ExclusionRule that drops a finding when it overlaps with, or is contained within, a finding of an infoType from this list. For example, for |
ExclusionRule
The rule that specifies conditions when findings of infoTypes specified in InspectionRuleSet are removed from results.
Fields | |
---|---|
matching_ |
How the rule is applied, see MatchingType documentation for details. |
Union field type . Exclusion rule types. type can be only one of the following: |
|
dictionary |
Dictionary which defines the rule. |
regex |
Regular expression which defines the rule. |
exclude_ |
Set of infoTypes for which findings would affect this rule. |
exclude_ |
Drop if the hotword rule is contained in the proximate context. For tabular data, the context includes the column name. |
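As a sketch of the `regex` branch of the union inside an InspectionRuleSet entry, the following suppresses `EMAIL_ADDRESS` findings that fully match a test domain. Field names mirror the messages above; the domain is illustrative.

```python
# Sketch of an InspectionRuleSet entry carrying an ExclusionRule.
rule_set = {
    "info_types": [{"name": "EMAIL_ADDRESS"}],
    "rules": [
        {
            "exclusion_rule": {
                "regex": {"pattern": r".+@example\.com"},
                "matching_type": "MATCHING_TYPE_FULL_MATCH",
            }
        }
    ],
}
```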
FieldId
General identifier of a data field in a storage service.
Fields | |
---|---|
name |
Name describing the field. |
FieldTransformation
The transformation to apply to the field.
Fields | |
---|---|
fields[] |
Required. Input field(s) to apply the transformation to. When you have columns that reference their position within a list, omit the index from the FieldId. FieldId name matching ignores the index. For example, instead of "contact.nums[0].type", use "contact.nums.type". |
condition |
Only apply the transformation if the condition evaluates to true for the given record. Example use cases:
|
Union field transformation . Transformation to apply. [required] transformation can be only one of the following: |
|
primitive_ |
Apply the transformation to the entire field. |
info_ |
Treat the contents of the field as free text, and selectively transform content that matches an |
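The index-agnostic FieldId matching described for fields[] can be illustrated with a small normalizer (an assumption about client-side preprocessing, not a service API): stripping `[n]` indexes makes "contact.nums[0].type" match "contact.nums.type".

```python
import re

def normalize_field_id(name):
    """Strip list indexes from a FieldId name, since FieldId name
    matching ignores the index (e.g. contact.nums[0].type ->
    contact.nums.type)."""
    return re.sub(r"\[\d+\]", "", name)
```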
FileClusterSummary
The file cluster summary.
Fields | |
---|---|
file_ |
The file cluster type. |
file_ |
InfoTypes detected in this cluster. |
sensitivity_ |
The sensitivity score of this cluster. The score will be SENSITIVITY_LOW if nothing has been scanned. |
data_ |
The data risk level of this cluster. RISK_LOW if nothing has been scanned. |
errors[] |
A list of errors detected while scanning this cluster. The list is truncated to 10 per cluster. |
file_ |
A sample of file types scanned in this cluster. Empty if no files were scanned. File extensions can be derived from the file name or the file content. |
file_ |
A sample of file types seen in this cluster. Empty if no files were seen. File extensions can be derived from the file name or the file content. |
no_ |
True if no files exist in this cluster. If the bucket had more files than could be listed, this will be false even if no files for this cluster were seen and file_extensions_seen is empty. |
FileClusterType
Message used to identify file cluster type being profiled.
Fields | |
---|---|
Union field file_cluster_type . File cluster type. file_cluster_type can be only one of the following: |
|
cluster |
Cluster type. |
Cluster
Cluster type. Each cluster corresponds to a set of file types. Over time, new types may be added and files may move between clusters.
Enums | |
---|---|
CLUSTER_UNSPECIFIED |
Unused. |
CLUSTER_UNKNOWN |
Unsupported files. |
CLUSTER_TEXT |
Plain text. |
CLUSTER_STRUCTURED_DATA |
Structured data like CSV, TSV etc. |
CLUSTER_SOURCE_CODE |
Source code. |
CLUSTER_RICH_DOCUMENT |
Rich document like docx, xlsx etc. |
CLUSTER_IMAGE |
Images like jpeg, bmp. |
CLUSTER_ARCHIVE |
Archives and containers like .zip, .tar etc. |
CLUSTER_MULTIMEDIA |
Multimedia like .mp4, .avi etc. |
CLUSTER_EXECUTABLE |
Executable files like .exe, .class, .apk etc. |
FileExtensionInfo
Information regarding the discovered file extension.
Fields | |
---|---|
file_ |
The file extension, if set (for example, .pdf, .jpg, .txt). |
FileStoreCollection
Match file stores (e.g. buckets) using regex filters.
Fields | |
---|---|
Union field pattern . The first filter containing a pattern that matches a file store will be used. pattern can be only one of the following: |
|
include_ |
Optional. A collection of regular expressions to match a file store against. |
FileStoreDataProfile
The profile for a file store.
- Cloud Storage: maps 1:1 with a bucket.
- Amazon S3: maps 1:1 with a bucket.
Fields | |
---|---|
name |
The name of the profile. |
data_ |
The resource type that was profiled. |
project_ |
The resource name of the project data profile for this file store. |
project_ |
The Google Cloud project ID that owns the resource. For Amazon S3 buckets, this is the AWS Account Id. |
file_ |
The location of the file store. |
data_ |
For resources that have multiple storage locations, these are those regions. For Cloud Storage this is the list of regions chosen for dual-region storage. |
location_ |
The location type of the bucket (region, dual-region, multi-region, etc). If dual-region, expect data_storage_locations to be populated. |
file_ |
The file store path.
|
full_ |
The resource name of the resource profiled. https://cloud.google.com/apis/design/resource_names#full_resource_name Example format of an S3 bucket full resource name: |
config_ |
The snapshot of the configurations used to generate the profile. |
profile_ |
Success or error status from the most recent profile generation attempt. May be empty if the profile is still being generated. |
state |
State of a profile. |
profile_ |
The last time the profile was generated. |
resource_ |
How broadly a resource has been shared. |
sensitivity_ |
The sensitivity score of this resource. |
data_ |
The data risk level of this resource. |
create_ |
The time the file store was first created. |
last_ |
The time the file store was last modified. |
file_ |
FileClusterSummary per each cluster. |
resource_ |
Attributes of the resource being profiled. Currently used attributes:
|
resource_ |
The labels applied to the resource at the time the profile was generated. |
file_ |
InfoTypes detected in this file store. |
sample_ |
The BigQuery table to which the sample findings are written. |
file_ |
The file store does not have any files. |
State
Possible states of a profile. New items may be added.
Enums | |
---|---|
STATE_UNSPECIFIED |
Unused. |
RUNNING |
The profile is currently running. Once a profile has finished it will transition to DONE. |
DONE |
The profile is no longer generating. If profile_status.status.code is 0, the profile succeeded, otherwise, it failed. |
FileStoreInfoTypeSummary
Information regarding the discovered InfoType.
Fields | |
---|---|
info_ |
The InfoType seen. |
FileStoreRegex
A pattern to match against one or more file stores.
Fields | |
---|---|
Union field resource_regex . The type of resource regex to use. resource_regex can be only one of the following: |
|
cloud_ |
Optional. Regex for Cloud Storage. |
FileStoreRegexes
A collection of regular expressions to determine what file store to match against.
Fields | |
---|---|
patterns[] |
Required. The group of regular expression patterns to match against one or more file stores. Maximum of 100 entries. The sum of all regular expression's length can't exceed 10 KiB. |
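The two documented limits on patterns[] (at most 100 entries, combined length no more than 10 KiB) can be checked before submitting a request. This validator is a local convenience sketch, not part of the API:

```python
MAX_PATTERNS = 100
MAX_TOTAL_BYTES = 10 * 1024  # 10 KiB combined pattern length

def validate_file_store_regexes(patterns):
    """Raise ValueError if the pattern list violates the documented
    FileStoreRegexes limits."""
    if not patterns:
        raise ValueError("patterns is required and must be non-empty")
    if len(patterns) > MAX_PATTERNS:
        raise ValueError(f"too many patterns: {len(patterns)} > {MAX_PATTERNS}")
    total = sum(len(p.encode("utf-8")) for p in patterns)
    if total > MAX_TOTAL_BYTES:
        raise ValueError(f"combined pattern length {total} B exceeds 10 KiB")
```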
FileType
Definitions of file type groups to scan. New types will be added to this list.
Enums | |
---|---|
FILE_TYPE_UNSPECIFIED |
Includes all files. |
BINARY_FILE |
Includes all file extensions not covered by another entry. Binary scanning attempts to convert the content of the file to UTF-8 to scan it. If you wish to avoid this fallback, specify one or more of the other file types in your storage scan. |
TEXT_FILE |
Included file extensions: asc, asp, aspx, brf, c, cc, cfm, cgi, cpp, csv, cxx, c++, cs, css, dart, dat, dot, eml, epub, ged, go, h, hh, hpp, hxx, h++, hs, html, htm, mkd, markdown, m, ml, mli, perl, pl, plist, pm, php, phtml, pht, properties, py, pyw, rb, rbw, rs, rss, rc, scala, sh, sql, swift, tex, shtml, shtm, xhtml, lhs, ics, ini, java, js, json, jsonl, kix, kml, ocaml, md, txt, text, tsv, vb, vcard, vcs, wml, xcodeproj, xml, xsl, xsd, yml, yaml. |
IMAGE |
Included file extensions: bmp, gif, jpg, jpeg, jpe, png. Setting bytes_limit_per_file or bytes_limit_per_file_percent has no effect on image files. Image inspection is restricted to the global, us, asia, and europe regions. |
WORD |
Microsoft Word files larger than 30 MB will be scanned as binary files. Included file extensions: docx, dotx, docm, dotm. Setting bytes_limit_per_file or bytes_limit_per_file_percent has no effect on Word files. |
PDF |
PDF files larger than 30 MB will be scanned as binary files. Included file extensions: pdf. Setting bytes_limit_per_file or bytes_limit_per_file_percent has no effect on PDF files. |
AVRO |
Included file extensions: avro |
CSV |
Included file extensions: csv |
TSV |
Included file extensions: tsv |
POWERPOINT |
Microsoft PowerPoint files larger than 30 MB will be scanned as binary files. Included file extensions: pptx, pptm, potx, potm, pot. Setting bytes_limit_per_file or bytes_limit_per_file_percent has no effect on PowerPoint files. |
EXCEL |
Microsoft Excel files larger than 30 MB will be scanned as binary files. Included file extensions: xlsx, xlsm, xltx, xltm. Setting bytes_limit_per_file or bytes_limit_per_file_percent has no effect on Excel files. |
Finding
Represents a piece of potentially sensitive content.
Fields | |
---|---|
name |
Resource name in format projects/{project}/locations/{location}/findings/{finding} Populated only when viewing persisted findings. |
quote |
The content that was found. Even if the content is not textual, it may be converted to a textual representation here. Provided if |
info_ |
The type of content that might have been found. Provided if |
likelihood |
Confidence of how likely it is that the |
location |
Where the content was found. |
create_ |
Timestamp when finding was detected. |
quote_ |
Contains data parsed from quotes. Only populated if include_quote was set to true and a supported infoType was requested. Currently supported infoTypes: DATE, DATE_OF_BIRTH and TIME. |
resource_ |
The job that stored the finding. |
trigger_ |
Job trigger name, if applicable, for this finding. |
labels |
The labels associated with this finding. Label keys must be between 1 and 63 characters long and must conform to the expected regular expression. Label values must be between 0 and 63 characters long and must conform to the expected regular expression. No more than 10 labels can be associated with a given finding. Examples:
|
job_ |
Time the job started that produced this finding. |
job_ |
The job that stored the finding. |
finding_ |
The unique finding id. |
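The label constraints above can be enforced client-side before attaching labels. The key and value patterns below use the common Google Cloud label conventions as an assumption, since the exact regular expressions are not reproduced here:

```python
import re

# Assumed patterns following common Google Cloud label conventions.
KEY_RE = re.compile(r"^[a-z][a-z0-9_-]{0,62}$")    # 1-63 chars
VALUE_RE = re.compile(r"^[a-z0-9_-]{0,63}$")       # 0-63 chars

def validate_finding_labels(labels):
    """Check Finding label limits: at most 10 labels, key 1-63 chars,
    value 0-63 chars (patterns assumed, not from this doc)."""
    if len(labels) > 10:
        raise ValueError("no more than 10 labels per finding")
    for key, value in labels.items():
        if not KEY_RE.match(key):
            raise ValueError(f"invalid label key: {key!r}")
        if not VALUE_RE.match(value):
            raise ValueError(f"invalid label value: {value!r}")
```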
FinishDlpJobRequest
The request message for finishing a DLP hybrid job.
Fields | |
---|---|
name |
Required. The name of the DlpJob resource to be finished. Authorization requires the following IAM permission on the specified resource
|
FixedSizeBucketingConfig
Buckets values based on fixed size ranges. The Bucketing transformation can provide all of this functionality, but requires more configuration. This message is provided as a convenience to the user for simple bucketing strategies.
The transformed value will be a hyphenated string of {lower_bound}-{upper_bound}. For example, if lower_bound = 10 and upper_bound = 20, all values that are within this bucket will be replaced with "10-20".
This can be used on data of type: double, long.
If the bound Value type differs from the type of data being transformed, we will first attempt converting the type of the data to be transformed to match the type of the bound before comparing.
See https://cloud.google.com/sensitive-data-protection/docs/concepts-bucketing to learn more.
Fields | |
---|---|
lower_ |
Required. Lower bound value of buckets. All values less than |
upper_ |
Required. Upper bound value of buckets. All values greater than upper_bound are grouped together into a single bucket; for example if |
bucket_ |
Required. Size of each bucket (except for minimum and maximum buckets). So if |
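The bucketing described above can be sketched as follows. The hyphenated interior labels match the documented "10-20" example; the labels for the single buckets below lower_bound and above upper_bound ("<10", "20+") are illustrative assumptions, since this section does not specify them:

```python
def fixed_size_bucket(value, lower_bound, upper_bound, bucket_size):
    """Replace a numeric value with its hyphenated bucket label,
    e.g. 12 -> '10-20' for lower_bound=10, upper_bound=20, bucket_size=10."""
    if value < lower_bound:
        return f"<{lower_bound}"   # single bucket below the range (assumed label)
    if value >= upper_bound:
        return f"{upper_bound}+"   # single bucket above the range (assumed label)
    offset = int((value - lower_bound) // bucket_size) * bucket_size
    lo = lower_bound + offset
    hi = min(lo + bucket_size, upper_bound)
    return f"{lo}-{hi}"
```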
GetColumnDataProfileRequest
Request to get a column data profile.
Fields | |
---|---|