[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-08-19。"],[[["\u003cp\u003eDataplex enables you to manage user permissions for administering lakes, accessing data connected to a lake, and viewing metadata about the data.\u003c/p\u003e\n"],["\u003cp\u003eDataplex offers basic roles like Viewer, Editor, Administrator, and Developer, as well as predefined roles for granular access, including Metadata Reader and Writer roles.\u003c/p\u003e\n"],["\u003cp\u003eData roles such as Data Reader, Data Writer, and Data Owner, granted at the lake, zone, or asset level, provide specific data access, which are then translated and propagated to the underlying storage resources, like Cloud Storage and BigQuery.\u003c/p\u003e\n"],["\u003cp\u003eThe Dataplex "Manage" view provides a comprehensive overview of permissions, while the "Secure" view offers a filtered view focused on specific resource data and lake roles, separating data roles from lake resource roles.\u003c/p\u003e\n"],["\u003cp\u003eDataplex propagates permissions to managed resources and continuously monitors for changes, while also allowing fine-grained control using BigLake tables, which enable column-level, row-level, and table-level policies, and manages metadata security across BigQuery, Dataproc Metastore, and Data Catalog.\u003c/p\u003e\n"]]],[],null,["# Secure your lake\n\nThis document describes how to secure and manage access to\nDataplex Universal Catalog lakes.\n\nThe Dataplex Universal Catalog security model lets you manage user permissions for\nthe following tasks:\n\n- Administering a lake (creating and attaching assets, zones, and additional lakes)\n- Accessing data connected to a lake through the mapping asset (for example, Google Cloud resources, such as Cloud Storage buckets and BigQuery datasets)\n- Accessing metadata about the data connected to a lake\n\nAn administrator for a lake controls access to Dataplex Universal Catalog resources,\nsuch as lake, zone, and assets by granting the basic and predefined roles.\n\nBasic roles\n-----------\n\n^\\*^ To query a BigQuery table, you need the permission to run a BigQuery job. Set this permission in the project you want attributed or charged for the compute spend of the job. For more information, see [BigQuery predefined roles and permissions](/bigquery/docs/access-control). \nTo run a Spark job, create Dataproc clusters and submit Dataproc jobs in the project to which you want the compute attributed.\n\nPredefined roles\n----------------\n\nGoogle Cloud manages the predefined roles that provide granular access for\nDataplex Universal Catalog.\n\n### Metadata roles\n\nMetadata roles have the ability to view metadata, such as table schemas.\n\n### Data roles\n\n| **Note:** Specify data roles at the lake level or lower (to zones or assets). 
### Data roles

| **Note:** Specify data roles at the lake level or lower (at zones or assets). Data roles set at the project level aren't inherited by the lakes in the project.

Granting data roles to a principal gives them the ability to read or write data in the underlying resources pointed to by the assets of the lake.

Dataplex Universal Catalog maps its data roles to roles on each underlying storage resource, such as Cloud Storage and BigQuery. It translates and propagates the Dataplex Universal Catalog data roles to the underlying storage resources, setting the correct roles on each one. You can grant a single Dataplex Universal Catalog data role at any level of the lake hierarchy (for example, on a lake), and Dataplex Universal Catalog maintains the specified access on all resources connected to that lake (for example, the Cloud Storage buckets and BigQuery datasets referred to by assets in the underlying zones).

For example, granting a principal the `dataplex.dataWriter` role on a lake gives the principal write access to all data within the lake and its underlying zones and assets. Data roles granted at a lower level (for example, a zone) are inherited down the lake hierarchy to the underlying assets.

| **Note:** Dataplex Universal Catalog propagates IAM permissions to underlying resources. The propagation from data roles to underlying resource roles can take up to 30 minutes. Policy reconciliation, in which Dataplex Universal Catalog detects changes that underlying Cloud Storage admins make to propagated policies, can take up to three hours.
|
| To check the policy propagation status, use the following API fields:
|
| - For a lake or a zone, use [AssetStatus](/dataplex/docs/reference/rest/v1/AssetStatus).
| - For an asset, use [SecurityStatus](/dataplex/docs/reference/rest/v1/projects.locations.lakes.zones.assets#securitystatus).

Secure your lake
----------------

You can secure and manage access to your lake and the data attached to it. In the Google Cloud console, use one of the following views:

- The Dataplex Universal Catalog **Manage** view on the **Permissions** tab
- The Dataplex Universal Catalog **Secure** view

### Using the **Manage** view

The **Permissions** tab lets you manage all the permissions on a lake resource, and presents an unfiltered view of all the permissions, including inherited ones.

To secure your lake, follow these steps:

1. In the Google Cloud console, go to Dataplex Universal Catalog.

   [Go to Dataplex Universal Catalog](https://console.cloud.google.com/dataplex/lakes)

2. Navigate to the **Manage** view.

3. Click the name of the lake that you created.

4. Click the **Permissions** tab.

5. Click the **View by Roles** tab.

6. Click **Add** to add a new role. Add the **Dataplex Data Reader**, **Data Writer**, and **Data Owner** roles.

7. Verify that the **Dataplex Data Reader**, **Data Writer**, and **Data Owner** roles appear.
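Because propagation to the underlying storage resources can take time, you might want to check the status fields mentioned in the earlier note after granting data roles. The following is a minimal sketch, assuming the `google-cloud-dataplex` Python client library; the resource names are placeholders.

```python
# Minimal sketch: check how far data-role propagation has progressed.
# Assumes the google-cloud-dataplex client library; resource names are placeholders.
from google.cloud import dataplex_v1

client = dataplex_v1.DataplexServiceClient()

# For a lake or a zone, AssetStatus reports how many assets still have
# security policies being applied.
lake = client.get_lake(name="projects/my-project/locations/us-central1/lakes/my-lake")
print("Assets still applying security policy:",
      lake.asset_status.security_policy_applying_assets)

# For an individual asset, SecurityStatus reports whether the policy
# has been applied to the managed resource.
asset = client.get_asset(
    name="projects/my-project/locations/us-central1/lakes/my-lake"
         "/zones/raw-zone/assets/gcs-data"
)
print("Asset security state:", asset.security_status.state.name)
```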
### Using the **Secure** view

| **Note:** You can allow the Dataplex Universal Catalog **Secure** view to report missing permissions by granting the `cloudasset.assets.analyzeIamPolicy` [permission](/asset-inventory/docs/access-control) on the project, or the parent project, that contains the lake.

The Dataplex Universal Catalog **Secure** view in the Google Cloud console provides the following:

- A filtered view of only the Dataplex Universal Catalog roles that apply to a specific resource
- A separation of data roles from lake resource roles

**Figure 1**: In this example of a lake, both principals have data permissions on the asset called **Cloud Storage data (GCS data)**. These permissions aren't inherited from higher lake resources.

**Figure 2**: This example shows:

1. A service account that inherits the Dataplex Administrator role from the project.
2. Principals (email addresses) that inherit the Dataplex Editor and Viewer roles from the project. These are the roles that apply to all resources.
3. A principal (email address) that inherits the Dataplex Administrator role from the project.

Policy management
-----------------

After you specify your security policy, Dataplex Universal Catalog propagates the permissions to the IAM policies of the managed resources.

The security policy configured at the lake level is propagated to all the resources managed within that lake. Dataplex Universal Catalog provides propagation status and visibility into these large-scale propagations on the Dataplex Universal Catalog **Manage > Permissions** tab. It continuously monitors the managed resources for any changes made to IAM policies outside of Dataplex Universal Catalog.

Users who already have permissions on a resource continue to have them after the resource is attached to a Dataplex Universal Catalog lake. Similarly, role bindings that aren't managed by Dataplex Universal Catalog and are created or updated after the resource is attached remain unchanged.

### Set column-level, row-level, and table-level policies

Cloud Storage bucket assets have associated BigQuery [external tables](/bigquery/docs/external-tables) attached to them.

You can [upgrade a Cloud Storage bucket asset](/dataplex/docs/manage-assets#upgrade-asset), which means that Dataplex Universal Catalog removes the attached external tables and attaches [BigLake tables](/bigquery/docs/biglake-intro) instead.

You can use BigLake tables instead of external tables to get fine-grained access control, including [row-level controls](/bigquery/docs/row-level-security-intro), [column-level controls](/bigquery/docs/column-level-security-intro), and [column data masking](/bigquery/docs/column-data-masking).
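For example, after an asset upgrade, you could attach a row-level policy to one of the resulting BigLake tables by running BigQuery DDL. The following is a minimal sketch, assuming the `google-cloud-bigquery` Python client library; the table, policy name, grantee, and filter expression are hypothetical placeholders.

```python
# Minimal sketch: add a row-level access policy to a BigLake table that
# Dataplex Universal Catalog attached after an asset upgrade.
# The table, policy name, grantee, and filter column are placeholders.
from google.cloud import bigquery

client = bigquery.Client()

ddl = """
CREATE ROW ACCESS POLICY us_orders_only
ON `my-project.raw_zone.orders`
GRANT TO ("user:analyst@example.com")
FILTER USING (region = "US")
"""

# Row access policies are created with DDL; wait for the job to finish.
client.query(ddl).result()
```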
Metadata security
-----------------

Metadata primarily refers to the schema information associated with user data present in resources managed by a lake.

Dataplex Universal Catalog Discovery examines the data in managed resources and extracts tabular schema information. The discovered tables are published to the BigQuery, Dataproc Metastore, and Data Catalog ([Deprecated](/data-catalog/docs/deprecations)) systems.

| **Note:** The actual underlying data resides in the resource that's managed by the asset and is protected by data permissions through the Dataplex Data Reader role (`roles/dataplex.dataReader`) and the Dataplex Data Writer role (`roles/dataplex.dataWriter`). You are responsible for managing the metadata permissions within each of the metadata systems.

### BigQuery

Each discovered table has an associated table registered in BigQuery. For each zone, there is an associated BigQuery dataset under which all the external tables associated with tables discovered in that zone are registered.

The discovered Cloud Storage-hosted tables are registered under the dataset created for the zone.

| **Note:** Dataplex Universal Catalog supports upgrading external tables into BigLake tables. For more information, see [Set column-level, row-level, and table-level policies](#upgrade).

### Dataproc Metastore

Databases and tables are made available in the Dataproc Metastore service associated with the Dataplex Universal Catalog lake instance. Each data zone has an associated database, and each asset can have one or more associated tables.

The data in a Dataproc Metastore service is secured by configuring your VPC-SC network. The Dataproc Metastore instance is provided to Dataplex Universal Catalog during lake creation, so it is already a user-managed resource.

### Data Catalog

Each discovered table has an associated entry in Data Catalog ([Deprecated](/data-catalog/docs/deprecations)) to enable search and discovery.

Data Catalog requires IAM policy names during entry creation. Therefore, Dataplex Universal Catalog provides the IAM policy name of the Dataplex Universal Catalog asset resource that the entry should be associated with. As a result, the permissions on the Data Catalog entry are driven by the permissions on the asset resource. To control access to the metadata, grant the Dataplex Metadata Reader role (`roles/dataplex.metadataReader`) and the Dataplex Metadata Writer role (`roles/dataplex.metadataWriter`) on the asset resource.

What's next?
------------

- Learn more about [Dataplex Universal Catalog IAM](/dataplex/docs/iam-and-access-control).
- Learn more about [Dataplex Universal Catalog IAM roles](/dataplex/docs/iam-roles).
- Learn more about [Dataplex Universal Catalog IAM permissions](/dataplex/docs/iam-permissions).