# Add a zone

Last updated (UTC): 2025-08-19.

This document describes what Dataplex Universal Catalog zones are and how to
add them to your Dataplex Universal Catalog lake.

Overview
--------

Dataplex Universal Catalog zones are named entities within a
Dataplex Universal Catalog lake.
They are logical groupings of unstructured, semi-structured, and structured
data, consisting of multiple assets, such as Cloud Storage buckets,
BigQuery datasets, and BigQuery tables.

A lake can include one or more zones. While a zone can be part of only one
lake, it can contain assets that point to resources in projects outside of
its parent project.

You can select configurations for a zone in Dataplex Universal Catalog. There
are two types of zones to choose from: raw and curated.

### Raw zones

Raw zones store structured data, semi-structured data such as CSV and JSON
files, and unstructured data in any format from external sources. Raw zones
are useful for staging raw data before performing any transformations. Data
can be stored in Cloud Storage buckets or BigQuery datasets.

Raw zones support bucket-level or dataset-level granularity for read and
write permissions. There are no restrictions on the type of data that can be
stored in raw zones.

### Curated zones

Curated zones store structured data. Data can be stored in
Cloud Storage buckets or BigQuery datasets.

Supported formats for Cloud Storage buckets include Parquet, Avro, and ORC.
Curated zones are useful for staging data that requires processing before
analysis, or for serving data that is ready for analysis.

BigQuery tables in a curated zone must have a well-defined schema and
Hive-style partitions. When you provide a schema for a table in a curated
zone, the data must conform to that schema without schema drift: incoming
data must be compatible with the table schema, and new partitions must not
have a schema that conflicts with it.

Curated zones support Cloud Storage bucket-level or
BigQuery dataset-level granularity for read and write permissions.

Before you begin
----------------

Before you can add zones to a lake, you must have a lake.
If you haven't already, [create a lake](/dataplex/docs/create-lake).

Most `gcloud dataplex` commands require a location. You can specify the
location by setting the `--location` parameter.

### Required roles

To get the permission that you need to add a zone, ask your administrator to
grant you the
[Dataplex Administrator](/iam/docs/roles-permissions/dataplex#dataplex.admin)
(`roles/dataplex.admin`) IAM role on the project. For more information about
granting roles, see
[Manage access to projects, folders, and organizations](/iam/docs/granting-changing-revoking-access).

This predefined role contains the `dataplex.zones.create` permission, which
is required to add a zone.

You might also be able to get this permission with
[custom roles](/iam/docs/creating-custom-roles) or other
[predefined roles](/iam/docs/roles-overview#predefined).

Add a zone
----------

You can add multiple zones to your lake. You add zones one at a time, but
you can continue to use your lake while a zone is being created.

To add a zone to an existing lake, follow these steps:

### Console

1. In the Google Cloud console, go to Dataplex Universal Catalog.

   [Go to Dataplex](https://console.cloud.google.com/dataplex/lakes)

2. Navigate to the **Manage** view.

3. In the **Manage** view, click the name of the lake that you want to add a
   zone to.

4. In the **Zones** tab, click add **Add zone**.

5. Enter a **Display name** for your zone.

   | **Note:** The zone ID is automatically generated for you. You can also
   provide your own ID. Choose a meaningful ID, because it's used in creating
   dataset and database names.

6. Click the **Type** menu and choose **Raw Zone** or **Curated Zone**. Learn
   more about [supported zone types](#zone-concepts).

7. Optional: Enter a description.

8. Under **Data locations**, select either **Regional** or **Multi-regional**.
   This choice can't be changed later.
   Single-region and multi-region data can't be mixed in the same zone.

9. Optional: Enable metadata discovery, which lets Dataplex Universal Catalog
   automatically scan and extract metadata from the data in your zone:

   1. Click **Discovery settings**.

   2. Make sure **Enable metadata discovery** is selected.

   3. Optional: Under **Include patterns**, list the files to include in the
      discovery scans.

   4. Optional: Under **Exclude patterns**, list the files to exclude from
      the discovery scans. If you enter both include and exclude patterns,
      exclude patterns are applied first.

   5. Click the **Repeats** menu and select a frequency. If you select
      **Custom**, in the **Schedule** field, enter a
      [job schedule](/scheduler/docs/configuring/cron-job-schedules#defining_the_job_schedule).
      Otherwise, the **Schedule** value is automatically filled for you.

   6. Click the **Timezone** menu and select a time zone.

10. Click **Create**.

### REST

To add a zone, use the
[lakes.zones.create](/dataplex/docs/reference/rest/v1/projects.locations.lakes.zones/create)
method.

It might take a few minutes for the zone to be created.

When zone creation succeeds, the zone automatically enters the active state.
If creation fails, the lake is rolled back to its previous state.

After you create your zone, you can map data stored in Cloud Storage buckets
and BigQuery datasets as assets to your zone. For more information, see
[Add an asset](/dataplex/docs/manage-assets#add-asset).

What's next
-----------

- Learn how to [manage buckets](/dataplex/docs/manage-buckets).
- Learn how to [create a lake](/dataplex/docs/create-lake).
- Learn more about [Cloud Audit Logs](/logging/docs/audit).
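As a rough sketch of the `lakes.zones.create` REST call described above, the
following Python snippet assembles the request URL and a request body. The
project, location, lake, and zone names are placeholders, and the field names
are assumptions based on the Dataplex v1 API; verify them against the
`lakes.zones.create` reference before use.

```python
import json

# Hypothetical identifiers; substitute your own values.
project = "my-project"
location = "us-central1"
lake = "my-lake"
zone_id = "raw-zone"

# Request body for lakes.zones.create (field names assumed from the
# Dataplex v1 API; check the reference documentation before relying on them).
zone = {
    "type": "RAW",  # or "CURATED"
    "resourceSpec": {"locationType": "SINGLE_REGION"},  # or "MULTI_REGION"
    "discoverySpec": {
        "enabled": True,
        "includePatterns": ["*.csv"],
        "excludePatterns": ["tmp/*"],  # exclude patterns are applied first
        "schedule": "0 */6 * * *",  # cron schedule for a Custom repeat
    },
}

# The zone ID is passed as a query parameter, not in the body.
url = (
    f"https://dataplex.googleapis.com/v1/projects/{project}"
    f"/locations/{location}/lakes/{lake}/zones?zoneId={zone_id}"
)

print(url)
print(json.dumps(zone, indent=2))
```

Sending this body in an authenticated `POST` to the URL (for example, with a
bearer token from `gcloud auth print-access-token`) would start zone creation,
which completes asynchronously as noted above.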