Create an Apache Iceberg table with metadata in BigQuery metastore

This document shows you how to run Dataproc Serverless for Spark SQL and PySpark batch workloads to create an Apache Iceberg table with metadata stored in BigQuery metastore. For information on other ways to run Spark code, see Run PySpark code in a BigQuery notebook and Run an Apache Spark workload.

Before you begin

If you haven't already done so, create a Google Cloud project and a Cloud Storage bucket.

  1. Set up your project

    1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
    2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

      Go to project selector

    3. Enable the Dataproc, BigQuery, and Cloud Storage APIs.

      Enable the APIs


  2. Create a Cloud Storage bucket in your project.

    1. In the Google Cloud console, go to the Cloud Storage Buckets page.

      Go to Buckets

    2. Click Create.
    3. On the Create a bucket page, enter your bucket information. To go to the next step, click Continue.
      1. In the Get started section, do the following:
        • Enter a globally unique name that meets the bucket naming requirements.
        • To add a bucket label, expand the Labels section, click Add label, and specify a key and a value for your label.
      2. In the Choose where to store your data section, do the following:
        1. Select a Location type.
        2. From the Location type drop-down menu, choose a location where your bucket's data is permanently stored.
        3. To set up cross-bucket replication, select Add cross-bucket replication via Storage Transfer Service and follow these steps:

          1. In the Bucket menu, select a bucket.
          2. In the Replication settings section, click Configure to configure settings for the replication job.

            The Configure cross-bucket replication pane appears.

            • To filter the objects to replicate by object name prefix, enter a prefix that identifies the objects you want to include or exclude, then click Add a prefix.
            • To set a storage class for the replicated objects, select a storage class from the Storage class menu. If you skip this step, the replicated objects will use the destination bucket's storage class by default.
            • Click Done.
      3. In the Choose how to store your data section, do the following:
        1. Select a default storage class for the bucket or Autoclass for automatic storage class management of your bucket's data.
        2. To enable hierarchical namespace, in the Optimize storage for data-intensive workloads section, select Enable hierarchical namespace on this bucket.
      4. In the Choose how to control access to objects section, select whether or not your bucket enforces public access prevention, and select an access control method for your bucket's objects.
      5. In the Choose how to protect object data section, do the following:
        • Select any of the options under Data protection that you want to set for your bucket.
          • To enable soft delete, click the Soft delete policy (For data recovery) checkbox, and specify the number of days you want to retain objects after deletion.
          • To set Object Versioning, click the Object versioning (For version control) checkbox, and specify the maximum number of versions per object and the number of days after which the noncurrent versions expire.
          • To enable the retention policy on objects and buckets, click the Retention (For compliance) checkbox, and then do the following:
            • To enable Object Retention Lock, click the Enable object retention checkbox.
            • To enable Bucket Lock, click the Set bucket retention policy checkbox, and choose a unit of time and a length of time for your retention period.
        • To choose how your object data will be encrypted, expand the Data encryption section, and select a Data encryption method.
    4. Click Create.

  3. Grant the BigQuery Data Editor (roles/bigquery.dataEditor) role to the Compute Engine default service account, PROJECT_NUMBER-compute@developer.gserviceaccount.com. For instructions, see Grant a single role.

    Google Cloud CLI example:

    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member="serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
        --role="roles/bigquery.dataEditor"
    

    Notes:

    • PROJECT_ID and PROJECT_NUMBER are listed in the Project info section in the Google Cloud console Dashboard.
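
If you prefer to script the bucket creation in step 2 instead of using the Google Cloud console, the following is a minimal sketch that uses the Cloud Storage Python client library (google-cloud-storage). The project ID, bucket name, and location are placeholder values.

    from google.cloud import storage

    # Placeholder values: replace with your project ID, a globally unique
    # bucket name, and a supported location.
    PROJECT_ID = "PROJECT_ID"
    BUCKET_NAME = "BUCKET_NAME"
    LOCATION = "US"

    client = storage.Client(project=PROJECT_ID)

    # Create the bucket that will hold the Iceberg warehouse folder and the
    # batch workload dependencies.
    bucket = client.create_bucket(BUCKET_NAME, location=LOCATION)
    print(f"Created bucket {bucket.name} in {bucket.location}")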

OSS resources to BigQuery resources mapping

Note the following mapping between open source (OSS) resource terms and BigQuery resource terms:

OSS resource                           BigQuery resource
Namespace, Database                    Dataset
Partitioned or Unpartitioned Table     Table
View                                   View
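
To see this mapping in practice, the following is a minimal PySpark sketch, assuming the Iceberg and BigQuery metastore catalog dependencies are available in the Spark runtime (as they are in the Dataproc Serverless runtime used later in this document). CATALOG_NAME, PROJECT_ID, LOCATION, BUCKET, and WAREHOUSE_FOLDER are placeholder values.

    from pyspark.sql import SparkSession

    # Configure an Iceberg catalog backed by BigQuery metastore, using the same
    # catalog properties that the batch workloads below pass with --properties.
    spark = (
        SparkSession.builder.appName("oss-to-bigquery-mapping")
        .config("spark.sql.catalog.CATALOG_NAME", "org.apache.iceberg.spark.SparkCatalog")
        .config("spark.sql.catalog.CATALOG_NAME.catalog-impl",
                "org.apache.iceberg.gcp.bigquery.BigQueryMetastoreCatalog")
        .config("spark.sql.catalog.CATALOG_NAME.gcp_project", "PROJECT_ID")
        .config("spark.sql.catalog.CATALOG_NAME.gcp_location", "LOCATION")
        .config("spark.sql.catalog.CATALOG_NAME.warehouse", "gs://BUCKET/WAREHOUSE_FOLDER")
        .getOrCreate()
    )

    # A namespace created through the catalog appears as a BigQuery dataset ...
    spark.sql("CREATE NAMESPACE IF NOT EXISTS CATALOG_NAME.example_namespace")

    # ... and an Iceberg table appears as a BigQuery table in that dataset.
    spark.sql(
        "CREATE TABLE IF NOT EXISTS CATALOG_NAME.example_namespace.example_table "
        "(id int, data string) USING ICEBERG"
    )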

Create an Iceberg table

This section shows you how to create an Iceberg table with metadata in BigQuery metastore using Dataproc Serverless Spark SQL and PySpark batch workloads.

Spark SQL

Run a Spark SQL workload to create an Iceberg table

The following steps show you how to run a Dataproc Serverless Spark SQL batch workload to create an Iceberg table with table metadata stored in BigQuery metastore.

  1. Copy the following Spark SQL commands into an iceberg-table.sql file, locally or in Cloud Shell.

    USE CATALOG_NAME;
    CREATE NAMESPACE IF NOT EXISTS example_namespace;
    DROP TABLE IF EXISTS example_table;
    CREATE TABLE example_table (id int, data string) USING ICEBERG LOCATION 'gs://BUCKET/WAREHOUSE_FOLDER';
    INSERT INTO example_table VALUES (1, 'first row');
    ALTER TABLE example_table ADD COLUMNS (newDoubleCol double);
    DESCRIBE TABLE example_table;
    

    Replace the following:

    • CATALOG_NAME: The Iceberg catalog name.
    • BUCKET and WAREHOUSE_FOLDER: The Cloud Storage bucket and folder used as the Iceberg warehouse directory.
  2. Run the following command locally or in Cloud Shell from the directory containing iceberg-table.sql to submit the Spark SQL workload.

    gcloud dataproc batches submit spark-sql iceberg-table.sql \
        --project=PROJECT_ID \
        --region=REGION \
        --deps-bucket=BUCKET_NAME \
        --version=2.2 \
        --subnet=SUBNET_NAME \
        --properties="spark.sql.catalog.CATALOG_NAME=org.apache.iceberg.spark.SparkCatalog,spark.sql.catalog.CATALOG_NAME.catalog-impl=org.apache.iceberg.gcp.bigquery.BigQueryMetastoreCatalog,spark.sql.catalog.CATALOG_NAME.gcp_project=PROJECT_ID,spark.sql.catalog.CATALOG_NAME.gcp_location=LOCATION,spark.sql.catalog.CATALOG_NAME.warehouse=gs://BUCKET/WAREHOUSE_FOLDER"
      

    Notes:

    • PROJECT_ID: Your Google Cloud project ID. Project IDs are listed in the Project info section on the Google Cloud console Dashboard.
    • REGION: An available Compute Engine region to run the workload.
    • BUCKET_NAME: The name of your Cloud Storage bucket. Spark uploads workload dependencies to a /dependencies folder in this bucket before running the batch workload. The WAREHOUSE_FOLDER is located in this bucket.
    • --version: Dataproc Serverless runtime version 2.2 or later.
    • SUBNET_NAME: The name of a VPC subnet in the REGION. If you omit this flag, Dataproc Serverless selects the default subnet in the session region. Dataproc Serverless enables Private Google Access (PGA) on the subnet. For network connectivity requirements, see Dataproc Serverless for Spark network configuration.
    • LOCATION: A supported BigQuery location. The default location is "US".
    • --properties: Catalog properties.
  3. View table metadata in BigQuery.

    1. In the Google Cloud console, go to the BigQuery page.

      Go to BigQuery Studio

    2. View Iceberg table metadata.
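
To check the table metadata programmatically instead of in the console, the following is a minimal sketch that uses the BigQuery Python client library (google-cloud-bigquery). It assumes PROJECT_ID is a placeholder for your project ID and that the namespace and table names match the iceberg-table.sql file above.

    from google.cloud import bigquery

    # The Iceberg namespace surfaces as a BigQuery dataset and the Iceberg table
    # as a BigQuery table, so standard metadata calls work against them.
    client = bigquery.Client(project="PROJECT_ID")

    table = client.get_table("PROJECT_ID.example_namespace.example_table")
    for field in table.schema:
        print(field.name, field.field_type)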

PySpark

The following steps show you how to run a Dataproc Serverless PySpark batch workload to create an Iceberg table with table metadata stored in BigQuery metastore.

  1. Copy the following PySpark code into an iceberg-table.py file, locally or in Cloud Shell.
    from pyspark.sql import SparkSession
    spark = SparkSession.builder.appName("iceberg-table-example").getOrCreate()
    
    catalog = "CATALOG_NAME"
    namespace = "NAMESPACE"
    
    spark.sql(f"USE `{catalog}`;")
    spark.sql(f"CREATE NAMESPACE IF NOT EXISTS `{namespace}`;")
    spark.sql(f"USE `{namespace}`;")
    
    # Create table and display schema
    spark.sql("DROP TABLE IF EXISTS example_iceberg_table")
    spark.sql("CREATE TABLE example_iceberg_table (id int, data string) USING ICEBERG") 
    spark.sql("DESCRIBE example_iceberg_table;")
    
    # Insert table data.
    spark.sql("INSERT INTO example_iceberg_table VALUES (1, 'first row');")
    
    # Alter table, then display schema.
    spark.sql("ALTER TABLE example_iceberg_table ADD COLUMNS (newDoubleCol double);")
    spark.sql("DESCRIBE example_iceberg_table;").show()

    Replace the following:

    • CATALOG_NAME and NAMESPACE: The Iceberg catalog name and namespace combine to identify the Iceberg table (catalog.namespace.table_name).
  2. Run the following command locally or in Cloud Shell from the directory containing iceberg-table.py to submit the PySpark workload.
    gcloud dataproc batches submit pyspark iceberg-table.py \
        --project=PROJECT_ID \
        --region=REGION \
        --deps-bucket=BUCKET_NAME \
        --version=2.2 \
        --subnet=SUBNET_NAME \
        --properties="spark.sql.catalog.CATALOG_NAME=org.apache.iceberg.spark.SparkCatalog,spark.sql.catalog.CATALOG_NAME.catalog-impl=org.apache.iceberg.gcp.bigquery.BigQueryMetastoreCatalog,spark.sql.catalog.CATALOG_NAME.gcp_project=PROJECT_ID,spark.sql.catalog.CATALOG_NAME.gcp_location=LOCATION,spark.sql.catalog.CATALOG_NAME.warehouse=gs://BUCKET/WAREHOUSE_FOLDER"
        

    Notes:

    • PROJECT_ID: Your Google Cloud project ID. Project IDs are listed in the Project info section on the Google Cloud console Dashboard.
    • REGION: An available Compute Engine region to run the workload.
    • BUCKET_NAME: The name of your Cloud Storage bucket. Spark uploads workload dependencies to a /dependencies folder in this bucket before running the batch workload.
    • --version: Dataproc Serverless runtime version 2.2 or later.
    • SUBNET_NAME: The name of a VPC subnet in the REGION. If you omit this flag, Dataproc Serverless selects the default subnet in the session region. Dataproc Serverless enables Private Google Access (PGA) on the subnet. For network connectivity requirements, see Dataproc Serverless for Spark network configuration.
    • LOCATION: A supported BigQuery location. The default location is "US".
    • BUCKET and WAREHOUSE_FOLDER: Cloud Storage bucket and folder used as the Iceberg warehouse directory.
    • --properties: Catalog properties.
  3. View table schema in BigQuery.
    1. In the Google Cloud console, go to the BigQuery page.

      Go to BigQuery Studio

    2. View Iceberg table metadata.
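
If you want to submit the PySpark batch from code rather than with the gcloud CLI, the following is a minimal sketch that uses the Dataproc Python client library (google-cloud-dataproc). It assumes iceberg-table.py has already been uploaded to Cloud Storage and that the placeholder values (PROJECT_ID, REGION, BUCKET, CATALOG_NAME, LOCATION, and WAREHOUSE_FOLDER) match those used above.

    from google.cloud import dataproc_v1

    PROJECT_ID = "PROJECT_ID"
    REGION = "REGION"

    # The Batch Controller endpoint is regional.
    client = dataproc_v1.BatchControllerClient(
        client_options={"api_endpoint": f"{REGION}-dataproc.googleapis.com:443"}
    )

    # Describe the batch: the main PySpark file, the runtime version, and the
    # same Iceberg catalog properties that the gcloud command passes.
    batch = dataproc_v1.Batch(
        pyspark_batch=dataproc_v1.PySparkBatch(
            main_python_file_uri="gs://BUCKET/iceberg-table.py"
        ),
        runtime_config=dataproc_v1.RuntimeConfig(
            version="2.2",
            properties={
                "spark.sql.catalog.CATALOG_NAME": "org.apache.iceberg.spark.SparkCatalog",
                "spark.sql.catalog.CATALOG_NAME.catalog-impl": "org.apache.iceberg.gcp.bigquery.BigQueryMetastoreCatalog",
                "spark.sql.catalog.CATALOG_NAME.gcp_project": PROJECT_ID,
                "spark.sql.catalog.CATALOG_NAME.gcp_location": "LOCATION",
                "spark.sql.catalog.CATALOG_NAME.warehouse": "gs://BUCKET/WAREHOUSE_FOLDER",
            },
        ),
    )

    operation = client.create_batch(
        parent=f"projects/{PROJECT_ID}/locations/{REGION}", batch=batch
    )
    result = operation.result()  # Waits for the batch workload to finish.
    print(result.state)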