Import and export data

This page lists the available methods for importing data into and exporting data from Bigtable.

Import data into Bigtable

To import BigQuery data into Bigtable, see Export data to Bigtable (Reverse ETL) in the BigQuery documentation.

You can run continuous queries on your BigQuery data and export the results to Bigtable in real time using reverse ETL. For more information, see Introduction to continuous queries in the BigQuery documentation.
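
For illustration, the following is a minimal sketch of a reverse ETL export run with the bq command-line tool. All project, instance, app profile, table, dataset, and column names are placeholders, and the full requirements for the query shape, such as the required rowkey column and how other columns map to column families, are described in the BigQuery documentation.

    # Run a reverse ETL export from BigQuery to Bigtable (all names are placeholders).
    bq query --use_legacy_sql=false '
    EXPORT DATA OPTIONS (
      format = "CLOUD_BIGTABLE",
      uri = "https://bigtable.googleapis.com/projects/my-project/instances/my-instance/appProfiles/default/tables/my-table"
    ) AS
    SELECT
      CAST(user_id AS STRING) AS rowkey,          -- Bigtable requires a column named rowkey
      STRUCT(clicks, last_seen) AS stats_summary  -- top-level STRUCTs map to column families
    FROM my_dataset.user_stats'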

Move or copy data using a template

You can use the following Dataflow templates to move or copy data between Bigtable and other sources or destinations.

BigQuery

The following Dataflow template lets you export data from BigQuery to Bigtable.
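
For example, you can launch a job from this template with the Google Cloud CLI. The following is a sketch with placeholder values; the template path and the parameter names (readQuery, readIdColumn, and the bigtableWrite* parameters) are as documented for the BigQuery-to-Bigtable flex template, but verify them against the current template reference before running the job.

    # Launch the BigQuery-to-Bigtable flex template (placeholder values).
    gcloud dataflow flex-template run bigquery-to-bigtable \
        --region=us-central1 \
        --template-file-gcs-location=gs://dataflow-templates-us-central1/latest/flex/BigQuery_to_Bigtable \
        --parameters=readQuery='SELECT * FROM my_dataset.my_table',readIdColumn=user_id,bigtableWriteProjectId=my-project,bigtableWriteInstanceId=my-instance,bigtableWriteTableId=my-table,bigtableWriteColumnFamily=cf1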

Apache Cassandra to Bigtable

The following Dataflow template lets you export data from Apache Cassandra to Bigtable.
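
For example, you can start a job from this template with the Google Cloud CLI. Treat the following as a sketch: all values are placeholders, and the parameter names (cassandraHosts, cassandraKeyspace, and so on) should be verified against the current template reference.

    # Copy a Cassandra table into Bigtable (placeholder values).
    gcloud dataflow jobs run cassandra-to-bigtable \
        --region=us-central1 \
        --gcs-location=gs://dataflow-templates-us-central1/latest/Cassandra_To_Cloud_Bigtable \
        --parameters=cassandraHosts=10.0.0.5,cassandraKeyspace=my_keyspace,cassandraTable=my_table,bigtableProjectId=my-project,bigtableInstanceId=my-instance,bigtableTableId=my-table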

Avro files

The following Dataflow templates allow you to export data from Bigtable as Avro files and then import the data back into Bigtable. You can execute the templates by using the Google Cloud CLI or the Google Cloud console. The source code is on GitHub.
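
For example, the export template can be run from the Google Cloud CLI as shown in the following sketch. All values are placeholders, and the parameter names reflect the documented Bigtable-to-Avro template; confirm them against the template reference for your region.

    # Export a Bigtable table to Avro files in Cloud Storage (placeholder values).
    gcloud dataflow jobs run bigtable-to-avro \
        --region=us-central1 \
        --gcs-location=gs://dataflow-templates-us-central1/latest/Cloud_Bigtable_to_GCS_Avro \
        --parameters=bigtableProjectId=my-project,bigtableInstanceId=my-instance,bigtableTableId=my-table,outputDirectory=gs://my-bucket/avro-export/,filenamePrefix=my-table-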

Parquet files

The following Dataflow templates allow you to export data from Bigtable as Parquet files and then import the data back into Bigtable. You can execute the templates by using the Google Cloud CLI or the Google Cloud console. The source code is on GitHub.
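
As a sketch of the import direction, the following Google Cloud CLI command runs the Parquet-to-Bigtable template against files previously exported to Cloud Storage. All values are placeholders; verify the parameter names against the template reference.

    # Import Parquet files from Cloud Storage into a Bigtable table (placeholder values).
    gcloud dataflow jobs run parquet-to-bigtable \
        --region=us-central1 \
        --gcs-location=gs://dataflow-templates-us-central1/latest/GCS_Parquet_to_Cloud_Bigtable \
        --parameters=bigtableProjectId=my-project,bigtableInstanceId=my-instance,bigtableTableId=my-table,inputFilePattern=gs://my-bucket/parquet-export/my-table-*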

SequenceFiles

The following Dataflow templates allow you to export data from Bigtable as SequenceFiles and then import the data back into Bigtable. You can execute the templates by using the Google Cloud CLI or the Google Cloud console.
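
For example, the export template can be launched with the Google Cloud CLI. The following is a sketch with placeholder values; note that the SequenceFile templates use slightly different parameter names from the Avro and Parquet templates (for example, bigtableProject and destinationPath), so confirm them against the template reference.

    # Export a Bigtable table to SequenceFiles in Cloud Storage (placeholder values).
    gcloud dataflow jobs run bigtable-to-sequencefile \
        --region=us-central1 \
        --gcs-location=gs://dataflow-templates-us-central1/latest/Cloud_Bigtable_to_GCS_SequenceFile \
        --parameters=bigtableProject=my-project,bigtableInstanceId=my-instance,bigtableTableId=my-table,destinationPath=gs://my-bucket/sequencefile-export/,filenamePrefix=my-table-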

Import from the Tables page

You can execute many of the import methods described on this page using the Google Cloud console. Import the following types of data from the Tables page:

  • CSV data
  • BigQuery data
  • Avro files
  • Cassandra keyspaces and tables
  • Parquet files
  • SequenceFiles

Console

  1. Open the list of Bigtable instances in the Google Cloud console.

  2. Click the instance that contains the table that you want to import data into.

  3. Click Tables in the left pane.

    The Tables page displays a list of tables in the instance.

  4. Next to the name of the table that you want to import data into, click the Table action menu.

  5. Click Import data, and then select the type of data that you want to import:

    • If you select Avro, Parquet, SequenceFile, or Cassandra, the console displays a partly completed Dataflow template. Fill out the job template and click Run job.
    • If you select CSV, the cbt CLI terminal window opens. For more information, see the Import CSV data section of this document.
    • If you select BigQuery, BigQuery Studio opens. Fill out the reverse ETL query and run it.

Export from the Tables page

You can execute some of the export methods described on this page using the Google Cloud console. Export the following types of data from the Tables page:

  • Avro files
  • Parquet files
  • SequenceFiles

Console

  1. Open the list of Bigtable instances in the Google Cloud console.

  2. Click the instance that contains the table that you want to export data from.

  3. Click Tables in the left pane.

    The Tables page displays a list of tables in the instance.

  4. Next to the name of the table, click the Table action menu.

  5. Click Export data, and then select the file type that you want to export.

    The console displays a partly completed Dataflow template.

  6. Fill out the job template and click Run job.

Import CSV data

You can import data from a CSV file into a Bigtable table by using the cbt CLI. To do this, make sure that your environment, such as Cloud Shell, can access the CSV file. You can get your CSV file into Cloud Shell in one of the following ways:

Upload a local CSV file:

  1. In Cloud Shell, click the More menu and select Upload.
  2. Select the CSV file from your local machine.
  3. After you upload the file, refer to the file by its name in the cbt CLI command.

Copy a CSV file from Cloud Storage:

The cbt CLI does not directly support importing from a Cloud Storage bucket. You must first copy the CSV file from Cloud Storage to your Cloud Shell environment. For more information, see Upload an object to a bucket.
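
For example, assuming a bucket and object name of your own:

    # Copy the CSV file from Cloud Storage into the current Cloud Shell directory.
    gcloud storage cp gs://my-bucket/my-data.csv .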

After the CSV file is available in your environment, use the cbt import command to import the data. For a sample command, see Batch write many rows based on the input file.
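
As a sketch, assuming a table named my-table with a column family named cf1 and an uploaded file named my-data.csv, the command looks like the following; the CSV header format that the cbt CLI expects is described in the linked reference.

    # Import the CSV file into the table, writing values to the cf1 column family.
    cbt -instance=my-instance import my-table my-data.csv column-family=cf1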

What's next