Bigtable Beam connector
The Bigtable Beam connector (BigtableIO) is an open source Apache Beam I/O connector that helps you perform batch and streaming operations on Bigtable data in a pipeline using Dataflow.
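As a minimal sketch (the project, instance, and table IDs below are placeholders, not real resources), a pipeline that reads rows through BigtableIO might look like the following:

```java
import com.google.bigtable.v2.Row;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigtable.BigtableIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.values.PCollection;

public class BigtableReadSketch {
  public static void main(String[] args) {
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
    Pipeline p = Pipeline.create(options);

    // Read all rows from the table. "my-project", "my-instance", and
    // "my-table" are placeholder IDs; substitute your own resources.
    PCollection<Row> rows = p.apply("ReadFromBigtable",
        BigtableIO.read()
            .withProjectId("my-project")
            .withInstanceId("my-instance")
            .withTableId("my-table"));

    p.run().waitUntilFinish();
  }
}
```

Running this requires a live Bigtable instance and the Beam SDK on the classpath; it is a sketch of the read path, not a complete application.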
If you are migrating data from HBase to Bigtable, or you are running an application that uses the HBase API instead of the Bigtable APIs, use the Bigtable HBase Beam connector (CloudBigtableIO) instead of the connector described on this page.
Before you create a Dataflow pipeline, check Apache Beam runtime support to make sure you are using a version of Java that is supported for Dataflow. Use the most recent supported release of Apache Beam.
The Bigtable Beam connector is used together with the Bigtable client for Java, a client library that calls the Bigtable APIs. You write code to deploy a pipeline that uses the connector to Dataflow, which handles the provisioning and management of resources and assists with the scalability and reliability of data processing.
For more information about the Apache Beam programming model, see the Beam documentation.
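To deploy a pipeline to Dataflow rather than running it locally, you select the Dataflow runner in the pipeline options. A minimal sketch, assuming placeholder project, region, and staging-bucket values:

```java
import org.apache.beam.runners.dataflow.DataflowRunner;
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class DeployToDataflowSketch {
  public static void main(String[] args) {
    DataflowPipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).as(DataflowPipelineOptions.class);
    options.setRunner(DataflowRunner.class);       // run on Dataflow, not locally
    options.setProject("my-project");              // placeholder project ID
    options.setRegion("us-central1");              // placeholder region
    options.setTempLocation("gs://my-bucket/tmp"); // placeholder staging bucket

    Pipeline p = Pipeline.create(options);
    // ...apply BigtableIO reads or writes and other transforms here...
    p.run();
  }
}
```

These same values can also be passed as command-line flags (for example, --runner=DataflowRunner) instead of being set in code.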
Batch write flow control
When you send batch writes (including delete requests) to a table using the Bigtable Beam connector, you can enable batch write flow control. When this feature is enabled, Bigtable automatically does the following:
- Rate-limits traffic to avoid overloading your Bigtable cluster
- Ensures the cluster is under enough load to trigger Bigtable autoscaling (if enabled), so that more nodes are automatically added to the cluster when needed
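As a sketch (resource IDs are placeholders), enabling the feature is a matter of adding withFlowControl(true) to the write transform:

```java
import com.google.bigtable.v2.Mutation;
import com.google.protobuf.ByteString;
import org.apache.beam.sdk.io.gcp.bigtable.BigtableIO;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;

public class FlowControlWriteSketch {
  // Apply a Bigtable write with batch write flow control enabled.
  static void writeWithFlowControl(
      PCollection<KV<ByteString, Iterable<Mutation>>> mutations) {
    mutations.apply("WriteToBigtable",
        BigtableIO.write()
            .withProjectId("my-project")    // placeholder project ID
            .withInstanceId("my-instance")  // placeholder instance ID
            .withTableId("my-table")        // placeholder table ID
            .withFlowControl(true));        // enable batch write flow control
  }
}
```

Each element of the input collection pairs a row key with the mutations to apply to that row; with flow control on, the connector throttles these writes to match what the cluster can absorb.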
For more information, see Batch write flow control (/bigtable/docs/writes#flow-control). For a code sample, see Enable batch write flow control (/bigtable/docs/writing-data#batch-write-flow-control).

Connector details

The Bigtable Beam connector is a component of the Apache Beam GitHub repository (https://github.com/apache/beam). The Javadoc is available at Class BigtableIO (https://beam.apache.org/releases/javadoc/current/org/apache/beam/sdk/io/gcp/bigtable/BigtableIO.html).

What's next

- Read an overview of Bigtable write requests (/bigtable/docs/writes).
- Review a list of Dataflow templates that work with Bigtable (/bigtable/docs/dataflow-templates).
- Bigtable Kafka Connect sink connector (/bigtable/docs/kafka-sink-connector)

Last updated 2025-08-27 UTC.