Parallelism is controlled by the number of nodes in the
Bigtable cluster. Each node manages one or more key ranges,
although key ranges can move between nodes as part of
load balancing. For more information,
see Reads and performance in the
Bigtable documentation.
You are charged for the number of nodes in your instance's clusters. See Bigtable pricing.
Performance
The following table shows performance metrics for Bigtable read
operations. The workloads were run on one e2-standard2 worker, using the
Apache Beam SDK 2.48.0 for Java. They did not use Runner v2.

Workload: 100M records, 1 kB, 1 column
Throughput (bytes): 180 MBps
Throughput (elements): 170,000 elements per second
These metrics are based on simple batch pipelines. They are intended to compare performance
between I/O connectors, and are not necessarily representative of real-world pipelines.
Dataflow pipeline performance is complex, and is a function of VM type, the data
being processed, the performance of external sources and sinks, and user code. Metrics are based
on running the Java SDK, and aren't representative of the performance characteristics of other
language SDKs. For more information, see Beam IO Performance.
Best practices
For new pipelines, use the BigtableIO connector, not
CloudBigtableIO.
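The following is a minimal sketch of a batch read with BigtableIO; the project, instance, and table IDs are placeholders, and the key-extraction step is only illustrative:

```java
import com.google.bigtable.v2.Row;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigtable.BigtableIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.TypeDescriptors;

public class BigtableReadExample {
  public static void main(String[] args) {
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
    Pipeline pipeline = Pipeline.create(options);

    pipeline
        // Read rows from Bigtable. The IDs below are placeholders.
        .apply("ReadFromBigtable", BigtableIO.read()
            .withProjectId("my-project")
            .withInstanceId("my-instance")
            .withTableId("my-table"))
        // Each element is a com.google.bigtable.v2.Row; here we extract the row key.
        .apply("ExtractRowKeys", MapElements.into(TypeDescriptors.strings())
            .via((Row row) -> row.getKey().toStringUtf8()));

    pipeline.run().waitUntilFinish();
  }
}
```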
Create separate app profiles for each type of
pipeline. App profiles enable better metrics for differentiating traffic
between pipelines, both for support and for tracking usage.
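As a hedged sketch, a pipeline can route its traffic through a dedicated app profile with withAppProfileId; the profile name here is hypothetical and must already exist in the instance:

```java
// "batch-analytics" is a hypothetical app profile created beforehand in Bigtable.
BigtableIO.Read read = BigtableIO.read()
    .withProjectId("my-project")
    .withInstanceId("my-instance")
    .withTableId("my-table")
    .withAppProfileId("batch-analytics");
```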
Monitor the Bigtable nodes. If you experience performance
bottlenecks, check whether resources such as CPU utilization are constrained
within Bigtable. For more information, see
Monitoring.
In general, the default timeouts are well tuned for most pipelines. If a
streaming pipeline appears to get stuck reading from Bigtable,
try calling withAttemptTimeout to adjust the attempt
timeout.
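For example, a sketch that raises the per-attempt timeout; the 5-minute value is illustrative, not a recommendation:

```java
import org.joda.time.Duration;

// Raise the per-attempt timeout for reads that appear to stall.
BigtableIO.Read read = BigtableIO.read()
    .withProjectId("my-project")
    .withInstanceId("my-instance")
    .withTableId("my-table")
    .withAttemptTimeout(Duration.standardMinutes(5));
```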
Consider enabling Bigtable autoscaling, or resize the
Bigtable cluster to scale with the size of your Dataflow jobs.

Consider setting
maxNumWorkers on the Dataflow job to limit load
on the Bigtable cluster.
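The cap can be passed on the command line as --maxNumWorkers, or set programmatically; a sketch with an illustrative value of 10:

```java
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

// Cap Dataflow autoscaling so the worker count, and therefore the load
// on the Bigtable cluster, stays bounded. The value 10 is illustrative.
DataflowPipelineOptions options =
    PipelineOptionsFactory.fromArgs(args).as(DataflowPipelineOptions.class);
options.setMaxNumWorkers(10);
```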
If significant processing is done on a Bigtable element before
a shuffle, calls to Bigtable might time out. In that case, you can
call withMaxBufferElementCount to buffer
elements. This method converts the read operation from streaming to paginated,
which avoids the issue.
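A sketch of a buffered (paginated) read; the page size of 100 is illustrative:

```java
// Buffer up to 100 elements at a time so that slow per-element processing
// before a shuffle doesn't hold a streaming read open until it times out.
BigtableIO.Read read = BigtableIO.read()
    .withProjectId("my-project")
    .withInstanceId("my-instance")
    .withTableId("my-table")
    .withMaxBufferElementCount(100);
```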
If you use a single Bigtable cluster for both streaming and
batch pipelines, and performance degrades on the Bigtable side,
consider setting up replication on the cluster. Then separate the batch
and streaming pipelines, so that they read from different replicas. For more
information, see Replication overview.
[[["Fácil de entender","easyToUnderstand","thumb-up"],["Meu problema foi resolvido","solvedMyProblem","thumb-up"],["Outro","otherUp","thumb-up"]],[["Difícil de entender","hardToUnderstand","thumb-down"],["Informações incorretas ou exemplo de código","incorrectInformationOrSampleCode","thumb-down"],["Não contém as informações/amostras de que eu preciso","missingTheInformationSamplesINeed","thumb-down"],["Problema na tradução","translationIssue","thumb-down"],["Outro","otherDown","thumb-down"]],["Última atualização 2025-09-04 UTC."],[[["\u003cp\u003eUse the Apache Beam Bigtable I/O connector to read data from Bigtable to Dataflow, considering Google-provided Dataflow templates as an alternative depending on your specific use case.\u003c/p\u003e\n"],["\u003cp\u003eParallelism in reading Bigtable data is governed by the number of nodes in the Bigtable cluster, with each node managing key ranges.\u003c/p\u003e\n"],["\u003cp\u003ePerformance metrics for Bigtable read operations on one \u003ccode\u003ee2-standard2\u003c/code\u003e worker using Apache Beam SDK 2.48.0 for Java, show a throughput of 180 MBps or 170,000 elements per second for 100M records, 1 kB, and 1 column, noting that real-world pipeline performance may vary.\u003c/p\u003e\n"],["\u003cp\u003eFor new pipelines, use the \u003ccode\u003eBigtableIO\u003c/code\u003e connector instead of \u003ccode\u003eCloudBigtableIO\u003c/code\u003e, and create separate app profiles for each pipeline type for better traffic differentiation and tracking.\u003c/p\u003e\n"],["\u003cp\u003eBest practices for pipeline optimization include monitoring Bigtable node resources, adjusting timeouts as needed, considering Bigtable autoscaling or resizing, and potentially using replication to separate batch and streaming pipelines for improved performance.\u003c/p\u003e\n"]]],[],null,["# Read from Bigtable to Dataflow\n\nTo read data from Bigtable to Dataflow, use the\nApache Beam [Bigtable I/O connector](https://beam.apache.org/releases/javadoc/current/org/apache/beam/sdk/io/gcp/bigtable/package-summary.html).\n| **Note:** Depending on your scenario, consider using one of the [Google-provided Dataflow templates](/dataflow/docs/guides/templates/provided-templates). Several of these read from Bigtable.\n\nParallelism\n-----------\n\nParallelism is controlled by the number of\n[nodes](/bigtable/docs/instances-clusters-nodes#nodes) in the\nBigtable cluster. Each node manages one or more key ranges,\nalthough key ranges can move between nodes as part of\n[load balancing](/bigtable/docs/overview#load-balancing). For more information,\nsee [Reads and performance](/bigtable/docs/reads#performance) in the\nBigtable documentation.\n\nYou are charged for the number of nodes in your instance's clusters. See\n[Bigtable pricing](/bigtable/pricing).\n\nPerformance\n-----------\n\nThe following table shows performance metrics for Bigtable read\noperations. The workloads were run on one `e2-standard2` worker, using the\nApache Beam SDK 2.48.0 for Java. They did not use Runner v2.\n\n\nThese metrics are based on simple batch pipelines. They are intended to compare performance\nbetween I/O connectors, and are not necessarily representative of real-world pipelines.\nDataflow pipeline performance is complex, and is a function of VM type, the data\nbeing processed, the performance of external sources and sinks, and user code. Metrics are based\non running the Java SDK, and aren't representative of the performance characteristics of other\nlanguage SDKs. 
For more information, see [Beam IO\nPerformance](https://beam.apache.org/performance/).\n\n\u003cbr /\u003e\n\nBest practices\n--------------\n\n- For new pipelines, use the [`BigtableIO`](https://beam.apache.org/releases/javadoc/current/org/apache/beam/sdk/io/gcp/bigtable/BigtableIO.html) connector, not\n `CloudBigtableIO`.\n\n- Create separate [app profiles](/bigtable/docs/app-profiles) for each type of\n pipeline. App profiles enable better metrics for differentiating traffic\n between pipelines, both for support and for tracking usage.\n\n- Monitor the Bigtable nodes. If you experience performance\n bottlenecks, check whether resources such as CPU utilization are constrained\n within Bigtable. For more information, see\n [Monitoring](/bigtable/docs/monitoring-instance).\n\n- In general, the default timeouts are well tuned for most pipelines. If a\n streaming pipeline appears to get stuck reading from Bigtable,\n try calling [`withAttemptTimeout`](https://beam.apache.org/releases/javadoc/current/org/apache/beam/sdk/io/gcp/bigtable/BigtableIO.Read.html#withAttemptTimeout-org.joda.time.Duration-) to adjust the attempt\n timeout.\n\n- Consider enabling\n [Bigtable autoscaling](/bigtable/docs/autoscaling), or resize\n the Bigtable cluster to scale with the size of your\n Dataflow jobs.\n\n- Consider setting\n [`maxNumWorkers`](/dataflow/docs/reference/pipeline-options#resource_utilization)\n on the Dataflow job to limit load on the\n Bigtable cluster.\n\n- If significant processing is done on a Bigtable element before\n a shuffle, calls to Bigtable might time out. In that case, you\n can call [`withMaxBufferElementCount`](https://beam.apache.org/releases/javadoc/current/org/apache/beam/sdk/io/gcp/bigtable/BigtableIO.Read.html#withMaxBufferElementCount-java.lang.Integer-) to buffer\n elements. This method converts the read operation from streaming to paginated,\n which avoids the issue.\n\n- If you use a single Bigtable cluster for both streaming and\n batch pipelines, and the performance degrades on the Bigtable\n side, consider setting up replication on the cluster. Then separate the batch\n and streaming pipelines, so that they read from different replicas. For more\n information, see [Replication overview](/bigtable/docs/replication-overview).\n\nWhat's next\n-----------\n\n- Read the [Bigtable I/O connector](https://beam.apache.org/releases/javadoc/current/org/apache/beam/sdk/io/gcp/bigtable/package-summary.html) documentation.\n- See the list of [Google-provided templates](/dataflow/docs/guides/templates/provided-templates)."]]