[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-08-18。"],[[["\u003cp\u003eDataflow Runner v2 is enabled by default for batch pipelines using Apache Beam Java SDK 2.54.0 or later and Python SDK 2.21.0 or later, and is the only available runner for Go SDK.\u003c/p\u003e\n"],["\u003cp\u003eRunner v2 supports multi-language pipelines, allowing the use of transforms from other Apache Beam SDKs, such as using Java transforms from a Python pipeline or vice versa.\u003c/p\u003e\n"],["\u003cp\u003eRunner v2 requires Streaming Engine for all streaming jobs and includes specific limitations for certain classes within the Apache Beam Java SDK, such as \u003ccode\u003eMapState\u003c/code\u003e and \u003ccode\u003eSetState\u003c/code\u003e.\u003c/p\u003e\n"],["\u003cp\u003eTo use Runner v2 when it is not enabled by default or to disable it when it is, the \u003ccode\u003e--experiments=use_runner_v2\u003c/code\u003e or \u003ccode\u003e--experiments=disable_runner_v2\u003c/code\u003e flags can be used, respectively, depending on the SDK.\u003c/p\u003e\n"],["\u003cp\u003eTroubleshooting Runner v2 involves monitoring both SDK process and runner harness process logs, with the SDK processes running user code and the runner harness managing other tasks.\u003c/p\u003e\n"]]],[],null,["# Use Dataflow Runner v2\n\nWhen you use Dataflow to run your pipeline, the\nDataflow runner uploads your pipeline code and dependencies to a\nCloud Storage bucket and creates a Dataflow job. 
This\nDataflow job runs your pipeline on managed resources in\nGoogle Cloud.\n\n- For batch pipelines that use the Apache Beam Java SDK versions 2.54.0 or later, Runner v2 is enabled by default.\n- For pipelines that use the Apache Beam Java SDK, Runner v2 is required when running multi-language pipelines, using custom containers, or using Spanner or Bigtable change stream pipelines. In other cases, use the default runner.\n- For pipelines that use the Apache Beam Python SDK versions 2.21.0 or later, Runner v2 is enabled by default. For pipelines that use the Apache Beam Python SDK versions 2.45.0 and later, Dataflow Runner v2 is the only Dataflow runner available.\n- For the Apache Beam SDK for Go, Dataflow Runner v2 is the only Dataflow runner available.\n\nRunner v2 uses a services-based architecture that benefits\nsome pipelines:\n\n- Dataflow Runner v2 lets you pre-build your Python\n container, which can improve VM startup times and Horizontal Autoscaling\n performance. For more information, see\n [Pre-build Python dependencies](/dataflow/docs/guides/build-container-image#prebuild).\n\n- Dataflow Runner v2 supports\n [multi-language pipelines](https://beam.apache.org/documentation/programming-guide/#multi-language-pipelines),\n a feature that enables your Apache Beam pipeline to use transforms defined in\n other Apache Beam SDKs. 
Dataflow Runner v2 supports\n [using Java transforms from a Python SDK pipeline](https://beam.apache.org/documentation/sdks/python-multi-language-pipelines/)\n and\n [using Python transforms from a Java SDK pipeline](https://beam.apache.org/documentation/sdks/java-multi-language-pipelines/).\n When you run Apache Beam pipelines without Runner v2, the\n Dataflow runner uses language-specific workers.\n\nLimitations and restrictions\n----------------------------\n\nDataflow Runner v2 has the following requirements:\n\n- Dataflow Runner v2 requires [Streaming Engine](/dataflow/docs/streaming-engine) for streaming jobs.\n- Because Dataflow Runner v2 requires Streaming Engine for streaming jobs, any Apache Beam transform that requires Dataflow Runner v2 also requires the use of Streaming Engine for streaming jobs. For example, the [Pub/Sub Lite I/O\n connector](https://beam.apache.org/releases/pydoc/current/apache_beam.io.gcp.pubsublite.html) for the Apache Beam SDK for Python is a cross-language transform that requires Dataflow Runner v2. If you try to disable Streaming Engine for a job or template that uses this transform, the job fails.\n- For streaming pipelines that use the Apache Beam Java SDK, the classes [`MapState`](https://beam.apache.org/releases/javadoc/current/index.html?org/apache/beam/sdk/state/MapState.html) and [`SetState`](https://beam.apache.org/releases/javadoc/current/index.html?org/apache/beam/sdk/state/SetState.html) are not supported with Runner v2. 
  To use the `MapState` and `SetState` classes with Java pipelines, enable Streaming Engine, disable Runner v2, and use Apache Beam SDK version 2.58.0 or later.
- For batch and streaming pipelines that use the Apache Beam Java SDK, the [`AfterSynchronizedProcessingTime`](https://beam.apache.org/releases/javadoc/current/org/apache/beam/sdk/transforms/windowing/AfterSynchronizedProcessingTime.html) class isn't supported.
- Dataflow [classic templates](/dataflow/docs/guides/templates/running-templates) can't be run with a different version of the Dataflow runner than they were built with, so Google-provided classic templates can't enable Runner v2. To enable Runner v2 for custom templates, set the `--experiments=use_runner_v2` flag when you build the template.
- Due to a known autoscaling issue, Runner v2 is disabled by default for batch Java pipelines that require [stateful processing](https://beam.apache.org/documentation/programming-guide/#state-and-timers). You can still enable Runner v2 for those pipelines (see [Enable Runner v2](/dataflow/docs/runner-v2#enable)), but pipeline performance might be severely bottlenecked.

Enable Runner v2
----------------

To enable Dataflow Runner v2, follow the configuration instructions for your Apache Beam SDK.

### Java

Dataflow Runner v2 requires Apache Beam Java SDK version 2.30.0 or later; version 2.44.0 or later is recommended.

For batch pipelines that use the Apache Beam Java SDK version 2.54.0 or later, Runner v2 is enabled by default.

To enable Runner v2, run your job with the `use_runner_v2` experiment.
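Regardless of SDK language, `use_runner_v2` is passed through the standard `--experiments` pipeline option. As a minimal sketch (the project and region values below are placeholders, not values from this guide), the launch arguments can be assembled like this:

```python
# Minimal sketch: build the launch arguments for a Dataflow job with
# Runner v2 enabled. Project and region values are placeholders.
base_args = [
    "--runner=DataflowRunner",
    "--project=my-project",      # placeholder
    "--region=us-central1",      # placeholder
]

def with_runner_v2(args):
    """Return a copy of the argument list with the Runner v2 experiment appended."""
    return args + ["--experiments=use_runner_v2"]

print(" ".join(with_runner_v2(base_args)))
```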
For\nmore information, see\n[Set experimental pipeline options](/dataflow/docs/guides/setting-pipeline-options#experimental).\n\n### Python\n\nFor pipelines that use the Apache Beam Python SDK versions\n2.21.0 or later, Runner v2 is enabled by default.\n\nDataflow Runner v2 isn't supported with the Apache Beam\nPython SDK versions 2.20.0 and earlier.\n\nIn some cases, your pipeline might not use Runner v2 even though\nthe pipeline runs on a supported SDK version. To run the job with Runner v2,\nset the `use_runner_v2` experiment. For more information, see\n[Set experimental pipeline options](/dataflow/docs/guides/setting-pipeline-options#experimental).\n\n### Go\n\nDataflow Runner v2 is the only Dataflow runner\navailable for the Apache Beam SDK for Go. Runner v2 is enabled by default.\n\nDisable Runner v2\n-----------------\n\nTo disable Dataflow Runner v2, follow the configuration\ninstructions for your Apache Beam SDK. \n\n### Java\n\nTo disable Runner v2, set the `disable_runner_v2` experiment. For more\ninformation, see\n[Set experimental pipeline options](/dataflow/docs/guides/setting-pipeline-options#experimental).\n\n### Python\n\nDisabling Runner v2 is not supported with the Apache Beam Python SDK\nversions 2.45.0 and later.\n\nFor earlier versions of the Python SDK, if your job is identified as using the\n`auto_runner_v2` experiment, you can disable Runner v2 by setting the\n`disable_runner_v2` experiment. For more information, see\n[Set experimental pipeline options](/dataflow/docs/guides/setting-pipeline-options#experimental).\n\n### Go\n\nDataflow Runner v2 can't be disabled in Go. 
Runner v2 is the\nonly Dataflow runner available for the Apache Beam SDK for\nGo.\n\nMonitor your job\n----------------\n\nUse the monitoring interface to view\n[Dataflow job metrics](/dataflow/docs/guides/using-monitoring-intf),\nsuch as memory utilization, CPU utilization, and more.\n\nWorker VM logs are available through the\n[Logs Explorer](/logging/docs/view/logs-explorer-interface) and the\n[Dataflow monitoring interface](/dataflow/docs/guides/monitoring-overview).\nWorker VM logs include logs from the runner harness process and logs from the SDK\nprocesses. You can use the VM logs to troubleshoot your job.\n\nTroubleshoot Runner v2\n----------------------\n\nTo troubleshoot jobs using Dataflow Runner v2, follow\n[standard pipeline troubleshooting steps](/dataflow/docs/guides/troubleshooting-your-pipeline).\nThe following list provides additional information about how\nDataflow Runner v2 works:\n\n- Dataflow Runner v2 jobs run two types of processes on the worker VM: SDK process and the runner harness process. Depending on the pipeline and VM type, there might be one or more SDK processes, but there is only one runner harness process per VM.\n- SDK processes run user code and other language-specific functions. The runner harness process manages everything else.\n- The runner harness process waits for all SDK processes to connect to it before starting to request work from Dataflow.\n- Jobs might be delayed if the worker VM downloads and installs dependencies during the SDK process startup. If issues occur during an SDK process, such as when starting up or installing libraries, the worker reports its status as unhealthy. If the startup times increase, enable the Cloud Build API on your project and submit your pipeline with the following parameter: `--prebuild_sdk_container_engine=cloud_build`.\n- Because Dataflow Runner v2 uses checkpointing, each worker might wait for up to five seconds while buffering changes before sending the changes for further processing. 
  As a result, latency of approximately six seconds is expected.

| **Note:** The pre-build feature requires the Apache Beam SDK for Python, version 2.25.0 or later.

- To diagnose problems in your user code, examine the worker logs from the SDK processes. If you find errors in the runner harness logs, [contact Support](https://console.cloud.google.com/support) to file a bug.
- To debug common errors related to Dataflow multi-language pipelines, see the [Multi-language Pipelines Tips](https://cwiki.apache.org/confluence/display/BEAM/Multi-language+Pipelines+Tips) guide.
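Putting the dependency pre-building tip above into practice comes down to adding one flag at submission time. A minimal sketch (project and region values are placeholders; only the flags named in this guide are real):

```python
# Sketch: submission arguments for a Python pipeline that opts in to
# pre-built SDK containers, per the troubleshooting tip above.
# Project and region values are placeholders.
submit_args = [
    "--runner=DataflowRunner",
    "--project=my-project",      # placeholder
    "--region=us-central1",      # placeholder
    "--prebuild_sdk_container_engine=cloud_build",
]
print(" ".join(submit_args))
```

Remember that the Cloud Build API must be enabled on the project, and the pre-build feature requires the Apache Beam SDK for Python, version 2.25.0 or later.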