On September 15, 2026, all Cloud Composer 1 and Cloud Composer 2 version 2.0.x environments will reach their planned end of life, and you will not be able to use them. We recommend that you plan a migration to Cloud Composer 3.
Autoscaling environments

Cloud Composer 2 environments automatically scale in response to the demands of your executed DAGs and tasks:
- If your environment experiences a heavy load, Cloud Composer automatically increases the number of workers in your environment.
- If your environment does not use some of its workers, these workers are removed to save environment resources and costs.
- You can set the minimum and maximum number of workers for your environment. Cloud Composer automatically scales your environment within the set limits. You can adjust these limits at any time.
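As a sketch, the worker limits can be set with the gcloud CLI. The environment name `example-environment` and location `us-central1` are placeholders; the `--min-workers` and `--max-workers` flags are taken from the Cloud Composer 2 `gcloud composer environments update` reference, but verify them against your gcloud version:

```shell
# Hypothetical environment name and location; adjust to your setup.
# Cloud Composer then keeps the autoscaled worker count between
# the minimum and maximum set here.
gcloud composer environments update example-environment \
    --location us-central1 \
    --min-workers 2 \
    --max-workers 6
```

Running this command requires an existing Cloud Composer 2 environment and authenticated gcloud credentials.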
The number of workers is adjusted based on the Scaling Factor Target metric. This metric is calculated based on:

- Current number of workers
- Number of Celery tasks in the Celery queue that are not assigned to a worker
- Number of idle workers
- The `celery.worker_concurrency` Airflow configuration option
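Cloud Composer does not publish the exact Scaling Factor Target formula, but a minimal sketch can show how the inputs above could combine into a worker-count decision. The function name and the default concurrency value here are illustrative assumptions, not the real implementation:

```python
# Illustrative sketch only: the exact Scaling Factor Target formula is
# internal to Cloud Composer. This shows how queued tasks, idle workers,
# and worker concurrency could combine into a target worker count.

def workers_needed(queued_tasks: int, idle_workers: int,
                   current_workers: int, worker_concurrency: int = 16) -> int:
    """Estimate the worker count needed to absorb the queued tasks.

    Assumes each worker runs up to `worker_concurrency` tasks at once
    (the celery.worker_concurrency Airflow configuration option).
    """
    # Workers required to drain tasks not yet assigned to any worker.
    extra = -(-queued_tasks // worker_concurrency)  # ceiling division
    # Idle workers absorb load before new workers are added.
    return current_workers + max(extra - idle_workers, 0)

print(workers_needed(queued_tasks=40, idle_workers=1, current_workers=2))  # 4
```

With 40 queued tasks and a concurrency of 16, three workers' worth of capacity is needed; one idle worker covers part of it, so two workers are added to the current two.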
Cloud Composer autoscaling uses three autoscalers provided by GKE:

- Horizontal Pod Autoscaler (HPA)
- Cluster Autoscaler (CA)
- Node auto-provisioning (NAP)

Cloud Composer configures these autoscalers in the environment's cluster. This automatically scales the number of nodes in the cluster, the machine type, and the number of workers.
Scale and performance parameters

In addition to autoscaling, you can control the scale and performance parameters of your environment by adjusting the CPU, memory, and disk limits for schedulers, the web server, and workers. By doing so, you can scale your environment vertically, in addition to the horizontal scaling provided by the autoscaling feature. You can adjust the scale and performance parameters of Airflow schedulers, the web server, and workers at any time.
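For illustration, a vertical-scaling adjustment might look like the following gcloud invocation. The environment name and the specific values are placeholders; the per-component CPU and memory flags (values in vCPUs and GB) are from the Cloud Composer 2 update reference, but check your gcloud version for the exact flag names and allowed ranges:

```shell
# Hypothetical environment name; CPU in vCPUs, memory in GB.
# Adjusts the per-component resource limits (vertical scaling).
gcloud composer environments update example-environment \
    --location us-central1 \
    --scheduler-cpu 1 \
    --scheduler-memory 2 \
    --worker-cpu 0.5 \
    --worker-memory 2
```

Such changes can be applied at any time and trigger an environment update operation.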
The environment size performance parameter of your environment controls the performance parameters of the managed Cloud Composer infrastructure, which includes the Airflow database. Consider selecting a larger environment size if you want to run a large number of DAGs and tasks with higher infrastructure performance. For example, a larger environment size increases the number of Airflow task log entries that your environment can process with minimal delay.

Note: Environment size is different from environment presets. Environment presets, which you can select when you create an environment, determine all limits, scale, and performance parameters of your environment, including the environment size. Environment size determines only the performance parameters of the managed Cloud Composer infrastructure of your environment.
Multiple schedulers

Airflow 2 can use more than one Airflow scheduler at the same time. This Airflow feature is also known as the HA scheduler. In Cloud Composer 2, you can set the number of schedulers for your environment and adjust it at any time. Cloud Composer does not automatically scale the number of schedulers in your environment.

For more information about configuring the number of schedulers for your environment, see Scale environments.
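As a sketch, the scheduler count can be set through the same update command. The environment name and location are placeholders; the `--scheduler-count` flag is from the Cloud Composer 2 reference, so verify it against your gcloud version:

```shell
# Hypothetical environment name; runs two Airflow schedulers
# (the HA scheduler setup). Cloud Composer does not autoscale this value.
gcloud composer environments update example-environment \
    --location us-central1 \
    --scheduler-count 2
```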
Database disk space

Disk space for the Airflow database automatically increases to accommodate demand.
Last updated 2025-08-26 UTC.

What's next

- Scale environments
- Cloud Composer pricing
- Create environments
- Environment architecture