Experiments are used to compare the performance of multiple flow versions (variant versions) to a control version (normally a production version) while handling live traffic. You can allocate a portion of live traffic to each flow version and monitor the following metrics:
Contained: Count of sessions that reached END_SESSION without triggering other metrics below. Only available to agents using a telephony integration.
Live agent handoff rate: Count of sessions handed off to a live agent.
Callback rate: Count of sessions that were restarted by an end-user. Only available to agents using a telephony integration.
Abandoned rate: Count of sessions that were abandoned by an end-user. Only available to agents using a telephony integration.
Session end rate: Count of sessions that reached END_SESSION.
Total no-match count: Total count of occurrences of a no-match event.
Total turn count: Total number of conversational turns (one end-user input and one agent response is considered a turn).
Average turn count: Average number of turns.
Preparation

To prepare for an experiment:

Decide which flow will be used for the experiment. You cannot run multiple experiments on a single flow, so ensure that you have partitioned your agent into multiple flows.
Create multiple versions for your flow (see the sketch after these steps). The differences between each version could be small or large, depending on what you want to compare.
Decide on the amount of traffic that will be allocated to your experiment. If you are testing minor changes, you might start with a higher amount of traffic. For large changes that may be disruptive, consider allocating a small amount of traffic to your experiment.
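Flow versions can also be created programmatically. Below is a minimal sketch, assuming the google-cloud-dialogflowcx Python client; the resource name and display name are placeholders, and create_version is a long-running operation in the v3 client.

```python
# Minimal sketch: create a flow version to use as an experiment variant.
# FLOW_NAME and the display name are placeholders.
from google.cloud import dialogflowcx_v3

FLOW_NAME = (
    "projects/PROJECT_ID/locations/LOCATION/agents/AGENT_ID/flows/FLOW_ID"
)

client = dialogflowcx_v3.VersionsClient()

# create_version returns a long-running operation; result() blocks until
# the snapshot of the flow has been taken.
operation = client.create_version(
    parent=FLOW_NAME,
    version=dialogflowcx_v3.Version(display_name="experiment-variant-1"),
)
version = operation.result()
print(version.name)  # use this name when configuring the experiment
```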
Create an experiment

To create an experiment:

Open the Dialogflow CX console.
Select your project to open the agent selector.
Select your agent to open the agent builder.
Select the Manage tab.
Click Experiments to open the Experiments panel.
Select the Status tab.
Click Create.
Enter a description.
Select the environment that you want to run the experiment from.
Select the flow for the experiment.
Optionally enter the number of days in which the experiment will automatically stop.
Enter the control flow version and the percentage of traffic that will go to the control version.
Enter one to four variant flow versions, and the percentage of traffic that will go to the variant versions.
Optionally, click Enable auto rollout and steps for a gradual rollout of traffic to the variant flow. An automated experiment is based on steps, which are time durations in which a percentage of traffic is increased to the variant flow. Auto rollout only supports one variant flow (see the sketch after these steps).
Under Rollout rules, you can set one or more conditional rules to determine how the experiment should proceed through the steps.
If you select Match at least one rule, the experiment proceeds to the next step if at least one rule and the time duration for the current step are met.
If you select Match all rules, the experiment proceeds to the next step if all rules and the time duration for the current step are met.
If you select Steps only, the experiment proceeds according to the time durations for each step.
Under Increase steps, define a percentage of traffic to allocate to the variant flow and a time duration for each step. The default time duration for each step is 6 hours.
Select Stop conditions to set one or more conditions under which to stop sending traffic to the variant flow. Note that you cannot restart a stopped experiment.
Click Save.
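For reference, here is a minimal sketch of the same configuration through the google-cloud-dialogflowcx Python client. The resource names, the 80/20 split, and the step percentages are illustrative placeholders; the fields follow the v3 Experiment and RolloutConfig reference, but verify the exact condition syntax for rollout and stop conditions before relying on this.

```python
# Minimal sketch: create an experiment with a control/variant split and an
# optional gradual rollout. All names and numbers are placeholders.
from google.protobuf import duration_pb2
from google.cloud import dialogflowcx_v3

AGENT = "projects/PROJECT_ID/locations/LOCATION/agents/AGENT_ID"
ENVIRONMENT = AGENT + "/environments/ENVIRONMENT_ID"
CONTROL_VERSION = AGENT + "/flows/FLOW_ID/versions/1"
VARIANT_VERSION = AGENT + "/flows/FLOW_ID/versions/2"

experiment = dialogflowcx_v3.Experiment(
    display_name="my-experiment",
    description="Variant v2 against the production version",
    definition=dialogflowcx_v3.Experiment.Definition(
        version_variants=dialogflowcx_v3.VersionVariants(
            variants=[
                # Control version (normally production) gets 80% of traffic.
                dialogflowcx_v3.VersionVariants.Variant(
                    version=CONTROL_VERSION,
                    traffic_allocation=0.8,
                    is_control_group=True,
                ),
                # A single variant, as required for auto rollout.
                dialogflowcx_v3.VersionVariants.Variant(
                    version=VARIANT_VERSION,
                    traffic_allocation=0.2,
                ),
            ]
        )
    ),
    # Optional auto rollout: each step raises variant traffic and holds it
    # for at least min_duration (6 hours, the console default).
    rollout_config=dialogflowcx_v3.RolloutConfig(
        rollout_steps=[
            dialogflowcx_v3.RolloutConfig.RolloutStep(
                display_name="step-1",
                traffic_percent=20,
                min_duration=duration_pb2.Duration(seconds=6 * 3600),
            ),
            dialogflowcx_v3.RolloutConfig.RolloutStep(
                display_name="step-2",
                traffic_percent=50,
                min_duration=duration_pb2.Duration(seconds=6 * 3600),
            ),
        ],
    ),
)

client = dialogflowcx_v3.ExperimentsClient()
response = client.create_experiment(parent=ENVIRONMENT, experiment=experiment)
print(response.name)
```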
Start and stop an experiment

You can start a saved experiment or manually stop a running experiment at any time. Stopping an experiment will cancel the traffic allocation and will revert traffic to its original state.

Note: If you stop an experiment while it is pending, results will not be available. If you stop an experiment while it is running, results might be inconclusive or missing.
To start or stop an experiment:

Open the Experiments panel.
Select the Status tab.
Click Start or Stop for an experiment in the list.
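The same operations are available programmatically. A minimal sketch, assuming the google-cloud-dialogflowcx Python client and a placeholder experiment name:

```python
# Minimal sketch: start or stop an experiment by resource name.
from google.cloud import dialogflowcx_v3

EXPERIMENT_NAME = (
    "projects/PROJECT_ID/locations/LOCATION/agents/AGENT_ID"
    "/environments/ENVIRONMENT_ID/experiments/EXPERIMENT_ID"
)

client = dialogflowcx_v3.ExperimentsClient()

# Start routes the configured share of live traffic to the flow versions.
client.start_experiment(name=EXPERIMENT_NAME)

# Stop cancels the traffic allocation and reverts traffic to its original
# state; remember that a stopped experiment cannot be restarted.
client.stop_experiment(name=EXPERIMENT_NAME)
```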
Manage experiments

Note: You can change variant traffic allocation while an experiment is running.

You can edit or delete experiments at any time:

Open the Experiments panel.
Select the Status tab.
Click the more_vert options menu for an experiment in the list.
Click Edit or Delete.
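Listing and deleting have client-library counterparts as well. A minimal sketch, assuming the same Python client; the environment name is a placeholder:

```python
# Minimal sketch: list the experiments in an environment, then delete one.
from google.cloud import dialogflowcx_v3

ENVIRONMENT = (
    "projects/PROJECT_ID/locations/LOCATION/agents/AGENT_ID"
    "/environments/ENVIRONMENT_ID"
)

client = dialogflowcx_v3.ExperimentsClient()

# Experiments are scoped to an environment, so listing is per environment.
for experiment in client.list_experiments(parent=ENVIRONMENT):
    print(experiment.name, experiment.display_name, experiment.state)

# Commented out so the sketch is safe to run as-is:
# client.delete_experiment(name="<full experiment resource name>")
```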
Monitor status of experiments

All experiments, regardless of their status, can be found on the experiments panel. Experiments can have four different statuses:

Draft: Experiment has been created, but it has never run.
Pending: Experiment has started recently, but results are not available yet.
Running: Experiment is running and interim results are available.
Completed: Experiment has been completed due to automatically or manually being stopped.
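For polling status outside the console, the v3 API exposes an Experiment.state field. As an assumption to verify: the API's State enum (DRAFT, RUNNING, DONE, ROLLOUT_FAILED in the v3 reference) does not map one-to-one onto the four console statuses above, so this rough sketch checks only for completion.

```python
# Rough sketch: poll an experiment's state via the API. The console's four
# statuses are a UI view; the underlying State enum differs, so verify the
# mapping against the current v3 reference.
from google.cloud import dialogflowcx_v3

EXPERIMENT_NAME = (
    "projects/PROJECT_ID/locations/LOCATION/agents/AGENT_ID"
    "/environments/ENVIRONMENT_ID/experiments/EXPERIMENT_ID"
)

client = dialogflowcx_v3.ExperimentsClient()
experiment = client.get_experiment(name=EXPERIMENT_NAME)

if experiment.state == dialogflowcx_v3.Experiment.State.DONE:
    print("Experiment completed:", experiment.display_name)
```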
Viewing experiment results

To see experiment results:

Open the Dialogflow CX console.
Select your project to open the agent selector.
Select your agent to open the agent builder.
Select the Manage tab.
Click Experiments to open the Experiments panel.
Select the Results tab.
Select an environment and experiment to see the results.

Green colored results suggest a favorable outcome, while red suggests a less favorable result. Notice that in some cases, higher/lower numbers are not necessarily better (high abandonment rate / low abandonment rate).

Note: You will see "no experiment result" if not enough conversations have been through each variant of the experiment.
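Per-version metric values are also exposed on the Experiment resource itself. A minimal sketch, assuming the v3 Experiment.result message layout (version_metrics entries, each with a session_count and a list of metrics carrying either a ratio or a count); check the field names against the current client reference.

```python
# Minimal sketch: print per-version experiment metrics from the API.
from google.cloud import dialogflowcx_v3

EXPERIMENT_NAME = (
    "projects/PROJECT_ID/locations/LOCATION/agents/AGENT_ID"
    "/environments/ENVIRONMENT_ID/experiments/EXPERIMENT_ID"
)

client = dialogflowcx_v3.ExperimentsClient()
experiment = client.get_experiment(name=EXPERIMENT_NAME)

for version_metrics in experiment.result.version_metrics:
    print(version_metrics.version, "sessions:", version_metrics.session_count)
    for metric in version_metrics.metrics:
        # Rate metrics (e.g. abandoned rate) carry a ratio; turn and
        # no-match metrics carry a count. "type" is suffixed with an
        # underscore in the generated Python class.
        print(metric.type_, metric.count_type, metric.ratio or metric.count)
```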
Limitations

The following limitations apply:

The Enable interaction logging agent setting must be enabled.