Customize GKE Inference Gateway configuration


This page describes how to customize your GKE Inference Gateway deployment.

This page is intended for Networking specialists responsible for managing GKE infrastructure, and for platform administrators who manage AI workloads.

To manage and optimize your inference workloads, you configure advanced features of GKE Inference Gateway.

Understand and configure the following advanced features:

  • AI security and safety checks with Model Armor
  • Observability dashboards
  • HTTP access logging
  • Autoscaling

Configure AI security and safety checks

GKE Inference Gateway integrates with Model Armor to perform security checks on prompts and responses for applications that use large language models (LLMs). This integration provides an additional layer of security enforcement at the infrastructure level that complements application-level security measures, and it enables centralized policy enforcement across all LLM traffic.

The following diagram illustrates the Model Armor integration with GKE Inference Gateway on a GKE cluster:

Figure: Model Armor integration on a GKE cluster

To configure AI security checks, perform the following steps:

  1. Make sure that the following prerequisites are met:

    1. Enable the Model Armor service in your Google Cloud project.
    2. Create a Model Armor template using the Model Armor console, the Google Cloud CLI, or the API.
  2. Make sure that you have created a Model Armor template named my-model-armor-template-name-id.

  3. To configure the GCPTrafficExtension, perform the following steps:

    1. Save the following sample manifest as gcp-traffic-extension.yaml:

      kind: GCPTrafficExtension
      apiVersion: networking.gke.io/v1
      metadata:
        name: my-model-armor-extension
      spec:
        targetRefs:
        - group: "gateway.networking.k8s.io"
          kind: Gateway
          name: GATEWAY_NAME
        extensionChains:
        - name: my-model-armor-chain1
          matchCondition:
            celExpressions:
            - celMatcher: request.path.startsWith("/")
          extensions:
          - name: my-model-armor-service
            supportedEvents:
            - RequestHeaders
            timeout: 1s
            googleAPIServiceName: "modelarmor.us-central1.rep.googleapis.com"
            metadata:
              'extensionPolicy': MODEL_ARMOR_TEMPLATE_NAME
              'sanitizeUserPrompt': 'true'
              'sanitizeUserResponse': 'true'
      

      Replace the following:

      • GATEWAY_NAME: the name of your Gateway.
      • MODEL_ARMOR_TEMPLATE_NAME: the name of your Model Armor template.

      The gcp-traffic-extension.yaml file includes the following settings:

      • targetRefs: specifies the Gateway to which this extension applies.
      • extensionChains: defines the chains of extensions to apply to the traffic.
      • matchCondition: defines the conditions under which the extensions are applied.
      • extensions: defines the extensions to apply.
      • supportedEvents: specifies the events during which the extension is invoked.
      • timeout: specifies the timeout for the extension.
      • googleAPIServiceName: specifies the service name for the extension.
      • metadata: specifies the metadata for the extension, including the extensionPolicy and the prompt or response sanitization settings.
    2. Apply the sample manifest to your cluster:

      kubectl apply -f gcp-traffic-extension.yaml
      

After you configure the AI security checks and integrate them with your Gateway, Model Armor automatically filters prompts and responses based on the defined rules.
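
To spot-check the filtering, you can send a test prompt through the Gateway. The following is a minimal sketch: it assumes your model server exposes an OpenAI-compatible /v1/completions endpoint over plain HTTP, and MODEL_NAME is a placeholder for your served model:

  # Look up the Gateway's external IP address from its status (standard Gateway API field).
  GATEWAY_IP=$(kubectl get gateway GATEWAY_NAME -o jsonpath='{.status.addresses[0].value}')

  # Send a test prompt. If it violates the Model Armor template's rules, the
  # request is filtered at the gateway before it reaches the model server.
  curl -X POST "http://${GATEWAY_IP}/v1/completions" \
    -H "Content-Type: application/json" \
    -d '{"model": "MODEL_NAME", "prompt": "test prompt", "max_tokens": 50}'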

Configure observability

GKE Inference Gateway provides insights into the health, performance, and behavior of your inference workloads. This helps you identify and resolve issues, optimize resource utilization, and ensure the reliability of your applications.

Google Cloud provides the following Cloud Monitoring dashboards that offer inference observability for GKE Inference Gateway:

  • GKE Inference Gateway dashboard: provides golden metrics for LLM serving, such as request and token throughput, latency, errors, and cache utilization for the InferencePool. To see the full list of available GKE Inference Gateway metrics, see Exposed metrics.
  • Model server dashboard: provides a dashboard for golden signals from the model server, such as KVCache Utilization and Queue length. This lets you monitor the load and performance of the model servers.
  • Load balancer dashboard: reports metrics from the load balancer, such as requests per second, end-to-end request-serving latency, and request-response status codes. These metrics help you understand end-to-end request-serving performance and identify errors.
  • Data Center GPU Manager (DCGM) metrics: provides metrics from NVIDIA GPUs, such as NVIDIA GPU performance and utilization. You can configure NVIDIA Data Center GPU Manager (DCGM) metrics in Cloud Monitoring. For more information, see Collect and view DCGM metrics.

View the GKE Inference Gateway dashboard

To view the GKE Inference Gateway dashboard, perform the following steps:

  1. In the Google Cloud console, go to the Monitoring page.

    Go to Monitoring

  2. In the navigation panel, select Dashboards.

  3. In the Integrations section, select GMP.

  4. On the Cloud Monitoring Dashboard Templates page, search for "Gateway".

  5. View the GKE Inference Gateway dashboard.

Alternatively, you can follow the instructions in Monitoring dashboards.

Configure the model server observability dashboard

To collect golden signals from each model server and understand what contributes to GKE Inference Gateway performance, you can configure automatic monitoring for your model servers. This includes model servers such as vLLM.

To view the integration dashboards, perform the following steps:

  1. Collect metrics from your model server.
  2. In the Google Cloud console, go to the Monitoring page.

    Go to Monitoring

  3. In the navigation panel, select Dashboards.

  4. In the Integrations section, select GMP. The corresponding integration dashboards are displayed.

    Figure: Integration dashboards

For more information, see Customize monitoring for applications.
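
If your model server's metrics aren't collected automatically, you can scrape them with Google Cloud Managed Service for Prometheus. The following PodMonitoring manifest is a minimal sketch for a vLLM server; the Pod label and port are assumptions, so match them to your model server Deployment:

  apiVersion: monitoring.googleapis.com/v1
  kind: PodMonitoring
  metadata:
    name: vllm-metrics
    namespace: NAMESPACE_NAME
  spec:
    selector:
      matchLabels:
        app: vllm   # assumption: set this to your model server's Pod label
    endpoints:
    - port: 8000    # assumption: vLLM's serving port, which also exposes /metrics
      path: /metrics
      interval: 15s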

Configure the load balancer observability dashboard

To use an external Application Load Balancer with GKE Inference Gateway, import the dashboard by performing the following steps:

  1. To create the load balancer dashboard, create the following file and save it as dashboard.json:

    
    {
        "displayName": "GKE Inference Gateway (Load Balancer) Prometheus Overview",
        "dashboardFilters": [
          {
            "filterType": "RESOURCE_LABEL",
            "labelKey": "cluster",
            "templateVariable": "",
            "valueType": "STRING"
          },
          {
            "filterType": "RESOURCE_LABEL",
            "labelKey": "location",
            "templateVariable": "",
            "valueType": "STRING"
          },
          {
            "filterType": "RESOURCE_LABEL",
            "labelKey": "namespace",
            "templateVariable": "",
            "valueType": "STRING"
          },
          {
            "filterType": "RESOURCE_LABEL",
            "labelKey": "forwarding_rule_name",
            "templateVariable": "",
            "valueType": "STRING"
          }
        ],
        "labels": {},
        "mosaicLayout": {
          "columns": 48,
          "tiles": [
            {
              "height": 8,
              "width": 48,
              "widget": {
                "title": "",
                "id": "",
                "text": {
                  "content": "### Inferece Gateway Metrics\n\nPlease refer to the [official documentation](https://github.com/kubernetes-sigs/gateway-api-inference-extension/blob/main/site-src/guides/metrics.md) for more details of underlying metrics used in the dashboard.\n\n\n### External Application Load Balancer Metrics\n\nPlease refer to the [pubic page](/load-balancing/docs/metrics) for complete list of External Application Load Balancer metrics.\n\n### Model Server Metrics\n\nYou can redirect to the detail dashboard for model servers under the integration tab",
                  "format": "MARKDOWN",
                  "style": {
                    "backgroundColor": "#FFFFFF",
                    "fontSize": "FS_EXTRA_LARGE",
                    "horizontalAlignment": "H_LEFT",
                    "padding": "P_EXTRA_SMALL",
                    "pointerLocation": "POINTER_LOCATION_UNSPECIFIED",
                    "textColor": "#212121",
                    "verticalAlignment": "V_TOP"
                  }
                }
              }
            },
            {
              "yPos": 8,
              "height": 4,
              "width": 48,
              "widget": {
                "title": "External Application Load Balancer",
                "id": "",
                "sectionHeader": {
                  "dividerBelow": false,
                  "subtitle": ""
                }
              }
            },
            {
              "yPos": 12,
              "height": 15,
              "width": 24,
              "widget": {
                "title": "E2E Request Latency p99 (by code)",
                "id": "",
                "xyChart": {
                  "chartOptions": {
                    "displayHorizontal": false,
                    "mode": "COLOR",
                    "showLegend": false
                  },
                  "dataSets": [
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "histogram_quantile(0.99, sum by(le, response_code) (rate(loadbalancing_googleapis_com:https_external_regional_total_latencies_bucket{monitored_resource=\"http_external_regional_lb_rule\",forwarding_rule_name=~\".*inference-gateway.*\"}[1m])))",
                        "unitOverride": "ms"
                      }
                    }
                  ],
                  "thresholds": [],
                  "yAxis": {
                    "label": "",
                    "scale": "LINEAR"
                  }
                }
              }
            },
            {
              "yPos": 12,
              "height": 43,
              "width": 48,
              "widget": {
                "title": "Regional",
                "collapsibleGroup": {
                  "collapsed": false
                },
                "id": ""
              }
            },
            {
              "yPos": 12,
              "xPos": 24,
              "height": 15,
              "width": 24,
              "widget": {
                "title": "E2E Request Latency p95 (by code)",
                "id": "",
                "xyChart": {
                  "chartOptions": {
                    "displayHorizontal": false,
                    "mode": "COLOR",
                    "showLegend": false
                  },
                  "dataSets": [
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "histogram_quantile(0.95, sum by(le, response_code) (rate(loadbalancing_googleapis_com:https_external_regional_total_latencies_bucket{monitored_resource=\"http_external_regional_lb_rule\",forwarding_rule_name=~\".*inference-gateway.*\"}[1m])))",
                        "unitOverride": "ms"
                      }
                    }
                  ],
                  "thresholds": [],
                  "yAxis": {
                    "label": "",
                    "scale": "LINEAR"
                  }
                }
              }
            },
            {
              "yPos": 27,
              "height": 15,
              "width": 24,
              "widget": {
                "title": "E2E Request Latency p90 (by code)",
                "id": "",
                "xyChart": {
                  "chartOptions": {
                    "displayHorizontal": false,
                    "mode": "COLOR",
                    "showLegend": false
                  },
                  "dataSets": [
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "histogram_quantile(0.90, sum by(le, response_code) (rate(loadbalancing_googleapis_com:https_external_regional_total_latencies_bucket{monitored_resource=\"http_external_regional_lb_rule\",forwarding_rule_name=~\".*inference-gateway.*\"}[1m])))",
                        "unitOverride": "ms"
                      }
                    }
                  ],
                  "thresholds": [],
                  "yAxis": {
                    "label": "",
                    "scale": "LINEAR"
                  }
                }
              }
            },
            {
              "yPos": 27,
              "xPos": 24,
              "height": 15,
              "width": 24,
              "widget": {
                "title": "E2E Request Latency p50 (by code)",
                "id": "",
                "xyChart": {
                  "chartOptions": {
                    "displayHorizontal": false,
                    "mode": "COLOR",
                    "showLegend": false
                  },
                  "dataSets": [
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "histogram_quantile(0.50, sum by(le, response_code) (rate(loadbalancing_googleapis_com:https_external_regional_total_latencies_bucket{monitored_resource=\"http_external_regional_lb_rule\",forwarding_rule_name=~\".*inference-gateway.*\"}[1m])))",
                        "unitOverride": "ms"
                      }
                    }
                  ],
                  "thresholds": [],
                  "yAxis": {
                    "label": "",
                    "scale": "LINEAR"
                  }
                }
              }
            },
            {
              "yPos": 42,
              "height": 13,
              "width": 48,
              "widget": {
                "title": "Request /s (by code)",
                "id": "",
                "xyChart": {
                  "chartOptions": {
                    "displayHorizontal": false,
                    "mode": "COLOR",
                    "showLegend": false
                  },
                  "dataSets": [
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "sum by (response_code)(rate(loadbalancing_googleapis_com:https_external_regional_request_count{monitored_resource=\"http_external_regional_lb_rule\", forwarding_rule_name=~\".*inference-gateway.*\"}[1m]))",
                        "unitOverride": ""
                      }
                    }
                  ],
                  "thresholds": [],
                  "yAxis": {
                    "label": "",
                    "scale": "LINEAR"
                  }
                }
              }
            },
            {
              "yPos": 55,
              "height": 4,
              "width": 48,
              "widget": {
                "title": "Inference Optimized Gateway",
                "id": "",
                "sectionHeader": {
                  "dividerBelow": false,
                  "subtitle": ""
                }
              }
            },
            {
              "yPos": 59,
              "height": 17,
              "width": 48,
              "widget": {
                "title": "Request Latency",
                "id": "",
                "xyChart": {
                  "chartOptions": {
                    "displayHorizontal": false,
                    "mode": "COLOR",
                    "showLegend": false
                  },
                  "dataSets": [
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "p95",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "histogram_quantile(0.95, sum by(le) (rate(inference_model_request_duration_seconds_bucket{}[${__interval}])))",
                        "unitOverride": "s"
                      }
                    },
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "p90",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "histogram_quantile(0.9, sum by(le) (rate(inference_model_request_duration_seconds_bucket{}[${__interval}])))",
                        "unitOverride": "s"
                      }
                    },
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "p50",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "histogram_quantile(0.5, sum by(le) (rate(inference_model_request_duration_seconds_bucket{}[${__interval}])))",
                        "unitOverride": "s"
                      }
                    }
                  ],
                  "thresholds": [],
                  "yAxis": {
                    "label": "",
                    "scale": "LINEAR"
                  }
                }
              }
            },
            {
              "yPos": 59,
              "height": 65,
              "width": 48,
              "widget": {
                "title": "Inference Model",
                "collapsibleGroup": {
                  "collapsed": false
                },
                "id": ""
              }
            },
            {
              "yPos": 76,
              "height": 16,
              "width": 24,
              "widget": {
                "title": "Request / s",
                "id": "",
                "xyChart": {
                  "chartOptions": {
                    "displayHorizontal": false,
                    "mode": "COLOR",
                    "showLegend": false
                  },
                  "dataSets": [
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "sum by(model_name, target_model_name) (rate(inference_model_request_total{}[${__interval}]))",
                        "unitOverride": ""
                      }
                    }
                  ],
                  "thresholds": [],
                  "yAxis": {
                    "label": "",
                    "scale": "LINEAR"
                  }
                }
              }
            },
            {
              "yPos": 76,
              "xPos": 24,
              "height": 16,
              "width": 24,
              "widget": {
                "title": "Request Error / s",
                "id": "",
                "xyChart": {
                  "chartOptions": {
                    "displayHorizontal": false,
                    "mode": "COLOR",
                    "showLegend": false
                  },
                  "dataSets": [
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "sum by (error_code,model_name,target_model_name) (rate(inference_model_request_error_total[${__interval}]))",
                        "unitOverride": ""
                      }
                    }
                  ],
                  "thresholds": [],
                  "yAxis": {
                    "label": "",
                    "scale": "LINEAR"
                  }
                }
              }
            },
            {
              "yPos": 92,
              "height": 16,
              "width": 24,
              "widget": {
                "title": "Request Size",
                "id": "",
                "xyChart": {
                  "chartOptions": {
                    "displayHorizontal": false,
                    "mode": "COLOR",
                    "showLegend": false
                  },
                  "dataSets": [
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "p95",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "histogram_quantile(0.95, sum by(le) (rate(inference_model_request_sizes_bucket{}[${__interval}])))",
                        "unitOverride": "By"
                      }
                    },
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "p90",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "histogram_quantile(0.9, sum by(le) (rate(inference_model_request_sizes_bucket{}[${__interval}])))",
                        "unitOverride": "By"
                      }
                    },
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "p50",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "histogram_quantile(0.5, sum by(le) (rate(inference_model_request_sizes_bucket{}[${__interval}])))",
                        "unitOverride": "By"
                      }
                    }
                  ],
                  "thresholds": [],
                  "yAxis": {
                    "label": "",
                    "scale": "LINEAR"
                  }
                }
              }
            },
            {
              "yPos": 92,
              "xPos": 24,
              "height": 16,
              "width": 24,
              "widget": {
                "title": "Response Size",
                "id": "",
                "xyChart": {
                  "chartOptions": {
                    "displayHorizontal": false,
                    "mode": "COLOR",
                    "showLegend": false
                  },
                  "dataSets": [
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "p95",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "histogram_quantile(0.95, sum by(le) (rate(inference_model_response_sizes_bucket{}[${__interval}])))",
                        "unitOverride": "By"
                      }
                    },
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "p90",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "histogram_quantile(0.9, sum by(le) (rate(inference_model_response_sizes_bucket{}[${__interval}])))",
                        "unitOverride": "By"
                      }
                    },
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "p50",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "histogram_quantile(0.5, sum by(le) (rate(inference_model_response_sizes_bucket{}[${__interval}])))",
                        "unitOverride": "By"
                      }
                    }
                  ],
                  "thresholds": [],
                  "yAxis": {
                    "label": "",
                    "scale": "LINEAR"
                  }
                }
              }
            },
            {
              "yPos": 108,
              "height": 16,
              "width": 24,
              "widget": {
                "title": "Input Token Count",
                "id": "",
                "xyChart": {
                  "chartOptions": {
                    "displayHorizontal": false,
                    "mode": "COLOR",
                    "showLegend": false
                  },
                  "dataSets": [
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "p95",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "histogram_quantile(0.95, sum by(le) (rate(inference_model_input_tokens_bucket{}[${__interval}])))",
                        "unitOverride": ""
                      }
                    },
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "p90",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "histogram_quantile(0.9, sum by(le) (rate(inference_model_input_tokens_bucket{}[${__interval}])))",
                        "unitOverride": ""
                      }
                    },
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "p50",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "histogram_quantile(0.5, sum by(le) (rate(inference_model_input_tokens_bucket{}[${__interval}])))",
                        "unitOverride": ""
                      }
                    }
                  ],
                  "thresholds": []
                }
              }
            },
            {
              "yPos": 108,
              "xPos": 24,
              "height": 16,
              "width": 24,
              "widget": {
                "title": "Output Token Count",
                "id": "",
                "xyChart": {
                  "chartOptions": {
                    "displayHorizontal": false,
                    "mode": "COLOR",
                    "showLegend": false
                  },
                  "dataSets": [
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "p95",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "histogram_quantile(0.95, sum by(le) (rate(inference_model_output_tokens_bucket{}[${__interval}])))",
                        "unitOverride": ""
                      }
                    },
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "p90",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "histogram_quantile(0.9, sum by(le) (rate(inference_model_output_tokens_bucket{}[${__interval}])))",
                        "unitOverride": ""
                      }
                    },
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "p50",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "histogram_quantile(0.5, sum by(le) (rate(inference_model_output_tokens_bucket{}[${__interval}])))",
                        "unitOverride": ""
                      }
                    }
                  ],
                  "thresholds": []
                }
              }
            },
            {
              "yPos": 124,
              "height": 16,
              "width": 24,
              "widget": {
                "title": "Average KV Cache Utilization",
                "id": "",
                "xyChart": {
                  "chartOptions": {
                    "displayHorizontal": false,
                    "mode": "COLOR",
                    "showLegend": false
                  },
                  "dataSets": [
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "sum by (name)(avg_over_time(inference_pool_average_kv_cache_utilization[${__interval}]))*100",
                        "unitOverride": "%"
                      }
                    }
                  ],
                  "thresholds": [],
                  "yAxis": {
                    "label": "",
                    "scale": "LINEAR"
                  }
                }
              }
            },
            {
              "yPos": 124,
              "height": 16,
              "width": 48,
              "widget": {
                "title": "Inference Pool",
                "collapsibleGroup": {
                  "collapsed": false
                },
                "id": ""
              }
            },
            {
              "yPos": 124,
              "xPos": 24,
              "height": 16,
              "width": 24,
              "widget": {
                "title": "Average Queue Size",
                "id": "",
                "xyChart": {
                  "chartOptions": {
                    "displayHorizontal": false,
                    "mode": "COLOR",
                    "showLegend": false
                  },
                  "dataSets": [
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "sum by (name) (avg_over_time(inference_pool_average_queue_size[${__interval}]))",
                        "unitOverride": ""
                      }
                    }
                  ],
                  "thresholds": [],
                  "yAxis": {
                    "label": "",
                    "scale": "LINEAR"
                  }
                }
              }
            },
            {
              "yPos": 140,
              "height": 4,
              "width": 48,
              "widget": {
                "title": "Model Server",
                "id": "",
                "sectionHeader": {
                  "dividerBelow": true,
                  "subtitle": "The following charts will only be populated if model server is exporting metrics."
                }
              }
            },
            {
              "yPos": 144,
              "height": 32,
              "width": 48,
              "widget": {
                "title": "vLLM",
                "collapsibleGroup": {
                  "collapsed": false
                },
                "id": ""
              }
            },
            {
              "yPos": 144,
              "xPos": 1,
              "height": 16,
              "width": 24,
              "widget": {
                "title": "Token Throughput",
                "id": "",
                "xyChart": {
                  "chartOptions": {
                    "displayHorizontal": false,
                    "mode": "COLOR",
                    "showLegend": false
                  },
                  "dataSets": [
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "Prompt Tokens/Sec",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "sum by(model_name) (rate(vllm:prompt_tokens_total[${__interval}]))",
                        "unitOverride": ""
                      }
                    },
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "Generation Tokens/Sec",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "sum by(model_name) (rate(vllm:generation_tokens_total[${__interval}]))",
                        "unitOverride": ""
                      }
                    }
                  ],
                  "thresholds": []
                }
              }
            },
            {
              "yPos": 144,
              "xPos": 25,
              "height": 16,
              "width": 23,
              "widget": {
                "title": "Request Latency",
                "id": "",
                "xyChart": {
                  "chartOptions": {
                    "displayHorizontal": false,
                    "mode": "COLOR",
                    "showLegend": false
                  },
                  "dataSets": [
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "p95",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "histogram_quantile(0.95, sum by(le) (rate(vllm:e2e_request_latency_seconds_bucket[${__interval}])))",
                        "unitOverride": "s"
                      }
                    },
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "p90",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "histogram_quantile(0.9, sum by(le) (rate(vllm:e2e_request_latency_seconds_bucket[${__interval}])))",
                        "unitOverride": "s"
                      }
                    },
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "p50",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "histogram_quantile(0.5, sum by(le) (rate(vllm:e2e_request_latency_seconds_bucket[${__interval}])))",
                        "unitOverride": "s"
                      }
                    }
                  ],
                  "thresholds": [],
                  "yAxis": {
                    "label": "",
                    "scale": "LINEAR"
                  }
                }
              }
            },
            {
              "yPos": 160,
              "xPos": 1,
              "height": 16,
              "width": 24,
              "widget": {
                "title": "Time Per Output Token Latency",
                "id": "",
                "xyChart": {
                  "chartOptions": {
                    "displayHorizontal": false,
                    "mode": "COLOR",
                    "showLegend": false
                  },
                  "dataSets": [
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "p95",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "histogram_quantile(0.95, sum by(le) (rate(vllm:time_per_output_token_seconds_bucket[${__interval}])))",
                        "unitOverride": "s"
                      }
                    },
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "p90",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "histogram_quantile(0.9, sum by(le) (rate(vllm:time_per_output_token_seconds_bucket[${__interval}])))",
                        "unitOverride": "s"
                      }
                    },
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "p50",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "histogram_quantile(0.5, sum by(le) (rate(vllm:time_per_output_token_seconds_bucket[${__interval}])))",
                        "unitOverride": "s"
                      }
                    }
                  ],
                  "thresholds": [],
                  "yAxis": {
                    "label": "",
                    "scale": "LINEAR"
                  }
                }
              }
            },
            {
              "yPos": 160,
              "xPos": 25,
              "height": 16,
              "width": 23,
              "widget": {
                "title": "Time To First Token Latency",
                "id": "",
                "xyChart": {
                  "chartOptions": {
                    "displayHorizontal": false,
                    "mode": "COLOR",
                    "showLegend": false
                  },
                  "dataSets": [
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "p95",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "histogram_quantile(0.95, sum by(le) (rate(vllm:time_to_first_token_seconds_bucket[${__interval}])))",
                        "unitOverride": "s"
                      }
                    },
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "p90",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "histogram_quantile(0.9, sum by(le) (rate(vllm:time_to_first_token_seconds_bucket[${__interval}])))",
                        "unitOverride": "s"
                      }
                    },
                    {
                      "breakdowns": [],
                      "dimensions": [],
                      "legendTemplate": "p50",
                      "measures": [],
                      "plotType": "LINE",
                      "targetAxis": "Y1",
                      "timeSeriesQuery": {
                        "outputFullDuration": false,
                        "prometheusQuery": "histogram_quantile(0.5, sum by(le) (rate(vllm:time_to_first_token_seconds_bucket[${__interval}])))",
                        "unitOverride": "s"
                      }
                    }
                  ],
                  "thresholds": [],
                  "yAxis": {
                    "label": "",
                    "scale": "LINEAR"
                  }
                }
              }
            }
          ]
        }
      }
    
  2. To install the dashboard in Cloud Monitoring, run the following command:

    gcloud monitoring dashboards create --project $PROJECT_ID --config-from-file=dashboard.json
    
  3. Open the Monitoring page in the Google Cloud console.

    Go to Monitoring

  4. In the navigation menu, select Dashboards.

  5. Select the GKE Inference Gateway (Load Balancer) Prometheus Overview dashboard from the Custom dashboards list.

  6. The External Application Load Balancer section displays the following load-balancing metrics:

    • E2E Request Latency p99 (by code): shows the 99th-percentile end-to-end request latency for requests served by the load balancer, aggregated by the returned status code.
    • Request /s (by code): shows the number of requests served by the load balancer, aggregated by the returned status code.

Configure logging for GKE Inference Gateway

Configuring logging for GKE Inference Gateway provides detailed information about requests and responses, which is useful for troubleshooting, auditing, and performance analysis. HTTP access logs record each request and response, including headers, status codes, and timestamps. This level of detail can help you identify issues, find errors, and understand the behavior of your inference workloads.

To configure logging for GKE Inference Gateway, enable HTTP access logging for each of your InferencePool objects.

  1. Save the following sample manifest as logging-backend-policy.yaml:

    apiVersion: networking.gke.io/v1
    kind: GCPBackendPolicy
    metadata:
      name: logging-backend-policy
      namespace: NAMESPACE_NAME
    spec:
      default:
        logging:
          enabled: true
          sampleRate: 500000
      targetRef:
        group: inference.networking.x-k8s.io
        kind: InferencePool
        name: INFERENCE_POOL_NAME
    

    Replace the following:

    • NAMESPACE_NAME: the name of the namespace where your InferencePool is deployed.
    • INFERENCE_POOL_NAME: the name of the InferencePool.
  2. Apply the sample manifest to your cluster:

    kubectl apply -f logging-backend-policy.yaml
    

After you apply this manifest, GKE Inference Gateway enables HTTP access logs for the specified InferencePool. You can view these logs in Cloud Logging. The logs include detailed information about each request and response, such as the request URL, headers, response status code, and latency.
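
To inspect these logs from the command line, you can query Cloud Logging for the load balancer resource type used by the dashboard in the preceding section. The filter below is a sketch; on busy projects, narrow it further, for example by forwarding rule name:

  gcloud logging read 'resource.type="http_external_regional_lb_rule"' \
    --project=PROJECT_ID \
    --limit=10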

Configure autoscaling

Autoscaling adjusts resource allocation in response to load variations, maintaining performance and resource efficiency by dynamically adding or removing Pods based on demand. For GKE Inference Gateway, this involves horizontal autoscaling of Pods in each InferencePool. The GKE Horizontal Pod Autoscaler (HPA) autoscales Pods based on model server metrics such as KVCache Utilization. This ensures that the inference service handles varying workloads and query volumes while efficiently managing resource usage.

To configure InferencePool instances to autoscale based on metrics produced by GKE Inference Gateway, perform the following steps:

  1. Deploy a PodMonitoring object in the cluster to collect the metrics produced by GKE Inference Gateway; a minimal sketch follows. For more information, see Configure observability.
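
    A minimal sketch, assuming the gateway's endpoint-picker Pods expose Prometheus metrics (the Pod label and port below are assumptions; match them to your deployment):

    apiVersion: monitoring.googleapis.com/v1
    kind: PodMonitoring
    metadata:
      name: inference-gateway-metrics
      namespace: INFERENCE_POOL_NAMESPACE
    spec:
      selector:
        matchLabels:
          app: inference-gateway-ext-proc   # assumption: the endpoint-picker's Pod label
      endpoints:
      - port: 9090    # assumption: the port that serves Prometheus metrics
        interval: 15s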

  2. Deploy the Custom Metrics Stackdriver Adapter to give the HPA access to the metrics:

    1. Save the following sample manifest as adapter_new_resource_model.yaml:

      apiVersion: v1
      kind: Namespace
      metadata:
        name: custom-metrics
      ---
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: custom-metrics-stackdriver-adapter
        namespace: custom-metrics
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: custom-metrics:system:auth-delegator
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: system:auth-delegator
      subjects:
      - kind: ServiceAccount
        name: custom-metrics-stackdriver-adapter
        namespace: custom-metrics
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: custom-metrics-auth-reader
        namespace: kube-system
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: Role
        name: extension-apiserver-authentication-reader
      subjects:
      - kind: ServiceAccount
        name: custom-metrics-stackdriver-adapter
        namespace: custom-metrics
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: custom-metrics-resource-reader
        namespace: custom-metrics
      rules:
      - apiGroups:
        - ""
        resources:
        - pods
        - nodes
        - nodes/stats
        verbs:
        - get
        - list
        - watch
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: custom-metrics-resource-reader
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: custom-metrics-resource-reader
      subjects:
      - kind: ServiceAccount
        name: custom-metrics-stackdriver-adapter
        namespace: custom-metrics
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: custom-metrics-stackdriver-adapter
        namespace: custom-metrics
        labels:
          run: custom-metrics-stackdriver-adapter
          k8s-app: custom-metrics-stackdriver-adapter
      spec:
        replicas: 1
        selector:
          matchLabels:
            run: custom-metrics-stackdriver-adapter
            k8s-app: custom-metrics-stackdriver-adapter
        template:
          metadata:
            labels:
              run: custom-metrics-stackdriver-adapter
              k8s-app: custom-metrics-stackdriver-adapter
              kubernetes.io/cluster-service: "true"
          spec:
            serviceAccountName: custom-metrics-stackdriver-adapter
            containers:
            - image: gcr.io/gke-release/custom-metrics-stackdriver-adapter:v0.15.2-gke.1
              imagePullPolicy: Always
              name: pod-custom-metrics-stackdriver-adapter
              command:
              - /adapter
              - --use-new-resource-model=true
              - --fallback-for-container-metrics=true
              resources:
                limits:
                  cpu: 250m
                  memory: 200Mi
                requests:
                  cpu: 250m
                  memory: 200Mi
      ---
      apiVersion: v1
      kind: Service
      metadata:
        labels:
          run: custom-metrics-stackdriver-adapter
          k8s-app: custom-metrics-stackdriver-adapter
          kubernetes.io/cluster-service: 'true'
          kubernetes.io/name: Adapter
        name: custom-metrics-stackdriver-adapter
        namespace: custom-metrics
      spec:
        ports:
        - port: 443
          protocol: TCP
          targetPort: 443
        selector:
          run: custom-metrics-stackdriver-adapter
          k8s-app: custom-metrics-stackdriver-adapter
        type: ClusterIP
      ---
      apiVersion: apiregistration.k8s.io/v1
      kind: APIService
      metadata:
        name: v1beta1.custom.metrics.k8s.io
      spec:
        insecureSkipTLSVerify: true
        group: custom.metrics.k8s.io
        groupPriorityMinimum: 100
        versionPriority: 100
        service:
          name: custom-metrics-stackdriver-adapter
          namespace: custom-metrics
        version: v1beta1
      ---
      apiVersion: apiregistration.k8s.io/v1
      kind: APIService
      metadata:
        name: v1beta2.custom.metrics.k8s.io
      spec:
        insecureSkipTLSVerify: true
        group: custom.metrics.k8s.io
        groupPriorityMinimum: 100
        versionPriority: 200
        service:
          name: custom-metrics-stackdriver-adapter
          namespace: custom-metrics
        version: v1beta2
      ---
      apiVersion: apiregistration.k8s.io/v1
      kind: APIService
      metadata:
        name: v1beta1.external.metrics.k8s.io
      spec:
        insecureSkipTLSVerify: true
        group: external.metrics.k8s.io
        groupPriorityMinimum: 100
        versionPriority: 100
        service:
          name: custom-metrics-stackdriver-adapter
          namespace: custom-metrics
        version: v1beta1
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: external-metrics-reader
      rules:
      - apiGroups:
        - "external.metrics.k8s.io"
        resources:
        - "*"
        verbs:
        - list
        - get
        - watch
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: external-metrics-reader
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: external-metrics-reader
      subjects:
      - kind: ServiceAccount
        name: horizontal-pod-autoscaler
        namespace: kube-system
      
    2. Apply the sample manifest to your cluster:

      kubectl apply -f adapter_new_resource_model.yaml
      
  3. To grant the adapter permissions to read metrics from the project, run the following commands:

    PROJECT_ID=PROJECT_ID
    PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")
    gcloud projects add-iam-policy-binding projects/$PROJECT_ID \
      --role roles/monitoring.viewer \
      --member=principal://iam.googleapis.com/projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$PROJECT_ID.svc.id.goog/subject/ns/custom-metrics/sa/custom-metrics-stackdriver-adapter
    

    Replace PROJECT_ID with your Google Cloud project ID.
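
    To confirm that the adapter is registered and serving external metrics, you can query the aggregated API directly. Once collection is working, the gateway metrics appear with a prometheus.googleapis.com prefix:

    kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1"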

  4. For each InferencePool, deploy one HPA similar to the following:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: INFERENCE_POOL_NAME
      namespace: INFERENCE_POOL_NAMESPACE
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: INFERENCE_POOL_NAME
      minReplicas: MIN_REPLICAS
      maxReplicas: MAX_REPLICAS
      metrics:
      - type: External
        external:
          metric:
            name: prometheus.googleapis.com|inference_pool_average_kv_cache_utilization|gauge
            selector:
              matchLabels:
                metric.labels.name: INFERENCE_POOL_NAME
                resource.labels.cluster: CLUSTER_NAME
                resource.labels.namespace: INFERENCE_POOL_NAMESPACE
          target:
            type: AverageValue
            averageValue: TARGET_VALUE
    

    Replace the following:

    • INFERENCE_POOL_NAME: the name of the InferencePool.
    • INFERENCE_POOL_NAMESPACE: the namespace of the InferencePool.
    • CLUSTER_NAME: the name of the cluster.
    • MIN_REPLICAS: the minimum availability of the InferencePool (baseline capacity). The HPA maintains this number of replicas when utilization is below the HPA target. Highly available workloads should set this to a value greater than 1 to ensure continued availability during Pod disruptions.
    • MAX_REPLICAS: the value that constrains the number of accelerators that can be assigned to the workloads hosted in the InferencePool. The HPA doesn't increase the number of replicas beyond this value. During peak traffic, monitor the number of replicas to make sure that the MAX_REPLICAS value provides enough headroom for the workload to scale up while maintaining its chosen performance characteristics.
    • TARGET_VALUE: the chosen target KV-Cache Utilization per model server. This is a number between 0 and 100 and depends heavily on the model server, the model, the accelerator, and the incoming traffic characteristics. You can determine this target value experimentally through load testing by plotting throughput against latency. Pick a throughput and latency combination from the graph, and use the corresponding KV-Cache Utilization value as the HPA target. Tune and monitor this value closely to achieve your chosen price-performance results. You can use GKE Inference Recommendations to determine this value automatically.
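
    After the HPA is running, you can verify that it reads the external metric and scales as expected with standard kubectl commands:

    kubectl describe hpa INFERENCE_POOL_NAME -n INFERENCE_POOL_NAMESPACE
    kubectl get hpa INFERENCE_POOL_NAME -n INFERENCE_POOL_NAMESPACE --watch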

What's next