Tutorial: Debug event routing to Cloud Run


This tutorial shows you how to troubleshoot runtime errors that you might encounter when you use Eventarc to route events from Cloud Storage to an unauthenticated Cloud Run service.

Objectives

This tutorial shows you how to complete the following tasks:

  1. Create an Artifact Registry standard repository to store your container image.
  2. Create a Cloud Storage bucket to be the event source.
  3. Build a container image, and upload and deploy it to Cloud Run.
  4. Create an Eventarc trigger.
  5. Upload a file to the Cloud Storage bucket.
  6. Troubleshoot and fix the runtime errors.

Costs

In this document, you use the following billable components of Google Cloud:

You can use the Pricing Calculator to generate a cost estimate based on your projected usage. New Google Cloud users might be eligible for a free trial.

Before you begin

Security constraints defined by your organization might prevent you from completing the following steps. For troubleshooting information, see Develop applications in a constrained Google Cloud environment.

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. Install the Google Cloud CLI.

  3. If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.

  4. To initialize the gcloud CLI, run the following command:

    gcloud init
  5. Create or select a Google Cloud project.

    • Create a Google Cloud project:

      gcloud projects create PROJECT_ID

      Replace PROJECT_ID with a name for the Google Cloud project you are creating.

    • Select the Google Cloud project that you created:

      gcloud config set project PROJECT_ID

      Replace PROJECT_ID with your Google Cloud project name.

  6. Make sure that billing is enabled for your Google Cloud project.


  7. If you are the project creator, you are granted the basic Owner role (roles/owner). By default, this Identity and Access Management (IAM) role includes the permissions needed for full access to most Google Cloud resources, and you can skip this step.

    If you are not the project creator, the required permissions must be granted on the project to the appropriate principal. For example, a principal can be a Google Account (for end users) or a service account (for applications and compute workloads). For more information, see the Roles and permissions page for your event destination.

    Note that by default, Cloud Build permissions include permissions to upload and download Artifact Registry artifacts.

    Required permissions

    To get the permissions that you need to complete this tutorial, ask your administrator to grant you the following IAM roles on your project:

    For more information about granting roles, see Manage access to projects, folders, and organizations.

    You might also be able to get the required permissions through custom roles or other predefined roles.

  8. For Cloud Storage, enable audit logging for the ADMIN_READ, DATA_WRITE, and DATA_READ data access types.

    1. Read the Identity and Access Management (IAM) policy associated with your Google Cloud project, folder, or organization, and store it in a temporary file:
      gcloud projects get-iam-policy PROJECT_ID > /tmp/policy.yaml
    2. In a text editor, open /tmp/policy.yaml, and add or change the audit log configuration in the auditConfigs section:

      auditConfigs:
      - auditLogConfigs:
        - logType: ADMIN_READ
        - logType: DATA_WRITE
        - logType: DATA_READ
        service: storage.googleapis.com
      bindings:
      - members:
      [...]
      etag: BwW_bHKTV5U=
      version: 1
    3. Write the new IAM policy:
      gcloud projects set-iam-policy PROJECT_ID /tmp/policy.yaml

      If the preceding command reports a conflict with another change, repeat these steps, starting with reading the IAM policy. For more information, see Configure Data Access audit logs with the API.

  9. Grant the eventarc.eventReceiver role to the Compute Engine service account:

    export PROJECT_NUMBER="$(gcloud projects describe $(gcloud config get-value project) --format='value(projectNumber)')"
    
    gcloud projects add-iam-policy-binding $(gcloud config get-value project) \
        --member=serviceAccount:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com \
        --role='roles/eventarc.eventReceiver'

  10. If you enabled the Pub/Sub service account on or before April 8, 2021, grant the iam.serviceAccountTokenCreator role to the Pub/Sub service account:

    gcloud projects add-iam-policy-binding $(gcloud config get-value project) \
        --member="serviceAccount:service-${PROJECT_NUMBER}@gcp-sa-pubsub.iam.gserviceaccount.com"\
        --role='roles/iam.serviceAccountTokenCreator'

  11. Set the defaults used in this tutorial:
    export REGION=us-central1
    gcloud config set run/region ${REGION}
    gcloud config set run/platform managed
    gcloud config set eventarc/location ${REGION}
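
Before you continue, you can optionally verify the setup from the command line. The following is a minimal sketch, not part of the original tutorial; it assumes the PROJECT_NUMBER variable exported earlier is still set in your shell, and the role filter shown is one common pattern:

    # Show the gcloud defaults configured for this tutorial.
    gcloud config list

    # Confirm that the Compute Engine default service account holds the
    # roles/eventarc.eventReceiver role.
    gcloud projects get-iam-policy $(gcloud config get-value project) \
        --flatten="bindings[].members" \
        --filter="bindings.members:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com" \
        --format="table(bindings.role)"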

Create an Artifact Registry standard repository

Create an Artifact Registry standard repository to store your container image:

gcloud artifacts repositories create REPOSITORY \
    --repository-format=docker \
    --location=$REGION

Replace REPOSITORY with a unique name for the repository.
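
To confirm that the repository exists before you push an image to it, you can optionally describe it. This is a hedged check, not part of the original steps; it uses the same REPOSITORY and $REGION values as above:

    # The output should report format: DOCKER for the new repository.
    gcloud artifacts repositories describe REPOSITORY \
        --location=$REGION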

Create Cloud Storage buckets

Create two Cloud Storage buckets in two regions as the event source for the Cloud Run service:

  1. Create a bucket in us-east1:

    export BUCKET1="troubleshoot-bucket1-PROJECT_ID"
    gsutil mb -l us-east1 gs://${BUCKET1}
  2. Create a bucket in us-west1:

    export BUCKET2="troubleshoot-bucket2-PROJECT_ID"
    gsutil mb -l us-west1 gs://${BUCKET2}
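
Because the bucket locations matter later in this tutorial, you can optionally confirm where each bucket was created. This is a small sketch, not part of the original steps:

    # Expected locations: US-EAST1 and US-WEST1.
    gsutil ls -L -b gs://${BUCKET1} | grep -i "location constraint"
    gsutil ls -L -b gs://${BUCKET2} | grep -i "location constraint"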

After the event source is created, deploy the event receiver service on Cloud Run.

Deploy the event receiver

Deploy a Cloud Run service that receives and logs events.

  1. Retrieve the code sample by cloning the GitHub repository:

    Go

    git clone https://github.com/GoogleCloudPlatform/golang-samples.git
    cd golang-samples/eventarc/audit_storage
    

    Java

    git clone https://github.com/GoogleCloudPlatform/java-docs-samples.git
    cd java-docs-samples/eventarc/audit-storage
    

    .NET

    git clone https://github.com/GoogleCloudPlatform/dotnet-docs-samples.git
    cd dotnet-docs-samples/eventarc/audit-storage
    

    Node.js

    git clone https://github.com/GoogleCloudPlatform/nodejs-docs-samples.git
    cd nodejs-docs-samples/eventarc/audit-storage
    

    Python

    git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git
    cd python-docs-samples/eventarc/audit-storage
    
  2. Review the code for this tutorial, which includes the following:

    • An event handler that receives the incoming event as a CloudEvent within an HTTP POST request:

      Go

      
      // Processes CloudEvents containing Cloud Audit Logs for Cloud Storage
      package main
      
      import (
      	"fmt"
      	"log"
      	"net/http"
      	"os"
      
      	cloudevent "github.com/cloudevents/sdk-go/v2"
      )
      
      // HelloEventsStorage receives and processes a Cloud Audit Log event with Cloud Storage data.
      func HelloEventsStorage(w http.ResponseWriter, r *http.Request) {
      	if r.Method != http.MethodPost {
      		http.Error(w, "Expected HTTP POST request with CloudEvent payload", http.StatusMethodNotAllowed)
      		return
      	}
      
      	event, err := cloudevent.NewEventFromHTTPRequest(r)
      	if err != nil {
      		log.Printf("cloudevent.NewEventFromHTTPRequest: %v", err)
      		http.Error(w, "Failed to create CloudEvent from request.", http.StatusBadRequest)
      		return
      	}
      	s := fmt.Sprintf("Detected change in Cloud Storage bucket: %s", event.Subject())
      	fmt.Fprintln(w, s)
      }
      

      Java

      import io.cloudevents.CloudEvent;
      import io.cloudevents.rw.CloudEventRWException;
      import io.cloudevents.spring.http.CloudEventHttpUtils;
      import org.springframework.http.HttpHeaders;
      import org.springframework.http.HttpStatus;
      import org.springframework.http.ResponseEntity;
      import org.springframework.web.bind.annotation.RequestBody;
      import org.springframework.web.bind.annotation.RequestHeader;
      import org.springframework.web.bind.annotation.RequestMapping;
      import org.springframework.web.bind.annotation.RequestMethod;
      import org.springframework.web.bind.annotation.RestController;
      
      @RestController
      public class EventController {
      
        @RequestMapping(value = "/", method = RequestMethod.POST, consumes = "application/json")
        public ResponseEntity<String> receiveMessage(
            @RequestBody String body, @RequestHeader HttpHeaders headers) {
          CloudEvent event;
          try {
            event =
                CloudEventHttpUtils.fromHttp(headers)
                    .withData(headers.getContentType().toString(), body.getBytes())
                    .build();
          } catch (CloudEventRWException e) {
            return new ResponseEntity<>(e.getMessage(), HttpStatus.BAD_REQUEST);
          }
      
          String ceSubject = event.getSubject();
          String msg = "Detected change in Cloud Storage bucket: " + ceSubject;
          System.out.println(msg);
          return new ResponseEntity<>(msg, HttpStatus.OK);
        }
      }

      .NET

      
      using Microsoft.AspNetCore.Builder;
      using Microsoft.AspNetCore.Hosting;
      using Microsoft.AspNetCore.Http;
      using Microsoft.Extensions.DependencyInjection;
      using Microsoft.Extensions.Hosting;
      using Microsoft.Extensions.Logging;
      
      public class Startup
      {
          public void ConfigureServices(IServiceCollection services)
          {
          }
      
          public void Configure(IApplicationBuilder app, IWebHostEnvironment env, ILogger<Startup> logger)
          {
              if (env.IsDevelopment())
              {
                  app.UseDeveloperExceptionPage();
              }
      
              logger.LogInformation("Service is starting...");
      
              app.UseRouting();
      
              app.UseEndpoints(endpoints =>
              {
                  endpoints.MapPost("/", async context =>
                  {
                      logger.LogInformation("Handling HTTP POST");
      
                      var ceSubject = context.Request.Headers["ce-subject"];
                      logger.LogInformation($"ce-subject: {ceSubject}");
      
                      if (string.IsNullOrEmpty(ceSubject))
                      {
                          context.Response.StatusCode = 400;
                          await context.Response.WriteAsync("Bad Request: expected header Ce-Subject");
                          return;
                      }
      
                      await context.Response.WriteAsync($"GCS CloudEvent type: {ceSubject}");
                  });
              });
          }
      }
      

      Node.js

      const express = require('express');
      const app = express();
      
      app.use(express.json());
      app.post('/', (req, res) => {
        if (!req.header('ce-subject')) {
          return res
            .status(400)
            .send('Bad Request: missing required header: ce-subject');
        }
      
        console.log(
          `Detected change in Cloud Storage bucket: ${req.header('ce-subject')}`
        );
        return res
          .status(200)
          .send(
            `Detected change in Cloud Storage bucket: ${req.header('ce-subject')}`
          );
      });
      
      module.exports = app;

      Python

      @app.route("/", methods=["POST"])
      def index():
          # Create a CloudEvent object from the incoming request
          event = from_http(request.headers, request.data)
          # Gets the GCS bucket name from the CloudEvent
          # Example: "storage.googleapis.com/projects/_/buckets/my-bucket"
          bucket = event.get("subject")
      
          print(f"Detected change in Cloud Storage bucket: {bucket}")
          return (f"Detected change in Cloud Storage bucket: {bucket}", 200)
      
      
    • A server that uses the event handler:

      Go

      
      func main() {
      	http.HandleFunc("/", HelloEventsStorage)
      	// Determine port for HTTP service.
      	port := os.Getenv("PORT")
      	if port == "" {
      		port = "8080"
      	}
      	// Start HTTP server.
      	log.Printf("Listening on port %s", port)
      	if err := http.ListenAndServe(":"+port, nil); err != nil {
      		log.Fatal(err)
      	}
      }
      

      Java

      
      import org.springframework.boot.SpringApplication;
      import org.springframework.boot.autoconfigure.SpringBootApplication;
      
      @SpringBootApplication
      public class Application {
        public static void main(String[] args) {
          SpringApplication.run(Application.class, args);
        }
      }

      .NET

          public static void Main(string[] args)
          {
              CreateHostBuilder(args).Build().Run();
          }
          public static IHostBuilder CreateHostBuilder(string[] args)
          {
              var port = Environment.GetEnvironmentVariable("PORT") ?? "8080";
              var url = $"http://0.0.0.0:{port}";
      
              return Host.CreateDefaultBuilder(args)
                  .ConfigureWebHostDefaults(webBuilder =>
                  {
                      webBuilder.UseStartup<Startup>().UseUrls(url);
                  });
          }
      

      Node.js

      const app = require('./app.js');
      const PORT = parseInt(process.env.PORT) || 8080;
      
      app.listen(PORT, () =>
        console.log(`nodejs-events-storage listening on port ${PORT}`)
      );

      Python

      import os
      
      from cloudevents.http import from_http
      
      from flask import Flask, request
      
      app = Flask(__name__)
      if __name__ == "__main__":
          app.run(debug=True, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
    • A Dockerfile that defines the operating environment for the service. The contents of the Dockerfile vary by programming language:

      Go

      
      # Use the official Go image to create a binary.
      # This is based on Debian and sets the GOPATH to /go.
      # https://hub.docker.com/_/golang
      FROM golang:1.23-bookworm as builder
      
      # Create and change to the app directory.
      WORKDIR /app
      
      # Retrieve application dependencies.
      # This allows the container build to reuse cached dependencies.
      # Expecting to copy go.mod and if present go.sum.
      COPY go.* ./
      RUN go mod download
      
      # Copy local code to the container image.
      COPY . ./
      
      # Build the binary.
      RUN go build -v -o server
      
      # Use the official Debian slim image for a lean production container.
      # https://hub.docker.com/_/debian
      # https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
      FROM debian:bookworm-slim
      RUN set -x && apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
          ca-certificates && \
          rm -rf /var/lib/apt/lists/*
      
      # Copy the binary to the production image from the builder stage.
      COPY --from=builder /app/server /server
      
      # Run the web service on container startup.
      CMD ["/server"]
      

      Java

      
      # Use the official maven image to create a build artifact.
      # https://hub.docker.com/_/maven
      FROM maven:3-eclipse-temurin-17-alpine as builder
      
      # Copy local code to the container image.
      WORKDIR /app
      COPY pom.xml .
      COPY src ./src
      
      # Build a release artifact.
      RUN mvn package -DskipTests
      
      # Use Eclipse Temurin for base image.
      # https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
      FROM eclipse-temurin:17.0.15_6-jre-alpine
      
      # Copy the jar to the production image from the builder stage.
      COPY --from=builder /app/target/audit-storage-*.jar /audit-storage.jar
      
      # Run the web service on container startup.
      CMD ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/audit-storage.jar"]
      

      .NET

      
      # Use Microsoft's official build .NET image.
      # https://hub.docker.com/_/microsoft-dotnet-core-sdk/
      FROM mcr.microsoft.com/dotnet/sdk:8.0-alpine AS build
      WORKDIR /app
      
      # Install production dependencies.
      # Copy csproj and restore as distinct layers.
      COPY *.csproj ./
      RUN dotnet restore
      
      # Copy local code to the container image.
      COPY . ./
      WORKDIR /app
      
      # Build a release artifact.
      RUN dotnet publish -c Release -o out
      
      
      # Use Microsoft's official runtime .NET image.
      # https://hub.docker.com/_/microsoft-dotnet-core-aspnet/
      FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine AS runtime
      WORKDIR /app
      COPY --from=build /app/out ./
      
      # Run the web service on container startup.
      ENTRYPOINT ["dotnet", "AuditStorage.dll"]

      Node.js

      
      # Use the official lightweight Node.js image.
      # https://hub.docker.com/_/node
      FROM node:20-slim
      # Create and change to the app directory.
      WORKDIR /usr/src/app
      
      # Copy application dependency manifests to the container image.
      # A wildcard is used to ensure both package.json AND package-lock.json are copied.
      # Copying this separately prevents re-running npm install on every code change.
      COPY package*.json ./
      
      # Install dependencies.
      # if you need a deterministic and repeatable build create a 
      # package-lock.json file and use npm ci:
      # RUN npm ci --omit=dev
      # if you need to include development dependencies during development
      # of your application, use:
      # RUN npm install --dev
      
      RUN npm install --omit=dev
      
      # Copy local code to the container image.
      COPY . .
      
      # Run the web service on container startup.
      CMD [ "npm", "start" ]
      

      Python

      
      # Use the official Python image.
      # https://hub.docker.com/_/python
      FROM python:3.11-slim
      
      # Allow statements and log messages to immediately appear in the Cloud Run logs
      ENV PYTHONUNBUFFERED True
      
      # Copy application dependency manifests to the container image.
      # Copying this separately prevents re-running pip install on every code change.
      COPY requirements.txt ./
      
      # Install production dependencies.
      RUN pip install -r requirements.txt
      
      # Copy local code to the container image.
      ENV APP_HOME /app
      WORKDIR $APP_HOME
      COPY . ./
      
      # Run the web service on container startup. 
      # Use gunicorn webserver with one worker process and 8 threads.
      # For environments with multiple CPU cores, increase the number of workers
      # to be equal to the cores available.
      CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 main:app

  3. Build the container image with Cloud Build, and upload the image to Artifact Registry:

    export PROJECT_ID=$(gcloud config get-value project)
    export SERVICE_NAME=troubleshoot-service
    gcloud builds submit --tag $REGION-docker.pkg.dev/${PROJECT_ID}/REPOSITORY/${SERVICE_NAME}:v1
  4. Deploy the container image to Cloud Run:

    gcloud run deploy ${SERVICE_NAME} \
        --image $REGION-docker.pkg.dev/${PROJECT_ID}/REPOSITORY/${SERVICE_NAME}:v1 \
        --allow-unauthenticated

    After the deployment succeeds, the command line displays the service URL.
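
If you want to double-check the build and the deployment before you create the trigger, the following optional sketch lists the pushed image and prints the service URL. It assumes the same variables as above and the REPOSITORY name you chose earlier:

    # List the images pushed to the Artifact Registry repository.
    gcloud artifacts docker images list $REGION-docker.pkg.dev/${PROJECT_ID}/REPOSITORY

    # Print the URL of the deployed Cloud Run service.
    gcloud run services describe ${SERVICE_NAME} --format='value(status.url)'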

Create a trigger

After you deploy the Cloud Run service, set up a trigger to listen for events from Cloud Storage through audit logs.

  1. Create an Eventarc trigger to listen for Cloud Storage events that are routed using Cloud Audit Logs:

    gcloud eventarc triggers create troubleshoot-trigger \
        --destination-run-service=troubleshoot-service \
        --event-filters="type=google.cloud.audit.log.v1.written" \
        --event-filters="serviceName=storage.googleapis.com" \
        --event-filters="methodName=storage.objects.create" \
        --service-account=${PROJECT_NUMBER}-compute@developer.gserviceaccount.com
    

    This creates a trigger called troubleshoot-trigger.

  2. To confirm that troubleshoot-trigger was created, run:

    gcloud eventarc triggers list
    

    The output is similar to the following:

    NAME: troubleshoot-trigger
    TYPE: google.cloud.audit.log.v1.written
    DESTINATION: Cloud Run service: troubleshoot-service
    ACTIVE: By 20:03:37
    LOCATION: us-central1
    

Generate and view an event

Confirm that you have successfully deployed the service and that you can receive events from Cloud Storage.

  1. Create a file and upload it to the BUCKET1 storage bucket:

     echo "Hello World" > random.txt
     gsutil cp random.txt gs://${BUCKET1}/random.txt
    
  2. Monitor the logs to check whether the service received an event. To view the log entries, complete the following steps:

    1. Filter the log entries and return the output in JSON format:

      gcloud logging read "resource.labels.service_name=troubleshoot-service \
          AND textPayload:random.txt" \
          --format=json
    2. Look for a log entry similar to the following:

      "textPayload": "Detected change in Cloud Storage bucket: ..."
      

Note that initially no log entries are returned. This indicates a problem in the setup that you must investigate.

Investigate the issue

Walk through the process of investigating why the service did not receive events.

Initialization time

Although the trigger is created immediately, it can take up to two minutes for a trigger to propagate and filter events. Run the following command to confirm that the trigger is active:

gcloud eventarc triggers list

The output shows the status of the trigger. In the following example, troubleshoot-trigger becomes active by 14:16:56:

NAME                  TYPE                               DESTINATION_RUN_SERVICE  ACTIVE
troubleshoot-trigger  google.cloud.audit.log.v1.written  troubleshoot-service     By 14:16:56

Once the trigger is active, upload a file to the storage bucket again. The events are written to the Cloud Run service logs. If the service doesn't receive events, it might be related to the audit logs.
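
If you prefer to watch for the trigger to become active from the command line instead of re-running the command manually, you can use a small polling loop. This is an optional sketch, not part of the original tutorial; it simply re-reads the trigger list every 15 seconds for about two minutes:

    # Re-check the trigger status every 15 seconds, eight times.
    for i in $(seq 1 8); do
      gcloud eventarc triggers list
      sleep 15
    done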

Audit logs

In this tutorial, Cloud Audit Logs are used to route Cloud Storage events to Cloud Run. Confirm that audit logs are enabled for Cloud Storage.

  1. In the Google Cloud console, go to the Audit Logs page.

    Go to Audit Logs

  2. Select the Google Cloud Storage checkbox.
  3. Make sure that the Admin Read, Data Read, and Data Write log types are selected.
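
If you prefer to check this from the command line, the following optional sketch reads the project's IAM policy and prints only the audit log configuration. Look for a storage.googleapis.com entry with the ADMIN_READ, DATA_READ, and DATA_WRITE log types:

    gcloud projects get-iam-policy $(gcloud config get-value project) \
        --format="yaml(auditConfigs)"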

After you enable Cloud Audit Logs, upload the file to the storage bucket again and check the logs. If the service still doesn't receive events, it might be related to the trigger location.

Trigger location

There might be multiple resources in different locations, and you must filter for events from sources that are in the same region as the Cloud Run target. For more information, see Locations supported by Eventarc and Understand Eventarc locations.

In this tutorial, you deployed the Cloud Run service to us-central1. Because you set eventarc/location to us-central1, you also created the trigger in the same location.

However, you created two Cloud Storage buckets in the us-east1 and us-west1 locations. To receive events from those locations, you must create Eventarc triggers in those locations.
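
One quick way to see the mismatch is to compare the trigger's location with the bucket locations. This optional sketch, which assumes the variables defined earlier in the tutorial, is not part of the original steps:

    # The trigger's full resource name includes its location (us-central1).
    gcloud eventarc triggers describe troubleshoot-trigger --format='value(name)'

    # The buckets were created in other regions (US-EAST1 and US-WEST1).
    gsutil ls -L -b gs://${BUCKET1} | grep -i "location constraint"
    gsutil ls -L -b gs://${BUCKET2} | grep -i "location constraint"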

Create an Eventarc trigger located in us-east1:

  1. Verify the location of the existing trigger:

    gcloud eventarc triggers describe troubleshoot-trigger
    
  2. Set the location and region to us-east1:

    gcloud config set eventarc/location us-east1
    gcloud config set run/region us-east1
    
  3. Deploy the event receiver again by building the container image and deploying it to Cloud Run.

  4. Create a new trigger located in us-east1:

    gcloud eventarc triggers create troubleshoot-trigger-new \
      --destination-run-service=troubleshoot-service \
      --event-filters="type=google.cloud.audit.log.v1.written" \
      --event-filters="serviceName=storage.googleapis.com" \
      --event-filters="methodName=storage.objects.create" \
      --service-account=${PROJECT_NUMBER}-compute@developer.gserviceaccount.com
    
  5. Check that the trigger was created:

    gcloud eventarc triggers list
    

    The trigger can take up to two minutes to initialize before it can start routing events.

  6. To verify that the trigger is now deployed correctly, generate and view an event.

Other issues you might encounter

You might encounter other issues when using Eventarc.

Event size

The events that you send must not exceed the event size limits.

A trigger that previously delivered events has stopped working

  1. Verify that the source is generating events. Check the Cloud Audit Logs and make sure the monitored service is emitting logs. If logs are recorded but events are not delivered, contact support.
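
    For example, one hedged way to check from the command line whether Cloud Storage data access audit log entries are being written for the object uploads used in this tutorial is:

      gcloud logging read 'protoPayload.serviceName="storage.googleapis.com" AND protoPayload.methodName="storage.objects.create"' \
          --limit=5 \
          --format=json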

  2. Verify that a Pub/Sub topic exists with the same trigger name. Eventarc uses Pub/Sub as its transport layer and either uses an existing Pub/Sub topic or automatically creates a topic and manages it for you.

    1. To list the triggers, see the gcloud eventarc triggers list command.
    2. To list the Pub/Sub topics, run:

      gcloud pubsub topics list
      
    3. Verify that the Pub/Sub topic name includes the name of the created trigger. For example:

      name: projects/PROJECT_ID/topics/eventarc-us-east1-troubleshoot-trigger-new-123

    If the Pub/Sub topic is missing, create the trigger again for a specific provider, event type, and Cloud Run destination.
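
    You can also read the topic directly from the trigger itself. This is an optional sketch, assuming the trigger resource exposes its transport topic under transport.pubsub.topic (you can confirm the field in the full describe output):

      # Print the Pub/Sub topic that Eventarc uses for the trigger.
      gcloud eventarc triggers describe troubleshoot-trigger-new \
          --format='value(transport.pubsub.topic)'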

  3. Verify that the trigger has been configured for the service.

    1. In the Google Cloud console, go to the Services page.

      Go to Services

    2. Click the name of the service to open its Service details page.

    3. Click the Triggers tab.

      The Eventarc trigger associated with the service should be listed.

  4. Use the Pub/Sub metric types to verify the health of the Pub/Sub topic and subscription.

    • You can monitor forwarded undeliverable messages by using the subscription/dead_letter_message_count metric. This metric shows the number of undeliverable messages that Pub/Sub forwards from a subscription.

      If messages are not being published to the topic, check the Cloud Audit Logs and make sure the monitored service is emitting logs. If logs are recorded but events are not delivered, contact support.

    • You can monitor push subscriptions by using the subscription/push_request_count metric, grouping the metric by response_code and subscription_id.

      If push errors are reported, check the Cloud Run service logs. If the receiving endpoint returns a non-OK status code, it indicates that the Cloud Run code is not working as expected and you must contact support.

    For more information, see Create metric-threshold alerting policies.
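
    The following is one hedged way to pull the push_request_count time series from the Cloud Monitoring API with curl. It is a sketch rather than part of the tutorial; it assumes GNU date for the timestamp arithmetic, and you may need to narrow the filter to a specific subscription:

      # Query the last hour of push request counts for Pub/Sub subscriptions.
      START=$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)
      END=$(date -u +%Y-%m-%dT%H:%M:%SZ)
      curl -s -G \
          -H "Authorization: Bearer $(gcloud auth print-access-token)" \
          "https://monitoring.googleapis.com/v3/projects/${PROJECT_ID}/timeSeries" \
          --data-urlencode 'filter=metric.type="pubsub.googleapis.com/subscription/push_request_count"' \
          --data-urlencode "interval.startTime=${START}" \
          --data-urlencode "interval.endTime=${END}"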

Clean up

If you created a new project for this tutorial, delete the project. If you used an existing project and want to keep it without the changes added in this tutorial, delete the resources created for the tutorial.

Delete the project

The easiest way to eliminate billing is to delete the project that you created for the tutorial.

To delete the project:

  1. In the Google Cloud console, go to the Manage resources page.

    Go to Manage resources

  2. In the project list, select the project that you want to delete, and then click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.

Delete tutorial resources

  1. Delete the Cloud Run service you deployed in this tutorial:

    gcloud run services delete SERVICE_NAME

    Where SERVICE_NAME is your chosen service name.

    You can also delete Cloud Run services from the Google Cloud console.

  2. Remove any gcloud CLI default configurations you added during the tutorial setup.

    For example:

    gcloud config unset run/region

    gcloud config unset project

  3. Delete other Google Cloud resources created in this tutorial:

    • Delete the Eventarc trigger:
      gcloud eventarc triggers delete TRIGGER_NAME
      
      Replace TRIGGER_NAME with the name of the trigger.

What's next