託管靜態網站

本頁說明如何設定 Cloud Storage 值區來代管您網域的靜態網站。靜態網頁可以使用 HTML、CSS 和 JavaScript 等用戶端技術,但不能包含伺服器端指令碼 (例如 PHP) 等動態內容。

總覽

由於 Cloud Storage 本身不支援透過 HTTPS 使用自訂網域,本教學課程會搭配外部應用程式負載平衡器使用 Cloud Storage,透過 HTTPS 從自訂網域提供內容。如要進一步瞭解如何透過 HTTPS 從自訂網域提供內容,請參閱 HTTPS 服務疑難排解。您也可以使用 Cloud Storage 透過 HTTP 提供自訂網域內容,這不需要負載平衡器。

如需靜態網頁的範例和提示,包括如何託管動態網站的靜態資產,請參閱「靜態網站」頁面。

本頁面的操作說明涵蓋下列步驟:

  • 上傳及分享您網站的檔案。

  • 設定負載平衡器和 SSL 憑證。

  • 將負載平衡器連線至儲存空間。

  • 使用 A 記錄將網域指向負載平衡器。

  • 測試網站。

定價

本頁的操作說明會使用 Google Cloud 的下列計費元件:

  • Cloud Storage

  • Cloud Load Balancing

請參閱「監控費用」小秘訣,進一步瞭解託管靜態網站時會產生哪些費用。

限制

您可以透過物件可供大眾讀取的值區,託管靜態網站。如果 bucket 啟用禁止公開存取,就無法用來託管靜態網站。如要使用 Cloud Storage 託管靜態網站,可以採用下列任一方法:

  • 建立可公開存取資料的新值區。建立 bucket 時,取消勾選標示為「強制禁止公開存取這個 bucket」的方塊。建立 bucket 後,請將 Storage 物件檢視者角色授予 allUsers 主體。詳情請參閱「建立 bucket」。

  • 將現有值區的資料設為公開。詳情請參閱「共用檔案」。
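
如果您偏好使用指令列,以下是採用第一種做法的最簡示意 (值區名稱沿用本教學課程的 my-static-assets,位置 US 僅為假設,請依需求調整):

    # 建立新值區 (如果機構政策強制禁止公開存取,這個值區仍無法設為公開)
    gcloud storage buckets create gs://my-static-assets --location=US

    # 將 Storage 物件檢視者角色授予 allUsers,讓物件可供大眾讀取
    gcloud storage buckets add-iam-policy-binding gs://my-static-assets \
      --member=allUsers --role=roles/storage.objectViewer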

事前準備

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Make sure that billing is enabled for your Google Cloud project.

  4. 為專案啟用 Compute Engine API。
  5. 具備下列身分與存取權管理角色:Storage 管理員、Compute 網路管理員。
  6. 有一個由您擁有或管理的網域。如果您目前沒有網域,可以透過許多現有服務 (例如 Cloud Domains) 來註冊新網域。

    本教學課程使用的網域為 example.com。

  7. 您有想讓網站提供的檔案。本教學課程最適合至少備有索引頁面 (index.html) 和 404 頁面 (404.html) 的使用者。
  8. 建立 Cloud Storage bucket,用於儲存要提供的檔案。如果目前沒有值區,請建立值區。
  9. (選用) 如要讓 Cloud Storage 值區與網域同名,請驗證您擁有或管理要使用的網域。請務必驗證頂層網域 (如 example.com),而非子網域 (如 www.example.com)。如果您是透過 Cloud Domains 購買網域,系統會自動進行驗證。
  10. 上傳網站檔案

    將您想讓網站提供的檔案新增到值區內:

    控制台

    1. 在 Google Cloud 控制台,前往「Cloud Storage bucket」頁面。

      前往「Buckets」(值區) 頁面

    2. 在值區清單中,按一下您建立的值區名稱。

      「Bucket details」(值區詳細資料) 頁面隨即開啟,並選取「Objects」(物件) 分頁標籤。

    3. 按一下「上傳檔案」按鈕。

    4. 在檔案對話方塊中,找出並選取您需要的檔案。

    上傳完成後,您應該會看到檔案名稱與值區中顯示的檔案資訊。

    如要瞭解如何透過 Google Cloud 控制台取得 Cloud Storage 作業失敗的詳細錯誤資訊,請參閱「疑難排解」一文。

    指令列

    使用 gcloud storage cp 指令將檔案複製到 bucket。舉例來說,如要將檔案 index.html 從目前的位置 Desktop 複製到值區 my-static-assets:

    gcloud storage cp Desktop/index.html gs://my-static-assets

    如果成功,回應會類似以下範例:

    Completed files 1/1 | 164.3kiB/164.3kiB
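
如要一次上傳整個本機目錄的內容,可以加上 --recursive 旗標;以下為示意,本機目錄名稱 site 僅為假設:

    # 將 site 目錄底下的所有檔案 (含子目錄) 複製到值區根層級
    gcloud storage cp --recursive site/* gs://my-static-assets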

    用戶端程式庫

    C++

    詳情請參閱 Cloud Storage C++ API 參考說明文件

    如要驗證 Cloud Storage,請設定應用程式預設憑證。 詳情請參閱「設定用戶端程式庫的驗證機制」。

    namespace gcs = ::google::cloud::storage;
    using ::google::cloud::StatusOr;
    [](gcs::Client client, std::string const& file_name,
       std::string const& bucket_name, std::string const& object_name) {
      // Note that the client library automatically computes a hash on the
      // client-side to verify data integrity during transmission.
      StatusOr<gcs::ObjectMetadata> metadata = client.UploadFile(
          file_name, bucket_name, object_name, gcs::IfGenerationMatch(0));
      if (!metadata) throw std::move(metadata).status();
    
      std::cout << "Uploaded " << file_name << " to object " << metadata->name()
                << " in bucket " << metadata->bucket()
                << "\nFull metadata: " << *metadata << "\n";
    }

    C#

    詳情請參閱 Cloud Storage C# API 參考說明文件

    如要驗證 Cloud Storage,請設定應用程式預設憑證。 詳情請參閱「設定用戶端程式庫的驗證機制」。

    
    using Google.Cloud.Storage.V1;
    using System;
    using System.IO;
    
    public class UploadFileSample
    {
        public void UploadFile(
            string bucketName = "your-unique-bucket-name",
            string localPath = "my-local-path/my-file-name",
            string objectName = "my-file-name")
        {
            var storage = StorageClient.Create();
            using var fileStream = File.OpenRead(localPath);
            storage.UploadObject(bucketName, objectName, null, fileStream);
            Console.WriteLine($"Uploaded {objectName}.");
        }
    }
    

    Go

    詳情請參閱 Cloud Storage Go API 參考說明文件

    如要驗證 Cloud Storage,請設定應用程式預設憑證。 詳情請參閱「設定用戶端程式庫的驗證機制」。

    import (
    	"context"
    	"fmt"
    	"io"
    	"os"
    	"time"
    
    	"cloud.google.com/go/storage"
    )
    
    // uploadFile uploads an object.
    func uploadFile(w io.Writer, bucket, object string) error {
    	// bucket := "bucket-name"
    	// object := "object-name"
    	ctx := context.Background()
    	client, err := storage.NewClient(ctx)
    	if err != nil {
    		return fmt.Errorf("storage.NewClient: %w", err)
    	}
    	defer client.Close()
    
    	// Open local file.
    	f, err := os.Open("notes.txt")
    	if err != nil {
    		return fmt.Errorf("os.Open: %w", err)
    	}
    	defer f.Close()
    
    	ctx, cancel := context.WithTimeout(ctx, time.Second*50)
    	defer cancel()
    
    	o := client.Bucket(bucket).Object(object)
    
    	// Optional: set a generation-match precondition to avoid potential race
    	// conditions and data corruptions. The request to upload is aborted if the
    	// object's generation number does not match your precondition.
    	// For an object that does not yet exist, set the DoesNotExist precondition.
    	o = o.If(storage.Conditions{DoesNotExist: true})
    	// If the live object already exists in your bucket, set instead a
    	// generation-match precondition using the live object's generation number.
    	// attrs, err := o.Attrs(ctx)
    	// if err != nil {
    	// 	return fmt.Errorf("object.Attrs: %w", err)
    	// }
    	// o = o.If(storage.Conditions{GenerationMatch: attrs.Generation})
    
    	// Upload an object with storage.Writer.
    	wc := o.NewWriter(ctx)
    	if _, err = io.Copy(wc, f); err != nil {
    		return fmt.Errorf("io.Copy: %w", err)
    	}
    	if err := wc.Close(); err != nil {
    		return fmt.Errorf("Writer.Close: %w", err)
    	}
    	fmt.Fprintf(w, "Blob %v uploaded.\n", object)
    	return nil
    }
    

    Java

    詳情請參閱 Cloud Storage Java API 參考說明文件

    如要驗證 Cloud Storage,請設定應用程式預設憑證。 詳情請參閱「設定用戶端程式庫的驗證機制」。

    下例示範如何上傳個別物件:

    
    import com.google.cloud.storage.BlobId;
    import com.google.cloud.storage.BlobInfo;
    import com.google.cloud.storage.Storage;
    import com.google.cloud.storage.StorageOptions;
    import java.io.IOException;
    import java.nio.file.Paths;
    
    public class UploadObject {
      public static void uploadObject(
          String projectId, String bucketName, String objectName, String filePath) throws IOException {
        // The ID of your GCP project
        // String projectId = "your-project-id";
    
        // The ID of your GCS bucket
        // String bucketName = "your-unique-bucket-name";
    
        // The ID of your GCS object
        // String objectName = "your-object-name";
    
        // The path to your file to upload
        // String filePath = "path/to/your/file"
    
        Storage storage = StorageOptions.newBuilder().setProjectId(projectId).build().getService();
        BlobId blobId = BlobId.of(bucketName, objectName);
        BlobInfo blobInfo = BlobInfo.newBuilder(blobId).build();
    
        // Optional: set a generation-match precondition to avoid potential race
        // conditions and data corruptions. The request returns a 412 error if the
        // preconditions are not met.
        Storage.BlobWriteOption precondition;
        if (storage.get(bucketName, objectName) == null) {
          // For a target object that does not yet exist, set the DoesNotExist precondition.
          // This will cause the request to fail if the object is created before the request runs.
          precondition = Storage.BlobWriteOption.doesNotExist();
        } else {
          // If the destination already exists in your bucket, instead set a generation-match
          // precondition. This will cause the request to fail if the existing object's generation
          // changes before the request runs.
          precondition =
              Storage.BlobWriteOption.generationMatch(
                  storage.get(bucketName, objectName).getGeneration());
        }
        storage.createFrom(blobInfo, Paths.get(filePath), precondition);
    
        System.out.println(
            "File " + filePath + " uploaded to bucket " + bucketName + " as " + objectName);
      }
    }

    下例示範如何同時上傳多個物件:

    import com.google.cloud.storage.transfermanager.ParallelUploadConfig;
    import com.google.cloud.storage.transfermanager.TransferManager;
    import com.google.cloud.storage.transfermanager.TransferManagerConfig;
    import com.google.cloud.storage.transfermanager.UploadResult;
    import java.io.IOException;
    import java.nio.file.Path;
    import java.util.List;
    
    class UploadMany {
    
      public static void uploadManyFiles(String bucketName, List<Path> files) throws IOException {
        TransferManager transferManager = TransferManagerConfig.newBuilder().build().getService();
        ParallelUploadConfig parallelUploadConfig =
            ParallelUploadConfig.newBuilder().setBucketName(bucketName).build();
        List<UploadResult> results =
            transferManager.uploadFiles(files, parallelUploadConfig).getUploadResults();
        for (UploadResult result : results) {
          System.out.println(
              "Upload for "
                  + result.getInput().getName()
                  + " completed with status "
                  + result.getStatus());
        }
      }
    }

    下列範例會同時上傳具有相同前置字串的所有物件:

    import com.google.cloud.storage.transfermanager.ParallelUploadConfig;
    import com.google.cloud.storage.transfermanager.TransferManager;
    import com.google.cloud.storage.transfermanager.TransferManagerConfig;
    import com.google.cloud.storage.transfermanager.UploadResult;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.stream.Stream;
    
    class UploadDirectory {
    
      public static void uploadDirectoryContents(String bucketName, Path sourceDirectory)
          throws IOException {
        TransferManager transferManager = TransferManagerConfig.newBuilder().build().getService();
        ParallelUploadConfig parallelUploadConfig =
            ParallelUploadConfig.newBuilder().setBucketName(bucketName).build();
    
        // Create a list to store the file paths
        List<Path> filePaths = new ArrayList<>();
        // Get all files in the directory
        // try-with-resource to ensure pathStream is closed
        try (Stream<Path> pathStream = Files.walk(sourceDirectory)) {
          pathStream.filter(Files::isRegularFile).forEach(filePaths::add);
        }
        List<UploadResult> results =
            transferManager.uploadFiles(filePaths, parallelUploadConfig).getUploadResults();
        for (UploadResult result : results) {
          System.out.println(
              "Upload for "
                  + result.getInput().getName()
                  + " completed with status "
                  + result.getStatus());
        }
      }
    }

    Node.js

    詳情請參閱 Cloud Storage Node.js API 參考說明文件

    如要驗證 Cloud Storage,請設定應用程式預設憑證。 詳情請參閱「設定用戶端程式庫的驗證機制」。

    下例示範如何上傳個別物件:

    /**
     * TODO(developer): Uncomment the following lines before running the sample.
     */
    // The ID of your GCS bucket
    // const bucketName = 'your-unique-bucket-name';
    
    // The path to your file to upload
    // const filePath = 'path/to/your/file';
    
    // The new ID for your GCS file
    // const destFileName = 'your-new-file-name';
    
    // Imports the Google Cloud client library
    const {Storage} = require('@google-cloud/storage');
    
    // Creates a client
    const storage = new Storage();
    
    async function uploadFile() {
      const options = {
        destination: destFileName,
        // Optional:
        // Set a generation-match precondition to avoid potential race conditions
        // and data corruptions. The request to upload is aborted if the object's
        // generation number does not match your precondition. For a destination
        // object that does not yet exist, set the ifGenerationMatch precondition to 0
        // If the destination object already exists in your bucket, set instead a
        // generation-match precondition using its generation number.
        preconditionOpts: {ifGenerationMatch: generationMatchPrecondition},
      };
    
      await storage.bucket(bucketName).upload(filePath, options);
      console.log(`${filePath} uploaded to ${bucketName}`);
    }
    
    uploadFile().catch(console.error);

    下例示範如何同時上傳多個物件:

    /**
     * TODO(developer): Uncomment the following lines before running the sample.
     */
    // The ID of your GCS bucket
    // const bucketName = 'your-unique-bucket-name';
    
    // The ID of the first GCS file to upload
    // const firstFilePath = 'your-first-file-name';
    
    // The ID of the second GCS file to upload
    // const secondFilePath = 'your-second-file-name';
    
    // Imports the Google Cloud client library
    const {Storage, TransferManager} = require('@google-cloud/storage');
    
    // Creates a client
    const storage = new Storage();
    
    // Creates a transfer manager client
    const transferManager = new TransferManager(storage.bucket(bucketName));
    
    async function uploadManyFilesWithTransferManager() {
      // Uploads the files
      await transferManager.uploadManyFiles([firstFilePath, secondFilePath]);
    
      for (const filePath of [firstFilePath, secondFilePath]) {
        console.log(`${filePath} uploaded to ${bucketName}.`);
      }
    }
    
    uploadManyFilesWithTransferManager().catch(console.error);

    下列範例會同時上傳具有相同前置字串的所有物件:

    /**
     * TODO(developer): Uncomment the following lines before running the sample.
     */
    // The ID of your GCS bucket
    // const bucketName = 'your-unique-bucket-name';
    
    // The local directory to upload
    // const directoryName = 'your-directory';
    
    // Imports the Google Cloud client library
    const {Storage, TransferManager} = require('@google-cloud/storage');
    
    // Creates a client
    const storage = new Storage();
    
    // Creates a transfer manager client
    const transferManager = new TransferManager(storage.bucket(bucketName));
    
    async function uploadDirectoryWithTransferManager() {
      // Uploads the directory
      await transferManager.uploadManyFiles(directoryName);
    
      console.log(`${directoryName} uploaded to ${bucketName}.`);
    }
    
    uploadDirectoryWithTransferManager().catch(console.error);

    PHP

    詳情請參閱 Cloud Storage PHP API 參考說明文件

    如要驗證 Cloud Storage,請設定應用程式預設憑證。 詳情請參閱「設定用戶端程式庫的驗證機制」。

    use Google\Cloud\Storage\StorageClient;
    
    /**
     * Upload a file.
     *
     * @param string $bucketName The name of your Cloud Storage bucket.
     *        (e.g. 'my-bucket')
     * @param string $objectName The name of your Cloud Storage object.
     *        (e.g. 'my-object')
     * @param string $source The path to the file to upload.
     *        (e.g. '/path/to/your/file')
     */
    function upload_object(string $bucketName, string $objectName, string $source): void
    {
        $storage = new StorageClient();
        if (!$file = fopen($source, 'r')) {
            throw new \InvalidArgumentException('Unable to open file for reading');
        }
        $bucket = $storage->bucket($bucketName);
        $object = $bucket->upload($file, [
            'name' => $objectName
        ]);
        printf('Uploaded %s to gs://%s/%s' . PHP_EOL, basename($source), $bucketName, $objectName);
    }

    Python

    詳情請參閱 Cloud Storage Python API 參考說明文件

    如要驗證 Cloud Storage,請設定應用程式預設憑證。 詳情請參閱「設定用戶端程式庫的驗證機制」。

    下例示範如何上傳個別物件:

    from google.cloud import storage
    
    
    def upload_blob(bucket_name, source_file_name, destination_blob_name):
        """Uploads a file to the bucket."""
        # The ID of your GCS bucket
        # bucket_name = "your-bucket-name"
        # The path to your file to upload
        # source_file_name = "local/path/to/file"
        # The ID of your GCS object
        # destination_blob_name = "storage-object-name"
    
        storage_client = storage.Client()
        bucket = storage_client.bucket(bucket_name)
        blob = bucket.blob(destination_blob_name)
    
        # Optional: set a generation-match precondition to avoid potential race conditions
        # and data corruptions. The request to upload is aborted if the object's
        # generation number does not match your precondition. For a destination
        # object that does not yet exist, set the if_generation_match precondition to 0.
        # If the destination object already exists in your bucket, set instead a
        # generation-match precondition using its generation number.
        generation_match_precondition = 0
    
        blob.upload_from_filename(source_file_name, if_generation_match=generation_match_precondition)
    
        print(
            f"File {source_file_name} uploaded to {destination_blob_name}."
        )
    
    

    下例示範如何同時上傳多個物件:

    def upload_many_blobs_with_transfer_manager(
        bucket_name, filenames, source_directory="", workers=8
    ):
        """Upload every file in a list to a bucket, concurrently in a process pool.
    
        Each blob name is derived from the filename, not including the
        `source_directory` parameter. For complete control of the blob name for each
        file (and other aspects of individual blob metadata), use
        transfer_manager.upload_many() instead.
        """
    
        # The ID of your GCS bucket
        # bucket_name = "your-bucket-name"
    
        # A list (or other iterable) of filenames to upload.
        # filenames = ["file_1.txt", "file_2.txt"]
    
        # The directory on your computer that is the root of all of the files in the
        # list of filenames. This string is prepended (with os.path.join()) to each
        # filename to get the full path to the file. Relative paths and absolute
        # paths are both accepted. This string is not included in the name of the
        # uploaded blob; it is only used to find the source files. An empty string
        # means "the current working directory". Note that this parameter allows
        # directory traversal (e.g. "/", "../") and is not intended for unsanitized
        # end user input.
        # source_directory=""
    
        # The maximum number of processes to use for the operation. The performance
        # impact of this value depends on the use case, but smaller files usually
        # benefit from a higher number of processes. Each additional process occupies
        # some CPU and memory resources until finished. Threads can be used instead
        # of processes by passing `worker_type=transfer_manager.THREAD`.
        # workers=8
    
        from google.cloud.storage import Client, transfer_manager
    
        storage_client = Client()
        bucket = storage_client.bucket(bucket_name)
    
        results = transfer_manager.upload_many_from_filenames(
            bucket, filenames, source_directory=source_directory, max_workers=workers
        )
    
        for name, result in zip(filenames, results):
            # The results list is either `None` or an exception for each filename in
            # the input list, in order.
    
            if isinstance(result, Exception):
                print("Failed to upload {} due to exception: {}".format(name, result))
            else:
                print("Uploaded {} to {}.".format(name, bucket.name))

    下列範例會同時上傳具有相同前置字串的所有物件:

    def upload_directory_with_transfer_manager(bucket_name, source_directory, workers=8):
        """Upload every file in a directory, including all files in subdirectories.
    
        Each blob name is derived from the filename, not including the `directory`
        parameter itself. For complete control of the blob name for each file (and
        other aspects of individual blob metadata), use
        transfer_manager.upload_many() instead.
        """
    
        # The ID of your GCS bucket
        # bucket_name = "your-bucket-name"
    
        # The directory on your computer to upload. Files in the directory and its
        # subdirectories will be uploaded. An empty string means "the current
        # working directory".
        # source_directory=""
    
        # The maximum number of processes to use for the operation. The performance
        # impact of this value depends on the use case, but smaller files usually
        # benefit from a higher number of processes. Each additional process occupies
        # some CPU and memory resources until finished. Threads can be used instead
        # of processes by passing `worker_type=transfer_manager.THREAD`.
        # workers=8
    
        from pathlib import Path
    
        from google.cloud.storage import Client, transfer_manager
    
        storage_client = Client()
        bucket = storage_client.bucket(bucket_name)
    
        # Generate a list of paths (in string form) relative to the `directory`.
        # This can be done in a single list comprehension, but is expanded into
        # multiple lines here for clarity.
    
        # First, recursively get all files in `directory` as Path objects.
        directory_as_path_obj = Path(source_directory)
        paths = directory_as_path_obj.rglob("*")
    
        # Filter so the list only includes files, not directories themselves.
        file_paths = [path for path in paths if path.is_file()]
    
        # These paths are relative to the current working directory. Next, make them
        # relative to `directory`
        relative_paths = [path.relative_to(source_directory) for path in file_paths]
    
        # Finally, convert them all to strings.
        string_paths = [str(path) for path in relative_paths]
    
        print("Found {} files.".format(len(string_paths)))
    
        # Start the upload.
        results = transfer_manager.upload_many_from_filenames(
            bucket, string_paths, source_directory=source_directory, max_workers=workers
        )
    
        for name, result in zip(string_paths, results):
            # The results list is either `None` or an exception for each filename in
            # the input list, in order.
    
            if isinstance(result, Exception):
                print("Failed to upload {} due to exception: {}".format(name, result))
            else:
                print("Uploaded {} to {}.".format(name, bucket.name))

    Ruby

    詳情請參閱 Cloud Storage Ruby API 參考說明文件

    如要驗證 Cloud Storage,請設定應用程式預設憑證。 詳情請參閱「設定用戶端程式庫的驗證機制」。

    def upload_file bucket_name:, local_file_path:, file_name: nil
      # The ID of your GCS bucket
      # bucket_name = "your-unique-bucket-name"
    
      # The path to your file to upload
      # local_file_path = "/local/path/to/file.txt"
    
      # The ID of your GCS object
      # file_name = "your-file-name"
    
      require "google/cloud/storage"
    
      storage = Google::Cloud::Storage.new
      bucket  = storage.bucket bucket_name, skip_lookup: true
    
      file = bucket.create_file local_file_path, file_name
    
      puts "Uploaded #{local_file_path} as #{file.name} in bucket #{bucket_name}"
    end

    Terraform

    # Upload a simple index.html page to the bucket
    resource "google_storage_bucket_object" "indexpage" {
      name         = "index.html"
      content      = "<html><body>Hello World!</body></html>"
      content_type = "text/html"
      bucket       = google_storage_bucket.static_website.id
    }
    
    # Upload a simple 404 / error page to the bucket
    resource "google_storage_bucket_object" "errorpage" {
      name         = "404.html"
      content      = "<html><body>404!</body></html>"
      content_type = "text/html"
      bucket       = google_storage_bucket.static_website.id
    }

    REST API

    JSON API

    1. 安裝並初始化 gcloud CLI,以便為 Authorization 標頭產生存取權杖。

    2. 使用 cURL 透過 POST 物件要求呼叫 JSON API。針對上傳至名為 my-static-assets 的 bucket 的檔案 index.html:

      curl -X POST --data-binary @index.html \
        -H "Content-Type: text/html" \
        -H "Authorization: Bearer $(gcloud auth print-access-token)" \
        "https://storage.googleapis.com/upload/storage/v1/b/my-static-assets/o?uploadType=media&name=index.html"

    XML API

    1. 安裝並初始化 gcloud CLI,以便為 Authorization 標頭產生存取權杖。

    2. 使用 cURL 透過 PUT 物件要求呼叫 XML API。針對上傳到名為 my-static-assets 的值區的 index.html 檔案:

      curl -X PUT --data-binary @index.html \
        -H "Authorization: Bearer $(gcloud auth print-access-token)" \
        -H "Content-Type: text/html" \
        "https://storage.googleapis.com/my-static-assets/index.html"

    共用檔案

    如要將 bucket 中的所有物件設為可供公開網際網路上的任何人讀取,請按照下列步驟操作:

    控制台

    1. 在 Google Cloud 控制台,前往「Cloud Storage bucket」頁面。

      前往「Buckets」(值區) 頁面

    2. 在值區清單中,找到您要設為公開的值區名稱,然後點選這個名稱。

    3. 選取靠近頁面上方的 [Permissions] (權限) 分頁標籤。

    4. 如果「公開存取權」窗格顯示「非公開」,請按一下標示為「移除禁止公開存取功能」的按鈕,然後在隨即顯示的對話方塊中點選「確認」

    5. 按一下「授予存取權」按鈕。

      系統會顯示「新增主體」對話方塊。

    6. 在「New principals」(新增主體) 欄位中輸入 allUsers

    7. 在「請選擇角色」下拉式選單中,選取「Cloud Storage」子選單,然後點選「Storage 物件檢視者」選項。

    8. 按一下 [儲存]

    9. 按一下「Allow public access」(允許公開存取)

    公開共用後,「public access」(公開存取權) 欄中會出現各個物件的連結圖示。點選這個圖示即可取得物件的網址。

    如要瞭解如何透過 Google Cloud 控制台取得 Cloud Storage 作業失敗的詳細錯誤資訊,請參閱「疑難排解」一文。

    指令列

    使用 buckets add-iam-policy-binding 指令:

    gcloud storage buckets add-iam-policy-binding gs://my-static-assets --member=allUsers --role=roles/storage.objectViewer
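
設定完成後,可以檢視值區的 IAM 政策,確認 allUsers 已取得 roles/storage.objectViewer 角色:

    gcloud storage buckets get-iam-policy gs://my-static-assets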

    用戶端程式庫

    C++

    詳情請參閱 Cloud Storage C++ API 參考說明文件

    如要驗證 Cloud Storage,請設定應用程式預設憑證。 詳情請參閱「設定用戶端程式庫的驗證機制」。

    namespace gcs = ::google::cloud::storage;
    using ::google::cloud::StatusOr;
    [](gcs::Client client, std::string const& bucket_name) {
      auto current_policy = client.GetNativeBucketIamPolicy(
          bucket_name, gcs::RequestedPolicyVersion(3));
      if (!current_policy) throw std::move(current_policy).status();
    
      current_policy->set_version(3);
      current_policy->bindings().emplace_back(
          gcs::NativeIamBinding("roles/storage.objectViewer", {"allUsers"}));
    
      auto updated =
          client.SetNativeBucketIamPolicy(bucket_name, *current_policy);
      if (!updated) throw std::move(updated).status();
    
      std::cout << "Policy successfully updated: " << *updated << "\n";
    }

    C#

    詳情請參閱 Cloud Storage C# API 參考說明文件

    如要驗證 Cloud Storage,請設定應用程式預設憑證。 詳情請參閱「設定用戶端程式庫的驗證機制」。

    
    using Google.Apis.Storage.v1.Data;
    using Google.Cloud.Storage.V1;
    using System;
    using System.Collections.Generic;
    
    public class MakeBucketPublicSample
    {
        public void MakeBucketPublic(string bucketName = "your-unique-bucket-name")
        {
            var storage = StorageClient.Create();
    
            Policy policy = storage.GetBucketIamPolicy(bucketName);
    
            policy.Bindings.Add(new Policy.BindingsData
            {
                Role = "roles/storage.objectViewer",
                Members = new List<string> { "allUsers" }
            });
    
            storage.SetBucketIamPolicy(bucketName, policy);
            Console.WriteLine(bucketName + " is now public ");
        }
    }

    Go

    詳情請參閱 Cloud Storage Go API 參考說明文件

    如要驗證 Cloud Storage,請設定應用程式預設憑證。 詳情請參閱「設定用戶端程式庫的驗證機制」。

    import (
    	"context"
    	"fmt"
    	"io"
    
    	"cloud.google.com/go/iam"
    	"cloud.google.com/go/iam/apiv1/iampb"
    	"cloud.google.com/go/storage"
    )
    
    // setBucketPublicIAM makes all objects in a bucket publicly readable.
    func setBucketPublicIAM(w io.Writer, bucketName string) error {
    	// bucketName := "bucket-name"
    	ctx := context.Background()
    	client, err := storage.NewClient(ctx)
    	if err != nil {
    		return fmt.Errorf("storage.NewClient: %w", err)
    	}
    	defer client.Close()
    
    	policy, err := client.Bucket(bucketName).IAM().V3().Policy(ctx)
    	if err != nil {
    		return fmt.Errorf("Bucket(%q).IAM().V3().Policy: %w", bucketName, err)
    	}
    	role := "roles/storage.objectViewer"
    	policy.Bindings = append(policy.Bindings, &iampb.Binding{
    		Role:    role,
    		Members: []string{iam.AllUsers},
    	})
    	if err := client.Bucket(bucketName).IAM().V3().SetPolicy(ctx, policy); err != nil {
    		return fmt.Errorf("Bucket(%q).IAM().SetPolicy: %w", bucketName, err)
    	}
    	fmt.Fprintf(w, "Bucket %v is now publicly readable\n", bucketName)
    	return nil
    }
    

    Java

    詳情請參閱 Cloud Storage Java API 參考說明文件

    如要驗證 Cloud Storage,請設定應用程式預設憑證。 詳情請參閱「設定用戶端程式庫的驗證機制」。

    import com.google.cloud.Identity;
    import com.google.cloud.Policy;
    import com.google.cloud.storage.Storage;
    import com.google.cloud.storage.StorageOptions;
    import com.google.cloud.storage.StorageRoles;
    
    public class MakeBucketPublic {
      public static void makeBucketPublic(String projectId, String bucketName) {
        // The ID of your GCP project
        // String projectId = "your-project-id";
    
        // The ID of your GCS bucket
        // String bucketName = "your-unique-bucket-name";
    
        Storage storage = StorageOptions.newBuilder().setProjectId(projectId).build().getService();
        Policy originalPolicy = storage.getIamPolicy(bucketName);
        storage.setIamPolicy(
            bucketName,
            originalPolicy.toBuilder()
                .addIdentity(StorageRoles.objectViewer(), Identity.allUsers()) // All users can view
                .build());
    
        System.out.println("Bucket " + bucketName + " is now publicly readable");
      }
    }

    Node.js

    詳情請參閱 Cloud Storage Node.js API 參考說明文件

    如要驗證 Cloud Storage,請設定應用程式預設憑證。 詳情請參閱「設定用戶端程式庫的驗證機制」。

    /**
     * TODO(developer): Uncomment the following lines before running the sample.
     */
    // The ID of your GCS bucket
    // const bucketName = 'your-unique-bucket-name';
    
    // Imports the Google Cloud client library
    const {Storage} = require('@google-cloud/storage');
    
    // Creates a client
    const storage = new Storage();
    
    async function makeBucketPublic() {
      await storage.bucket(bucketName).makePublic();
    
      console.log(`Bucket ${bucketName} is now publicly readable`);
    }
    
    makeBucketPublic().catch(console.error);

    PHP

    詳情請參閱 Cloud Storage PHP API 參考說明文件

    如要驗證 Cloud Storage,請設定應用程式預設憑證。 詳情請參閱「設定用戶端程式庫的驗證機制」。

    use Google\Cloud\Storage\StorageClient;
    
    /**
     * Update the specified bucket's IAM configuration to make it publicly accessible.
     *
     * @param string $bucketName The name of your Cloud Storage bucket.
     *        (e.g. 'my-bucket')
     */
    function set_bucket_public_iam(string $bucketName): void
    {
        $storage = new StorageClient();
        $bucket = $storage->bucket($bucketName);
    
        $policy = $bucket->iam()->policy(['requestedPolicyVersion' => 3]);
        $policy['version'] = 3;
    
        $role = 'roles/storage.objectViewer';
        $members = ['allUsers'];
    
        $policy['bindings'][] = [
            'role' => $role,
            'members' => $members
        ];
    
        $bucket->iam()->setPolicy($policy);
    
        printf('Bucket %s is now public', $bucketName);
    }

    Python

    詳情請參閱 Cloud Storage Python API 參考說明文件

    如要驗證 Cloud Storage,請設定應用程式預設憑證。 詳情請參閱「設定用戶端程式庫的驗證機制」。

    from typing import List
    
    from google.cloud import storage
    
    
    def set_bucket_public_iam(
        bucket_name: str = "your-bucket-name",
        members: List[str] = ["allUsers"],
    ):
        """Set a public IAM Policy to bucket"""
        # bucket_name = "your-bucket-name"
    
        storage_client = storage.Client()
        bucket = storage_client.bucket(bucket_name)
    
        policy = bucket.get_iam_policy(requested_policy_version=3)
        policy.bindings.append(
            {"role": "roles/storage.objectViewer", "members": members}
        )
    
        bucket.set_iam_policy(policy)
    
        print(f"Bucket {bucket.name} is now publicly readable")
    
    

    Ruby

    詳情請參閱 Cloud Storage Ruby API 參考說明文件

    如要驗證 Cloud Storage,請設定應用程式預設憑證。 詳情請參閱「設定用戶端程式庫的驗證機制」。

    def set_bucket_public_iam bucket_name:
      # The ID of your GCS bucket
      # bucket_name = "your-unique-bucket-name"
    
      require "google/cloud/storage"
    
      storage = Google::Cloud::Storage.new
      bucket = storage.bucket bucket_name
    
      bucket.policy do |p|
        p.add "roles/storage.objectViewer", "allUsers"
      end
    
      puts "Bucket #{bucket_name} is now publicly readable"
    end

    Terraform

    # Make bucket public by granting allUsers storage.objectViewer access
    resource "google_storage_bucket_iam_member" "public_rule" {
      bucket = google_storage_bucket.static_website.name
      role   = "roles/storage.objectViewer"
      member = "allUsers"
    }

    REST API

    JSON API

    1. 安裝並初始化 gcloud CLI,以便為 Authorization 標頭產生存取權杖。

    2. 建立包含下列資訊的 JSON 檔案:

      {
        "bindings":[
          {
            "role": "roles/storage.objectViewer",
            "members":["allUsers"]
          }
        ]
      }
    3. 使用 cURL 透過 PUT Bucket 要求呼叫 JSON API

      curl -X PUT --data-binary @JSON_FILE_NAME \
        -H "Authorization: Bearer $(gcloud auth print-access-token)" \
        -H "Content-Type: application/json" \
        "https://storage.googleapis.com/storage/v1/b/BUCKET_NAME/iam"

      其中:

      • JSON_FILE_NAME 是您在步驟 2 建立的 JSON 檔案路徑。
      • BUCKET_NAME 是要將物件設為公開的值區名稱。例如:my-static-assets

    XML API

    XML API 不支援一次將值區中的所有物件設為可公開讀取。請改用 Google Cloud 控制台或 gcloud storage,或為每個物件個別設定 ACL。請注意,如要為每個物件設定 ACL,必須將值區的「存取權控管」模式切換為「精細」。

    如有需要,您也可以只將值區中的部分內容設為可公開存取。

    訪客要求非公開或不存在檔案的網址時,會收到 HTTP 403 回應碼。請參閱下一節,瞭解如何新增使用 HTTP 404 回應碼的錯誤網頁。

    建議:指派專用頁面

    您可以指派索引頁面的尾碼和自訂錯誤網頁,這類網頁稱為專用頁面。雖然不一定要指派,但如果您未指派索引頁面尾碼並上傳對應的索引頁面,使用者存取頂層網站時,系統會提供 XML 文件樹狀結構,其中包含值區中的公開物件清單。

    如要進一步瞭解專用頁面的行為,請參閱「專用頁面」。

    控制台

    1. 在 Google Cloud 控制台,前往「Cloud Storage bucket」頁面。

      前往「Buckets」(值區) 頁面

    2. 在值區清單中,找到您建立的值區。

    3. 按一下與值區相關聯的「Bucket overflow」(值區溢位) 選單,然後選取「Edit website configuration」(編輯網站設定)

    4. 在網站設定對話方塊中,指定主網頁和錯誤頁面。

    5. 按一下 [儲存]

    如要瞭解如何透過 Google Cloud 控制台取得 Cloud Storage 作業失敗的詳細錯誤資訊,請參閱「疑難排解」一文。

    指令列

    使用 buckets update 指令,並加上 --web-main-page-suffix 和 --web-error-page 旗標。

    在下列範例中,會將 MainPageSuffix 設定為 index.html,而 NotFoundPage 設定為 404.html

    gcloud storage buckets update gs://my-static-assets --web-main-page-suffix=index.html --web-error-page=404.html

    如果執行成功,指令會傳回下列內容:

    Updating gs://my-static-assets/...
      Completed 1
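
如要確認專用頁面設定已套用,可以檢視值區的中繼資料,並查看其中的網站設定相關欄位 (實際欄位名稱以指令輸出為準):

    gcloud storage buckets describe gs://my-static-assets --format=json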

    用戶端程式庫

    C++

    詳情請參閱 Cloud Storage C++ API 參考說明文件

    如要驗證 Cloud Storage,請設定應用程式預設憑證。 詳情請參閱「設定用戶端程式庫的驗證機制」。

    namespace gcs = ::google::cloud::storage;
    using ::google::cloud::StatusOr;
    [](gcs::Client client, std::string const& bucket_name,
       std::string const& main_page_suffix, std::string const& not_found_page) {
      StatusOr<gcs::BucketMetadata> original =
          client.GetBucketMetadata(bucket_name);
    
      if (!original) throw std::move(original).status();
      StatusOr<gcs::BucketMetadata> patched = client.PatchBucket(
          bucket_name,
          gcs::BucketMetadataPatchBuilder().SetWebsite(
              gcs::BucketWebsite{main_page_suffix, not_found_page}),
          gcs::IfMetagenerationMatch(original->metageneration()));
      if (!patched) throw std::move(patched).status();
    
      if (!patched->has_website()) {
        std::cout << "Static website configuration is not set for bucket "
                  << patched->name() << "\n";
        return;
      }
    
      std::cout << "Static website configuration successfully set for bucket "
                << patched->name() << "\nNew main page suffix is: "
                << patched->website().main_page_suffix
                << "\nNew not found page is: "
                << patched->website().not_found_page << "\n";
    }

    C#

    詳情請參閱 Cloud Storage C# API 參考說明文件

    如要驗證 Cloud Storage,請設定應用程式預設憑證。 詳情請參閱「設定用戶端程式庫的驗證機制」。

    
    using Google.Apis.Storage.v1.Data;
    using Google.Cloud.Storage.V1;
    using System;
    
    public class BucketWebsiteConfigurationSample
    {
        public Bucket BucketWebsiteConfiguration(
            string bucketName = "your-bucket-name",
            string mainPageSuffix = "index.html",
            string notFoundPage = "404.html")
        {
            var storage = StorageClient.Create();
            var bucket = storage.GetBucket(bucketName);
    
            if (bucket.Website == null)
            {
                bucket.Website = new Bucket.WebsiteData();
            }
            bucket.Website.MainPageSuffix = mainPageSuffix;
            bucket.Website.NotFoundPage = notFoundPage;
    
            bucket = storage.UpdateBucket(bucket);
            Console.WriteLine($"Static website bucket {bucketName} is set up to use {mainPageSuffix} as the index page and {notFoundPage} as the 404 not found page.");
            return bucket;
        }
    }

    Go

    詳情請參閱 Cloud Storage Go API 參考說明文件

    如要驗證 Cloud Storage,請設定應用程式預設憑證。 詳情請參閱「設定用戶端程式庫的驗證機制」。

    import (
    	"context"
    	"fmt"
    	"io"
    	"time"
    
    	"cloud.google.com/go/storage"
    )
    
    // setBucketWebsiteInfo sets website configuration on a bucket.
    func setBucketWebsiteInfo(w io.Writer, bucketName, indexPage, notFoundPage string) error {
    	// bucketName := "www.example.com"
    	// indexPage := "index.html"
    	// notFoundPage := "404.html"
    	ctx := context.Background()
    	client, err := storage.NewClient(ctx)
    	if err != nil {
    		return fmt.Errorf("storage.NewClient: %w", err)
    	}
    	defer client.Close()
    
    	ctx, cancel := context.WithTimeout(ctx, time.Second*10)
    	defer cancel()
    
    	bucket := client.Bucket(bucketName)
    	bucketAttrsToUpdate := storage.BucketAttrsToUpdate{
    		Website: &storage.BucketWebsite{
    			MainPageSuffix: indexPage,
    			NotFoundPage:   notFoundPage,
    		},
    	}
    	if _, err := bucket.Update(ctx, bucketAttrsToUpdate); err != nil {
    		return fmt.Errorf("Bucket(%q).Update: %w", bucketName, err)
    	}
    	fmt.Fprintf(w, "Static website bucket %v is set up to use %v as the index page and %v as the 404 page\n", bucketName, indexPage, notFoundPage)
    	return nil
    }
    

    Java

    詳情請參閱 Cloud Storage Java API 參考說明文件

    如要驗證 Cloud Storage,請設定應用程式預設憑證。 詳情請參閱「設定用戶端程式庫的驗證機制」。

    import com.google.cloud.storage.Bucket;
    import com.google.cloud.storage.Storage;
    import com.google.cloud.storage.StorageOptions;
    
    public class SetBucketWebsiteInfo {
      public static void setBucketWesbiteInfo(
          String projectId, String bucketName, String indexPage, String notFoundPage) {
        // The ID of your GCP project
        // String projectId = "your-project-id";
    
        // The ID of your static website bucket
        // String bucketName = "www.example.com";
    
        // The index page for a static website bucket
        // String indexPage = "index.html";
    
        // The 404 page for a static website bucket
        // String notFoundPage = "404.html";
    
        Storage storage = StorageOptions.newBuilder().setProjectId(projectId).build().getService();
        Bucket bucket = storage.get(bucketName);
        bucket.toBuilder().setIndexPage(indexPage).setNotFoundPage(notFoundPage).build().update();
    
        System.out.println(
            "Static website bucket "
                + bucketName
                + " is set up to use "
                + indexPage
                + " as the index page and "
                + notFoundPage
                + " as the 404 page");
      }
    }

    Node.js

    詳情請參閱 Cloud Storage Node.js API 參考說明文件

    如要驗證 Cloud Storage,請設定應用程式預設憑證。 詳情請參閱「設定用戶端程式庫的驗證機制」。

    /**
     * TODO(developer): Uncomment the following lines before running the sample.
     */
    // The ID of your GCS bucket
    // const bucketName = 'your-unique-bucket-name';
    
    // The name of the main page
    // const mainPageSuffix = 'http://example.com';
    
    // The Name of a 404 page
    // const notFoundPage = 'http://example.com/404.html';
    
    // Imports the Google Cloud client library
    const {Storage} = require('@google-cloud/storage');
    
    // Creates a client
    const storage = new Storage();
    
    async function addBucketWebsiteConfiguration() {
      await storage.bucket(bucketName).setMetadata({
        website: {
          mainPageSuffix,
          notFoundPage,
        },
      });
    
      console.log(
        `Static website bucket ${bucketName} is set up to use ${mainPageSuffix} as the index page and ${notFoundPage} as the 404 page`
      );
    }
    
    addBucketWebsiteConfiguration().catch(console.error);

    PHP

    詳情請參閱 Cloud Storage PHP API 參考說明文件

    如要驗證 Cloud Storage,請設定應用程式預設憑證。 詳情請參閱「設定用戶端程式庫的驗證機制」。

    use Google\Cloud\Storage\StorageClient;
    
    /**
     * Update the given bucket's website configuration.
     *
     * @param string $bucketName The name of your Cloud Storage bucket.
     *        (e.g. 'my-bucket')
     * @param string $indexPageObject the name of an object in the bucket to use as
     *        (e.g. 'index.html')
     *     an index page for a static website bucket.
     * @param string $notFoundPageObject the name of an object in the bucket to use
     *        (e.g. '404.html')
     *     as the 404 Not Found page.
     */
    function define_bucket_website_configuration(string $bucketName, string $indexPageObject, string $notFoundPageObject): void
    {
        $storage = new StorageClient();
        $bucket = $storage->bucket($bucketName);
    
        $bucket->update([
            'website' => [
                'mainPageSuffix' => $indexPageObject,
                'notFoundPage' => $notFoundPageObject
            ]
        ]);
    
        printf(
            'Static website bucket %s is set up to use %s as the index page and %s as the 404 page.',
            $bucketName,
            $indexPageObject,
            $notFoundPageObject
        );
    }

    Python

    詳情請參閱 Cloud Storage Python API 參考說明文件

    如要驗證 Cloud Storage,請設定應用程式預設憑證。 詳情請參閱「設定用戶端程式庫的驗證機制」。

    from google.cloud import storage
    
    
    def define_bucket_website_configuration(bucket_name, main_page_suffix, not_found_page):
        """Configure website-related properties of bucket"""
        # bucket_name = "your-bucket-name"
        # main_page_suffix = "index.html"
        # not_found_page = "404.html"
    
        storage_client = storage.Client()
    
        bucket = storage_client.get_bucket(bucket_name)
        bucket.configure_website(main_page_suffix, not_found_page)
        bucket.patch()
    
        print(
            "Static website bucket {} is set up to use {} as the index page and {} as the 404 page".format(
                bucket.name, main_page_suffix, not_found_page
            )
        )
        return bucket
    
    

    Ruby

    詳情請參閱 Cloud Storage Ruby API 參考說明文件

    如要驗證 Cloud Storage,請設定應用程式預設憑證。 詳情請參閱「設定用戶端程式庫的驗證機制」。

    def define_bucket_website_configuration bucket_name:, main_page_suffix:, not_found_page:
      # The ID of your static website bucket
      # bucket_name = "www.example.com"
    
      # The index page for a static website bucket
      # main_page_suffix = "index.html"
    
      # The 404 page for a static website bucket
      # not_found_page = "404.html"
    
      require "google/cloud/storage"
    
      storage = Google::Cloud::Storage.new
      bucket = storage.bucket bucket_name
    
      bucket.update do |b|
        b.website_main = main_page_suffix
        b.website_404 = not_found_page
      end
    
      puts "Static website bucket #{bucket_name} is set up to use #{main_page_suffix} as the index page and " \
           "#{not_found_page} as the 404 page"
    end

    REST API

    JSON API

    1. 安裝並初始化 gcloud CLI,以便為 Authorization 標頭產生存取權杖。

    2. 建立 JSON 檔案,將 website 物件中的 mainPageSuffix 和 notFoundPage 屬性設為需要的網頁。

      在下列範例中,會將 mainPageSuffix 設定為 index.html,而 notFoundPage 設定為 404.html

      {
        "website":{
          "mainPageSuffix": "index.html",
          "notFoundPage": "404.html"
        }
      }
    3. 使用 cURL 透過 PATCH 值區要求呼叫 JSON API。如為 bucket my-static-assets

      curl -X PATCH --data-binary @web-config.json \
        -H "Authorization: Bearer $(gcloud auth print-access-token)" \
        -H "Content-Type: application/json" \
        "https://storage.googleapis.com/storage/v1/b/my-static-assets"

    XML API

    1. 安裝並初始化 gcloud CLI,以便為 Authorization 標頭產生存取權杖。

    2. 建立 XML 檔案,將 WebsiteConfiguration 元素中的 MainPageSuffix 和 NotFoundPage 元素設為需要的網頁。

      在下列範例中,會將 MainPageSuffix 設定為 index.html,而 NotFoundPage 設定為 404.html

      <WebsiteConfiguration>
        <MainPageSuffix>index.html</MainPageSuffix>
        <NotFoundPage>404.html</NotFoundPage>
      </WebsiteConfiguration>
    3. 使用 cURL 透過 PUT 值區要求和 websiteConfig 查詢字串參數呼叫 XML API。針對 my-static-assets

      curl -X PUT --data-binary @web-config.xml \
        -H "Authorization: Bearer $(gcloud auth print-access-token)" \
        https://storage.googleapis.com/my-static-assets?websiteConfig

    設定負載平衡器和 SSL 憑證

    Cloud Storage 本身不支援透過 HTTPS 使用自訂網域,因此您還需要設定附加至 HTTPS 負載平衡器的 SSL 憑證,才能透過 HTTPS 提供網站服務。本節說明如何將值區新增至負載平衡器的後端,以及如何將新的 Google 代管 SSL 憑證新增至負載平衡器的前端。

    開始設定

    1. 前往 Google Cloud 控制台的「Load balancing」(負載平衡)頁面。

      前往「Load balancing」(負載平衡) 頁面

    2. 點選「建立負載平衡器」
    3. 在「Type of load balancer」(負載平衡器類型) 部分,選取「Application Load Balancer (HTTP/HTTPS)」(應用程式負載平衡器 (HTTP/HTTPS)),然後點選「Next」(下一步)
    4. 按一下 [設定]

    系統會顯示負載平衡器的設定視窗。

    基本設定

    繼續設定之前,請輸入負載平衡器名稱,例如 example-lb

    設定前端

    本節說明如何設定 HTTPS 通訊協定及建立 SSL 憑證。您也可以選取現有憑證,或上傳自行管理的 SSL 憑證。如果偏好使用指令列,本節結尾另附對應的 gcloud 指令示意。

    1. 按一下「前端設定」
    2. (選用) 為前端設定命名
    3. 針對「Protocol」(通訊協定),選取「HTTPS (includes HTTP/2)」(HTTPS (含 HTTP/2))
    4. 在「IP version」(IP 版本) 部分,選取「IPv4」。如要使用 IPv6,請參閱「IPv6 終止功能」一文。
    5. 「IP 位址」欄位:

      • 在下拉式選單中,按一下「建立 IP 位址」
      • 在「Reserve a new static IP address」(保留新的靜態 IP 位址) 對話方塊中,輸入 IP 位址的「Name」(名稱),例如 example-ip
      • 按一下「保留」
    6. 在「Port」(通訊埠) 欄中,選取「443」

    7. 在「Certificate」(憑證) 欄位的下拉式選單中,選取「Create a new certificate」(建立新憑證)。 面板中會顯示憑證建立表單。設定下列項目:

      • 為憑證指定「名稱」,例如 example-ssl
      • 在「建立模式」中,選取「建立 Google 代管的憑證」
      • 在「Domains」部分輸入您的網站名稱,例如 www.example.com。如要透過其他網域 (例如根網域 example.com) 放送內容,請按 Enter 鍵,在其他行新增網域。每個憑證最多可包含 100 個網域。
    8. 點選「建立」

    9. (選用) 如要自動設定部分 HTTP 負載平衡器以重新導向 HTTP 流量,請勾選「Enable HTTP to HTTPS redirect」(啟用從 HTTP 重新導向至 HTTPS 的功能) 旁的核取方塊。

    10. 按一下 [完成]
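
    如果您偏好以 gcloud 指令列建立前端所需的資源,以下為最簡示意 (名稱沿用本教學課程的 example-ip 與 example-ssl,網域請替換為您自己的網域):

      # 保留全域靜態外部 IP 位址
      gcloud compute addresses create example-ip --ip-version=IPV4 --global

      # 建立 Google 代管的 SSL 憑證,涵蓋要放送的網域
      gcloud compute ssl-certificates create example-ssl \
        --domains=www.example.com,example.com --global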

    設定後端

    1. 按一下「後端設定」
    2. 在「後端服務和後端值區」下拉式選單中,按一下「建立後端值區」
    3. 選擇「Backend bucket name」(後端值區名稱),例如 example-bucket。您選擇的名稱可以與先前建立的值區名稱不同。
    4. 按一下「Cloud Storage bucket」(Cloud Storage 值區) 欄位中的「Browse」(瀏覽)
    5. 選取先前建立的 my-static-assets bucket,然後按一下「Select」(選取)
    6. (選用) 如要使用 Cloud CDN,請勾選「啟用 Cloud CDN」核取方塊,並視需要設定 Cloud CDN。請注意,Cloud CDN 可能會產生額外費用
    7. 點選「建立」
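
    後端值區也可以改用 gcloud 建立;以下為示意,假設後端值區名稱為 example-bucket,並指向先前建立的 my-static-assets 值區 (--enable-cdn 為選用):

      gcloud compute backend-buckets create example-bucket \
        --gcs-bucket-name=my-static-assets \
        --enable-cdn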

    設定轉送規則

    轉送規則是外部應用程式負載平衡器的網址對應元件。在本教學課程中,您應略過負載平衡器設定的這部分,因為系統會自動設定為使用您剛才設定的後端。

    檢閱設定

    1. 按一下「檢查並完成」
    2. 檢查「Frontend」(前端)、「Routing rules」(轉送規則) 和「Backend」(後端)
    3. 點選「建立」

    您可能需要稍候幾分鐘,等待系統建立負載平衡器。
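
    供參考,若改以 gcloud 指令列完成其餘的負載平衡器元件,大致流程如下 (example-https-proxy 與 example-forwarding-rule 為此處假設的資源名稱,其餘沿用前文的 example-lb、example-ssl、example-bucket 與 example-ip):

      # 建立網址對應,並將後端值區設為預設後端
      gcloud compute url-maps create example-lb --default-backend-bucket=example-bucket

      # 建立目標 HTTPS Proxy,連結網址對應與 SSL 憑證
      gcloud compute target-https-proxies create example-https-proxy \
        --url-map=example-lb --ssl-certificates=example-ssl

      # 建立全域轉送規則,將保留的 IP 位址與通訊埠 443 導向 Proxy
      gcloud compute forwarding-rules create example-forwarding-rule \
        --address=example-ip --global \
        --target-https-proxy=example-https-proxy --ports=443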

    將網域連結至負載平衡器

    建立負載平衡器後,請按一下負載平衡器的名稱:example-lb。請記下與負載平衡器相關聯的 IP 位址,例如 30.90.80.100。如要將網域指向負載平衡器,請使用網域註冊服務建立 A 記錄。如果 SSL 憑證中新增了多個網域,請為每個網域新增 A 記錄,並全部指向負載平衡器的 IP 位址。舉例來說,如要為 www.example.com 和 example.com 建立 A 記錄:

    NAME                  TYPE     DATA
    www                   A        30.90.80.100
    @                     A        30.90.80.100

    如要進一步瞭解如何將網域連線至負載平衡器,請參閱「排解網域狀態問題」。
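
    如果您的網域是由 Cloud DNS 代管,也可以改用 gcloud 建立上述 A 記錄;以下示意假設代管區域名稱為 example-zone,IP 位址沿用 30.90.80.100:

      gcloud dns record-sets create www.example.com. \
        --zone=example-zone --type=A --ttl=300 --rrdatas=30.90.80.100

      gcloud dns record-sets create example.com. \
        --zone=example-zone --type=A --ttl=300 --rrdatas=30.90.80.100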

    Google Cloud 佈建憑證並開始透過負載平衡器提供網站服務,最多可能需要 60 到 90 分鐘。如要監控憑證狀態,請按照下列步驟操作:

    控制台

    1. 前往 Google Cloud 控制台的「Load balancing」(負載平衡) 頁面。
      前往「Load balancing」(負載平衡) 頁面
    2. 按一下負載平衡器的名稱:example-lb
    3. 按一下與負載平衡器相關聯的 SSL 憑證名稱:example-ssl
    4. 「狀態」和「網域狀態」列會顯示憑證狀態。兩者都必須有效,憑證才能用於您的網站。

    指令列

    1. 如要檢查憑證狀態,請執行下列指令:

      gcloud compute ssl-certificates describe CERTIFICATE_NAME \
        --global \
        --format="get(name,managed.status)"
      
    2. 如要檢查網域狀態,請執行下列指令:

      gcloud compute ssl-certificates describe CERTIFICATE_NAME \
        --global \
        --format="get(managed.domainStatus)"
      

    如要進一步瞭解憑證狀態,請參閱「排解安全資料傳輸層 (SSL) 憑證相關問題」。

    測試網站

    SSL 憑證生效後,請前往 https://www.example.com/test.html,確認系統會從您設為後端的 bucket 提供內容,其中 test.html 是儲存在該 bucket 中的物件。如果已設定 MainPageSuffix 屬性,前往 https://www.example.com 時會顯示 index.html。
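
    您也可以使用 curl 快速檢查回應狀態;以下示意假設 test.html 已上傳且憑證已生效,成功時應會看到 HTTP 200 回應:

      curl -I https://www.example.com/test.html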

    清除所用資源

    完成本教學課程後,您可以清除所建立的資源,這樣資源就不會占用配額,您日後也無須為其付費。下列各節將說明如何刪除或關閉這些資源。

    刪除專案

    如要避免付費,最簡單的方法就是刪除您為了本教學課程所建立的專案。

    如要刪除專案:

    1. In the Google Cloud console, go to the Manage resources page.

      Go to Manage resources

    2. In the project list, select the project that you want to delete, and then click Delete.
    3. In the dialog, type the project ID, and then click Shut down to delete the project.
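
    您也可以改用 gcloud 刪除專案,其中 PROJECT_ID 請替換為您的專案 ID:

      gcloud projects delete PROJECT_ID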

    刪除負載平衡器和 bucket

    如果不想刪除整個專案,請刪除您為本教學課程建立的負載平衡器和值區:

    1. 前往 Google Cloud 控制台的「Load balancing」(負載平衡) 頁面。
      前往「Load balancing」(負載平衡) 頁面
    2. 勾選 example-lb 旁邊的核取方塊。
    3. 點選「刪除」。
    4. (選用) 選取要連同負載平衡器一併刪除的資源旁的核取方塊,例如 my-static-assets 值區或 example-ssl SSL 憑證。
    5. 按一下「刪除負載平衡器」或「刪除負載平衡器和所選資源」
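
    若偏好使用指令列,可以依下列順序刪除對應資源 (名稱沿用前文的假設,請依您實際建立的資源調整):

      gcloud compute forwarding-rules delete example-forwarding-rule --global
      gcloud compute target-https-proxies delete example-https-proxy
      gcloud compute url-maps delete example-lb
      gcloud compute backend-buckets delete example-bucket
      gcloud compute ssl-certificates delete example-ssl --global
      # 刪除值區及其中的所有物件
      gcloud storage rm --recursive gs://my-static-assets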

    釋出保留的 IP 位址

    如要刪除您在本教學課程中使用的保留 IP 位址,請按照下列步驟操作:

    1. 前往 Google Cloud 控制台的「External IP addresses」(外部 IP 位址) 頁面。

      前往「External IP addresses」(外部 IP 位址)

    2. 勾選 example-ip 旁邊的核取方塊。

    3. 按一下「Release static address」(釋放靜態位址)

    4. 按一下確認視窗中的 [Delete]
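
    對應的 gcloud 指令如下,其中 example-ip 為先前保留的位址名稱:

      gcloud compute addresses delete example-ip --global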

    後續步驟

    歡迎試用

    如果您未曾使用過 Google Cloud,歡迎建立帳戶,親自體驗實際使用 Cloud Storage 的成效。新客戶可以獲得價值 $300 美元的免費抵免額,可用於執行、測試及部署工作負載。

    免費試用 Cloud Storage