Host a static website using HTTP


This tutorial describes how to configure a Cloud Storage bucket to host a static website for a domain you own. Static web pages can contain client-side technologies such as HTML, CSS, and JavaScript. They cannot contain dynamic content such as server-side scripts like PHP.

This tutorial shows how to serve content over HTTP. For a tutorial that serves content over HTTPS, see Host a static website.

For examples and tips on static web pages, including how to host static assets for a dynamic website, see the Static website page.

Objectives

In this tutorial, you will:

  • Point your domain to Cloud Storage by using a CNAME record.
  • Create a bucket that is linked to your domain.
  • Upload and share your site's files.
  • Test the website.

Costs

This tutorial uses the following billable component of Google Cloud:

  • Cloud Storage

For details on the charges that can be incurred when hosting a static website, see the tip on monitoring your charges. For details on Cloud Storage costs, see the Pricing page.

Before you begin

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Make sure that billing is enabled for your Google Cloud project.

  4. Have a domain that you own or manage. If you don't have an existing domain, there are many services through which you can register a new domain, such as Cloud Domains.

    This tutorial uses the domain example.com.

  5. Verify that you own or manage the domain that you will be using. Make sure you are verifying the top-level domain, such as example.com, and not a subdomain, such as www.example.com.

    Note: If you own the domain you are associating with a bucket, you might have already performed this step in the past. If you purchased your domain through Cloud Domains, verification is automatic.

Connecting your domain to Cloud Storage

To connect your domain to Cloud Storage, create a CNAME record through your domain registration service. A CNAME record is a type of DNS record. It directs traffic that requests a URL from your domain to the resources you want to serve, in this case objects in your Cloud Storage bucket. For www.example.com, the CNAME record might contain the following information:

NAME                  TYPE     DATA
www                   CNAME    c.storage.googleapis.com.

For more information about CNAME redirects, see URI for CNAME aliasing.

To connect your domain to Cloud Storage:

  1. Create a CNAME record that points to c.storage.googleapis.com.

    The service you use to register your domain should have a way for you to administer your domain, including adding a CNAME record. For example, if you use Cloud DNS, instructions for adding resource records can be found on the Add, modify, and delete records page; a command-line sketch is shown after this step.
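If your domain's DNS zone is managed with Cloud DNS, the same record can be added with the gcloud CLI. This is a minimal sketch, not part of the original steps; it assumes an existing managed zone, and EXAMPLE_ZONE is a placeholder for that zone's name:

gcloud dns record-sets create www.example.com. \
  --zone=EXAMPLE_ZONE \
  --type=CNAME \
  --ttl=300 \
  --rrdatas=c.storage.googleapis.com.

After the record propagates, you can confirm that it resolves as expected with a DNS lookup tool, for example dig www.example.com CNAME.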

Creating a bucket

Create a bucket whose name matches the CNAME you created for your domain.

For example, if you added a CNAME record pointing the www subdomain of example.com to c.storage.googleapis.com., then the Google Cloud CLI command to create a bucket named www.example.com looks similar to the following:

gcloud storage buckets create gs://www.example.com --location=US

For complete instructions on creating buckets with different tools, see Create buckets.

Uploading your site's files

To add to your bucket the files you want your website to serve:

Console

  1. In the Google Cloud console, go to the Cloud Storage Buckets page.

    Go to Buckets

  2. In the list of buckets, click the name of the bucket that you created.

  3. In the Objects tab, click the Upload files button.

  4. In the file dialog, browse to the desired file and select it.

After the upload completes, you should see the file name along with file information displayed in the bucket.

Command line

Use the gcloud storage cp command to copy files to your bucket. For example, to copy the file index.html from its current location Desktop, use the following command:

gcloud storage cp Desktop/index.html gs://www.example.com

If successful, the response resembles the following example:

Completed files 1/1 | 164.3kiB/164.3kiB
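A typical static site contains more than one file. As a sketch that is not part of the original steps, the --recursive flag of gcloud storage cp uploads a whole local directory at once; ./site here is a hypothetical folder holding your website files:

gcloud storage cp --recursive ./site/* gs://www.example.com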

Client libraries

C++

For more information, see the Cloud Storage C++ API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

namespace gcs = ::google::cloud::storage;
using ::google::cloud::StatusOr;
[](gcs::Client client, std::string const& file_name,
   std::string const& bucket_name, std::string const& object_name) {
  // Note that the client library automatically computes a hash on the
  // client-side to verify data integrity during transmission.
  StatusOr<gcs::ObjectMetadata> metadata = client.UploadFile(
      file_name, bucket_name, object_name, gcs::IfGenerationMatch(0));
  if (!metadata) throw std::move(metadata).status();

  std::cout << "Uploaded " << file_name << " to object " << metadata->name()
            << " in bucket " << metadata->bucket()
            << "\nFull metadata: " << *metadata << "\n";
}

C#

For more information, see the Cloud Storage C# API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.


using Google.Cloud.Storage.V1;
using System;
using System.IO;

public class UploadFileSample
{
    public void UploadFile(
        string bucketName = "your-unique-bucket-name",
        string localPath = "my-local-path/my-file-name",
        string objectName = "my-file-name")
    {
        var storage = StorageClient.Create();
        using var fileStream = File.OpenRead(localPath);
        storage.UploadObject(bucketName, objectName, null, fileStream);
        Console.WriteLine($"Uploaded {objectName}.");
    }
}

Go

For more information, see the Cloud Storage Go API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import (
	"context"
	"fmt"
	"io"
	"os"
	"time"

	"cloud.google.com/go/storage"
)

// uploadFile uploads an object.
func uploadFile(w io.Writer, bucket, object string) error {
	// bucket := "bucket-name"
	// object := "object-name"
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		return fmt.Errorf("storage.NewClient: %w", err)
	}
	defer client.Close()

	// Open local file.
	f, err := os.Open("notes.txt")
	if err != nil {
		return fmt.Errorf("os.Open: %w", err)
	}
	defer f.Close()

	ctx, cancel := context.WithTimeout(ctx, time.Second*50)
	defer cancel()

	o := client.Bucket(bucket).Object(object)

	// Optional: set a generation-match precondition to avoid potential race
	// conditions and data corruptions. The request to upload is aborted if the
	// object's generation number does not match your precondition.
	// For an object that does not yet exist, set the DoesNotExist precondition.
	o = o.If(storage.Conditions{DoesNotExist: true})
	// If the live object already exists in your bucket, set instead a
	// generation-match precondition using the live object's generation number.
	// attrs, err := o.Attrs(ctx)
	// if err != nil {
	// 	return fmt.Errorf("object.Attrs: %w", err)
	// }
	// o = o.If(storage.Conditions{GenerationMatch: attrs.Generation})

	// Upload an object with storage.Writer.
	wc := o.NewWriter(ctx)
	if _, err = io.Copy(wc, f); err != nil {
		return fmt.Errorf("io.Copy: %w", err)
	}
	if err := wc.Close(); err != nil {
		return fmt.Errorf("Writer.Close: %w", err)
	}
	fmt.Fprintf(w, "Blob %v uploaded.\n", object)
	return nil
}

Java

For more information, see the Cloud Storage Java API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

The following sample uploads an individual object:


import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import java.io.IOException;
import java.nio.file.Paths;

public class UploadObject {
  public static void uploadObject(
      String projectId, String bucketName, String objectName, String filePath) throws IOException {
    // The ID of your GCP project
    // String projectId = "your-project-id";

    // The ID of your GCS bucket
    // String bucketName = "your-unique-bucket-name";

    // The ID of your GCS object
    // String objectName = "your-object-name";

    // The path to your file to upload
    // String filePath = "path/to/your/file"

    Storage storage = StorageOptions.newBuilder().setProjectId(projectId).build().getService();
    BlobId blobId = BlobId.of(bucketName, objectName);
    BlobInfo blobInfo = BlobInfo.newBuilder(blobId).build();

    // Optional: set a generation-match precondition to avoid potential race
    // conditions and data corruptions. The request returns a 412 error if the
    // preconditions are not met.
    Storage.BlobWriteOption precondition;
    if (storage.get(bucketName, objectName) == null) {
      // For a target object that does not yet exist, set the DoesNotExist precondition.
      // This will cause the request to fail if the object is created before the request runs.
      precondition = Storage.BlobWriteOption.doesNotExist();
    } else {
      // If the destination already exists in your bucket, instead set a generation-match
      // precondition. This will cause the request to fail if the existing object's generation
      // changes before the request runs.
      precondition =
          Storage.BlobWriteOption.generationMatch(
              storage.get(bucketName, objectName).getGeneration());
    }
    storage.createFrom(blobInfo, Paths.get(filePath), precondition);

    System.out.println(
        "File " + filePath + " uploaded to bucket " + bucketName + " as " + objectName);
  }
}

The following sample uploads multiple objects concurrently:

import com.google.cloud.storage.transfermanager.ParallelUploadConfig;
import com.google.cloud.storage.transfermanager.TransferManager;
import com.google.cloud.storage.transfermanager.TransferManagerConfig;
import com.google.cloud.storage.transfermanager.UploadResult;
import java.io.IOException;
import java.nio.file.Path;
import java.util.List;

class UploadMany {

  public static void uploadManyFiles(String bucketName, List<Path> files) throws IOException {
    TransferManager transferManager = TransferManagerConfig.newBuilder().build().getService();
    ParallelUploadConfig parallelUploadConfig =
        ParallelUploadConfig.newBuilder().setBucketName(bucketName).build();
    List<UploadResult> results =
        transferManager.uploadFiles(files, parallelUploadConfig).getUploadResults();
    for (UploadResult result : results) {
      System.out.println(
          "Upload for "
              + result.getInput().getName()
              + " completed with status "
              + result.getStatus());
    }
  }
}

The following sample concurrently uploads all objects with a common prefix:

import com.google.cloud.storage.transfermanager.ParallelUploadConfig;
import com.google.cloud.storage.transfermanager.TransferManager;
import com.google.cloud.storage.transfermanager.TransferManagerConfig;
import com.google.cloud.storage.transfermanager.UploadResult;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

class UploadDirectory {

  public static void uploadDirectoryContents(String bucketName, Path sourceDirectory)
      throws IOException {
    TransferManager transferManager = TransferManagerConfig.newBuilder().build().getService();
    ParallelUploadConfig parallelUploadConfig =
        ParallelUploadConfig.newBuilder().setBucketName(bucketName).build();

    // Create a list to store the file paths
    List<Path> filePaths = new ArrayList<>();
    // Get all files in the directory
    // try-with-resource to ensure pathStream is closed
    try (Stream<Path> pathStream = Files.walk(sourceDirectory)) {
      pathStream.filter(Files::isRegularFile).forEach(filePaths::add);
    }
    List<UploadResult> results =
        transferManager.uploadFiles(filePaths, parallelUploadConfig).getUploadResults();
    for (UploadResult result : results) {
      System.out.println(
          "Upload for "
              + result.getInput().getName()
              + " completed with status "
              + result.getStatus());
    }
  }
}

Node.js

For more information, see the Cloud Storage Node.js API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

The following sample uploads an individual object:

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// The path to your file to upload
// const filePath = 'path/to/your/file';

// The new ID for your GCS file
// const destFileName = 'your-new-file-name';

// Imports the Google Cloud client library
const {Storage} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

async function uploadFile() {
  // For a destination object that does not yet exist, a precondition of 0
  // ensures the upload only succeeds if the object is not already present.
  const generationMatchPrecondition = 0;

  const options = {
    destination: destFileName,
    // Optional:
    // Set a generation-match precondition to avoid potential race conditions
    // and data corruptions. The request to upload is aborted if the object's
    // generation number does not match your precondition. For a destination
    // object that does not yet exist, set the ifGenerationMatch precondition to 0
    // If the destination object already exists in your bucket, set instead a
    // generation-match precondition using its generation number.
    preconditionOpts: {ifGenerationMatch: generationMatchPrecondition},
  };

  await storage.bucket(bucketName).upload(filePath, options);
  console.log(`${filePath} uploaded to ${bucketName}`);
}

uploadFile().catch(console.error);

The following sample uploads multiple objects concurrently:

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// The ID of the first GCS file to upload
// const firstFilePath = 'your-first-file-name';

// The ID of the second GCS file to upload
// const secondFilePath = 'your-second-file-name';

// Imports the Google Cloud client library
const {Storage, TransferManager} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

// Creates a transfer manager client
const transferManager = new TransferManager(storage.bucket(bucketName));

async function uploadManyFilesWithTransferManager() {
  // Uploads the files
  await transferManager.uploadManyFiles([firstFilePath, secondFilePath]);

  for (const filePath of [firstFilePath, secondFilePath]) {
    console.log(`${filePath} uploaded to ${bucketName}.`);
  }
}

uploadManyFilesWithTransferManager().catch(console.error);

The following sample concurrently uploads all objects with a common prefix:

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// The local directory to upload
// const directoryName = 'your-directory';

// Imports the Google Cloud client library
const {Storage, TransferManager} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

// Creates a transfer manager client
const transferManager = new TransferManager(storage.bucket(bucketName));

async function uploadDirectoryWithTransferManager() {
  // Uploads the directory
  await transferManager.uploadManyFiles(directoryName);

  console.log(`${directoryName} uploaded to ${bucketName}.`);
}

uploadDirectoryWithTransferManager().catch(console.error);

PHP

For more information, see the Cloud Storage PHP API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

use Google\Cloud\Storage\StorageClient;

/**
 * Upload a file.
 *
 * @param string $bucketName The name of your Cloud Storage bucket.
 *        (e.g. 'my-bucket')
 * @param string $objectName The name of your Cloud Storage object.
 *        (e.g. 'my-object')
 * @param string $source The path to the file to upload.
 *        (e.g. '/path/to/your/file')
 */
function upload_object(string $bucketName, string $objectName, string $source): void
{
    $storage = new StorageClient();
    if (!$file = fopen($source, 'r')) {
        throw new \InvalidArgumentException('Unable to open file for reading');
    }
    $bucket = $storage->bucket($bucketName);
    $object = $bucket->upload($file, [
        'name' => $objectName
    ]);
    printf('Uploaded %s to gs://%s/%s' . PHP_EOL, basename($source), $bucketName, $objectName);
}

Python

For more information, see the Cloud Storage Python API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

The following sample uploads an individual object:

from google.cloud import storage


def upload_blob(bucket_name, source_file_name, destination_blob_name):
    """Uploads a file to the bucket."""
    # The ID of your GCS bucket
    # bucket_name = "your-bucket-name"
    # The path to your file to upload
    # source_file_name = "local/path/to/file"
    # The ID of your GCS object
    # destination_blob_name = "storage-object-name"

    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)
    blob = bucket.blob(destination_blob_name)

    # Optional: set a generation-match precondition to avoid potential race conditions
    # and data corruptions. The request to upload is aborted if the object's
    # generation number does not match your precondition. For a destination
    # object that does not yet exist, set the if_generation_match precondition to 0.
    # If the destination object already exists in your bucket, set instead a
    # generation-match precondition using its generation number.
    generation_match_precondition = 0

    blob.upload_from_filename(source_file_name, if_generation_match=generation_match_precondition)

    print(
        f"File {source_file_name} uploaded to {destination_blob_name}."
    )

The following sample uploads multiple objects concurrently:

def upload_many_blobs_with_transfer_manager(
    bucket_name, filenames, source_directory="", workers=8
):
    """Upload every file in a list to a bucket, concurrently in a process pool.

    Each blob name is derived from the filename, not including the
    `source_directory` parameter. For complete control of the blob name for each
    file (and other aspects of individual blob metadata), use
    transfer_manager.upload_many() instead.
    """

    # The ID of your GCS bucket
    # bucket_name = "your-bucket-name"

    # A list (or other iterable) of filenames to upload.
    # filenames = ["file_1.txt", "file_2.txt"]

    # The directory on your computer that is the root of all of the files in the
    # list of filenames. This string is prepended (with os.path.join()) to each
    # filename to get the full path to the file. Relative paths and absolute
    # paths are both accepted. This string is not included in the name of the
    # uploaded blob; it is only used to find the source files. An empty string
    # means "the current working directory". Note that this parameter allows
    # directory traversal (e.g. "/", "../") and is not intended for unsanitized
    # end user input.
    # source_directory=""

    # The maximum number of processes to use for the operation. The performance
    # impact of this value depends on the use case, but smaller files usually
    # benefit from a higher number of processes. Each additional process occupies
    # some CPU and memory resources until finished. Threads can be used instead
    # of processes by passing `worker_type=transfer_manager.THREAD`.
    # workers=8

    from google.cloud.storage import Client, transfer_manager

    storage_client = Client()
    bucket = storage_client.bucket(bucket_name)

    results = transfer_manager.upload_many_from_filenames(
        bucket, filenames, source_directory=source_directory, max_workers=workers
    )

    for name, result in zip(filenames, results):
        # The results list is either `None` or an exception for each filename in
        # the input list, in order.

        if isinstance(result, Exception):
            print("Failed to upload {} due to exception: {}".format(name, result))
        else:
            print("Uploaded {} to {}.".format(name, bucket.name))

The following sample concurrently uploads all objects with a common prefix:

def upload_directory_with_transfer_manager(bucket_name, source_directory, workers=8):
    """Upload every file in a directory, including all files in subdirectories.

    Each blob name is derived from the filename, not including the `directory`
    parameter itself. For complete control of the blob name for each file (and
    other aspects of individual blob metadata), use
    transfer_manager.upload_many() instead.
    """

    # The ID of your GCS bucket
    # bucket_name = "your-bucket-name"

    # The directory on your computer to upload. Files in the directory and its
    # subdirectories will be uploaded. An empty string means "the current
    # working directory".
    # source_directory=""

    # The maximum number of processes to use for the operation. The performance
    # impact of this value depends on the use case, but smaller files usually
    # benefit from a higher number of processes. Each additional process occupies
    # some CPU and memory resources until finished. Threads can be used instead
    # of processes by passing `worker_type=transfer_manager.THREAD`.
    # workers=8

    from pathlib import Path

    from google.cloud.storage import Client, transfer_manager

    storage_client = Client()
    bucket = storage_client.bucket(bucket_name)

    # Generate a list of paths (in string form) relative to the `directory`.
    # This can be done in a single list comprehension, but is expanded into
    # multiple lines here for clarity.

    # First, recursively get all files in `directory` as Path objects.
    directory_as_path_obj = Path(source_directory)
    paths = directory_as_path_obj.rglob("*")

    # Filter so the list only includes files, not directories themselves.
    file_paths = [path for path in paths if path.is_file()]

    # These paths are relative to the current working directory. Next, make them
    # relative to `directory`
    relative_paths = [path.relative_to(source_directory) for path in file_paths]

    # Finally, convert them all to strings.
    string_paths = [str(path) for path in relative_paths]

    print("Found {} files.".format(len(string_paths)))

    # Start the upload.
    results = transfer_manager.upload_many_from_filenames(
        bucket, string_paths, source_directory=source_directory, max_workers=workers
    )

    for name, result in zip(string_paths, results):
        # The results list is either `None` or an exception for each filename in
        # the input list, in order.

        if isinstance(result, Exception):
            print("Failed to upload {} due to exception: {}".format(name, result))
        else:
            print("Uploaded {} to {}.".format(name, bucket.name))

Ruby

For more information, see the Cloud Storage Ruby API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

def upload_file bucket_name:, local_file_path:, file_name: nil
  # The ID of your GCS bucket
  # bucket_name = "your-unique-bucket-name"

  # The path to your file to upload
  # local_file_path = "/local/path/to/file.txt"

  # The ID of your GCS object
  # file_name = "your-file-name"

  require "google/cloud/storage"

  storage = Google::Cloud::Storage.new
  bucket  = storage.bucket bucket_name, skip_lookup: true

  file = bucket.create_file local_file_path, file_name

  puts "Uploaded #{local_file_path} as #{file.name} in bucket #{bucket_name}"
end

REST API

JSON API

  1. Have gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

  2. Use cURL to call the JSON API with a POST Object request. For the index page of www.example.com:

    curl -X POST --data-binary @index.html \
      -H "Content-Type: text/html" \
      -H "Authorization: $(gcloud auth print-access-token)" \
      "https://storage.googleapis.com/upload/storage/v1/b/www.example.com/o?uploadType=media&name=index.html"

XML API

  1. Have gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

  2. Use cURL to call the XML API with a PUT Object request. For the index page of www.example.com:

    curl -X PUT --data-binary @index.html \
      -H "Authorization: $(gcloud auth print-access-token)" \
      -H "Content-Type: text/html" \
      "https://storage.googleapis.com/www.example.com/index.html"

Sharing your files

To make all objects in your bucket readable to anyone on the public internet:

Console

  1. In the Google Cloud console, go to the Cloud Storage Buckets page.

    Go to Buckets

  2. In the list of buckets, click the name of the bucket that you want to make public.

  3. Select the Permissions tab near the top of the page.

  4. If the Public access pane reads Not public, click the button labeled Remove public access prevention and click Confirm in the dialog that appears.

  5. Click the Grant access button.

    The Add principals dialog appears.

  6. In the New principals field, enter allUsers.

  7. In the Select a role drop-down menu, select the Cloud Storage sub-menu, and click the Storage Object Viewer option.

  8. Click Save.

  9. Click Allow public access.

Once shared publicly, a link icon appears for each object in the Public access column. You can click this icon to get the URL for the object.

Command line

Use the buckets add-iam-policy-binding command:

gcloud storage buckets add-iam-policy-binding gs://www.example.com --member=allUsers --role=roles/storage.objectViewer

Client libraries

C++

For more information, see the Cloud Storage C++ API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

namespace gcs = ::google::cloud::storage;
using ::google::cloud::StatusOr;
[](gcs::Client client, std::string const& bucket_name) {
  auto current_policy = client.GetNativeBucketIamPolicy(
      bucket_name, gcs::RequestedPolicyVersion(3));
  if (!current_policy) throw std::move(current_policy).status();

  current_policy->set_version(3);
  current_policy->bindings().emplace_back(
      gcs::NativeIamBinding("roles/storage.objectViewer", {"allUsers"}));

  auto updated =
      client.SetNativeBucketIamPolicy(bucket_name, *current_policy);
  if (!updated) throw std::move(updated).status();

  std::cout << "Policy successfully updated: " << *updated << "\n";
}

C#

For more information, see the Cloud Storage C# API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.


using Google.Apis.Storage.v1.Data;
using Google.Cloud.Storage.V1;
using System;
using System.Collections.Generic;

public class MakeBucketPublicSample
{
    public void MakeBucketPublic(string bucketName = "your-unique-bucket-name")
    {
        var storage = StorageClient.Create();

        Policy policy = storage.GetBucketIamPolicy(bucketName);

        policy.Bindings.Add(new Policy.BindingsData
        {
            Role = "roles/storage.objectViewer",
            Members = new List<string> { "allUsers" }
        });

        storage.SetBucketIamPolicy(bucketName, policy);
        Console.WriteLine(bucketName + " is now public ");
    }
}

Go

For more information, see the Cloud Storage Go API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import (
	"context"
	"fmt"
	"io"

	"cloud.google.com/go/iam"
	"cloud.google.com/go/iam/apiv1/iampb"
	"cloud.google.com/go/storage"
)

// setBucketPublicIAM makes all objects in a bucket publicly readable.
func setBucketPublicIAM(w io.Writer, bucketName string) error {
	// bucketName := "bucket-name"
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		return fmt.Errorf("storage.NewClient: %w", err)
	}
	defer client.Close()

	policy, err := client.Bucket(bucketName).IAM().V3().Policy(ctx)
	if err != nil {
		return fmt.Errorf("Bucket(%q).IAM().V3().Policy: %w", bucketName, err)
	}
	role := "roles/storage.objectViewer"
	policy.Bindings = append(policy.Bindings, &iampb.Binding{
		Role:    role,
		Members: []string{iam.AllUsers},
	})
	if err := client.Bucket(bucketName).IAM().V3().SetPolicy(ctx, policy); err != nil {
		return fmt.Errorf("Bucket(%q).IAM().SetPolicy: %w", bucketName, err)
	}
	fmt.Fprintf(w, "Bucket %v is now publicly readable\n", bucketName)
	return nil
}

Java

For more information, see the Cloud Storage Java API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import com.google.cloud.Identity;
import com.google.cloud.Policy;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import com.google.cloud.storage.StorageRoles;

public class MakeBucketPublic {
  public static void makeBucketPublic(String projectId, String bucketName) {
    // The ID of your GCP project
    // String projectId = "your-project-id";

    // The ID of your GCS bucket
    // String bucketName = "your-unique-bucket-name";

    Storage storage = StorageOptions.newBuilder().setProjectId(projectId).build().getService();
    Policy originalPolicy = storage.getIamPolicy(bucketName);
    storage.setIamPolicy(
        bucketName,
        originalPolicy
            .toBuilder()
            .addIdentity(StorageRoles.objectViewer(), Identity.allUsers()) // All users can view
            .build());

    System.out.println("Bucket " + bucketName + " is now publicly readable");
  }
}

Node.js

For more information, see the Cloud Storage Node.js API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// Imports the Google Cloud client library
const {Storage} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

async function makeBucketPublic() {
  await storage.bucket(bucketName).makePublic();

  console.log(`Bucket ${bucketName} is now publicly readable`);
}

makeBucketPublic().catch(console.error);

PHP

For more information, see the Cloud Storage PHP API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

use Google\Cloud\Storage\StorageClient;

/**
 * Update the specified bucket's IAM configuration to make it publicly accessible.
 *
 * @param string $bucketName The name of your Cloud Storage bucket.
 *        (e.g. 'my-bucket')
 */
function set_bucket_public_iam(string $bucketName): void
{
    $storage = new StorageClient();
    $bucket = $storage->bucket($bucketName);

    $policy = $bucket->iam()->policy(['requestedPolicyVersion' => 3]);
    $policy['version'] = 3;

    $role = 'roles/storage.objectViewer';
    $members = ['allUsers'];

    $policy['bindings'][] = [
        'role' => $role,
        'members' => $members
    ];

    $bucket->iam()->setPolicy($policy);

    printf('Bucket %s is now public', $bucketName);
}

Python

For more information, see the Cloud Storage Python API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

from typing import List

from google.cloud import storage


def set_bucket_public_iam(
    bucket_name: str = "your-bucket-name",
    members: List[str] = ["allUsers"],
):
    """Set a public IAM Policy to bucket"""
    # bucket_name = "your-bucket-name"

    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)

    policy = bucket.get_iam_policy(requested_policy_version=3)
    policy.bindings.append(
        {"role": "roles/storage.objectViewer", "members": members}
    )

    bucket.set_iam_policy(policy)

    print(f"Bucket {bucket.name} is now publicly readable")

Ruby

For more information, see the Cloud Storage Ruby API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

def set_bucket_public_iam bucket_name:
  # The ID of your GCS bucket
  # bucket_name = "your-unique-bucket-name"

  require "google/cloud/storage"

  storage = Google::Cloud::Storage.new
  bucket = storage.bucket bucket_name

  bucket.policy do |p|
    p.add "roles/storage.objectViewer", "allUsers"
  end

  puts "Bucket #{bucket_name} is now publicly readable"
end

REST API

JSON API

  1. Have gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

  2. Create a JSON file that contains the following information:

    {
      "bindings":[
        {
          "role": "roles/storage.objectViewer",
          "members":["allUsers"]
        }
      ]
    }
  3. Use cURL to call the JSON API with a PUT Bucket request:

    curl -X PUT --data-binary @JSON_FILE_NAME \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://storage.googleapis.com/storage/v1/b/BUCKET_NAME/iam"

    Where:

    • JSON_FILE_NAME is the path for the JSON file that you created in Step 2.
    • BUCKET_NAME is the name of the bucket whose objects you want to make public. For example, my-bucket.

XML API

Making all objects in a bucket publicly readable is not supported by the XML API. Use the Google Cloud console or gcloud storage instead.

If needed, you can alternatively make portions of your bucket publicly accessible.

Visitors receive an http 403 response code when requesting the URL for a non-public or non-existent file. See the next section for information on how to add an error page that uses an http 404 response code instead.
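For example, you can observe the response code from the command line with curl; missing.html is a hypothetical object name used only for illustration:

curl -s -o /dev/null -w "%{http_code}\n" http://www.example.com/missing.html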

Recommended: Assigning specialty pages

You can assign an index page suffix, which is controlled by the MainPageSuffix property, and a custom error page, which is controlled by the NotFoundPage property. Assigning either is optional, but without an index page, nothing is served when users access your top-level site, for example, http://www.example.com. For more information, see Website configuration examples.

In the following samples, the MainPageSuffix is set to index.html and NotFoundPage is set to 404.html:

Console

  1. In the Google Cloud console, go to the Cloud Storage Buckets page.

    Go to Buckets

  2. In the list of buckets, find the bucket you created.

  3. Click the Bucket overflow menu associated with the bucket and select Edit website configuration.

  4. In the website configuration dialog, specify the main page and error page.

  5. Click Save.

Command line

Use the buckets update command with the --web-main-page-suffix and --web-error-page flags:

gcloud storage buckets update gs://www.example.com --web-main-page-suffix=index.html --web-error-page=404.html

If successful, the command returns:

Updating gs://www.example.com/...
  Completed 1
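To confirm that the configuration took effect, you can inspect the bucket's metadata and look for the website settings in the output. This is a sketch, not part of the original steps:

gcloud storage buckets describe gs://www.example.com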

Client libraries

C++

For more information, see the Cloud Storage C++ API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

namespace gcs = ::google::cloud::storage;
using ::google::cloud::StatusOr;
[](gcs::Client client, std::string const& bucket_name,
   std::string const& main_page_suffix, std::string const& not_found_page) {
  StatusOr<gcs::BucketMetadata> original =
      client.GetBucketMetadata(bucket_name);

  if (!original) throw std::move(original).status();
  StatusOr<gcs::BucketMetadata> patched = client.PatchBucket(
      bucket_name,
      gcs::BucketMetadataPatchBuilder().SetWebsite(
          gcs::BucketWebsite{main_page_suffix, not_found_page}),
      gcs::IfMetagenerationMatch(original->metageneration()));
  if (!patched) throw std::move(patched).status();

  if (!patched->has_website()) {
    std::cout << "Static website configuration is not set for bucket "
              << patched->name() << "\n";
    return;
  }

  std::cout << "Static website configuration successfully set for bucket "
            << patched->name() << "\nNew main page suffix is: "
            << patched->website().main_page_suffix
            << "\nNew not found page is: "
            << patched->website().not_found_page << "\n";
}

C#

For more information, see the Cloud Storage C# API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.


using Google.Apis.Storage.v1.Data;
using Google.Cloud.Storage.V1;
using System;

public class BucketWebsiteConfigurationSample
{
    public Bucket BucketWebsiteConfiguration(
        string bucketName = "your-bucket-name",
        string mainPageSuffix = "index.html",
        string notFoundPage = "404.html")
    {
        var storage = StorageClient.Create();
        var bucket = storage.GetBucket(bucketName);

        if (bucket.Website == null)
        {
            bucket.Website = new Bucket.WebsiteData();
        }
        bucket.Website.MainPageSuffix = mainPageSuffix;
        bucket.Website.NotFoundPage = notFoundPage;

        bucket = storage.UpdateBucket(bucket);
        Console.WriteLine($"Static website bucket {bucketName} is set up to use {mainPageSuffix} as the index page and {notFoundPage} as the 404 not found page.");
        return bucket;
    }
}

Go

For more information, see the Cloud Storage Go API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import (
	"context"
	"fmt"
	"io"
	"time"

	"cloud.google.com/go/storage"
)

// setBucketWebsiteInfo sets website configuration on a bucket.
func setBucketWebsiteInfo(w io.Writer, bucketName, indexPage, notFoundPage string) error {
	// bucketName := "www.example.com"
	// indexPage := "index.html"
	// notFoundPage := "404.html"
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		return fmt.Errorf("storage.NewClient: %w", err)
	}
	defer client.Close()

	ctx, cancel := context.WithTimeout(ctx, time.Second*10)
	defer cancel()

	bucket := client.Bucket(bucketName)
	bucketAttrsToUpdate := storage.BucketAttrsToUpdate{
		Website: &storage.BucketWebsite{
			MainPageSuffix: indexPage,
			NotFoundPage:   notFoundPage,
		},
	}
	if _, err := bucket.Update(ctx, bucketAttrsToUpdate); err != nil {
		return fmt.Errorf("Bucket(%q).Update: %w", bucketName, err)
	}
	fmt.Fprintf(w, "Static website bucket %v is set up to use %v as the index page and %v as the 404 page\n", bucketName, indexPage, notFoundPage)
	return nil
}

Java

For more information, see the Cloud Storage Java API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import com.google.cloud.storage.Bucket;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

public class SetBucketWebsiteInfo {
  public static void setBucketWebsiteInfo(
      String projectId, String bucketName, String indexPage, String notFoundPage) {
    // The ID of your GCP project
    // String projectId = "your-project-id";

    // The ID of your static website bucket
    // String bucketName = "www.example.com";

    // The index page for a static website bucket
    // String indexPage = "index.html";

    // The 404 page for a static website bucket
    // String notFoundPage = "404.html";

    Storage storage = StorageOptions.newBuilder().setProjectId(projectId).build().getService();
    Bucket bucket = storage.get(bucketName);
    bucket.toBuilder().setIndexPage(indexPage).setNotFoundPage(notFoundPage).build().update();

    System.out.println(
        "Static website bucket "
            + bucketName
            + " is set up to use "
            + indexPage
            + " as the index page and "
            + notFoundPage
            + " as the 404 page");
  }
}

Node.js

For more information, see the Cloud Storage Node.js API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// The name of the main page
// const mainPageSuffix = 'http://example.com';

// The Name of a 404 page
// const notFoundPage = 'http://example.com/404.html';

// Imports the Google Cloud client library
const {Storage} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

async function addBucketWebsiteConfiguration() {
  await storage.bucket(bucketName).setMetadata({
    website: {
      mainPageSuffix,
      notFoundPage,
    },
  });

  console.log(
    `Static website bucket ${bucketName} is set up to use ${mainPageSuffix} as the index page and ${notFoundPage} as the 404 page`
  );
}

addBucketWebsiteConfiguration().catch(console.error);

PHP

For more information, see the Cloud Storage PHP API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

use Google\Cloud\Storage\StorageClient;

/**
 * Update the given bucket's website configuration.
 *
 * @param string $bucketName The name of your Cloud Storage bucket.
 *        (e.g. 'my-bucket')
 * @param string $indexPageObject the name of an object in the bucket to use as
 *        (e.g. 'index.html')
 *     an index page for a static website bucket.
 * @param string $notFoundPageObject the name of an object in the bucket to use
 *        (e.g. '404.html')
 *     as the 404 Not Found page.
 */
function define_bucket_website_configuration(string $bucketName, string $indexPageObject, string $notFoundPageObject): void
{
    $storage = new StorageClient();
    $bucket = $storage->bucket($bucketName);

    $bucket->update([
        'website' => [
            'mainPageSuffix' => $indexPageObject,
            'notFoundPage' => $notFoundPageObject
        ]
    ]);

    printf(
        'Static website bucket %s is set up to use %s as the index page and %s as the 404 page.',
        $bucketName,
        $indexPageObject,
        $notFoundPageObject
    );
}

Python

For more information, see the Cloud Storage Python API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

from google.cloud import storage


def define_bucket_website_configuration(bucket_name, main_page_suffix, not_found_page):
    """Configure website-related properties of bucket"""
    # bucket_name = "your-bucket-name"
    # main_page_suffix = "index.html"
    # not_found_page = "404.html"

    storage_client = storage.Client()

    bucket = storage_client.get_bucket(bucket_name)
    bucket.configure_website(main_page_suffix, not_found_page)
    bucket.patch()

    print(
        "Static website bucket {} is set up to use {} as the index page and {} as the 404 page".format(
            bucket.name, main_page_suffix, not_found_page
        )
    )
    return bucket

Ruby

For more information, see the Cloud Storage Ruby API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

def define_bucket_website_configuration bucket_name:, main_page_suffix:, not_found_page:
  # The ID of your static website bucket
  # bucket_name = "www.example.com"

  # The index page for a static website bucket
  # main_page_suffix = "index.html"

  # The 404 page for a static website bucket
  # not_found_page = "404.html"

  require "google/cloud/storage"

  storage = Google::Cloud::Storage.new
  bucket = storage.bucket bucket_name

  bucket.update do |b|
    b.website_main = main_page_suffix
    b.website_404 = not_found_page
  end

  puts "Static website bucket #{bucket_name} is set up to use #{main_page_suffix} as the index page and " \
       "#{not_found_page} as the 404 page"
end

REST API

JSON API

  1. Have gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

  2. Create a JSON file that sets the mainPageSuffix and notFoundPage properties in a website object to the desired pages:

    {
      "website":{
        "mainPageSuffix": "index.html",
        "notFoundPage": "404.html"
      }
    }
  3. Use cURL to call the JSON API with a PATCH Bucket request. For www.example.com:

    curl -X PATCH --data-binary @web-config.json \
      -H "Authorization: $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://storage.googleapis.com/storage/v1/b/www.example.com"

XML API

  1. Have gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

  2. Create an XML file that sets the MainPageSuffix and NotFoundPage elements in a WebsiteConfiguration element to the desired pages:

    <WebsiteConfiguration>
      <MainPageSuffix>index.html</MainPageSuffix>
      <NotFoundPage>404.html</NotFoundPage>
    </WebsiteConfiguration>
  3. Use cURL to call the XML API with a PUT Bucket request and the websiteConfig query string parameter. For www.example.com:

    curl -X PUT --data-binary @web-config.xml \
      -H "Authorization: $(gcloud auth print-access-token)" \
      https://storage.googleapis.com/www.example.com?websiteConfig

Testing the website

Verify that content is served from the bucket by requesting the domain name in a browser. You can do this with a path to an object or with just the domain name, if you set the MainPageSuffix property.

For example, if you have an object named test.html stored in a bucket named www.example.com, check that it's accessible by going to www.example.com/test.html in your browser.
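You can also check from the command line. The -I flag asks curl to return only the response headers; test.html is the same example object as above:

curl -I http://www.example.com/test.html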

Clean up

After you finish the tutorial, you can clean up the resources that you created so that they stop using quota and incurring charges. The following sections describe how to delete or turn off these resources.

Delete the project

The easiest way to eliminate billing is to delete the project that you created for the tutorial.

To delete the project (a command-line alternative follows these steps):

  1. In the Google Cloud console, go to the Manage resources page.

    Go to Manage resources

  2. In the project list, select the project that you want to delete, and then click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.
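If you prefer the command line, the project can also be deleted with the gcloud CLI; PROJECT_ID is a placeholder for the ID of the project you created for this tutorial:

gcloud projects delete PROJECT_ID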

What's next