This page describes troubleshooting methods for common errors you may encounter while using Cloud Storage.
See the Google Cloud Service Health Dashboard for information about incidents affecting Google Cloud services such as Cloud Storage.
Logging raw requests
When using tools such as gcloud
or the Cloud Storage client libraries, much
of the request and response information is handled by the tool. However, it is
sometimes useful to see details to aid in troubleshooting or when posting
questions to forums such as Stack Overflow. Use the
following instructions to return request and response headers for your tool:
Console
Viewing request and response information depends on the browser you're using to access the Google Cloud console. For the Google Chrome browser:
Click Chrome's main menu button (more_vert).
Select More Tools.
Click Developer Tools.
In the pane that appears, click the Network tab.
Command line
Use global debugging flags in your request. For example:
gcloud storage ls gs://my-bucket/my-object --log-http --verbosity=debug
Client libraries
C++
Set the environment variable CLOUD_STORAGE_ENABLE_TRACING=http to get the full HTTP traffic. Set the environment variable CLOUD_STORAGE_ENABLE_CLOG=yes to get logging of each RPC.
C#
Add a logger via ApplicationContext.RegisterLogger, and set logging options on the HttpClient message handler. For more information, see the FAQ entry.
Go
Set the environment variable GODEBUG=http2debug=1. For more information, see the Go package net/http.
If you want to log the request body as well, use a custom HTTP client.
Java
Create a file named "logging.properties" with the following contents:
# Properties file which configures the operation of the JDK logging facility.
# The system will look for this config file to be specified as a system property:
# -Djava.util.logging.config.file=${project_loc:googleplus-simple-cmdline-sample}/logging.properties

# Set up the console handler (uncomment "level" to show more fine-grained messages)
handlers = java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level = CONFIG

# Set up logging of HTTP requests and responses (uncomment "level" to show)
com.google.api.client.http.level = CONFIG
Use logging.properties with Maven
mvn -Djava.util.logging.config.file=path/to/logging.properties insert_command
For more information, see Pluggable HTTP Transport.
Node.js
Set the environment variable NODE_DEBUG=https before calling the Node script.
PHP
Provide your own HTTP handler to the client using httpHandler and set up middleware to log the request and response.
Python
Use the logging module. For example:
import logging
import http.client

logging.basicConfig(level=logging.DEBUG)
http.client.HTTPConnection.debuglevel = 5
Ruby
At the top of your .rb file, after require "google/cloud/storage", add the following:
Google::Apis.logger.level = Logger::DEBUG
Adding custom headers
Adding custom headers to requests is a common debugging technique, such as when enabling debug headers or tracing a request. The following examples show how to set request headers for different Cloud Storage tools:
Command line
Use the --additional-headers flag, which is available for most commands. For example:
gcloud storage objects describe gs://my-bucket/my-object --additional-headers=HEADER_NAME=HEADER_VALUE
Where HEADER_NAME and HEADER_VALUE define the header you are adding to the request.
Client libraries
C++
namespace gcs = google::cloud::storage;
gcs::Client client = ...;
client.AnyFunction(... args ..., gcs::CustomHeader("header-name", "value"));
C#
The following sample adds a custom header to every request made by the client library.
using System;
using Google.Cloud.Storage.V1;
var client = StorageClient.Create();
client.Service.HttpClient.DefaultRequestHeaders.Add("custom-header", "custom-value");
var buckets = client.ListBuckets("my-project-id");
foreach (var bucket in buckets)
{
Console.WriteLine(bucket.Name);
}
Go
Adding custom headers to requests made by the Go client library requires wrapping the transport used for the client with a custom RoundTripper.
The following example sends debug headers and logs the corresponding
response headers:
package main
import (
"context"
"log"
"net/http"
"cloud.google.com/go/storage"
"google.golang.org/api/option"
raw "google.golang.org/api/storage/v1"
htransport "google.golang.org/api/transport/http"
)
func main() {
ctx := context.Background()
// Standard way to initialize client:
// client, err := storage.NewClient(ctx)
// if err != nil {
// // handle error
// }
// Instead, create a custom http.Client.
base := http.DefaultTransport
trans, err := htransport.NewTransport(ctx, base, option.WithScopes(raw.DevstorageFullControlScope),
option.WithUserAgent("custom-user-agent"))
if err != nil {
// Handle error.
}
c := http.Client{Transport: trans}
// Add RoundTripper to the created HTTP client.
c.Transport = withDebugHeader{c.Transport}
// Supply this client to storage.NewClient
client, err := storage.NewClient(ctx, option.WithHTTPClient(&c))
if err != nil {
// Handle error.
}
// Use client to make a request
}
type withDebugHeader struct {
rt http.RoundTripper
}
func (wdh withDebugHeader) RoundTrip(r *http.Request) (*http.Response, error) {
headerName := "X-Custom-Header"
r.Header.Add(headerName, "value")
resp, err := wdh.rt.RoundTrip(r)
if err == nil {
log.Printf("Resp Header: %+v, ", resp.Header.Get(headerName))
} else {
log.Printf("Error: %+v", err)
}
return resp, err
}
Java
import com.google.api.gax.rpc.FixedHeaderProvider;
import com.google.api.gax.rpc.HeaderProvider;
import com.google.cloud.WriteChannel;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import java.io.IOException;
import java.nio.ByteBuffer;
import static java.nio.charset.StandardCharsets.UTF_8;
public class Example {
public static void main(String[] args) throws IOException {
HeaderProvider headerProvider =
FixedHeaderProvider.create("custom-header", "custom-value");
Storage storage = StorageOptions.getDefaultInstance()
.toBuilder()
.setHeaderProvider(headerProvider)
.build().getService();
String bucketName = "example-bucket";
String blobName = "test-custom-header";
// Use client with custom header
BlobInfo blob = BlobInfo.newBuilder(bucketName, blobName).build();
byte[] stringBytes;
try (WriteChannel writer = storage.writer(blob)) {
stringBytes = "hello world".getBytes(UTF_8);
writer.write(ByteBuffer.wrap(stringBytes));
}
}
}
Node.js
const {Storage} = require('@google-cloud/storage');
const storage = new Storage();
storage.interceptors.push({
request: requestConfig => {
Object.assign(requestConfig.headers, {
'X-Custom-Header': 'value',
});
return requestConfig;
},
});
PHP
All method calls that trigger HTTP requests accept an optional $restOptions argument as the last argument. You can provide custom headers on a per-request basis or on a per-client basis.
use Google\Cloud\Storage\StorageClient;
$client = new StorageClient([
'restOptions' => [
'headers' => [
'x-foo' => 'bat'
]
]
]);
$bucket = $client->bucket('my-bucket');
$bucket->info([
'restOptions' => [
'headers' => [
'x-foo' => 'bar'
]
]
]);
Python
from google.cloud import storage
client = storage.Client(
extra_headers={
"x-custom-header": "value"
}
)
Ruby
require "google/cloud/storage"
storage = Google::Cloud::Storage.new
storage.add_custom_headers({ 'X-Custom-Header' => 'value' })
Accessing buckets with a CORS configuration
If you have set a CORS configuration on your bucket and notice that incoming requests from client browsers are failing, try the following troubleshooting steps:
Review the CORS configuration on the target bucket. If there are multiple CORS configuration entries, make sure that the request values you use for troubleshooting map to values in a single CORS configuration entry.
When testing a CORS request, check that you are not making a request to the storage.cloud.google.com endpoint, which doesn't allow CORS requests. For more information about supported endpoints for CORS, see Cloud Storage CORS support.
Review the request and response using the tool of your choice. In a Chrome browser, you can use the standard developer tools to see this information:
- Click the Chrome menu (more_vert) on the browser toolbar.
- Select More Tools > Developer Tools.
- Click the Network tab.
- From your application or command line, send the request.
- In the pane displaying the network activity, locate the request.
- In the Name column, click the name corresponding to the request.
- Click the Headers tab to see the response headers, or the Response tab to see the content of the response.
If you don't see a request and response, it's possible that your browser has cached an earlier failed preflight request attempt. Clearing your browser's cache should also clear the preflight cache. If it doesn't, set the MaxAgeSec value in your CORS configuration to a lower value than the default value of 1800 (30 minutes), wait for however long the old MaxAgeSec was, then try the request again. This performs a new preflight request, which fetches the new CORS configuration and purges the cache entries. Once you have debugged your problem, raise MaxAgeSec back to a higher value to reduce the preflight traffic to your bucket.
Ensure that the request has an Origin header and that the header value matches at least one of the Origins values in the bucket's CORS configuration. Note that the scheme, host, and port of the values must match exactly. Some examples of acceptable matches are the following:
- http://origin.example.com matches http://origin.example.com:80 (because 80 is the default HTTP port) but does not match https://origin.example.com, http://origin.example.com:8080, http://origin.example.com:5151, or http://sub.origin.example.com.
- https://example.com:443 matches https://example.com but not http://example.com or http://example.com:443.
- http://localhost:8080 only matches exactly http://localhost:8080 and does not match http://localhost:5555 or http://localhost.example.com:8080.
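For reference when checking these values, a CORS configuration entry that would accept the first example origin above might look like the following. This is a sketch: the origins, methods, headers, and maxAgeSec values are placeholders you should replace with your own. The file can be applied with gcloud storage buckets update --cors-file.

```json
[
  {
    "origin": ["http://origin.example.com"],
    "method": ["GET", "PUT"],
    "responseHeader": ["Content-Type", "X-Custom-Header"],
    "maxAgeSec": 300
  }
]
```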
For simple requests, ensure that the HTTP method of the request matches at least one of the Methods values in the bucket's CORS configuration. For preflight requests, ensure that the method specified in Access-Control-Request-Method matches at least one of the Methods values.
For preflight requests, check whether the request includes one or more Access-Control-Request-Header headers. If so, ensure that each Access-Control-Request-Header value matches a ResponseHeader value in the bucket's CORS configuration. All headers named in Access-Control-Request-Header must be in the CORS configuration for the preflight request to succeed and include CORS headers in the response.
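The matching rules above can be sketched in a few lines of Python. This is illustrative only, not the authoritative server-side logic: it checks exact origin strings and does not normalize default ports (http://origin.example.com versus http://origin.example.com:80), which Cloud Storage treats as equivalent.

```python
# Sketch of how a preflight request is evaluated against one CORS
# configuration entry on a bucket. Not the authoritative implementation.

def preflight_allowed(entry, origin, request_method, request_headers):
    """entry mirrors one CORS configuration entry, with 'origin',
    'method', and 'responseHeader' lists."""
    # The Origin header must exactly match an Origins value (or wildcard).
    if origin not in entry["origin"] and "*" not in entry["origin"]:
        return False
    # Access-Control-Request-Method must match a Methods value.
    if request_method not in entry["method"]:
        return False
    # Every Access-Control-Request-Header value must appear in ResponseHeader.
    allowed = {h.lower() for h in entry.get("responseHeader", [])}
    return all(h.lower() in allowed for h in request_headers)

entry = {
    "origin": ["http://origin.example.com"],
    "method": ["GET", "PUT"],
    "responseHeader": ["Content-Type"],
}
print(preflight_allowed(entry, "http://origin.example.com", "GET", ["content-type"]))  # True
print(preflight_allowed(entry, "https://origin.example.com", "GET", []))  # False: scheme differs
```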
Error codes
The following are common HTTP status codes you may encounter.
301: Moved Permanently
Issue: I'm setting up a static website, and accessing a directory path
returns an empty object and a 301
HTTP response code.
Solution: If your browser downloads a zero byte object and you get a 301
HTTP response code when accessing a directory, such as
http://www.example.com/dir/
, your bucket most likely contains an empty object
of that name. To check that this is the case and fix the issue:
- In the Google Cloud console, go to the Cloud Storage Buckets page.
- Click the Activate Cloud Shell button at the top of the Google Cloud console.
- Run gcloud storage ls --recursive gs://www.example.com/dir/. If the output includes gs://www.example.com/dir/, you have an empty object at that location.
- Remove the empty object with the command: gcloud storage rm gs://www.example.com/dir/
You can now access http://www.example.com/dir/
and have it return that
directory's index.html
file instead of the empty object.
400: Bad Request
Issue: While performing a resumable upload, I received this error and
the message Failed to parse Content-Range header.
Solution: The value you used in your Content-Range header is invalid. For example, Content-Range: */* is invalid and should instead be specified as Content-Range: bytes */*. If you receive this error, your current resumable upload is no longer active, and you must start a new resumable upload.
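You can sanity-check a Content-Range value before sending it. The following is a minimal sketch covering the forms used by resumable uploads, not a full RFC 7233 validator:

```python
import re

# Matches the Content-Range forms used by resumable uploads, for example
# "bytes */*", "bytes */100", "bytes 0-49/100", and "bytes 0-49/*".
_CONTENT_RANGE = re.compile(r"^bytes (\*|\d+-\d+)/(\*|\d+)$")

def is_valid_content_range(value):
    return bool(_CONTENT_RANGE.match(value))

print(is_valid_content_range("bytes */*"))       # True
print(is_valid_content_range("*/*"))             # False: missing the "bytes" unit
print(is_valid_content_range("bytes 0-49/100"))  # True
```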
401: Unauthorized
Issue: Requests to a public bucket directly, or using Cloud CDN, are
failing with an HTTP 401: Unauthorized
and an Authentication Required
response.
Solution: Check that your client, or any intermediate proxy, is not adding an
Authorization
header to requests to Cloud Storage. Any request with
an Authorization
header, even if empty, is validated as if it were an
authentication attempt.
403: Account Disabled
Issue: I tried to create a bucket but got a 403 Account Disabled
error.
Solution: This error indicates that you have not yet turned on billing for the associated project. For steps for enabling billing, see Enable billing for a project.
If billing is turned on and you continue to receive this error message, you can reach out to support with your project ID and a description of your problem.
403: Forbidden
Issue: I should have permission to access a certain bucket or object, but
when I attempt to do so, I get a 403 - Forbidden
error with a message that is
similar to: example@email.com does not have storage.objects.get access to the
Google Cloud Storage object
.
Solution: You are missing an IAM permission for the bucket or object that is required to complete the request. If you expect to be able to make the request but cannot, perform the following checks:
Is the grantee referenced in the error message the one you expected? If the error message refers to an unexpected email address or to "Anonymous caller", then your request is not using the credentials you intended. This could be because the tool you are using to make the request was set up with the credentials from another alias or entity, or it could be because the request is being made on your behalf by a service account.
Is the permission referenced in the error message the one you thought you needed? If the permission is unexpected, it's likely because the tool you're using requires additional access in order to complete your request. For example, in order to bulk delete objects in a bucket, gcloud must first construct a list of objects in the bucket to delete. This portion of the bulk delete action requires the storage.objects.list permission, which might be surprising, given that the goal is object deletion, which normally requires only the storage.objects.delete permission. If this is the cause of your error message, make sure you're granted IAM roles that have the additional necessary permissions.
Are you granted the IAM role on the intended resource or parent resource? For example, if you're granted the Storage Object Viewer role for a project and you're trying to download an object, make sure the object is in a bucket that's in the project; you might inadvertently have the Storage Object Viewer permission for a different project.
Is your permission to access a certain bucket or object given through a convenience value? The removal of access granted to a convenience value can cause previously enabled principals to lose access to resources.
For example, say jane@example.com has the Owner (roles/owner) basic role for a project named my-example-project, and the project's IAM policy grants the Storage Object Creator (roles/storage.objectCreator) role to the convenience value projectOwner:my-example-project. This means that jane@example.com has the permissions associated with the Storage Object Creator role for buckets within my-example-project. If this grant gets removed, jane@example.com loses the permissions associated with the Storage Object Creator role.
In such a scenario, you can regain access to the bucket or object by granting yourself the necessary bucket-level or object-level permissions required to perform the actions you need.
Is there an IAM Deny policy that prevents you from using certain permissions? You can contact your organization administrator to find out whether an IAM Deny policy has been put in place.
409: Conflict
Issue: I tried to create a bucket but received the following error:
409 Conflict. Sorry, that name is not available. Please try a different one.
Solution: The bucket name you tried to use (for example, gs://cats or gs://dogs) is already taken. Cloud Storage has a global namespace, so you cannot name a bucket with the same name as an existing bucket. Choose a name that is not being used.
412: Custom constraints violated
Issue: My requests are being rejected with a 412 orgpolicy
error.
Issue: My requests are being rejected with a
412 Multiple constraints were violated
error.
Solution: Check with your security administrator team to see if the bucket to which you're sending requests is being affected by an organization policy that uses a custom constraint. Your bucket might also be affected by different organization policies that conflict with one another. For example, where one policy specifies that buckets must have the Standard storage class and another policy specifies that buckets must have the Coldline storage class.
429: Too Many Requests
Issue: My requests are being rejected with a 429 Too Many Requests
error.
Solution: You are hitting a limit to the number of requests Cloud Storage allows for a given resource. See the Cloud Storage quotas for a discussion of limits in Cloud Storage.
If your workload consists of 1000's of requests per second to a bucket, see Request rate and access distribution guidelines for a discussion of best practices, including ramping up your workload gradually and avoiding sequential filenames.
If your workload is potentially using 50 Gbps or more of network egress to specific locations, check your bandwidth usage to ensure you're not encountering a bandwidth quota.
Diagnosing Google Cloud console errors
Issue: When using the Google Cloud console to perform an operation, I get a generic error message. For example, I see an error message when trying to delete a bucket, but I don't see details for why the operation failed.
Solution: Use the Google Cloud console's notifications to see detailed information about the failed operation:
Click the Notifications button (notifications) in the Google Cloud console header.
A drop-down displays the most recent operations performed by the Google Cloud console.
Click the item you want to find out more about.
A page opens up and displays detailed information about the operation.
Click each row to expand the detailed error information.
Issue: When using the Google Cloud console, I don't see a particular column displayed.
Solution: To see a particular column displayed in the Google Cloud console, click the Column display options icon and select the column you want displayed.
Simulated folders and managed folders
Issue: I deleted some objects in my bucket, and now the folder that contained them does not appear in the Google Cloud console.
Solution: While the Google Cloud console displays your bucket's contents as if there was a directory structure, folders don't fundamentally exist in Cloud Storage. As a result, when you remove all objects with a common prefix from a bucket, the folder icon representing that group of objects no longer appears in the Google Cloud console.
Issue: I can't create managed folders.
Solution: To create managed folders, make sure the following requirements are met:
You have an IAM role that contains the storage.managedfolders.create permission, such as the Storage Object Admin (roles/storage.objectAdmin) role. For instructions on granting roles, see Use IAM permissions.
Uniform bucket-level access is enabled on the bucket in which you want to create managed folders.
There are no IAM Conditions on the bucket or the project that use the bucket resource type (storage.googleapis.com/Bucket) or the object resource type (storage.googleapis.com/Object). If any bucket within a project has an IAM Condition that uses either of these resource types, managed folders cannot be created in any of the buckets within that project, even if the condition is later removed.
Issue: I can't disable uniform bucket-level access because there are managed folders in my bucket.
Solution: Uniform bucket-level access cannot be disabled if there are managed folders in the bucket. To disable uniform bucket-level access, you'll need to first delete all managed folders in the bucket.
Static website errors
The following are common issues that you may encounter when setting up a bucket to host a static website.
HTTPS serving
Issue: I want to serve my content over HTTPS without using a load balancer.
Solution: You can serve static content through HTTPS using direct URIs
such as https://storage.googleapis.com/my-bucket/my-object
. For other options
to serve your content through a custom domain over SSL, you can:
- Use a third-party Content Delivery Network with Cloud Storage.
- Serve your static website content from Firebase Hosting instead of Cloud Storage.
Domain verification
Issue: I can't verify my domain.
Solution: Normally, the verification process in Search Console directs you to upload a file to your domain, but you may not have a way to do this without first having an associated bucket, which you can only create after you have performed domain verification.
In this case, verify ownership using the Domain name provider verification method. See Ownership verification for steps to accomplish this. This verification can be done before the bucket is created.
Inaccessible page
Issue: I get an Access denied
error message for a web page served by my
website.
Solution: Check that the object is shared publicly. If it is not, see Making Data Public for instructions on how to do this.
If you previously uploaded and shared an object, but then upload a new version of it, then you must reshare the object publicly. This is because the public permission is replaced with the new upload.
Content download
Issue: I am prompted to download my page's content, instead of being able to view it in my browser.
Solution: If you specify a MainPageSuffix
as an object that does not have
a web content type, site visitors are prompted to download the content instead
of being able to see served page content. To resolve this issue, update the
Content-Type
metadata entry to a suitable value, such as text/html
.
For instructions, see Editing object metadata.
Make data public
Issue: I'm trying to make my data public but I get an organization policy error.
Solution: Some organization policy constraints can prevent you from
making your data public. For example, the Domain Restricted Sharing constraint
(constraints/iam.allowedPolicyMemberDomains
) restricts resource sharing based
on the organization's domain. For organization policy failures, contact your
administrator to grant you the project or bucket-level permissions to allow
resource sharing by editing the organization policy for the organization,
folder, or project resource. If you continue to see this error after
overriding the organization policy, then you might need to wait a few minutes
for the change to take effect.
Issue: I get a permission error when I attempt to make my data public.
Solution: Make sure that you have the storage.buckets.setIamPolicy
permission or the storage.objects.setIamPolicy
permission. These permissions
are granted, for example, in the
Storage Admin (roles/storage.admin
) role. If you have the
storage.buckets.setIamPolicy
permission or the storage.objects.setIamPolicy
permission and you still get an error, your bucket might be subject to
public access prevention, which does not allow access to allUsers
or
allAuthenticatedUsers
. Public access prevention might be set on the bucket
directly, or it might be enforced through an organization policy that is
set at a higher level.
Latency
The following are common latency issues you might encounter. In addition, the Google Cloud Service Health Dashboard provides information about incidents affecting Google Cloud services such as Cloud Storage.
Upload or download latency
Issue: I'm seeing increased latency when uploading or downloading.
Solution: Consider the following common causes of upload and download latency:
CPU or memory constraints: The affected environment's operating system should have tooling to measure local resource consumption such as CPU usage and memory usage.
Disk IO constraints: The performance impact might be caused by local disk IO.
Geographical distance: Performance can be impacted by the physical separation of your Cloud Storage bucket and affected environment, particularly in cross-continental cases. Testing with a bucket located in the same region as your affected environment can identify the extent to which geographic separation is contributing to your latency.
- If applicable, the affected environment's DNS resolver should use the EDNS(0) protocol so that requests from the environment are routed through an appropriate Google Front End.
CLI or client library latency
Issue: I'm seeing increased latency when accessing Cloud Storage with the Google Cloud CLI or one of the client libraries.
Solution: The gcloud CLI and the client libraries
automatically retry requests when it's useful to do so, and this behavior
can effectively increase latency as seen from the end user. Use the
Cloud Monitoring metric storage.googleapis.com/api/request_count
to
see if Cloud Storage is consistently serving a retryable response code,
such as 429
or 5xx
.
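When inspecting responses yourself, the retry behavior can be approximated as follows. This is a sketch of the documented retry policy, which also treats 408 as retryable; the client libraries additionally retry transient network errors and apply exponential backoff:

```python
def is_retryable(status_code):
    # 408 Request Timeout, 429 Too Many Requests, and all 5xx responses
    # are treated as retryable by the Cloud Storage tools and libraries.
    return status_code in (408, 429) or 500 <= status_code <= 599

print(is_retryable(429))  # True
print(is_retryable(503))  # True
print(is_retryable(404))  # False: client errors other than 408/429 are not retried
```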
Proxy servers
Issue: I'm connecting through a proxy server. What do I need to do?
Solution: To access Cloud Storage through a proxy server, you must allow access to these domains:
- accounts.google.com for creating OAuth2 authentication tokens
- oauth2.googleapis.com for performing OAuth2 token exchanges
- *.googleapis.com for storage requests
If your proxy server or security policy doesn't support allowlisting by domain and instead only supports allowlisting by IP network block, we strongly recommend that you configure your proxy server for all Google IP address ranges. You can find the address ranges by querying WHOIS data at ARIN. As a best practice, you should periodically review your proxy settings to ensure they match Google's IP addresses.
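To confirm that a given hostname is covered by the domain allowlist above, a wildcard entry such as *.googleapis.com can be checked with shell-style pattern matching. This is a sketch; your proxy's matching semantics may differ:

```python
from fnmatch import fnmatch

# The domains Cloud Storage access requires, per the list above.
ALLOWED = ["accounts.google.com", "oauth2.googleapis.com", "*.googleapis.com"]

def host_allowed(host):
    return any(fnmatch(host, pattern) for pattern in ALLOWED)

print(host_allowed("storage.googleapis.com"))  # True: matches *.googleapis.com
print(host_allowed("accounts.google.com"))     # True: exact entry
print(host_allowed("example.com"))             # False
```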
We don't recommend configuring your proxy with individual IP addresses you
obtain from one-time lookups of oauth2.googleapis.com
and
storage.googleapis.com
. Because Google services are exposed using DNS names
that map to a large number of IP addresses that can change over time,
configuring your proxy based on a one-time lookup may lead to failures to
connect to Cloud Storage.
If your requests are being routed through a proxy server, you may need to
check with your network administrator to ensure that the Authorization
header containing your credentials is not stripped out by the proxy. Without
the Authorization
header, your requests are rejected and you receive a
MissingSecurityHeader
error.
Storage Insights errors
Issue: My inventory report configuration is generating multiple inventory reports on a daily basis.
Solution: If you have more than 1,000,000 objects in your bucket, multiple inventory reports can be generated as shards. An inventory report configuration generates one inventory report for every 1,000,000 objects in the bucket. For example, if you have a bucket with 3,500,000 objects, the inventory report configuration on the bucket will generate four inventory report shards per the frequency you specify, along with a manifest file that contains the number of generated inventory report shards and their file names.
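The shard count described above is simply the object count divided by 1,000,000, rounded up:

```python
import math

def inventory_report_shards(object_count):
    # One inventory report shard is generated per 1,000,000 objects;
    # a manifest file listing the shards is generated alongside them.
    return math.ceil(object_count / 1_000_000)

print(inventory_report_shards(3_500_000))  # 4 shards, plus a manifest file
print(inventory_report_shards(900_000))    # 1
```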
Issue: Inventory reports aren't appearing in the destination bucket.
Solution: If you've created an inventory report configuration and don't see inventory reports being generated in the destination bucket, check the following:
Make sure the start date specified in the inventory report configuration matches your expectation for when inventory reports should be generated. For instructions on how to specify a start date, see Create an inventory report configuration.
View your inventory report history to check for failures and their root causes. To see your inventory report history, complete the following steps:
- In the Google Cloud console, go to the Cloud Storage Buckets page.
In the list of buckets, click the name of the source bucket that contains the inventory report configuration.
On the Bucket details page, click the Inventory reports tab.
In the list of inventory report configurations, click the UUID of the inventory report configuration that generated the reports you want to check.
Check for failures in the Inventory report history section. You can hold the pointer over Help to get details about why a failure occurred.
Make sure the project-level service agent is granted the IAM roles required for reading and writing inventory reports. For instructions, see Grant required roles to the service agent.
Issue: I'm seeing random delays with the generation of inventory reports.
Solution: The interval of time between inventory reports being generated can vary. You might see a delay of up to one day.
What's next
- Find answers to additional questions in the Cloud Storage FAQ.
- Learn about your support options.
- Explore how Error Reporting can help you identify and understand your Cloud Storage errors.