storage package - cloud.google.com/go/storage - Go Packages

Package storage provides an easy way to work with Google Cloud Storage. Google Cloud Storage stores data in named objects, which are grouped into buckets.

More information about Google Cloud Storage is available at https://cloud.google.com/storage/docs.

See https://pkg.go.dev/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package.

Creating a Client

To start working with this package, create a Client:

ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: Handle error.
}

The client will use your default application credentials. Clients should be reused instead of created as needed. The methods of Client are safe for concurrent use by multiple goroutines.

You may configure the client by passing in options from the google.golang.org/api/option package. You may also use options defined in this package, such as WithJSONReads.

If you only wish to access public data, you can create an unauthenticated client with

client, err := storage.NewClient(ctx, option.WithoutAuthentication())

To use an emulator with this library, you can set the STORAGE_EMULATOR_HOST environment variable to the address at which your emulator is running. This will send requests to that address instead of to Cloud Storage. You can then create and use a client as usual:

// Set STORAGE_EMULATOR_HOST environment variable.
err := os.Setenv("STORAGE_EMULATOR_HOST", "localhost:9000")
if err != nil {
    // TODO: Handle error.
}

// Create client as usual.
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: Handle error.
}

// This request is now directed to http://localhost:9000/storage/v1/b
// instead of https://storage.googleapis.com/storage/v1/b
if err := client.Bucket("my-bucket").Create(ctx, projectID, nil); err != nil {
    // TODO: Handle error.
}

Please note that there is no official emulator for Cloud Storage.

Buckets

A Google Cloud Storage bucket is a collection of objects. To work with a bucket, make a bucket handle:

bkt := client.Bucket(bucketName)

A handle is a reference to a bucket. You can have a handle even if the bucket doesn't exist yet. To create a bucket in Google Cloud Storage, call BucketHandle.Create:

if err := bkt.Create(ctx, projectID, nil); err != nil {
    // TODO: Handle error.
}

Note that although buckets are associated with projects, bucket names are global across all projects.

Each bucket has associated metadata, represented in this package by BucketAttrs. The third argument to BucketHandle.Create allows you to set the initial BucketAttrs of a bucket. To retrieve a bucket's attributes, use BucketHandle.Attrs:

attrs, err := bkt.Attrs(ctx)
if err != nil {
    // TODO: Handle error.
}
fmt.Printf("bucket %s, created at %s, is located in %s with storage class %s\n",
    attrs.Name, attrs.Created, attrs.Location, attrs.StorageClass)

Objects

An object holds arbitrary data as a sequence of bytes, like a file. You refer to objects using a handle, just as with buckets, but unlike buckets you don't explicitly create an object. Instead, the first time you write to an object it will be created. You can use the standard Go io.Reader and io.Writer interfaces to read and write object data:

obj := bkt.Object("data")
// Write something to obj.
// w implements io.Writer.
w := obj.NewWriter(ctx)
// Write some text to obj. This will either create the object or overwrite whatever is there already.
if _, err := fmt.Fprintf(w, "This object contains text.\n"); err != nil {
    // TODO: Handle error.
}
// Close, just like writing a file.
if err := w.Close(); err != nil {
    // TODO: Handle error.
}

// Read it back.
r, err := obj.NewReader(ctx)
if err != nil {
    // TODO: Handle error.
}
defer r.Close()
if _, err := io.Copy(os.Stdout, r); err != nil {
    // TODO: Handle error.
}
// Prints "This object contains text."

Objects also have attributes, which you can fetch with ObjectHandle.Attrs:

objAttrs, err := obj.Attrs(ctx)
if err != nil {
    // TODO: Handle error.
}
fmt.Printf("object %s has size %d and can be read using %s\n",
    objAttrs.Name, objAttrs.Size, objAttrs.MediaLink)

Listing objects

Listing objects in a bucket is done with the BucketHandle.Objects method:

query := &storage.Query{Prefix: ""}

var names []string
it := bkt.Objects(ctx, query)
for {
    attrs, err := it.Next()
    if err == iterator.Done {
        break
    }
    if err != nil {
        log.Fatal(err)
    }
    names = append(names, attrs.Name)
}

Objects are listed lexicographically by name. To filter objects lexicographically, Query.StartOffset and/or Query.EndOffset can be used:

query := &storage.Query{
    Prefix:      "",
    StartOffset: "bar/", // Only list objects lexicographically >= "bar/"
    EndOffset:   "foo/", // Only list objects lexicographically < "foo/"
}

// ... as before

If only a subset of object attributes is needed when listing, specifying this subset using Query.SetAttrSelection may speed up the listing process:

query := &storage.Query{Prefix: ""}
query.SetAttrSelection([]string{"Name"})

// ... as before

ACLs

Both objects and buckets have ACLs (Access Control Lists). An ACL is a list of ACLRules, each of which specifies the role of a user, group or project. ACLs are suitable for fine-grained control, but you may prefer using IAM to control access at the project level (see the Cloud Storage IAM docs).

To list the ACLs of a bucket or object, obtain an ACLHandle and call ACLHandle.List:

acls, err := obj.ACL().List(ctx)
if err != nil {
    // TODO: Handle error.
}
for _, rule := range acls {
    fmt.Printf("%s has role %s\n", rule.Entity, rule.Role)
}

You can also set and delete ACLs.
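
For example, a minimal sketch of granting public read access on an object handle obj and later revoking it (names are placeholders):

// Grant read access to everyone on the Internet.
if err := obj.ACL().Set(ctx, storage.AllUsers, storage.RoleReader); err != nil {
    // TODO: Handle error.
}
// Later, revoke that grant.
if err := obj.ACL().Delete(ctx, storage.AllUsers); err != nil {
    // TODO: Handle error.
}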

Conditions

Every object has a generation and a metageneration. The generation changes whenever the content changes, and the metageneration changes whenever the metadata changes. Conditions let you check these values before an operation; the operation only executes if the conditions match. You can use conditions to prevent race conditions in read-modify-write operations.

For example, say you've read an object's metadata into objAttrs. Now you want to write to that object, but only if its contents haven't changed since you read it. Here is how to express that:

w = obj.If(storage.Conditions{GenerationMatch: objAttrs.Generation}).NewWriter(ctx)
// Proceed with writing as above.

Signed URLs

You can obtain a URL that lets anyone read or write an object for a limited time. Signing a URL requires credentials authorized to sign a URL. To use the same authentication that was used when instantiating the Storage client, use BucketHandle.SignedURL.

url, err := client.Bucket(bucketName).SignedURL(objectName, opts)
if err != nil {
    // TODO: Handle error.
}
fmt.Println(url)

You can also sign a URL without creating a client. See the documentation of SignedURL for details.

url, err := storage.SignedURL(bucketName, "shared-object", opts)
if err != nil {
    // TODO: Handle error.
}
fmt.Println(url)

Post Policy V4 Signed Request

A Post Policy V4 signed request is a type of signed request that allows uploads through HTML forms directly to Cloud Storage with temporary permission. Conditions can be applied to restrict how the HTML form is used and exercised by a user.

For more information, please see the XML POST Object docs as well as the documentation of BucketHandle.GenerateSignedPostPolicyV4.

pv4, err := client.Bucket(bucketName).GenerateSignedPostPolicyV4(objectName, opts)
if err != nil {
    // TODO: Handle error.
}
fmt.Printf("URL: %s\nFields: %v\n", pv4.URL, pv4.Fields)

Credential requirements for signing

If the GoogleAccessID and PrivateKey option fields are not provided, they will be automatically detected by BucketHandle.SignedURL and BucketHandle.GenerateSignedPostPolicyV4 if any of the following are true:

Detecting GoogleAccessID may not be possible if you are authenticated using a token source or using option.WithHTTPClient. In this case, you can provide a service account email for GoogleAccessID and the client will attempt to sign the URL or Post Policy using that service account.

To generate the signature, you must have:

Errors

Errors returned by this client are often of the type github.com/googleapis/gax-go/v2/apierror. The apierror.APIError type can wrap a google.golang.org/grpc/status.Status if gRPC was used, or a google.golang.org/api/googleapi.Error if HTTP/REST was used. You might also encounter googleapi.Error directly from HTTP operations. These types of errors can be inspected for more information by using errors.As to access the specific underlying error types and retrieve detailed information, including HTTP or gRPC status codes. For example:

// APIErrors often wrap a googleapi.Error (for JSON and XML calls) or a status.Status (for gRPC calls)
var ae *apierror.APIError
if ok := errors.As(err, &ae); ok {
    // ae.HTTPCode() is the HTTP status code.
    // ae.GRPCStatus().Code() is the gRPC status code.
    log.Printf("APIError: HTTPCode: %d, GRPCStatusCode: %s", ae.HTTPCode(), ae.GRPCStatus().Code())

    if ae.GRPCStatus().Code() == codes.Unavailable {
        // ... handle gRPC unavailable ...
    }
}

// This allows a user to get more information directly from googleapi.Errors (for JSON/XML calls)
var e *googleapi.Error
if ok := errors.As(err, &e); ok {
    // e.Code is the HTTP status code.
    // e.Message is the error message.
    // e.Body is the raw response body.
    // e.Header contains the HTTP response headers.
    log.Printf("HTTP Code: %d, Message: %s", e.Code, e.Message)

    if e.Code == 409 {
        // ... handle conflict ...
    }
}

This library may also return other errors that are not wrapped as apierror.APIError. For example, errors with authentication may return cloud.google.com/go/auth.Error.

Retrying failed requests

Methods in this package may retry calls that fail with transient errors. Retrying continues indefinitely unless the controlling context is canceled, the client is closed, or a non-transient error is received. To stop retries from continuing, use context timeouts or cancellation.

The retry strategy in this library follows best practices for Cloud Storage. By default, operations are retried only if they are idempotent, and exponential backoff with jitter is employed. In addition, errors are only retried if they are defined as transient by the service. See the Cloud Storage retry docs for more information.

Users can configure non-default retry behavior for a single library call (using BucketHandle.Retryer and ObjectHandle.Retryer) or for all calls made by a client (using Client.SetRetry). For example:

o := client.Bucket(bucket).Object(object).Retryer(
    // Use WithBackoff to change the timing of the exponential backoff.
    storage.WithBackoff(gax.Backoff{
        Initial: 2 * time.Second,
    }),
    // Use WithPolicy to configure the idempotency policy. RetryAlways will
    // retry the operation even if it is non-idempotent.
    storage.WithPolicy(storage.RetryAlways),
)

// Use a context timeout to set an overall deadline on the call, including all
// potential retries.
ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()

// Delete an object using the specified strategy and timeout.
if err := o.Delete(ctx); err != nil {
    // Handle err.
}

You can add custom headers to any API call made by this package by using callctx.SetHeaders on the context which is passed to the method. For example, to add a custom audit logging header:

ctx := context.Background()
ctx = callctx.SetHeaders(ctx, "x-goog-custom-audit-<key>", "<value>")
// Use client as usual with the context and the additional headers will be sent.
client.Bucket("my-bucket").Attrs(ctx)

gRPC API

This package includes support for the Cloud Storage gRPC API. This implementation uses gRPC rather than the default JSON & XML APIs to make requests to Cloud Storage. All methods on the Client support the gRPC API, with the exception of the Client.ServiceAccount, Notification, and HMACKey methods.

The Cloud Storage gRPC API is generally available.

To create a client which will use gRPC, use the alternate constructor:

ctx := context.Background()
client, err := storage.NewGRPCClient(ctx)
if err != nil {
    // TODO: Handle error.
}
// Use client as usual.

One major advantage of the gRPC API is that it can use Direct Connectivity, enabling requests to skip some proxy steps and reducing response latency. Requirements to use Direct Connectivity include:

Additional requirements for Direct Connectivity are documented in the Cloud Storage gRPC docs. If all requirements are met, the client will use Direct Connectivity by default without requiring any client options or environment variables. To disable Direct Connectivity, you can set the environment variable GOOGLE_CLOUD_DISABLE_DIRECT_PATH=true.

Dependencies for the gRPC API may slightly increase the size of binaries for applications depending on this package. If you are not using gRPC, you can use the build tag `disable_grpc_modules` to opt out of these dependencies and reduce the binary size.
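
For example, with a standard Go toolchain build, the tag can be passed on the command line:

go build -tags disable_grpc_modules ./...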

The gRPC client is instrumented with OpenTelemetry metrics which export to Cloud Monitoring by default. More information is available in the gRPC client-side metrics documentation, including information about roles which must be enabled in order to do the export successfully. To disable this export, you can use the WithDisabledClientMetrics client option.

Storage Control API

Certain control plane and long-running operations for Cloud Storage (including Folder and Managed Folder operations) are supported via the autogenerated Storage Control client, which is available as a subpackage in this module. See package docs at cloud.google.com/go/storage/control/apiv2 or reference the Storage Control API docs.


const (
    DeleteAction             = "Delete"
    SetStorageClassAction    = "SetStorageClass"
    AbortIncompleteMPUAction = "AbortIncompleteMultipartUpload"
)


const (
    NoPayload   = "NONE"
    JSONPayload = "JSON_API_V1"
)

Values for Notification.PayloadFormat.


const (
    ObjectFinalizeEvent       = "OBJECT_FINALIZE"
    ObjectMetadataUpdateEvent = "OBJECT_METADATA_UPDATE"
    ObjectDeleteEvent         = "OBJECT_DELETE"
    ObjectArchiveEvent        = "OBJECT_ARCHIVE"
)

Values for Notification.EventTypes.


var (
    ErrBucketNotExist = errors.New("storage: bucket doesn't exist")
    ErrObjectNotExist = errors.New("storage: object doesn't exist")
)

CheckDirectConnectivitySupported checks if gRPC direct connectivity is available for a specific bucket from the environment where the client is running. A nil error means Direct Connectivity was detected. Direct connectivity is expected to be available when running from inside GCP and connecting to a bucket in the same region.

Experimental helper that's subject to change.

You can pass in any option.ClientOption values that you plan to pass to NewGRPCClient.

ShouldRetry returns true if an error is retryable, based on best practice guidance from GCS. See https://cloud.google.com/storage/docs/retry-strategy#go for more information on what errors are considered retryable.

If you would like to customize retryable errors, use the WithErrorFunc to supply a RetryOption to your library calls. For example, to retry additional errors, you can write a custom func that wraps ShouldRetry and also specifies additional errors that should return true.
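
As a sketch of that pattern (errMyTransient is a hypothetical error value standing in for whatever extra condition you want to retry):

// Retry the service-defined transient errors plus one custom error.
retryOn := func(err error) bool {
    if storage.ShouldRetry(err) {
        return true
    }
    return errors.Is(err, errMyTransient) // errMyTransient is a placeholder.
}
o := client.Bucket("my-bucket").Object("my-object").Retryer(storage.WithErrorFunc(retryOn))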

SignedURL returns a URL for the specified object. Signed URLs allow anyone access to a restricted resource for a limited time without needing a Google account or signing in. For more information about signed URLs, see https://cloud.google.com/storage/docs/accesscontrol#signed_urls_query_string_authentication. If initializing a Storage Client, instead use the Bucket.SignedURL method, which uses the Client's credentials to handle authentication.

package main

import (
    "fmt"
    "os"
    "time"

    "cloud.google.com/go/storage"
)

func main() {
    pkey, err := os.ReadFile("my-private-key.pem")
    if err != nil {
        // TODO: handle error.
    }
    url, err := storage.SignedURL("my-bucket", "my-object", &storage.SignedURLOptions{
        GoogleAccessID: "xxx@developer.gserviceaccount.com",
        PrivateKey:     pkey,
        Method:         "GET",
        Expires:        time.Now().Add(48 * time.Hour),
    })
    if err != nil {
        // TODO: handle error.
    }
    fmt.Println(url)
}

WithDisabledClientMetrics is an option that may be passed to NewClient. gRPC metrics are enabled by default in the GCS client and will export the gRPC telemetry discussed in gRFC/66 and gRFC/78 to Google Cloud Monitoring. This option disables those metrics. Google Cloud Support can use this information to more quickly diagnose problems related to GCS and gRPC. Sending this data does not incur any billing charges, and requires minimal CPU (a single RPC every few minutes) or memory (a few KiB to batch the telemetry).

The default is to enable client metrics. To opt out of metrics collection, use this option.

WithJSONReads is an option that may be passed to NewClient. It sets the client to use the Cloud Storage JSON API for object reads. Currently, the default API used for reads is XML, but JSON will become the default in a future release.

Setting this option is required to use the GenerationNotMatch condition. We also recommend using JSON reads to ensure consistency with other client operations (all of which use JSON by default).

Note that when this option is set, reads will return a zero date for ReaderObjectAttrs.LastModified and may return a different value for ReaderObjectAttrs.CacheControl.
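
For example, to opt in to JSON reads when creating a client:

client, err := storage.NewClient(ctx, storage.WithJSONReads())
if err != nil {
    // TODO: Handle error.
}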

WithXMLReads is an option that may be passed to NewClient. It sets the client to use the Cloud Storage XML API for object reads.

This is the current default, but the default will switch to JSON in a future release.

ACLEntity refers to a user or group. They are sometimes referred to as grantees.

It could be in the form of: "user-<userId>", "user-<email>", "group-<groupId>", "group-<email>", "domain-<domain>" and "project-team-<projectId>".

Or one of the predefined constants: AllUsers, AllAuthenticatedUsers.

const (
    AllUsers              ACLEntity = "allUsers"
    AllAuthenticatedUsers ACLEntity = "allAuthenticatedUsers"
)

type ACLHandle

type ACLHandle struct {
    // contains filtered or unexported fields
}

ACLHandle provides operations on an access control list for a Google Cloud Storage bucket or object. ACLHandle on an object operates on the latest generation of that object by default. Selecting a specific generation of an object is not currently supported by the client.

func (*ACLHandle) Delete

Delete permanently deletes the ACL entry for the given entity.

package main

import (
    "context"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    // No longer grant access to the bucket to everyone on the Internet.
    if err := client.Bucket("my-bucket").ACL().Delete(ctx, storage.AllUsers); err != nil {
        // TODO: handle error.
    }
}

func (*ACLHandle) List

List retrieves ACL entries.

package main

import (
    "context"
    "fmt"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    // List the default object ACLs for my-bucket.
    aclRules, err := client.Bucket("my-bucket").DefaultObjectACL().List(ctx)
    if err != nil {
        // TODO: handle error.
    }
    fmt.Println(aclRules)
}

func (*ACLHandle) Set

Set sets the role for the given entity.

package main

import (
    "context"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    // Let any authenticated user read my-bucket/my-object.
    obj := client.Bucket("my-bucket").Object("my-object")
    if err := obj.ACL().Set(ctx, storage.AllAuthenticatedUsers, storage.RoleReader); err != nil {
        // TODO: handle error.
    }
}

ACLRole is the level of access to grant.

const (
    RoleOwner  ACLRole = "OWNER"
    RoleReader ACLRole = "READER"
    RoleWriter ACLRole = "WRITER"
)

ACLRule represents a grant for a role to an entity (user, group or team) for a Google Cloud Storage object or bucket.

type AppendableWriterOpts struct {
    ChunkSize          int
    ChunkRetryDeadline time.Duration
    ProgressFunc       func(int64)
    FinalizeOnClose    bool
}

AppendableWriterOpts provides options to set on a Writer initialized by NewWriterFromAppendableObject. Writer options must be set via this struct rather than being modified on the returned Writer. All Writer fields not present in this struct cannot be set when taking over an appendable object.

AppendableWriterOpts is supported only for gRPC clients and only for objects which were created with append semantics and not finalized. This feature is in preview and is not yet available for general use.

BucketAttrs represents the metadata for a Google Cloud Storage bucket. Read-only fields are ignored by BucketHandle.Create.

type BucketAttrsToUpdate struct {
    VersioningEnabled          optional.Bool
    RequesterPays              optional.Bool
    DefaultEventBasedHold      optional.Bool
    BucketPolicyOnly           *BucketPolicyOnly
    UniformBucketLevelAccess   *UniformBucketLevelAccess
    PublicAccessPrevention     PublicAccessPrevention
    StorageClass               string
    RetentionPolicy            *RetentionPolicy
    CORS                       []CORS
    Encryption                 *BucketEncryption
    Lifecycle                  *Lifecycle
    Logging                    *BucketLogging
    Website                    *BucketWebsite
    PredefinedACL              string
    PredefinedDefaultObjectACL string
    RPO                        RPO
    Autoclass                  *Autoclass
    SoftDeletePolicy           *SoftDeletePolicy
}

BucketAttrsToUpdate define the attributes to update during an Update call.

func (ua *BucketAttrsToUpdate) DeleteLabel(name string)

DeleteLabel causes a label to be deleted when ua is used in a call to Bucket.Update.

func (ua *BucketAttrsToUpdate) SetLabel(name, value string)

SetLabel causes a label to be added or modified when ua is used in a call to Bucket.Update.

type BucketConditions struct {
    MetagenerationMatch    int64
    MetagenerationNotMatch int64
}

BucketConditions constrain bucket methods to act on specific metagenerations.

The zero value is an empty set of constraints.

type BucketEncryption struct {
    DefaultKMSKeyName string
}

BucketEncryption is a bucket's encryption configuration.

type BucketHandle

type BucketHandle struct {
    // contains filtered or unexported fields
}

BucketHandle provides operations on a Google Cloud Storage bucket. Use Client.Bucket to get a handle.

package main

import (
    "context"
    "errors"
    "fmt"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }

    attrs, err := client.Bucket("my-bucket").Attrs(ctx)
    if errors.Is(err, storage.ErrBucketNotExist) {
        fmt.Println("The bucket does not exist")
        return
    }
    if err != nil {
        // TODO: handle error.
    }
    fmt.Printf("The bucket exists and has attributes: %#v\n", attrs)
}

func (*BucketHandle) ACL

func (b *BucketHandle) ACL() *ACLHandle

ACL returns an ACLHandle, which provides access to the bucket's access control list. This controls who can list, create or overwrite the objects in a bucket. This call does not perform any network operations.

func (*BucketHandle) AddNotification

AddNotification adds a notification to b. You must set n's TopicProjectID, TopicID and PayloadFormat, and must not set its ID. The other fields are all optional. The returned Notification's ID can be used to refer to it. Note: gRPC is not supported.

package main

import (
    "context"
    "fmt"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    b := client.Bucket("my-bucket")
    n, err := b.AddNotification(ctx, &storage.Notification{
        TopicProjectID: "my-project",
        TopicID:        "my-topic",
        PayloadFormat:  storage.JSONPayload,
    })
    if err != nil {
        // TODO: handle error.
    }
    fmt.Println(n.ID)
}

func (*BucketHandle) Attrs

Attrs returns the metadata for the bucket.

package main

import (
    "context"
    "fmt"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    attrs, err := client.Bucket("my-bucket").Attrs(ctx)
    if err != nil {
        // TODO: handle error.
    }
    fmt.Println(attrs)
}

func (*BucketHandle) BucketName added in v1.42.0

BucketName returns the name of the bucket.

func (*BucketHandle) Create

Create creates the Bucket in the project. If attrs is nil the API defaults will be used.

package main

import (
    "context"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    if err := client.Bucket("my-bucket").Create(ctx, "my-project", nil); err != nil {
        // TODO: handle error.
    }
}

func (*BucketHandle) DefaultObjectACL

func (b *BucketHandle) DefaultObjectACL() *ACLHandle

DefaultObjectACL returns an ACLHandle, which provides access to the bucket's default object ACLs. These ACLs are applied to newly created objects in this bucket that do not have a defined ACL. This call does not perform any network operations.

func (*BucketHandle) Delete

Delete deletes the Bucket.

package main

import (
    "context"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    if err := client.Bucket("my-bucket").Delete(ctx); err != nil {
        // TODO: handle error.
    }
}

func (*BucketHandle) DeleteNotification

DeleteNotification deletes the notification with the given ID. Note: gRPC is not supported.

package main

import (
    "context"

    "cloud.google.com/go/storage"
)

var notificationID string

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    b := client.Bucket("my-bucket")
    // TODO: Obtain notificationID from BucketHandle.AddNotification
    // or BucketHandle.Notifications.
    err = b.DeleteNotification(ctx, notificationID)
    if err != nil {
        // TODO: handle error.
    }
}

func (*BucketHandle) GenerateSignedPostPolicyV4 added in v1.19.0

GenerateSignedPostPolicyV4 generates a PostPolicyV4 value from bucket, object and opts. The generated URL and fields will then allow an unauthenticated client to perform multipart uploads.

This method requires the Expires field in the specified PostPolicyV4Options to be non-nil. You may need to set the GoogleAccessID and PrivateKey fields in some cases. Read more on the automatic detection of credentials for this method.

To allow the unauthenticated client to upload to any object name in the bucket with a given prefix rather than a specific object name, you can pass an empty string for object and set PostPolicyV4Options.Conditions to include ConditionStartsWith("$key", "prefix").
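
A minimal sketch of that prefix-based policy, with placeholder bucket and prefix names:

opts := &storage.PostPolicyV4Options{
    Expires: time.Now().Add(15 * time.Minute),
    Conditions: []storage.PostPolicyV4Condition{
        // "uploads/" is a placeholder prefix; any object name starting with it is allowed.
        storage.ConditionStartsWith("$key", "uploads/"),
    },
}
// Pass an empty string for the object name to allow any name with the prefix.
pv4, err := client.Bucket("my-bucket").GenerateSignedPostPolicyV4("", opts)
if err != nil {
    // TODO: Handle error.
}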

func (*BucketHandle) IAM

IAM provides access to IAM access control for the bucket.
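
For example, a sketch of reading the bucket's IAM policy through the returned handle (the policy type comes from cloud.google.com/go/iam):

policy, err := client.Bucket("my-bucket").IAM().Policy(ctx)
if err != nil {
    // TODO: Handle error.
}
for _, role := range policy.Roles() {
    fmt.Println(role, policy.Members(role))
}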

func (*BucketHandle) If

func (b *BucketHandle) If(conds BucketConditions) *BucketHandle

If returns a new BucketHandle that applies a set of preconditions. Preconditions already set on the BucketHandle are ignored. The supplied BucketConditions must have exactly one field set to a non-zero value; otherwise an error will be returned from any operation on the BucketHandle. Operations on the new handle will return an error if the preconditions are not satisfied. The only valid preconditions for buckets are MetagenerationMatch and MetagenerationNotMatch.

func (*BucketHandle) LockRetentionPolicy

LockRetentionPolicy locks a bucket's retention policy until a previously-configured RetentionPeriod past the EffectiveTime. Note that if RetentionPeriod is set to less than a day, the retention policy is treated as a development configuration and locking will have no effect. The BucketHandle must have a metageneration condition that matches the bucket's metageneration. See BucketHandle.If.

This feature is in private alpha release. It is not currently available to most customers. It might be changed in backwards-incompatible ways and is not subject to any SLA or deprecation policy.

package main

import (
    "context"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    b := client.Bucket("my-bucket")
    attrs, err := b.Attrs(ctx)
    if err != nil {
        // TODO: handle error.
    }
    // Note that locking the bucket without first attaching a RetentionPolicy
    // that's at least 1 day is a no-op.
    err = b.If(storage.BucketConditions{MetagenerationMatch: attrs.MetaGeneration}).LockRetentionPolicy(ctx)
    if err != nil {
        // TODO: handle err.
    }
}

func (*BucketHandle) Notifications

Notifications returns all the Notifications configured for this bucket, as a map indexed by notification ID. Note: gRPC is not supported.

package main

import (
    "context"
    "fmt"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    b := client.Bucket("my-bucket")
    ns, err := b.Notifications(ctx)
    if err != nil {
        // TODO: handle error.
    }
    for id, n := range ns {
        fmt.Printf("%s: %+v\n", id, n)
    }
}

func (*BucketHandle) Object

Object returns an ObjectHandle, which provides operations on the named object. This call does not perform any network operations such as fetching the object or verifying its existence. Use methods on ObjectHandle to perform network operations.

name must consist entirely of valid UTF-8-encoded runes. The full specification for valid object names can be found at:

https://cloud.google.com/storage/docs/naming-objects

func (*BucketHandle) Objects

Objects returns an iterator over the objects in the bucket that match the Query q. If q is nil, no filtering is done. Objects will be iterated over lexicographically by name.

Note: The returned iterator is not safe for concurrent operations without explicit synchronization.

package main

import (
    "context"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    it := client.Bucket("my-bucket").Objects(ctx, nil)
    _ = it // TODO: iterate using Next or iterator.Pager.
}

func (*BucketHandle) Retryer added in v1.19.0

func (b *BucketHandle) Retryer(opts ...RetryOption) *BucketHandle

Retryer returns a bucket handle that is configured with custom retry behavior as specified by the options that are passed to it. All operations on the new handle will use the customized retry configuration. Retry options set on an object handle will take precedence over options set on the bucket handle. These retry options will merge with the client's retry configuration (if set) for the returned handle. Options passed into this method will take precedence over retry options on the client. Note that you must explicitly pass in each option you want to override.

func (*BucketHandle) SetObjectRetention added in v1.36.0

func (b *BucketHandle) SetObjectRetention(enable bool) *BucketHandle

SetObjectRetention returns a new BucketHandle that will enable object retention on bucket creation. To enable object retention, you must use the returned handle to create the bucket. This has no effect on an already existing bucket. ObjectRetention is not enabled by default. ObjectRetention cannot be configured through the gRPC API.
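
A sketch of enabling object retention at bucket creation, with placeholder names:

b := client.Bucket("my-bucket").SetObjectRetention(true)
if err := b.Create(ctx, "my-project", nil); err != nil {
    // TODO: Handle error.
}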

func (*BucketHandle) SignedURL added in v1.18.0

SignedURL returns a URL for the specified object. Signed URLs allow anyone access to a restricted resource for a limited time without needing a Google account or signing in. For more information about signed URLs, see "Overview of access control."

This method requires the Method and Expires fields in the specified SignedURLOptions to be non-nil. You may need to set the GoogleAccessID and PrivateKey fields in some cases. Read more on the automatic detection of credentials for this method.

func (*BucketHandle) Update

Update updates a bucket's attributes.

package main

import (
    "context"
    "fmt"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    // Enable versioning in the bucket, regardless of its previous value.
    attrs, err := client.Bucket("my-bucket").Update(ctx,
        storage.BucketAttrsToUpdate{VersioningEnabled: true})
    if err != nil {
        // TODO: handle error.
    }
    fmt.Println(attrs)
}

If your update is based on the bucket's previous attributes, match the metageneration number to make sure the bucket hasn't changed since you read it.

package main

import (
    "context"
    "fmt"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    b := client.Bucket("my-bucket")
    attrs, err := b.Attrs(ctx)
    if err != nil {
        // TODO: handle error.
    }
    var au storage.BucketAttrsToUpdate
    au.SetLabel("lab", attrs.Labels["lab"]+"-more")
    if attrs.Labels["delete-me"] == "yes" {
        au.DeleteLabel("delete-me")
    }
    attrs, err = b.
        If(storage.BucketConditions{MetagenerationMatch: attrs.MetaGeneration}).
        Update(ctx, au)
    if err != nil {
        // TODO: handle error.
    }
    fmt.Println(attrs)
}

func (*BucketHandle) UserProject

func (b *BucketHandle) UserProject(projectID string) *BucketHandle

UserProject returns a new BucketHandle that passes the project ID as the user project for all subsequent calls. Calls with a user project will be billed to that project rather than to the bucket's owning project.

A user project is required for all operations on Requester Pays buckets.
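
For example, to bill operations on a Requester Pays bucket to a different project (names are placeholders):

bkt := client.Bucket("requester-pays-bucket").UserProject("my-billing-project")
attrs, err := bkt.Attrs(ctx)
if err != nil {
    // TODO: Handle error.
}
_ = attrs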

type BucketIterator struct {
    Prefix string

    ReturnPartialSuccess bool
    // contains filtered or unexported fields
}

A BucketIterator is an iterator over BucketAttrs.

Note: This iterator is not safe for concurrent operations without explicit synchronization.

Next returns the next result. Its second return value is iterator.Done if there are no more results. Once Next returns iterator.Done, all subsequent calls will return iterator.Done.

Note: This method is not safe for concurrent operations without explicit synchronization.

package main

import (
    "context"
    "fmt"

    "cloud.google.com/go/storage"
    "google.golang.org/api/iterator"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    it := client.Buckets(ctx, "my-project")
    for {
        bucketAttrs, err := it.Next()
        if err == iterator.Done {
            break
        }
        if err != nil {
            // TODO: Handle error.
        }
        fmt.Println(bucketAttrs)
    }
}

PageInfo supports pagination. See the google.golang.org/api/iterator package for details.

Note: This method is not safe for concurrent operations without explicit synchronization.

Unreachable returns a list of bucket names that could not be reached during the iteration if ReturnPartialSuccess was set to true.
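
A sketch of that flow (the project name is a placeholder):

it := client.Buckets(ctx, "my-project")
it.ReturnPartialSuccess = true
for {
    bucketAttrs, err := it.Next()
    if err == iterator.Done {
        break
    }
    if err != nil {
        // TODO: Handle error.
    }
    _ = bucketAttrs // TODO: Use the bucket.
}
fmt.Println("unreachable buckets:", it.Unreachable())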

type BucketLogging struct {
    LogBucket       string
    LogObjectPrefix string
}

BucketLogging holds the bucket's logging configuration, which defines the destination bucket and optional name prefix for the current bucket's logs.

type BucketPolicyOnly struct {
    Enabled    bool
    LockedTime time.Time
}

BucketPolicyOnly is an alias for UniformBucketLevelAccess. Use of UniformBucketLevelAccess is preferred over BucketPolicyOnly.

CORS is the bucket's Cross-Origin Resource Sharing (CORS) configuration.

Client is a client for interacting with Google Cloud Storage.

Clients should be reused instead of created as needed. The methods of Client are safe for concurrent use by multiple goroutines.

NewClient creates a new Google Cloud Storage client using the HTTP transport. The default scope is ScopeFullControl. To use a different scope, like ScopeReadOnly, use option.WithScopes.

Clients should be reused instead of created as needed. The methods of Client are safe for concurrent use by multiple goroutines.

You may configure the client by passing in options from the google.golang.org/api/option package. You may also use options defined in this package, such as WithJSONReads.

package main

import (
    "context"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    // Use Google Application Default Credentials to authorize and authenticate the client.
    // More information about Application Default Credentials and how to enable is at
    // https://developers.google.com/identity/protocols/application-default-credentials.
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    // Use the client.

    // Close the client when finished.
    if err := client.Close(); err != nil {
        // TODO: handle error.
    }
}

This example shows how to create an unauthenticated client, which can be used to access public data.

package main

import (
    "context"

    "cloud.google.com/go/storage"
    "google.golang.org/api/option"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx, option.WithoutAuthentication())
    if err != nil {
        // TODO: handle error.
    }
    // Use the client.

    // Close the client when finished.
    if err := client.Close(); err != nil {
        // TODO: handle error.
    }
}

NewGRPCClient creates a new Storage client using the gRPC transport and API. Client methods which have not been implemented in gRPC will return an error. In particular, methods for Cloud Pub/Sub notifications, Service Account HMAC keys, and ServiceAccount are not supported. Using a non-default universe domain is also not supported with the Storage gRPC client.

Clients should be reused instead of created as needed. The methods of Client are safe for concurrent use by multiple goroutines.

You may configure the client by passing in options from the google.golang.org/api/option package.

Bucket returns a BucketHandle, which provides operations on the named bucket. This call does not perform any network operations.

The supplied name must contain only lowercase letters, numbers, dashes, underscores, and dots. The full specification for valid bucket names can be found at:

https://cloud.google.com/storage/docs/bucket-naming

Buckets returns an iterator over the buckets in the project. You may optionally set the iterator's Prefix field to restrict the list to buckets whose names begin with the prefix. By default, all buckets in the project are returned.

To receive a partial list of buckets when some are unavailable, set the iterator's ReturnPartialSuccess field to true. You can then call the iterator's Unreachable method to retrieve the names of the unreachable buckets.

Note: The returned iterator is not safe for concurrent operations without explicit synchronization.

package main

import (
    "context"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    it := client.Buckets(ctx, "my-project")
    _ = it // TODO: iterate using Next or iterator.Pager.
}

Close closes the Client.

Close need not be called at program exit.

CreateHMACKey invokes an RPC for Google Cloud Storage to create a new HMACKey. Note: gRPC is not supported.

package main

import (
    "context"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }

    hkey, err := client.CreateHMACKey(ctx, "project-id", "service-account-email")
    if err != nil {
        // TODO: handle error.
    }
    _ = hkey // TODO: Use the HMAC Key.
}

func (*Client) HMACKeyHandle

func (c *Client) HMACKeyHandle(projectID, accessID string) *HMACKeyHandle

HMACKeyHandle creates a handle that will be used for HMACKey operations.

ListHMACKeys returns an iterator for listing HMACKeys.

Note: This iterator is not safe for concurrent operations without explicit synchronization. Note: gRPC is not supported.

package main

import (
    "context"

    "cloud.google.com/go/storage"
    "google.golang.org/api/iterator"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }

    iter := client.ListHMACKeys(ctx, "project-id")
    for {
        key, err := iter.Next()
        if err == iterator.Done {
            break
        }
        if err != nil {
            // TODO: handle error.
        }
        _ = key // TODO: Use the key.
    }
}

package main

import (
    "context"

    "cloud.google.com/go/storage"
    "google.golang.org/api/iterator"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }

    iter := client.ListHMACKeys(ctx, "project-id", storage.ForHMACKeyServiceAccountEmail("service@account.email"))
    for {
        key, err := iter.Next()
        if err == iterator.Done {
            break
        }
        if err != nil {
            // TODO: handle error.
        }
        _ = key // TODO: Use the key.
    }
}

package main

import (
    "context"

    "cloud.google.com/go/storage"
    "google.golang.org/api/iterator"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }

    iter := client.ListHMACKeys(ctx, "project-id", storage.ShowDeletedHMACKeys())
    for {
        key, err := iter.Next()
        if err == iterator.Done {
            break
        }
        if err != nil {
            // TODO: handle error.
        }
        _ = key // TODO: Use the key.
    }
}

ServiceAccount fetches the email address of the given project's Google Cloud Storage service account. Note: gRPC is not supported.

func (c *Client) SetRetry(opts ...RetryOption)

SetRetry configures the client with custom retry behavior as specified by the options that are passed to it. All operations using this client will use the customized retry configuration. This should be called once before using the client for network operations, as there could be indeterminate behaviour with operations in progress. Retry options set on a bucket or object handle will take precedence over these options.
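
For example, a sketch of configuring client-wide retries right after client creation (gax is github.com/googleapis/gax-go/v2):

client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: Handle error.
}
client.SetRetry(
    // Use WithBackoff to change the timing of the exponential backoff.
    storage.WithBackoff(gax.Backoff{Initial: 2 * time.Second}),
    // Use WithPolicy to retry all operations, even non-idempotent ones.
    storage.WithPolicy(storage.RetryAlways),
)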

type Composer struct {
    ObjectAttrs

    SendCRC32C bool
    // contains filtered or unexported fields
}

A Composer composes source objects into a destination object.

For Requester Pays buckets, the user project of dst is billed.

Run performs the compose operation.

package main

import (
    "context"
    "fmt"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    bkt := client.Bucket("bucketname")
    src1 := bkt.Object("o1")
    src2 := bkt.Object("o2")
    dst := bkt.Object("o3")

    // Compose and modify metadata.
    c := dst.ComposerFrom(src1, src2)
    c.ContentType = "text/plain"

    // Set the expected checksum for the destination object to be validated by
    // the backend (if desired).
    c.CRC32C = 42
    c.SendCRC32C = true

    attrs, err := c.Run(ctx)
    if err != nil {
        // TODO: Handle error.
    }
    fmt.Println(attrs)
    // Just compose.
    attrs, err = dst.ComposerFrom(src1, src2).Run(ctx)
    if err != nil {
        // TODO: Handle error.
    }
    fmt.Println(attrs)
}

type Conditions struct {
    GenerationMatch        int64
    GenerationNotMatch     int64
    DoesNotExist           bool
    MetagenerationMatch    int64
    MetagenerationNotMatch int64
}

Conditions constrain methods to act on specific generations of objects.

The zero value is an empty set of constraints. Not all conditions or combinations of conditions are applicable to all methods. See https://cloud.google.com/storage/docs/generations-preconditions for details on how these operate.

type Copier struct {
    ObjectAttrs

    RewriteToken string

    ProgressFunc func(copiedBytes, totalBytes uint64)

    DestinationKMSKeyName string
    // contains filtered or unexported fields
}

A Copier copies a source object to a destination.

Run performs the copy.

package main

import (
    "context"
    "fmt"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    src := client.Bucket("bucketname").Object("file1")
    dst := client.Bucket("another-bucketname").Object("file2")

    // Copy content and modify metadata.
    copier := dst.CopierFrom(src)
    copier.ContentType = "text/plain"
    attrs, err := copier.Run(ctx)
    if err != nil {
        // TODO: Handle error, possibly resuming with copier.RewriteToken.
    }
    fmt.Println(attrs)

    // Just copy content.
    attrs, err = dst.CopierFrom(src).Run(ctx)
    if err != nil {
        // TODO: Handle error. No way to resume.
    }
    fmt.Println(attrs)
}

package main

import (
    "context"
    "log"

    "cloud.google.com/go/storage"
)

func main() {
    // Display progress across multiple rewrite RPCs.
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    src := client.Bucket("bucketname").Object("file1")
    dst := client.Bucket("another-bucketname").Object("file2")

    copier := dst.CopierFrom(src)
    copier.ProgressFunc = func(copiedBytes, totalBytes uint64) {
        log.Printf("copy %.1f%% done", float64(copiedBytes)/float64(totalBytes)*100)
    }
    if _, err := copier.Run(ctx); err != nil {
        // TODO: handle error.
    }
}

type HMACKeyAttrsToUpdate struct {
    State HMACState
    Etag  string
}

HMACKeyAttrsToUpdate defines the attributes of an HMACKey that will be updated.

type HMACKeyHandle

type HMACKeyHandle struct {
    // contains filtered or unexported fields
}

HMACKeyHandle helps provide access and management for HMAC keys.

func (*HMACKeyHandle) Delete

Delete invokes an RPC to delete the key referenced by accessID, on Google Cloud Storage. Only inactive HMAC keys can be deleted. After deletion, a key cannot be used to authenticate requests. Note: gRPC is not supported.

package main

import (
    "context"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }

    hkh := client.HMACKeyHandle("project-id", "access-key-id")
    // Make sure that the HMACKey being deleted has a status of inactive.
    if err := hkh.Delete(ctx); err != nil {
        // TODO: handle error.
    }
}

func (*HMACKeyHandle) Get

Get invokes an RPC to retrieve the HMAC key referenced by the HMACKeyHandle's accessID.

Options such as UserProjectForHMACKeys can be used to set the userProject to be billed against for operations. Note: gRPC is not supported.

package main

import (
    "context"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }

    hkh := client.HMACKeyHandle("project-id", "access-key-id")
    hkey, err := hkh.Get(ctx)
    if err != nil {
        // TODO: handle error.
    }
    _ = hkey // TODO: Use the HMAC Key.
}

func (*HMACKeyHandle) Update

Update mutates the HMACKey referred to by accessID. Note: gRPC is not supported.

package main

import (
    "context"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }

    hkh := client.HMACKeyHandle("project-id", "access-key-id")
    ukey, err := hkh.Update(ctx, storage.HMACKeyAttrsToUpdate{
        State: storage.Inactive,
    })
    if err != nil {
        // TODO: handle error.
    }
    _ = ukey // TODO: Use the HMAC Key.
}

type HMACKeyOption interface {
    // contains filtered or unexported methods
}

HMACKeyOption configures the behavior of HMACKey related methods and actions.

func ForHMACKeyServiceAccountEmail(serviceAccountEmail string) HMACKeyOption

ForHMACKeyServiceAccountEmail returns HMAC Keys that are associated with the email address of a service account in the project.

Only one service account email can be used as a filter, so if multiple of these options are applied, the last email to be set will be used.

func ShowDeletedHMACKeys() HMACKeyOption

ShowDeletedHMACKeys will also list keys whose state is "DELETED".

func UserProjectForHMACKeys(userProjectID string) HMACKeyOption

UserProjectForHMACKeys will bill the request against userProjectID if userProjectID is non-empty.

Note: This is a noop right now and only provided for API compatibility.

type HMACKeysIterator struct {
    // contains filtered or unexported fields
}

An HMACKeysIterator is an iterator over HMACKeys.

Note: This iterator is not safe for concurrent operations without explicit synchronization.

Next returns the next result. Its second return value is iterator.Done if there are no more results. Once Next returns iterator.Done, all subsequent calls will return iterator.Done.

Note: This iterator is not safe for concurrent operations without explicit synchronization.

PageInfo supports pagination. See the google.golang.org/api/iterator package for details.

Note: This iterator is not safe for concurrent operations without explicit synchronization.

HMACState is the state of the HMAC key.

const (
    Active   HMACState = "ACTIVE"
    Inactive HMACState = "INACTIVE"
    Deleted  HMACState = "DELETED"
)

type Lifecycle struct {
    Rules []LifecycleRule
}

Lifecycle is the lifecycle configuration for objects in the bucket.

type LifecycleAction struct {
    Type         string
    StorageClass string
}

LifecycleAction is a lifecycle configuration action.

LifecycleCondition is a set of conditions used to match objects and take an action automatically.

All configured conditions must be met for the associated action to be taken.

type LifecycleRule struct {
    Action    LifecycleAction
    Condition LifecycleCondition
}

LifecycleRule is a lifecycle configuration rule.

When all the configured conditions are met by an object in the bucket, the configured action will automatically be taken on that object.

Liveness specifies whether the object is live or not.

const (
    LiveAndArchived Liveness = iota
    Live
    Archived
)

type MoveObjectDestination struct {
    Object     string
    Conditions *Conditions
}

MoveObjectDestination provides the destination object name and (optional) preconditions for ObjectHandle.Move.

type MultiRangeDownloader struct {
    Attrs ReaderObjectAttrs
    // contains filtered or unexported fields
}

MultiRangeDownloader reads a Cloud Storage object.

Typically, a MultiRangeDownloader opens a stream to which we can add different ranges to read from the object.

This API is currently in preview and is not yet available for general use.

package main

import (
    "bytes"
    "context"
    "errors"
    "fmt"
    "log"
    "sync"

    "cloud.google.com/go/storage"
    "cloud.google.com/go/storage/experimental"
)

func main() {
    ctx := context.Background()
    // NewMultiRangeDownloader is only available on gRPC.
    client, err := storage.NewGRPCClient(ctx, experimental.WithGRPCBidiReads())
    if err != nil {
        // TODO: handle error.
    }
    defer client.Close()

obj := client.Bucket("my-bucket").Object("my-object")

// The MultiRangeDownloader is the most asynchronous method for downloading
// ranges. A callback is invoked for each downloaded range.
mrd, err := obj.NewMultiRangeDownloader(ctx)
if err != nil {
    // TODO: handle error.
}

// Use a WaitGroup to wait for the error collector goroutine.
var wg sync.WaitGroup
wg.Add(1)
var rangeErrs []error

errChan := make(chan error)

// Goroutine to collect errors from the channel.
go func() {
    defer wg.Done()
    for err := range errChan {
        rangeErrs = append(rangeErrs, err)
    }
}()

// Callback registered by user to be called upon completion of a range.
callback := func(offset, length int64, err error) {
    if err != nil {
        errChan <- err
    }
}

// User creates an io.Writer (e.g. a buffer) and adds it to the
// MultiRangeDownloader with a particular range. Data will be downloaded
// into the buffer.
b1 := new(bytes.Buffer)
mrd.Add(b1, 0, 100, callback)
b2 := new(bytes.Buffer)
mrd.Add(b2, 200, 100, callback)

// Wait for all downloads to complete.
mrd.Wait()
if err := mrd.Close(); err != nil {
    // TODO: handle error on close.
}

// Close the channel to signal the collector to stop.
close(errChan)

// Wait for the collector to finish draining the channel.
wg.Wait()

if len(rangeErrs) > 0 {
    // TODO: handle error from the range download.
    log.Printf("received errors: %s", errors.Join(rangeErrs...))
}

fmt.Printf("Downloaded %d bytes to first buffer: %s\n", b1.Len(), b1.String())
fmt.Printf("Downloaded %d bytes to second buffer: %s\n", b2.Len(), b2.String())

}

Add adds a new range to MultiRangeDownloader.

The offset for the first byte to return in the read, relative to the start of the object.

A negative offset value will be interpreted as the number of bytes from the end of the object to be returned. Requesting a negative offset with magnitude larger than the size of the object will return the entire object. An offset larger than the size of the object will result in an OutOfRange error.

A limit of zero indicates that there is no limit, and a negative limit will cause an error.

This initiates the read range but is non-blocking; the supplied callback is invoked to process the result. Add is thread-safe and can be called simultaneously from different goroutines.

Callback will be called with the offset, length of data read, and error of the read. Note that the length of the data read may be less than the requested length if the end of the object is reached.

Close the MultiRangeDownloader. It must be called when done reading. Adding new ranges after this has been called will cause an error.

This will immediately close the stream and can result in a "stream closed early" error if a response for a range is still not processed. Call MultiRangeDownloader.Wait to avoid this error.

Error returns an error if the MultiRangeDownloader is in a permanent failure state. It returns a nil error if the MultiRangeDownloader is open and can be used.

func (*MultiRangeDownloader) GetHandle added in v1.50.0

func (mrd *MultiRangeDownloader) GetHandle() []byte

GetHandle returns the read handle. This can be used to further speed up the follow up read if the same object is read through a different stream.

func (mrd *MultiRangeDownloader) Wait()

Wait waits for all the responses to be processed on the stream. Adding new ranges after Wait has been called will cause an error. Wait blocks until all callbacks have finished.

A Notification describes how to send Cloud PubSub messages when certain events occur in a bucket.

ObjectAttrs represents the metadata for a Google Cloud Storage (GCS) object.

ObjectAttrsToUpdate is used to update the attributes of an object. Only fields set to non-nil values will be updated. For all fields except CustomTime and Retention, set the field to its zero value to delete it. CustomTime cannot be deleted or changed to an earlier time once set. Retention can be deleted (only if the Mode is Unlocked) by setting it to an empty value (not nil).

For example, to change ContentType and delete ContentEncoding, Metadata and Retention, use:

ObjectAttrsToUpdate{
    ContentType:     "text/html",
    ContentEncoding: "",
    Metadata:        map[string]string{},
    Retention:       &ObjectRetention{},
}
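
Such a value is then passed to ObjectHandle.Update; for example, a sketch with placeholder names:

attrs, err := client.Bucket("my-bucket").Object("my-object").Update(ctx, storage.ObjectAttrsToUpdate{
    ContentType: "text/html",
})
if err != nil {
    // TODO: Handle error.
}
_ = attrs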

type ObjectContexts struct {
    Custom map[string]ObjectCustomContextPayload
}

ObjectContexts is a container for custom object contexts.

ObjectCustomContextPayload holds the value of a user-defined object context and other metadata. To delete a key from Custom object contexts, set Delete as true.

type ObjectHandle

type ObjectHandle struct {
    // contains filtered or unexported fields
}

ObjectHandle provides operations on an object in a Google Cloud Storage bucket. Use BucketHandle.Object to get a handle.

package main

import (
    "context"
    "errors"
    "fmt"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }

    attrs, err := client.Bucket("my-bucket").Object("my-object").Attrs(ctx)
    if errors.Is(err, storage.ErrObjectNotExist) {
        fmt.Println("The object does not exist")
        return
    }
    if err != nil {
        // TODO: handle error.
    }
    fmt.Printf("The object exists and has attributes: %#v\n", attrs)
}

func (*ObjectHandle) ACL

func (o *ObjectHandle) ACL() *ACLHandle

ACL provides access to the object's access control list. This controls who can read and write this object. This call does not perform any network operations.

func (*ObjectHandle) Attrs

Attrs returns meta information about the object. ErrObjectNotExist will be returned if the object is not found.

package main

import (
    "context"
    "fmt"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    objAttrs, err := client.Bucket("my-bucket").Object("my-object").Attrs(ctx)
    if err != nil {
        // TODO: handle error.
    }
    fmt.Println(objAttrs)
}

package main

import (
    "context"
    "fmt"
    "time"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    obj := client.Bucket("my-bucket").Object("my-object")
    // Read the object.
    objAttrs1, err := obj.Attrs(ctx)
    if err != nil {
        // TODO: handle error.
    }
    // Do something else for a while.
    time.Sleep(5 * time.Minute)
    // Now read the same contents, even if the object has been written since the last read.
    objAttrs2, err := obj.Generation(objAttrs1.Generation).Attrs(ctx)
    if err != nil {
        // TODO: handle error.
    }
    fmt.Println(objAttrs1, objAttrs2)
}

func (*ObjectHandle) BucketName

BucketName returns the name of the bucket.

func (*ObjectHandle) ComposerFrom

func (dst *ObjectHandle) ComposerFrom(srcs ...*ObjectHandle) *Composer

ComposerFrom creates a Composer that can compose srcs into dst. You can immediately call Run on the returned Composer, or you can configure it first.

The encryption key for the destination object will be used to decrypt all source objects and encrypt the destination object. It is an error to specify an encryption key for any of the source objects.
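For instance, a minimal sketch composing two source objects into one; the bucket and object names are placeholders, and attributes are set via the Composer's embedded ObjectAttrs:

package main

import (
    "context"
    "fmt"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    bkt := client.Bucket("my-bucket")
    src1 := bkt.Object("part-1")
    src2 := bkt.Object("part-2")
    dst := bkt.Object("combined")

    // Compose src1 and src2 into dst. Attributes can be set on the
    // Composer before calling Run.
    c := dst.ComposerFrom(src1, src2)
    c.ContentType = "text/plain"
    attrs, err := c.Run(ctx)
    if err != nil {
        // TODO: handle error.
    }
    fmt.Println("composed object:", attrs.Name)
}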

func (*ObjectHandle) CopierFrom

func (dst *ObjectHandle) CopierFrom(src *ObjectHandle) *Copier

CopierFrom creates a Copier that can copy src to dst. You can immediately call Run on the returned Copier, or you can configure it first.

For Requester Pays buckets, the user project of dst is billed, unless it is empty, in which case the user project of src is billed.

package main

import (
    "context"

    "cloud.google.com/go/storage"
)

var key1, key2 []byte

func main() {
    // To rotate the encryption key on an object, copy it onto itself.
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    obj := client.Bucket("bucketname").Object("obj")
    // Assume obj is encrypted with key1, and we want to change to key2.
    _, err = obj.Key(key2).CopierFrom(obj.Key(key1)).Run(ctx)
    if err != nil {
        // TODO: handle error.
    }
}

func (*ObjectHandle) Delete

Delete deletes the single specified object.

package main

import (
    "context"
    "fmt"

    "cloud.google.com/go/storage"
    "google.golang.org/api/iterator"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    // To delete multiple objects in a bucket, list them with an
    // ObjectIterator, then Delete them.

    // If you are using this package on the App Engine Flex runtime,
    // you can init a bucket client with your app's default bucket name.
    // See http://godoc.org/google.golang.org/appengine/file#DefaultBucketName.
    bucket := client.Bucket("my-bucket")
    it := bucket.Objects(ctx, nil)
    for {
        objAttrs, err := it.Next()
        if err == iterator.Done {
            break
        }
        if err != nil {
            // TODO: Handle error.
        }
        if err := bucket.Object(objAttrs.Name).Delete(ctx); err != nil {
            // TODO: Handle error.
        }
    }
    fmt.Println("deleted all object items in the bucket specified.")
}

func (*ObjectHandle) Generation

Generation returns a new ObjectHandle that operates on a specific generation of the object. By default, the handle operates on the latest generation. Not all operations work when given a specific generation; check the API endpoints at https://cloud.google.com/storage/docs/json_api/ for details.

package main

import (
    "context"
    "io"
    "os"

    "cloud.google.com/go/storage"
)

var gen int64

func main() {
    // Read an object's contents from generation gen, regardless of the
    // current generation of the object.
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    obj := client.Bucket("my-bucket").Object("my-object")
    rc, err := obj.Generation(gen).NewReader(ctx)
    if err != nil {
        // TODO: handle error.
    }
    defer rc.Close()
    if _, err := io.Copy(os.Stdout, rc); err != nil {
        // TODO: handle error.
    }
}

func (*ObjectHandle) If

func (o *ObjectHandle) If(conds Conditions) *ObjectHandle

If returns a new ObjectHandle that applies a set of preconditions. Preconditions already set on the ObjectHandle are ignored. The supplied Conditions must have at least one field set to a non-default value; otherwise an error will be returned from any operation on the ObjectHandle. Operations on the new handle will return an error if the preconditions are not satisfied. See https://cloud.google.com/storage/docs/generations-preconditions for more details.

package main

import (
    "context"
    "io"
    "net/http"
    "os"

    "cloud.google.com/go/storage"
    "google.golang.org/api/googleapi"
)

var gen int64

func main() {
    // Read from an object only if the current generation is gen.
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    obj := client.Bucket("my-bucket").Object("my-object")
    rc, err := obj.If(storage.Conditions{GenerationMatch: gen}).NewReader(ctx)
    if err != nil {
        // TODO: handle error.
    }

    if _, err := io.Copy(os.Stdout, rc); err != nil {
        // TODO: handle error.
    }
    if err := rc.Close(); err != nil {
        switch ee := err.(type) {
        case *googleapi.Error:
            if ee.Code == http.StatusPreconditionFailed {
                // The condition presented in the If failed.
                // TODO: handle error.
            }

            // TODO: handle other status codes here.

        default:
            // TODO: handle error.
        }
    }
}

func (*ObjectHandle) Key

func (o *ObjectHandle) Key(encryptionKey []byte) *ObjectHandle

Key returns a new ObjectHandle that uses the supplied encryption key to encrypt and decrypt the object's contents.

Encryption key must be a 32-byte AES-256 key. See https://cloud.google.com/storage/docs/encryption for details.

package main

import (
    "context"

    "cloud.google.com/go/storage"
)

var secretKey []byte

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    obj := client.Bucket("my-bucket").Object("my-object")
    // Encrypt the object's contents.
    w := obj.Key(secretKey).NewWriter(ctx)
    if _, err := w.Write([]byte("top secret")); err != nil {
        // TODO: handle error.
    }
    if err := w.Close(); err != nil {
        // TODO: handle error.
    }
}

func (*ObjectHandle) Move added in v1.49.0

Move changes the name of the object to the destination name. It can only be used to rename an object within the same bucket.

Any preconditions set on the ObjectHandle will be applied for the source object. Set preconditions on the destination object using [MoveObjectDestination.Conditions].

This API is in preview and is not yet publicly available.
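A hedged sketch of a rename within a bucket; the MoveObjectDestination literal below reflects the documented Object and Conditions fields, and the names are placeholders:

package main

import (
    "context"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    // Rename "old-name" to "new-name" within the same bucket.
    src := client.Bucket("my-bucket").Object("old-name")
    // MoveObjectDestination.Conditions may be used to set preconditions
    // on the destination object.
    if _, err := src.Move(ctx, storage.MoveObjectDestination{Object: "new-name"}); err != nil {
        // TODO: handle error.
    }
}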

func (*ObjectHandle) NewMultiRangeDownloader added in v1.50.0

NewMultiRangeDownloader creates a multi-range reader for an object. Must be called on a gRPC client created using NewGRPCClient.

This uses the gRPC-specific bi-directional read API, which is in private preview; please contact your account manager if interested. The option experimental.WithGRPCBidiReads or experimental.WithZonalBucketAPIs must be selected in order to use this API.
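A hedged sketch of reading two ranges over one stream; the Add callback signature shown here (offset, length, error) is an assumption based on the v1.50.0 API, and the names are placeholders:

package main

import (
    "bytes"
    "context"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewGRPCClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    obj := client.Bucket("my-zonal-bucket").Object("my-object")
    mrd, err := obj.NewMultiRangeDownloader(ctx)
    if err != nil {
        // TODO: handle error.
    }
    // Request two ranges on the same stream; each callback reports the
    // offset and length read, plus any error (assumed Add signature).
    var buf1, buf2 bytes.Buffer
    mrd.Add(&buf1, 0, 1024, func(offset, length int64, err error) {
        if err != nil {
            // TODO: handle error.
        }
    })
    mrd.Add(&buf2, 4096, 1024, func(offset, length int64, err error) {
        if err != nil {
            // TODO: handle error.
        }
    })
    mrd.Wait() // blocks until all ranges and callbacks complete
    if err := mrd.Close(); err != nil {
        // TODO: handle error.
    }
}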

func (*ObjectHandle) NewRangeReader

NewRangeReader reads part of an object, reading at most length bytes starting at the given offset. If length is negative, the object is read until the end. If offset is negative, the object is read abs(offset) bytes from the end, and length must also be negative to indicate all remaining bytes will be read.

If the object's metadata property "Content-Encoding" is set to "gzip" or satisfies decompressive transcoding per https://cloud.google.com/storage/docs/transcoding, that file will be served back whole, regardless of the requested range, as Google Cloud Storage dictates. If decompressive transcoding occurs, [Reader.Attrs.Decompressed] will be true.

By default, reads are made using the Cloud Storage XML API. We recommend using the JSON API instead, which can be done by setting WithJSONReads when calling NewClient. This ensures consistency with other client operations, which all use JSON. JSON will become the default in a future release.

package main

import (
    "context"
    "fmt"
    "io"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    // Read only the first 64K.
    rc, err := client.Bucket("bucketname").Object("filename1").NewRangeReader(ctx, 0, 64*1024)
    if err != nil {
        // TODO: handle error.
    }
    defer rc.Close()

    slurp, err := io.ReadAll(rc)
    if err != nil {
        // TODO: handle error.
    }
    fmt.Printf("first 64K of file contents:\n%s\n", slurp)
}

package main

import (
    "context"
    "fmt"
    "io"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    // Read only the last 10 bytes of the file.
    rc, err := client.Bucket("bucketname").Object("filename1").NewRangeReader(ctx, -10, -1)
    if err != nil {
        // TODO: handle error.
    }
    defer rc.Close()

    slurp, err := io.ReadAll(rc)
    if err != nil {
        // TODO: handle error.
    }
    fmt.Printf("Last 10 bytes from the end of the file:\n%s\n", slurp)
}

package main

import (
    "context"
    "fmt"
    "io"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    // Read from the 101st byte until the end of the file.
    rc, err := client.Bucket("bucketname").Object("filename1").NewRangeReader(ctx, 100, -1)
    if err != nil {
        // TODO: handle error.
    }
    defer rc.Close()

    slurp, err := io.ReadAll(rc)
    if err != nil {
        // TODO: handle error.
    }
    fmt.Printf("From 101st byte until the end:\n%s\n", slurp)
}

func (*ObjectHandle) NewReader

NewReader creates a new Reader to read the contents of the object. ErrObjectNotExist will be returned if the object is not found.

The caller must call Close on the returned Reader when done reading.

By default, reads are made using the Cloud Storage XML API. We recommend using the JSON API instead, which can be done by setting WithJSONReads when calling NewClient. This ensures consistency with other client operations, which all use JSON. JSON will become the default in a future release.

package main

import (
    "context"
    "fmt"
    "io"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    rc, err := client.Bucket("my-bucket").Object("my-object").NewReader(ctx)
    if err != nil {
        // TODO: handle error.
    }
    slurp, err := io.ReadAll(rc)
    rc.Close()
    if err != nil {
        // TODO: handle error.
    }
    fmt.Println("file contents:", slurp)
}

func (*ObjectHandle) NewWriter

NewWriter returns a storage Writer that writes to the GCS object associated with this ObjectHandle.

A new object will be created unless an object with this name already exists. Otherwise any previous object with the same name will be replaced. The object will not be available (and any previous object will remain) until Close has been called.

Attributes can be set on the object by modifying the returned Writer's ObjectAttrs field before the first call to Write. If no ContentType attribute is specified, the content type will be automatically sniffed using net/http.DetectContentType.

Note that each Writer allocates an internal buffer of size Writer.ChunkSize. See the ChunkSize docs for more information.

It is the caller's responsibility to call Close when writing is done. To stop writing without saving the data, cancel the context.

package main

import (
    "context"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    wc := client.Bucket("bucketname").Object("filename1").NewWriter(ctx)
    _ = wc // TODO: Use the Writer.
}

func (*ObjectHandle) NewWriterFromAppendableObject added in v1.52.0

NewWriterFromAppendableObject opens a new Writer to an object which has been partially flushed to GCS, but not finalized. It returns the Writer as well as the current end offset of the object. All bytes written will be appended continuing from the offset.

Generation must be set on the ObjectHandle or an error will be returned.

Writer fields such as ChunkSize or ChunkRetryDeadline can be set only by setting the equivalent field in AppendableWriterOpts. Attributes set on the returned Writer will not be honored, since the stream to GCS has already been opened. Some fields, such as ObjectAttrs and checksums, cannot be set on a takeover for append.

It is the caller's responsibility to call Close when writing is complete to close the stream. Calling Close or Flush is necessary to sync any data in the pipe to GCS.

The returned Writer is not safe to use across multiple goroutines. In addition, if you attempt to append to the same object from multiple Writers at the same time, an error will be returned on Flush or Close.

NewWriterFromAppendableObject is supported only for gRPC clients and only for objects which were created with append semantics and not finalized. This feature is in preview and is not yet available for general use.

package main

import (
    "context"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewGRPCClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    bucketName := "my-rapid-bucket"
    objectName := "appendable-obj"
    obj := client.Bucket(bucketName).Object(objectName)

    // First get the object's generation. This is required to append to an
    // existing object.
    attrs, err := obj.Attrs(ctx)
    if err != nil {
        // TODO: handle error.
    }

    // Create a writer for appending to the object.
    // Set Writer fields such as ChunkSize and FinalizeOnClose here.
    w, offset, err := obj.Generation(attrs.Generation).NewWriterFromAppendableObject(ctx, &storage.AppendableWriterOpts{
        ChunkSize:       8 * 1024 * 1024, // 8 MiB
        FinalizeOnClose: true,            // finalize the object; default is unfinalized.
    })
    if err != nil {
        // TODO: handle error.
    }

    // TODO: Start writing data from object offset using Writer.Write().
    _ = offset
    if err := w.Close(); err != nil {
        // TODO: handle error.
    }
}

func (*ObjectHandle) ObjectName

ObjectName returns the name of the object.

func (*ObjectHandle) OverrideUnlockedRetention added in v1.36.0

func (o *ObjectHandle) OverrideUnlockedRetention(override bool) *ObjectHandle

OverrideUnlockedRetention provides an option for overriding an Unlocked Retention policy. This must be set to true in order to change a policy from Unlocked to Locked, to set it to null, or to reduce its RetainUntil attribute. It is not required for setting the ObjectRetention for the first time nor for extending the RetainUntil time.

package main

import (
    "context"
    "time"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    // Overriding the retention policy is required to shorten the retention
    // period for an object.
    retainUntilDate := time.Now().Add(24 * time.Hour)
    uattrs := storage.ObjectAttrsToUpdate{
        Retention: &storage.ObjectRetention{
            Mode:        "Unlocked",
            RetainUntil: retainUntilDate,
        },
    }
    obj := client.Bucket("bucketname").Object("filename1")
    if _, err := obj.OverrideUnlockedRetention(true).Update(ctx, uattrs); err != nil {
        // TODO: handle error.
    }
}

func (*ObjectHandle) ReadCompressed

func (o *ObjectHandle) ReadCompressed(compressed bool) *ObjectHandle

ReadCompressed when true causes the read to happen without decompressing.
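For example, to download a gzip-encoded object as stored, without decompressive transcoding (names are placeholders):

package main

import (
    "context"
    "io"
    "os"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    // Fetch the object's bytes exactly as stored, still compressed.
    obj := client.Bucket("my-bucket").Object("my-object.gz")
    rc, err := obj.ReadCompressed(true).NewReader(ctx)
    if err != nil {
        // TODO: handle error.
    }
    defer rc.Close()
    if _, err := io.Copy(os.Stdout, rc); err != nil {
        // TODO: handle error.
    }
}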

func (*ObjectHandle) ReadHandle added in v1.50.0

func (o *ObjectHandle) ReadHandle(r ReadHandle) *ObjectHandle

ReadHandle returns a new ObjectHandle that uses the ReadHandle to open the object.

Objects that have already been opened can be opened an additional time, using a read handle returned in the response, at lower latency. This produces the exact same object and generation and does not check whether the generation is still the newest one. Note that this is a no-op unless it's set on a gRPC client on buckets with bi-directional read API access. Also note that a ReadHandle can only be obtained by calling Reader.ReadHandle on a previous read of the same object.
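A hedged sketch of reusing a handle across reads; bucket and object names are placeholders:

package main

import (
    "context"
    "io"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewGRPCClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    obj := client.Bucket("my-zonal-bucket").Object("my-object")
    // First read: capture the handle returned on the stream.
    r1, err := obj.NewReader(ctx)
    if err != nil {
        // TODO: handle error.
    }
    if _, err := io.ReadAll(r1); err != nil {
        // TODO: handle error.
    }
    handle := r1.ReadHandle()
    r1.Close()
    // Second read: reuse the handle to reopen the same object and
    // generation at lower latency.
    r2, err := obj.ReadHandle(handle).NewReader(ctx)
    if err != nil {
        // TODO: handle error.
    }
    defer r2.Close()
}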

func (*ObjectHandle) Restore added in v1.41.0

Restore will restore a soft-deleted object to a live object. Note that you must specify a generation to use this method.
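A hedged sketch, assuming Restore accepts a *RestoreOptions; gen is a placeholder for the generation of the soft-deleted object:

package main

import (
    "context"

    "cloud.google.com/go/storage"
)

var gen int64 // generation of the soft-deleted object

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    obj := client.Bucket("my-bucket").Object("my-object")
    // The generation of the soft-deleted object is required.
    if _, err := obj.Generation(gen).Restore(ctx, &storage.RestoreOptions{CopySourceACL: true}); err != nil {
        // TODO: handle error.
    }
}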

func (*ObjectHandle) Retryer added in v1.19.0

func (o *ObjectHandle) Retryer(opts ...RetryOption) *ObjectHandle

Retryer returns an object handle that is configured with custom retry behavior as specified by the options that are passed to it. All operations on the new handle will use the customized retry configuration. These retry options will merge with the bucket's retryer (if set) for the returned handle. Options passed into this method will take precedence over retry options on the bucket and client. Note that you must explicitly pass in each option you want to override.
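For example, to retry all operations on a handle with a longer initial backoff and a capped number of attempts (names are placeholders):

package main

import (
    "context"
    "time"

    "cloud.google.com/go/storage"
    "github.com/googleapis/gax-go/v2"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    // Customize retries for this handle only: always retry, with a
    // longer initial backoff and at most 5 total attempts.
    obj := client.Bucket("my-bucket").Object("my-object").Retryer(
        storage.WithBackoff(gax.Backoff{Initial: 2 * time.Second}),
        storage.WithMaxAttempts(5),
        storage.WithPolicy(storage.RetryAlways),
    )
    if _, err := obj.Attrs(ctx); err != nil {
        // TODO: handle error.
    }
}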

func (*ObjectHandle) SoftDeleted added in v1.41.0

func (o *ObjectHandle) SoftDeleted() *ObjectHandle

SoftDeleted returns an object handle that can be used to get an object that has been soft deleted. To get a soft deleted object, the generation must be set on the object using ObjectHandle.Generation. Note that an error will be returned if a live object is queried using this.
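For example, to fetch the attributes of a soft-deleted generation (gen is a placeholder for the soft-deleted generation):

package main

import (
    "context"
    "fmt"

    "cloud.google.com/go/storage"
)

var gen int64 // generation of the soft-deleted object

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    obj := client.Bucket("my-bucket").Object("my-object")
    // Fetch attributes of a soft-deleted generation of the object.
    attrs, err := obj.Generation(gen).SoftDeleted().Attrs(ctx)
    if err != nil {
        // TODO: handle error.
    }
    fmt.Println(attrs)
}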

func (*ObjectHandle) Update

Update updates an object with the provided attributes. See ObjectAttrsToUpdate docs for details on treatment of zero values. ErrObjectNotExist will be returned if the object is not found.

package main

import (
    "context"
    "fmt"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    // Change only the content type of the object.
    objAttrs, err := client.Bucket("my-bucket").Object("my-object").Update(ctx, storage.ObjectAttrsToUpdate{
        ContentType:        "text/html",
        ContentDisposition: "", // delete ContentDisposition
    })
    if err != nil {
        // TODO: handle error.
    }
    fmt.Println(objAttrs)
}

type ObjectIterator struct {

}

An ObjectIterator is an iterator over ObjectAttrs.

Note: This iterator is not safe for concurrent operations without explicit synchronization.

Next returns the next result. Its second return value is iterator.Done if there are no more results. Once Next returns iterator.Done, all subsequent calls will return iterator.Done.

In addition, if Next returns an error other than iterator.Done, all subsequent calls will return the same error. To continue iteration, a new `ObjectIterator` must be created. Since objects are ordered lexicographically by name, `Query.StartOffset` can be used to create a new iterator which will start at the desired place. See https://pkg.go.dev/cloud.google.com/go/storage?tab=doc#hdr-Listing_objects.

If Query.Delimiter is non-empty, some of the ObjectAttrs returned by Next will have a non-empty Prefix field, and a zero value for all other fields. These represent prefixes.

Note: This method is not safe for concurrent operations without explicit synchronization.

package main

import (
    "context"
    "fmt"

    "cloud.google.com/go/storage"
    "google.golang.org/api/iterator"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    it := client.Bucket("my-bucket").Objects(ctx, nil)
    for {
        objAttrs, err := it.Next()
        if err == iterator.Done {
            break
        }
        if err != nil {
            // TODO: Handle error.
        }
        fmt.Println(objAttrs)
    }
}
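To make the resumption pattern described above concrete, a hedged sketch that restarts the iterator at the last name seen; the bucket name is a placeholder, and real code should bound the number of restarts:

package main

import (
    "context"
    "fmt"

    "cloud.google.com/go/storage"
    "google.golang.org/api/iterator"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    bkt := client.Bucket("my-bucket")
    it := bkt.Objects(ctx, nil)
    lastSeen := "" // name of the last object successfully processed
    for {
        objAttrs, err := it.Next()
        if err == iterator.Done {
            break
        }
        if err != nil {
            // Recreate the iterator at the last name seen. StartOffset is
            // inclusive, so the object named lastSeen is yielded again and
            // must be skipped. Real code should cap these restarts.
            it = bkt.Objects(ctx, &storage.Query{StartOffset: lastSeen})
            continue
        }
        if objAttrs.Name == lastSeen {
            continue
        }
        lastSeen = objAttrs.Name
        fmt.Println(objAttrs.Name)
    }
}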

PageInfo supports pagination. See the google.golang.org/api/iterator package for details.

Note: This method is not safe for concurrent operations without explicit synchronization.

ObjectRetention contains the retention configuration for this object.

PolicyV4Fields describes the attributes for a PostPolicyV4 request.

PostPolicyV4 describes the URL and respective form fields for a generated PostPolicyV4 request.

func GenerateSignedPostPolicyV4(bucket, object string, opts *PostPolicyV4Options) (*PostPolicyV4, error)

GenerateSignedPostPolicyV4 generates a PostPolicyV4 value from bucket, object and opts. The generated URL and fields will then allow an unauthenticated client to perform multipart uploads. If initializing a Storage Client, instead use the BucketHandle.GenerateSignedPostPolicyV4 method, which uses the Client's credentials to handle authentication.

package main

import (
    "bytes"
    "io"
    "mime/multipart"
    "net/http"
    "time"

    "cloud.google.com/go/storage"
)

func main() {
    pv4, err := storage.GenerateSignedPostPolicyV4("my-bucket", "my-object.txt", &storage.PostPolicyV4Options{
        GoogleAccessID: "my-access-id",
        PrivateKey:     []byte("my-private-key"),

        // The upload expires in 2 hours.
        Expires: time.Now().Add(2 * time.Hour),

        Fields: &storage.PolicyV4Fields{
            StatusCodeOnSuccess:    200,
            RedirectToURLOnSuccess: "https://example.org/",
            // It MUST only be a text file.
            ContentType: "text/plain",
        },

        // The conditions that the uploaded file will be expected to conform to.
        Conditions: []storage.PostPolicyV4Condition{
            // Make the file a maximum of 10 MiB.
            storage.ConditionContentLengthRange(0, 10<<20),
        },
    })
    if err != nil {
        // TODO: handle error.
    }

    // Now you can upload your file using the generated post policy
    // with a plain HTTP client or even the browser.
    formBuf := new(bytes.Buffer)
    mw := multipart.NewWriter(formBuf)
    for fieldName, value := range pv4.Fields {
        if err := mw.WriteField(fieldName, value); err != nil {
            // TODO: handle error.
        }
    }
    file := bytes.NewReader(bytes.Repeat([]byte("a"), 100))

    mf, err := mw.CreateFormFile("file", "myfile.txt")
    if err != nil {
        // TODO: handle error.
    }
    if _, err := io.Copy(mf, file); err != nil {
        // TODO: handle error.
    }
    if err := mw.Close(); err != nil {
        // TODO: handle error.
    }

    // Compose the request.
    req, err := http.NewRequest("POST", pv4.URL, formBuf)
    if err != nil {
        // TODO: handle error.
    }
    // Ensure the Content-Type is derived from the multipart writer.
    req.Header.Set("Content-Type", mw.FormDataContentType())
    res, err := http.DefaultClient.Do(req)
    if err != nil {
        // TODO: handle error.
    }
    _ = res
}

PostPolicyV4Condition describes the constraints that the subsequent object upload's multipart form fields will be expected to conform to.

func ConditionContentLengthRange added in v1.8.0

func ConditionContentLengthRange(start, end uint64) PostPolicyV4Condition

ConditionContentLengthRange constrains the limits that the multipart upload's range header will be expected to be within.

func ConditionStartsWith(key, value string) PostPolicyV4Condition

ConditionStartsWith checks that an attribute starts with value. An empty value will cause this condition to be ignored.
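A hedged sketch restricting uploads to a key prefix; the "$key" form follows the POST policy document spec, and the credentials and names are placeholders:

package main

import (
    "fmt"
    "time"

    "cloud.google.com/go/storage"
)

func main() {
    pv4, err := storage.GenerateSignedPostPolicyV4("my-bucket", "uploads/my-object.txt", &storage.PostPolicyV4Options{
        GoogleAccessID: "my-access-id",
        PrivateKey:     []byte("my-private-key"),
        Expires:        time.Now().Add(time.Hour),
        Conditions: []storage.PostPolicyV4Condition{
            // Only allow uploads whose key starts with "uploads/".
            storage.ConditionStartsWith("$key", "uploads/"),
        },
    })
    if err != nil {
        // TODO: handle error.
    }
    fmt.Println(pv4.URL)
}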

type ProjectTeam struct {
    ProjectNumber string
    Team          string
}

ProjectTeam is the project team associated with the entity, if any.

Projection is an enumerated type for Query.Projection.

const (
    ProjectionDefault Projection = iota
    ProjectionFull
    ProjectionNoACL
)

type PublicAccessPrevention int

PublicAccessPrevention configures the Public Access Prevention feature, which can be used to disallow public access to any data in a bucket. See https://cloud.google.com/storage/docs/public-access-prevention for more information.

const (
    PublicAccessPreventionUnknown PublicAccessPrevention = iota
    PublicAccessPreventionUnspecified
    PublicAccessPreventionEnforced
    PublicAccessPreventionInherited
)

Query represents a query to filter objects from a bucket.

SetAttrSelection makes the query populate only specific attributes of objects. When iterating over objects, if you only need each object's name and size, pass []string{"Name", "Size"} to this method. Only these fields will be fetched for each object across the network; the other fields of ObjectAttrs will remain at their default values. This is a performance optimization; for more information, see https://cloud.google.com/storage/docs/json_api/v1/how-tos/performance
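For example, to list only names and sizes under a prefix (bucket name and prefix are placeholders):

package main

import (
    "context"
    "fmt"

    "cloud.google.com/go/storage"
    "google.golang.org/api/iterator"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    // Fetch only Name and Size for each object.
    q := &storage.Query{Prefix: "logs/"}
    if err := q.SetAttrSelection([]string{"Name", "Size"}); err != nil {
        // TODO: handle error.
    }
    it := client.Bucket("my-bucket").Objects(ctx, q)
    for {
        objAttrs, err := it.Next()
        if err == iterator.Done {
            break
        }
        if err != nil {
            // TODO: handle error.
        }
        fmt.Println(objAttrs.Name, objAttrs.Size)
    }
}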

type ReadHandle added in v1.50.0

ReadHandle associated with the object. This is periodically refreshed.

type Reader struct {
    Attrs ReaderObjectAttrs
}

Reader reads a Cloud Storage object. It implements io.Reader.

Typically, a Reader computes the CRC of the downloaded content and compares it to the stored CRC, returning an error from Read if there is a mismatch. This integrity check is skipped if transcoding occurs. See https://cloud.google.com/storage/docs/transcoding.

CacheControl returns the cache control of the object.

Deprecated: use Reader.Attrs.CacheControl.

Close closes the Reader. It must be called when done reading.

func (*Reader) ContentEncoding deprecated

func (r *Reader) ContentEncoding() string

ContentEncoding returns the content encoding of the object.

Deprecated: use Reader.Attrs.ContentEncoding.

func (*Reader) ContentType deprecated

ContentType returns the content type of the object.

Deprecated: use Reader.Attrs.ContentType.

LastModified returns the value of the Last-Modified header.

Deprecated: use Reader.Attrs.LastModified.

Metadata returns user-provided metadata, in key/value pairs.

It can be nil if no metadata is present, or if the client uses the JSON API for downloads. Only the XML and gRPC APIs support getting custom metadata via the Reader; for JSON make a separate call to ObjectHandle.Attrs.

func (*Reader) ReadHandle added in v1.50.0

func (r *Reader) ReadHandle() ReadHandle

ReadHandle returns the read handle associated with an object. ReadHandle will be periodically refreshed.

ReadHandle requires the gRPC-specific bi-directional read API, which is in private preview; please contact your account manager if interested. Note that this is only valid for gRPC clients and only with zonal buckets.

func (*Reader) Remain

Remain returns the number of bytes left to read, or -1 if unknown. Unfinalized objects will return -1.

Size returns the size of the object in bytes. The returned value is always the same and is not affected by calls to Read or Close. Size may be out of date for a Reader to an unfinalized object.

Deprecated: use Reader.Attrs.Size.

WriteTo writes all the data from the Reader to w. Fulfills the io.WriterTo interface. This is called implicitly when calling io.Copy on a Reader.

ReaderObjectAttrs are attributes about the object being read. These are populated during the New call. This struct only holds a subset of object attributes: to get the full set of attributes, use ObjectHandle.Attrs.

Each field is read-only.

type RestoreOptions struct {
    CopySourceACL bool
}

RestoreOptions allows you to set options when restoring an object.

RetentionPolicy enforces a minimum retention time for all objects contained in the bucket.

Any attempt to overwrite or delete objects younger than the retention period will result in an error. An unlocked retention policy can be modified or removed from the bucket via the Update method. A locked retention policy cannot be removed or shortened in duration for the lifetime of the bucket.

This feature is in private alpha release. It is not currently available to most customers. It might be changed in backwards-incompatible ways and is not subject to any SLA or deprecation policy.

type RetryOption interface {

}

RetryOption allows users to configure non-default retry behavior for API calls made to GCS.

func WithBackoff(backoff gax.Backoff) RetryOption

WithBackoff allows configuration of the backoff timing used for retries. Available configuration options (Initial, Max and Multiplier) are described at https://pkg.go.dev/github.com/googleapis/gax-go/v2#Backoff. If any fields are not supplied by the user, gax default values will be used.

func WithErrorFunc(shouldRetry func(err error) bool) RetryOption

WithErrorFunc allows users to pass a custom function to the retryer. Errors will be retried if and only if `shouldRetry(err)` returns true. By default, the following errors are retried (see ShouldRetry for the default function):

- HTTP responses with codes 408, 429, 502, 503, and 504.

- Transient network errors such as connection reset and io.ErrUnexpectedEOF.

- Errors which are considered transient using the Temporary() interface.

- Wrapped versions of these errors.

This option can be used to retry on a different set of errors than the default. Users can use the default ShouldRetry function inside their custom function if they only want to make minor modifications to default behavior.
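For example, a hedged sketch that keeps the default transient-error behavior but additionally retries 404s (names are placeholders):

package main

import (
    "context"
    "errors"
    "net/http"

    "cloud.google.com/go/storage"
    "google.golang.org/api/googleapi"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    // Retry on the default transient errors, and additionally on 404s.
    obj := client.Bucket("my-bucket").Object("my-object").Retryer(
        storage.WithErrorFunc(func(err error) bool {
            if storage.ShouldRetry(err) {
                return true
            }
            var e *googleapi.Error
            return errors.As(err, &e) && e.Code == http.StatusNotFound
        }),
    )
    if _, err := obj.Attrs(ctx); err != nil {
        // TODO: handle error.
    }
}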

func WithMaxAttempts(maxAttempts int) RetryOption

WithMaxAttempts configures the maximum number of times an API call can be made in the case of retryable errors. For example, if you set WithMaxAttempts(5), the operation will be attempted up to 5 times total (initial call plus 4 retries). Without this setting, operations will continue retrying indefinitely until either the context is canceled or a deadline is reached.

func WithPolicy(policy RetryPolicy) RetryOption

WithPolicy allows the configuration of which operations should be performed with retries for transient errors.

RetryPolicy describes the available policies for which operations should be retried. The default is `RetryIdempotent`.

const (
    RetryIdempotent RetryPolicy = iota
    RetryAlways
    RetryNever
)

SignedURLOptions allows you to restrict the access to the signed URL.
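For example, a minimal sketch signing a V4 GET URL with the client's credentials (bucket and object names are placeholders):

package main

import (
    "context"
    "fmt"
    "time"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    // Sign a GET URL valid for 15 minutes using the client's credentials.
    url, err := client.Bucket("my-bucket").SignedURL("my-object", &storage.SignedURLOptions{
        Method:  "GET",
        Expires: time.Now().Add(15 * time.Minute),
        Scheme:  storage.SigningSchemeV4,
    })
    if err != nil {
        // TODO: handle error.
    }
    fmt.Println(url)
}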

SigningScheme determines the API version to use when signing URLs.

const (
    SigningSchemeDefault SigningScheme = iota
    SigningSchemeV2
    SigningSchemeV4
)

SoftDeletePolicy contains the bucket's soft delete policy, which defines the period of time that soft-deleted objects will be retained, and cannot be permanently deleted.

type URLStyle interface {

}

URLStyle determines the style to use for the signed URL. PathStyle is the default. All non-default options work with V4 scheme only. See https://cloud.google.com/storage/docs/request-endpoints for details.

func PathStyle() URLStyle

PathStyle is the default style, and will generate a URL of the form "<host-name>/<bucket-name>/<object-name>". By default, <host-name> is storage.googleapis.com, but setting an endpoint on the storage Client or through STORAGE_EMULATOR_HOST overrides this. Setting Hostname on SignedURLOptions or PostPolicyV4Options overrides everything else.

func VirtualHostedStyle() URLStyle

VirtualHostedStyle generates a URL relative to the bucket's virtual hostname, e.g. "<bucket-name>.storage.googleapis.com/<object-name>".

type UniformBucketLevelAccess struct {
    Enabled    bool
    LockedTime time.Time
}

UniformBucketLevelAccess configures access checks to use only bucket-level IAM policies.

A Writer writes a Cloud Storage object.

func (w *Writer) Attrs() *ObjectAttrs

Attrs returns metadata about a successfully-written object. It's only valid to call it after Close returns nil.

Close completes the write operation and flushes any buffered data. If Close doesn't return an error, metadata about the written object can be retrieved by calling Attrs.

CloseWithError aborts the write operation with the provided error. CloseWithError always returns nil.

Deprecated: cancel the context passed to NewWriter instead.

Flush syncs all bytes currently in the Writer's buffer to Cloud Storage. It returns the offset up to which bytes have been synced to Cloud Storage, and an error.

If Flush is never called, Writer will sync data automatically every [Writer.ChunkSize] bytes and on Writer.Close.

[Writer.ProgressFunc] will be called on Flush if present.

Do not call Flush concurrently with Write or Close. A single Writer is not safe for unsynchronized use across threads.

Note that calling Flush very early (before 512 bytes) may interfere with automatic content sniffing in the Writer.

Flush is supported only on gRPC clients where [Writer.Append] is set to true. This feature is in preview and is not yet available for general use.

package main

import (
    "context"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewGRPCClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    bucketName := "my-rapid-bucket"
    objectName := "appendable-obj"
    obj := client.Bucket(bucketName).Object(objectName)

    // Create an appendable object using NewWriter, or append to an existing
    // one with NewWriterFromAppendableObject.
    w := obj.NewWriter(ctx)
    w.Append = true

    // After calling Writer.Write, the data may still be in a local buffer in
    // the client.
    if _, err := w.Write([]byte("hello ")); err != nil {
        // TODO: handle error.
    }
    // Call Writer.Flush to ensure data is synced to GCS.
    if _, err := w.Flush(); err != nil {
        // TODO: handle error.
    }
    // Write remaining data and close writer. Data is automatically synced
    // at ChunkSize boundaries and when Close is called.
    if _, err := w.Write([]byte("world!")); err != nil {
        // TODO: handle error.
    }
    if err := w.Close(); err != nil {
        // TODO: handle error.
    }
}

Write appends to w. It implements the io.Writer interface.

Since writes happen asynchronously, Write may return a nil error even though the write failed (or will fail). Always use the error returned from Writer.Close to determine if the upload was successful.

Writes will be retried on transient errors from the server, unless Writer.ChunkSize has been set to zero.

package main

import (
    "context"
    "fmt"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    wc := client.Bucket("bucketname").Object("filename1").NewWriter(ctx)
    wc.ContentType = "text/plain"
    wc.ACL = []storage.ACLRule{{Entity: storage.AllUsers, Role: storage.RoleReader}}
    if _, err := wc.Write([]byte("hello world")); err != nil {
        // TODO: handle error.
        // Note that Write may return nil in some error situations,
        // so always check the error from Close.
    }
    if err := wc.Close(); err != nil {
        // TODO: handle error.
    }
    fmt.Println("updated object:", wc.Attrs())
}

To make sure the data you write is uncorrupted, use an MD5 or CRC32c checksum. This example illustrates CRC32c.

package main

import (
    "context"
    "fmt"
    "hash/crc32"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    data := []byte("verify me")
    wc := client.Bucket("bucketname").Object("filename1").NewWriter(ctx)
    wc.CRC32C = crc32.Checksum(data, crc32.MakeTable(crc32.Castagnoli))
    wc.SendCRC32C = true
    // Write the same bytes the checksum was computed over; a mismatch
    // would cause the upload to fail.
    if _, err := wc.Write(data); err != nil {
        // TODO: handle error.
        // Note that Write may return nil in some error situations,
        // so always check the error from Close.
    }
    if err := wc.Close(); err != nil {
        // TODO: handle error.
    }
    fmt.Println("updated object:", wc.Attrs())
}

To limit the time to write an object (or do anything else that takes a context), use context.WithTimeout.

package main

import (
    "context"
    "fmt"
    "time"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: handle error.
    }
    tctx, cancel := context.WithTimeout(ctx, 30*time.Second)
    defer cancel() // Cancel when done, whether we time out or not.
    wc := client.Bucket("bucketname").Object("filename1").NewWriter(tctx)
    wc.ContentType = "text/plain"
    wc.ACL = []storage.ACLRule{{Entity: storage.AllUsers, Role: storage.RoleReader}}
    if _, err := wc.Write([]byte("hello world")); err != nil {
        // TODO: handle error.
        // Note that Write may return nil in some error situations,
        // so always check the error from Close.
    }
    if err := wc.Close(); err != nil {
        // TODO: handle error.
    }
    fmt.Println("updated object:", wc.Attrs())
}