Client — AWS SDK for Ruby V3

Constructor Details

#initialize(options) ⇒ Client

Returns a new instance of Client.

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 571
def initialize(*args)
  super
end
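A minimal construction sketch follows; the Region value is a placeholder, and credentials are assumed to be resolved from the environment, shared config files, or an instance profile:

require "aws-sdk-s3"

# Build a client; any option accepted by the constructor (credentials,
# :endpoint, :retry_limit, and so on) can be passed here.
s3 = Aws::S3::Client.new(region: "us-east-1")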

Instance Method Details

#abort_multipart_upload(params = {}) ⇒ Types::AbortMultipartUploadOutput

This operation aborts a multipart upload. After a multipart upload is aborted, no additional parts can be uploaded using that upload ID. The storage consumed by any previously uploaded parts will be freed. However, if any part uploads are currently in progress, those part uploads might or might not succeed. As a result, it might be necessary to abort a given multipart upload multiple times in order to completely free all storage consumed by all parts.

To verify that all parts have been removed and prevent getting charged for the part storage, you should call the ListParts API operation and ensure that the parts list is empty.
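As a sketch of that check with this client (bucket, key, and upload ID are placeholder values):

require "aws-sdk-s3"

s3 = Aws::S3::Client.new(region: "us-east-1")
upload_id = "EXAMPLE_UPLOAD_ID" # placeholder returned by create_multipart_upload

s3.abort_multipart_upload(
  bucket: "amzn-s3-demo-bucket",
  key: "large-object",
  upload_id: upload_id
)

begin
  parts = s3.list_parts(bucket: "amzn-s3-demo-bucket", key: "large-object", upload_id: upload_id)
  # Parts can remain if an in-progress part upload completed after the
  # abort; in that case, abort again until the list is empty.
  puts "parts remaining: #{parts.parts.count}"
rescue Aws::S3::Errors::NoSuchUpload
  # The upload no longer exists, so all part storage has been freed.
end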

* Directory buckets - If multipart uploads in a directory bucket are in progress, you can't delete the bucket until all the in-progress multipart uploads are aborted or completed. To delete these in-progress multipart uploads, use the ListMultipartUploads operation to list the in-progress multipart uploads in the bucket and use the AbortMultipartUpload operation to abort all the in-progress multipart uploads.

Permissions

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

The following operations are related to AbortMultipartUpload:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 782
def abort_multipart_upload(params = {}, options = {})
  req = build_request(:abort_multipart_upload, params)
  req.send_request(options)
end

#complete_multipart_upload(params = {}) ⇒ Types::CompleteMultipartUploadOutput

Completes a multipart upload by assembling previously uploaded parts.

You first initiate the multipart upload and then upload all parts using the UploadPart operation or the UploadPartCopy operation. After successfully uploading all relevant parts of an upload, you call this CompleteMultipartUpload operation to complete the upload. Upon receiving this request, Amazon S3 concatenates all the parts in ascending order by part number to create a new object. In the CompleteMultipartUpload request, you must provide the parts list and ensure that the parts list is complete. The CompleteMultipartUpload API operation concatenates the parts that you provide in the list. For each part in the list, you must provide the PartNumber value and the ETag value that are returned after that part was uploaded.
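A minimal sketch of that flow with this client follows (bucket name, key, and part bodies are placeholders; every part except the last must be at least 5 MB):

require "aws-sdk-s3"

s3 = Aws::S3::Client.new(region: "us-east-1")
bucket = "amzn-s3-demo-bucket"
key = "assembled-object"

upload = s3.create_multipart_upload(bucket: bucket, key: key)

# Upload each part and keep the ETag that Amazon S3 returns for it.
bodies = ["a" * (5 * 1024 * 1024), "final part"]
parts = bodies.each_with_index.map do |body, index|
  resp = s3.upload_part(
    bucket: bucket, key: key, upload_id: upload.upload_id,
    part_number: index + 1, body: body
  )
  { etag: resp.etag, part_number: index + 1 }
end

# Provide the complete, ordered parts list to assemble the object.
s3.complete_multipart_upload(
  bucket: bucket, key: key, upload_id: upload.upload_id,
  multipart_upload: { parts: parts }
)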

The processing of a CompleteMultipartUpload request could take several minutes to finalize. After Amazon S3 begins processing the request, it sends an HTTP response header that specifies a 200 OK response. While processing is in progress, Amazon S3 periodically sends white space characters to keep the connection from timing out. A request could fail after the initial 200 OK response has been sent. This means that a 200 OK response can contain either a success or an error. The error response might be embedded in the 200 OK response. If you call this API operation directly, make sure to design your application to parse the contents of the response and handle it appropriately. If you use Amazon Web Services SDKs, SDKs handle this condition. The SDKs detect the embedded error and apply error handling per your configuration settings (including automatically retrying the request as appropriate). If the condition persists, the SDKs throw an exception (or, for the SDKs that don't use exceptions, they return an error).

Note that if CompleteMultipartUpload fails, applications should be prepared to retry any failed requests (including 500 error responses). For more information, see Amazon S3 Error Best Practices.

You can't use Content-Type: application/x-www-form-urlencoded for the CompleteMultipartUpload requests. Also, if you don't provide a Content-Type header, CompleteMultipartUpload can still return a 200 OK response.

For more information about multipart uploads, see Uploading Objects Using Multipart Upload in the Amazon S3 User Guide.

Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://amzn-s3-demo-bucket.s3express-zone-id.region-code.amazonaws.com/key-name. Path-style requests are not supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

Permissions

Special errors

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

The following operations are related to CompleteMultipartUpload:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 1278
def complete_multipart_upload(params = {}, options = {})
  req = build_request(:complete_multipart_upload, params)
  req.send_request(options)
end

#copy_object(params = {}) ⇒ Types::CopyObjectOutput

Creates a copy of an object that is already stored in Amazon S3.

You can store individual objects of up to 5 TB in Amazon S3. You create a copy of your object up to 5 GB in size in a single atomic action using this API. However, to copy an object greater than 5 GB, you must use the multipart upload Upload Part - Copy (UploadPartCopy) API. For more information, see Copy Object Using the REST Multipart Upload API.
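A single-request copy with this client might look like the following sketch (bucket and key names are placeholders; copy_source takes the source bucket and key as source-bucket/source-key):

require "aws-sdk-s3"

s3 = Aws::S3::Client.new(region: "us-east-1")

# Copy source-key from the source bucket to destination-key in the
# destination bucket in a single atomic action (objects up to 5 GB).
s3.copy_object(
  bucket: "amzn-s3-demo-destination-bucket",
  copy_source: "amzn-s3-demo-source-bucket/source-key",
  key: "destination-key"
)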

You can copy individual objects between general purpose buckets, between directory buckets, and between general purpose buckets and directory buckets.

* Amazon S3 supports copy operations using Multi-Region Access Points only as a destination when using the Multi-Region Access Point ARN.

Both the Region that you want to copy the object from and the Region that you want to copy the object to must be enabled for your account. For more information about how to enable a Region for your account, see Enable or disable a Region for standalone accounts in the Amazon Web Services Account Management Guide.

Amazon S3 transfer acceleration does not support cross-Region copies. If you request a cross-Region copy using a transfer acceleration endpoint, you get a 400 Bad Request error. For more information, see Transfer Acceleration.

Authentication and authorization

All CopyObject requests must be authenticated and signed by using IAM credentials (access key ID and secret access key for the IAM identities). All headers with the x-amz- prefix, including x-amz-copy-source, must be signed. For more information, see REST Authentication.

Directory buckets - You must use the IAM credentials to authenticate and authorize your access to the CopyObject API operation, instead of using the temporary security credentials through the CreateSession API operation.

The Amazon Web Services CLI or SDKs handle authentication and authorization on your behalf.

Permissions

You must have read access to the source object and write access to the destination bucket.

Response and special errors

If the request is an HTTP 1.1 request, the response is chunk encoded. Otherwise, the response does not contain the Content-Length header. You always need to read the entire response body to check whether the copy succeeded.

Charge

The copy request charge is based on the storage class and Region that you specify for the destination object. The request can also result in a data retrieval charge for the source if the source storage class bills for data retrieval. If the copy source is in a different region, the data transfer is billed to the copy source account. For pricing information, see Amazon S3 pricing.

HTTP Host header syntax

The following operations are related to CopyObject:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 2347
def copy_object(params = {}, options = {})
  req = build_request(:copy_object, params)
  req.send_request(options)
end

#create_bucket(params = {}) ⇒ Types::CreateBucketOutput

This action creates an Amazon S3 bucket. To create an Amazon S3 on Outposts bucket, see CreateBucket.

Creates a new S3 bucket. To create a bucket, you must set up Amazon S3 and have a valid Amazon Web Services Access Key ID to authenticate requests. Anonymous requests are never allowed to create buckets. By creating the bucket, you become the bucket owner.

There are two types of buckets: general purpose buckets and directory buckets. For more information about these bucket types, see Creating, configuring, and working with Amazon S3 buckets in the Amazon S3 User Guide.

* General purpose buckets - If you send your CreateBucket request to the s3.amazonaws.com global endpoint, the request goes to the us-east-1 Region. So the signature calculations in Signature Version 4 must use us-east-1 as the Region, even if the location constraint in the request specifies another Region where the bucket is to be created. If you create a bucket in a Region other than US East (N. Virginia), your application must be able to handle 307 redirect. For more information, see Virtual hosting of buckets in the Amazon S3 User Guide.
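For example, creating a general purpose bucket outside us-east-1 with this client requires the matching location constraint (bucket name and Region are placeholders):

require "aws-sdk-s3"

s3 = Aws::S3::Client.new(region: "us-west-2")

# The location constraint must match the Region the bucket is created in.
s3.create_bucket(
  bucket: "amzn-s3-demo-bucket",
  create_bucket_configuration: { location_constraint: "us-west-2" }
)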

Permissions

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is s3express-control.region-code.amazonaws.com.

The following operations are related to CreateBucket:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 2671
def create_bucket(params = {}, options = {})
  req = build_request(:create_bucket, params)
  req.send_request(options)
end

#create_bucket_metadata_table_configuration(params = {}) ⇒ Struct

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 2755
def create_bucket_metadata_table_configuration(params = {}, options = {})
  req = build_request(:create_bucket_metadata_table_configuration, params)
  req.send_request(options)
end

#create_multipart_upload(params = {}) ⇒ Types::CreateMultipartUploadOutput

This action initiates a multipart upload and returns an upload ID. This upload ID is used to associate all of the parts in the specific multipart upload. You specify this upload ID in each of your subsequent upload part requests (see UploadPart). You also include this upload ID in the final request to either complete or abort the multipart upload request. For more information about multipart uploads, see Multipart Upload Overview in the Amazon S3 User Guide.

After you initiate a multipart upload and upload one or more parts, to stop being charged for storing the uploaded parts, you must either complete or abort the multipart upload. Amazon S3 frees up the space used to store the parts and stops charging you for storing them only after you either complete or abort a multipart upload.

If you have configured a lifecycle rule to abort incomplete multipart uploads, the created multipart upload must be completed within the number of days specified in the bucket lifecycle configuration. Otherwise, the incomplete multipart upload becomes eligible for an abort action and Amazon S3 aborts the multipart upload. For more information, see Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Configuration.

* Directory buckets - S3 Lifecycle is not supported by directory buckets.

Request signing

For request signing, multipart upload is just a series of regular requests. You initiate a multipart upload, send one or more requests to upload parts, and then complete the multipart upload process. You sign each request individually. There is nothing special about signing multipart upload requests. For more information about signing, see Authenticating Requests (Amazon Web Services Signature Version 4) in the Amazon S3 User Guide.

Permissions

Encryption

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

The following operations are related to CreateMultipartUpload:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 3715
def create_multipart_upload(params = {}, options = {})
  req = build_request(:create_multipart_upload, params)
  req.send_request(options)
end

#create_session(params = {}) ⇒ Types::CreateSessionOutput

Creates a session that establishes temporary security credentials to support fast authentication and authorization for the Zonal endpoint API operations on directory buckets. For more information about Zonal endpoint API operations that include the Availability Zone in the request endpoint, see S3 Express One Zone APIs in the Amazon S3 User Guide.

To make Zonal endpoint API requests on a directory bucket, use the CreateSession API operation. Specifically, you grant s3express:CreateSession permission to a bucket in a bucket policy or an IAM identity-based policy. Then, you use IAM credentials to make the CreateSession API request on the bucket, which returns temporary security credentials that include the access key ID, secret access key, session token, and expiration. These credentials have associated permissions to access the Zonal endpoint API operations. After the session is created, you don't need to use other policies to grant permissions to each Zonal endpoint API individually. Instead, in your Zonal endpoint API requests, you sign your requests by applying the temporary security credentials of the session to the request headers and following the SigV4 protocol for authentication. You also apply the session token to the x-amz-s3session-token request header for authorization. Temporary security credentials are scoped to the bucket and expire after 5 minutes. After the expiration time, any calls that you make with those credentials will fail. You must use IAM credentials again to make a CreateSession API request that generates a new set of temporary credentials for use. Temporary credentials cannot be extended or refreshed beyond the original specified interval.

If you use Amazon Web Services SDKs, SDKs handle the session token refreshes automatically to avoid service interruptions when a session expires. We recommend that you use the Amazon Web Services SDKs to initiate and manage requests to the CreateSession API. For more information, see Performance guidelines and design patterns in the Amazon S3 User Guide.
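If you do call the operation directly rather than letting the SDK manage sessions for you, a sketch looks like this (the directory bucket name is a placeholder):

require "aws-sdk-s3"

s3 = Aws::S3::Client.new(region: "us-west-2")

resp = s3.create_session(bucket: "amzn-s3-demo-bucket--usw2-az1--x-s3")
creds = resp.credentials

# Temporary credentials for subsequent Zonal endpoint requests; they are
# scoped to the bucket and expire after roughly five minutes.
puts creds.access_key_id
puts creds.expiration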

* You must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket-name.s3express-zone-id.region-code.amazonaws.com. Path-style requests are not supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

Permissions

To obtain temporary security credentials, you must create a bucket policy or an IAM identity-based policy that grants s3express:CreateSession permission to the bucket. In a policy, you can have the s3express:SessionMode condition key to control who can create a ReadWrite or ReadOnly session. For more information about ReadWrite or ReadOnly sessions, see x-amz-create-session-mode. For example policies, see Example bucket policies for S3 Express One Zone and Amazon Web Services Identity and Access Management (IAM) identity-based policies for S3 Express One Zone in the Amazon S3 User Guide.

To grant cross-account access to Zonal endpoint API operations, the bucket policy should also grant both accounts the s3express:CreateSession permission.

If you want to encrypt objects with SSE-KMS, you must also have the kms:GenerateDataKey and the kms:Decrypt permissions in IAM identity-based policies and KMS key policies for the target KMS key.

Encryption

For directory buckets, there are only two supported options for server-side encryption: server-side encryption with Amazon S3 managed keys (SSE-S3) (AES256) and server-side encryption with KMS keys (SSE-KMS) (aws:kms). We recommend that the bucket's default encryption uses the desired encryption configuration and you don't override the bucket default encryption in your CreateSession requests or PUT object requests. Then, new objects are automatically encrypted with the desired encryption settings. For more information, see Protecting data with server-side encryption in the Amazon S3 User Guide. For more information about the encryption overriding behaviors in directory buckets, see Specifying server-side encryption with KMS for new object uploads.

For Zonal endpoint (object-level) API operations except CopyObject and UploadPartCopy, you authenticate and authorize requests through CreateSession for low latency. To encrypt new objects in a directory bucket with SSE-KMS, you must specify SSE-KMS as the directory bucket's default encryption configuration with a KMS key (specifically, a customer managed key). Then, when a session is created for Zonal endpoint API operations, new objects are automatically encrypted and decrypted with SSE-KMS and S3 Bucket Keys during the session.

Only 1 customer managed key is supported per directory bucket for the lifetime of the bucket. The Amazon Web Services managed key (aws/s3) isn't supported. After you specify SSE-KMS as your bucket's default encryption configuration with a customer managed key, you can't change the customer managed key for the bucket's SSE-KMS configuration.

In the Zonal endpoint API calls (except CopyObject and UploadPartCopy) using the REST API, you can't override the values of the encryption settings (x-amz-server-side-encryption, x-amz-server-side-encryption-aws-kms-key-id, x-amz-server-side-encryption-context, and x-amz-server-side-encryption-bucket-key-enabled) from the CreateSession request. You don't need to explicitly specify these encryption settings values in Zonal endpoint API calls, and Amazon S3 will use the encryption settings values from the CreateSession request to protect new objects in the directory bucket.

When you use the CLI or the Amazon Web Services SDKs, for CreateSession, the session token refreshes automatically to avoid service interruptions when a session expires. The CLI or the Amazon Web Services SDKs use the bucket's default encryption configuration for the CreateSession request. It's not supported to override the encryption settings values in the CreateSession request. Also, in the Zonal endpoint API calls (except CopyObject and UploadPartCopy), it's not supported to override the values of the encryption settings from the CreateSession request.

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4009
def create_session(params = {}, options = {})
  req = build_request(:create_session, params)
  req.send_request(options)
end

#delete_bucket(params = {}) ⇒ Struct

Deletes the S3 bucket. All objects (including all object versions and delete markers) in the bucket must be deleted before the bucket itself can be deleted.

* Directory buckets - If multipart uploads in a directory bucket are in progress, you can't delete the bucket until all the in-progress multipart uploads are aborted or completed.

Permissions

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is s3express-control.region-code.amazonaws.com.

The following operations are related to DeleteBucket:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4119
def delete_bucket(params = {}, options = {})
  req = build_request(:delete_bucket, params)
  req.send_request(options)
end

#delete_bucket_analytics_configuration(params = {}) ⇒ Struct

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4185
def delete_bucket_analytics_configuration(params = {}, options = {})
  req = build_request(:delete_bucket_analytics_configuration, params)
  req.send_request(options)
end

#delete_bucket_cors(params = {}) ⇒ Struct

This operation is not supported for directory buckets.

Deletes the cors configuration information set for the bucket.

To use this operation, you must have permission to perform the s3:PutBucketCORS action. The bucket owner has this permission by default and can grant this permission to others.

For information about cors, see Enabling Cross-Origin Resource Sharing in the Amazon S3 User Guide.

Related Resources

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4245
def delete_bucket_cors(params = {}, options = {})
  req = build_request(:delete_bucket_cors, params)
  req.send_request(options)
end

#delete_bucket_encryption(params = {}) ⇒ Struct

This implementation of the DELETE action resets the default encryption for the bucket as server-side encryption with Amazon S3 managed keys (SSE-S3).

* General purpose buckets - For information about the bucket default encryption feature, see Amazon S3 Bucket Default Encryption in the Amazon S3 User Guide.

Permissions

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is s3express-control.region-code.amazonaws.com.

The following operations are related to DeleteBucketEncryption:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4350
def delete_bucket_encryption(params = {}, options = {})
  req = build_request(:delete_bucket_encryption, params)
  req.send_request(options)
end

#delete_bucket_intelligent_tiering_configuration(params = {}) ⇒ Struct

This operation is not supported for directory buckets.

Deletes the S3 Intelligent-Tiering configuration from the specified bucket.

The S3 Intelligent-Tiering storage class is designed to optimize storage costs by automatically moving data to the most cost-effective storage access tier, without performance impact or operational overhead. S3 Intelligent-Tiering delivers automatic cost savings in three low latency and high throughput access tiers. To get the lowest storage cost on data that can be accessed in minutes to hours, you can choose to activate additional archiving capabilities.

The S3 Intelligent-Tiering storage class is the ideal storage class for data with unknown, changing, or unpredictable access patterns, independent of object size or retention period. If the size of an object is less than 128 KB, it is not monitored and not eligible for auto-tiering. Smaller objects can be stored, but they are always charged at the Frequent Access tier rates in the S3 Intelligent-Tiering storage class.

For more information, see Storage class for automatically optimizing frequently and infrequently accessed objects.

Operations related to DeleteBucketIntelligentTieringConfiguration include:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4417
def delete_bucket_intelligent_tiering_configuration(params = {}, options = {})
  req = build_request(:delete_bucket_intelligent_tiering_configuration, params)
  req.send_request(options)
end

#delete_bucket_inventory_configuration(params = {}) ⇒ Struct

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4482
def delete_bucket_inventory_configuration(params = {}, options = {})
  req = build_request(:delete_bucket_inventory_configuration, params)
  req.send_request(options)
end

#delete_bucket_lifecycle(params = {}) ⇒ Struct

Deletes the lifecycle configuration from the specified bucket. Amazon S3 removes all the lifecycle configuration rules in the lifecycle subresource associated with the bucket. Your objects never expire, and Amazon S3 no longer automatically deletes any objects on the basis of rules contained in the deleted lifecycle configuration.

Permissions

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is s3express-control.region.amazonaws.com.

For more information about the object expiration, see Elements to Describe Lifecycle Actions.

Related actions include:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4592
def delete_bucket_lifecycle(params = {}, options = {})
  req = build_request(:delete_bucket_lifecycle, params)
  req.send_request(options)
end

#delete_bucket_metadata_table_configuration(params = {}) ⇒ Struct

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4643
def delete_bucket_metadata_table_configuration(params = {}, options = {})
  req = build_request(:delete_bucket_metadata_table_configuration, params)
  req.send_request(options)
end

#delete_bucket_metrics_configuration(params = {}) ⇒ Struct

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4713
def delete_bucket_metrics_configuration(params = {}, options = {})
  req = build_request(:delete_bucket_metrics_configuration, params)
  req.send_request(options)
end

#delete_bucket_ownership_controls(params = {}) ⇒ Struct

This operation is not supported for directory buckets.

Removes OwnershipControls for an Amazon S3 bucket. To use this operation, you must have the s3:PutBucketOwnershipControls permission. For more information about Amazon S3 permissions, see Specifying Permissions in a Policy.

For information about Amazon S3 Object Ownership, see Using Object Ownership.

The following operations are related to DeleteBucketOwnershipControls:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4763
def delete_bucket_ownership_controls(params = {}, options = {})
  req = build_request(:delete_bucket_ownership_controls, params)
  req.send_request(options)
end

#delete_bucket_policy(params = {}) ⇒ Struct

Deletes the policy of a specified bucket.

Directory buckets - For directory buckets, you must make requests for this API operation to the Regional endpoint. These endpoints support path-style requests in the format https://s3express-control.region-code.amazonaws.com/bucket-name. Virtual-hosted-style requests aren't supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

Permissions

If you are using an identity other than the root user of the Amazon Web Services account that owns the bucket, the calling identity must both have the DeleteBucketPolicy permissions on the specified bucket and belong to the bucket owner's account in order to use this operation.

If you don't have DeleteBucketPolicy permissions, Amazon S3 returns a 403 Access Denied error. If you have the correct permissions, but you're not using an identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not Allowed error.

To ensure that bucket owners don't inadvertently lock themselves out of their own buckets, the root principal in a bucket owner's Amazon Web Services account can perform the GetBucketPolicy,PutBucketPolicy, and DeleteBucketPolicy API actions, even if their bucket policy explicitly denies the root principal's access. Bucket owner root principals can only be blocked from performing these API actions by VPC endpoint policies and Amazon Web Services Organizations policies.

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is s3express-control.region-code.amazonaws.com.

The following operations are related to DeleteBucketPolicy:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4893
def delete_bucket_policy(params = {}, options = {})
  req = build_request(:delete_bucket_policy, params)
  req.send_request(options)
end

#delete_bucket_replication(params = {}) ⇒ Struct

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4963
def delete_bucket_replication(params = {}, options = {})
  req = build_request(:delete_bucket_replication, params)
  req.send_request(options)
end

#delete_bucket_tagging(params = {}) ⇒ Struct

This operation is not supported for directory buckets.

Deletes the tags from the bucket.

To use this operation, you must have permission to perform the s3:PutBucketTagging action. By default, the bucket owner has this permission and can grant this permission to others.

The following operations are related to DeleteBucketTagging:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 5019
def delete_bucket_tagging(params = {}, options = {})
  req = build_request(:delete_bucket_tagging, params)
  req.send_request(options)
end

#delete_bucket_website(params = {}) ⇒ Struct

This operation is not supported for directory buckets.

This action removes the website configuration for a bucket. Amazon S3 returns a 200 OK response upon successfully deleting a website configuration on the specified bucket. You will get a 200 OK response if the website configuration you are trying to delete does not exist on the bucket. Amazon S3 returns a 404 response if the bucket specified in the request does not exist.

This DELETE action requires the S3:DeleteBucketWebsite permission. By default, only the bucket owner can delete the website configuration attached to a bucket. However, bucket owners can grant other users permission to delete the website configuration by writing a bucket policy granting them the S3:DeleteBucketWebsite permission.

For more information about hosting websites, see Hosting Websites on Amazon S3.

The following operations are related to DeleteBucketWebsite:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 5087
def delete_bucket_website(params = {}, options = {})
  req = build_request(:delete_bucket_website, params)
  req.send_request(options)
end

#delete_object(params = {}) ⇒ Types::DeleteObjectOutput

Removes an object from a bucket. The behavior depends on the bucket's versioning state:

* Directory buckets - S3 Versioning isn't enabled and supported for directory buckets. For this API operation, only the null value of the version ID is supported by directory buckets. You can only specify null to the versionId query parameter in the request.

To remove a specific version, you must use the versionId query parameter. Using this query parameter permanently deletes the version. If the object deleted is a delete marker, Amazon S3 sets the response header x-amz-delete-marker to true.
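With this client, the version to delete is passed as version_id and the delete-marker flag is surfaced on the response (bucket, key, and version ID are placeholders):

require "aws-sdk-s3"

s3 = Aws::S3::Client.new(region: "us-east-1")

resp = s3.delete_object(
  bucket: "amzn-s3-demo-bucket",
  key: "my-object",
  version_id: "EXAMPLE_VERSION_ID"
)
# True when the version that was permanently removed was a delete marker.
puts resp.delete_marker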

If the object you want to delete is in a bucket where the bucket versioning configuration is MFA Delete enabled, you must include the x-amz-mfa request header in the DELETE versionId request. Requests that include x-amz-mfa must use HTTPS. For more information about MFA Delete, see Using MFA Delete in the Amazon S3 User Guide. To see sample requests that use versioning, see Sample Request.

Directory buckets - MFA delete is not supported by directory buckets.

You can delete objects by explicitly calling DELETE Object or by configuring a bucket lifecycle (PutBucketLifecycle) to enable Amazon S3 to remove them for you. If you want to block users or accounts from removing or deleting objects from your bucket, you must deny them the s3:DeleteObject, s3:DeleteObjectVersion, and s3:PutLifeCycleConfiguration actions.

Directory buckets - S3 Lifecycle is not supported by directory buckets.

Permissions

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

The following action is related to DeleteObject:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 5404
def delete_object(params = {}, options = {})
  req = build_request(:delete_object, params)
  req.send_request(options)
end

#delete_object_tagging(params = {}) ⇒ Types::DeleteObjectTaggingOutput

This operation is not supported for directory buckets.

Removes the entire tag set from the specified object. For more information about managing object tags, see Object Tagging.

To use this operation, you must have permission to perform the s3:DeleteObjectTagging action.

To delete tags of a specific object version, add the versionId query parameter in the request. You will need permission for the s3:DeleteObjectVersionTagging action.

The following operations are related to DeleteObjectTagging:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 5530
def delete_object_tagging(params = {}, options = {})
  req = build_request(:delete_object_tagging, params)
  req.send_request(options)
end

#delete_objects(params = {}) ⇒ Types::DeleteObjectsOutput

This operation enables you to delete multiple objects from a bucket using a single HTTP request. If you know the object keys that you want to delete, then this operation provides a suitable alternative to sending individual delete requests, reducing per-request overhead.

The request can contain a list of up to 1,000 keys that you want to delete. In the XML, you provide the object key names, and optionally, version IDs if you want to delete a specific version of the object from a versioning-enabled bucket. For each key, Amazon S3 performs a delete operation and returns the result of that delete, success or failure, in the response. If the object specified in the request isn't found, Amazon S3 confirms the deletion by returning the result as deleted.

* Directory buckets - S3 Versioning isn't enabled and supported for directory buckets.

The operation supports two modes for the response: verbose and quiet. By default, the operation uses verbose mode in which the response includes the result of deletion of each key in your request. In quiet mode the response includes only keys where the delete operation encountered an error. For a successful deletion in a quiet mode, the operation does not return any information about the delete in the response body.
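A quiet-mode batch delete with this client might look like the following sketch (bucket and keys are placeholders):

require "aws-sdk-s3"

s3 = Aws::S3::Client.new(region: "us-east-1")

resp = s3.delete_objects(
  bucket: "amzn-s3-demo-bucket",
  delete: {
    objects: [{ key: "object-1" }, { key: "object-2" }],
    quiet: true # only keys whose deletion failed are reported back
  }
)
resp.errors.each { |e| warn "#{e.key}: #{e.code} #{e.message}" }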

When you perform this action on an MFA Delete enabled bucket and attempt to delete any versioned objects, you must include an MFA token. If you do not provide one, the entire request will fail, even if there are non-versioned objects you are trying to delete. If you provide an invalid token, whether there are versioned keys in the request or not, the entire Multi-Object Delete request will fail. For information about MFA Delete, see MFA Delete in the Amazon S3 User Guide.

Directory buckets - MFA delete is not supported by directory buckets.

Permissions

Content-MD5 request header

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

The following operations are related to DeleteObjects:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 5913
def delete_objects(params = {}, options = {})
  req = build_request(:delete_objects, params)
  req.send_request(options)
end

#delete_public_access_block(params = {}) ⇒ Struct

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 5970
def delete_public_access_block(params = {}, options = {})
  req = build_request(:delete_public_access_block, params)
  req.send_request(options)
end

#get_bucket_accelerate_configuration(params = {}) ⇒ Types::GetBucketAccelerateConfigurationOutput

This operation is not supported for directory buckets.

This implementation of the GET action uses the accelerate subresource to return the Transfer Acceleration state of a bucket, which is either Enabled or Suspended. Amazon S3 Transfer Acceleration is a bucket-level feature that enables you to perform faster data transfers to and from Amazon S3.

To use this operation, you must have permission to perform the s3:GetAccelerateConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to your Amazon S3 Resources in the Amazon S3 User Guide.

You set the Transfer Acceleration state of an existing bucket to Enabled or Suspended by using the PutBucketAccelerateConfiguration operation.

A GET accelerate request does not return a state value for a bucket that has no transfer acceleration state. A bucket has no Transfer Acceleration state if a state has never been set on the bucket.

For more information about transfer acceleration, see Transfer Acceleration in the Amazon S3 User Guide.

The following operations are related to GetBucketAccelerateConfiguration:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 6066
def get_bucket_accelerate_configuration(params = {}, options = {})
  req = build_request(:get_bucket_accelerate_configuration, params)
  req.send_request(options)
end

#get_bucket_acl(params = {}) ⇒ Types::GetBucketAclOutput

This operation is not supported for directory buckets.

This implementation of the GET action uses the acl subresource to return the access control list (ACL) of a bucket. To use GET to return the ACL of the bucket, you must have the READ_ACP access to the bucket. If READ_ACP permission is granted to the anonymous user, you can return the ACL of the bucket without using an authorization header.

When you use this API operation with an access point, provide the alias of the access point in place of the bucket name.

When you use this API operation with an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. If the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError is returned. For more information about InvalidAccessPointAliasError, see List of Error Codes.

If your bucket uses the bucket owner enforced setting for S3 Object Ownership, requests to read ACLs are still supported and return the bucket-owner-full-control ACL with the owner being the account that created the bucket. For more information, see Controlling object ownership and disabling ACLs in the Amazon S3 User Guide.

The following operations are related to GetBucketAcl:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 6162
def get_bucket_acl(params = {}, options = {})
  req = build_request(:get_bucket_acl, params)
  req.send_request(options)
end

#get_bucket_analytics_configuration(params = {}) ⇒ Types::GetBucketAnalyticsConfigurationOutput

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 6248
def get_bucket_analytics_configuration(params = {}, options = {})
  req = build_request(:get_bucket_analytics_configuration, params)
  req.send_request(options)
end

#get_bucket_cors(params = {}) ⇒ Types::GetBucketCorsOutput

This operation is not supported for directory buckets.

Returns the Cross-Origin Resource Sharing (CORS) configuration information set for the bucket.

To use this operation, you must have permission to perform the s3:GetBucketCORS action. By default, the bucket owner has this permission and can grant it to others.

When you use this API operation with an access point, provide the alias of the access point in place of the bucket name.

When you use this API operation with an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. If the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError is returned. For more information about InvalidAccessPointAliasError, see List of Error Codes.

For more information about CORS, see Enabling Cross-Origin Resource Sharing.

The following operations are related to GetBucketCors:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 6368
def get_bucket_cors(params = {}, options = {})
  req = build_request(:get_bucket_cors, params)
  req.send_request(options)
end

#get_bucket_encryption(params = {}) ⇒ Types::GetBucketEncryptionOutput

Returns the default encryption configuration for an Amazon S3 bucket. By default, all buckets have a default encryption configuration that uses server-side encryption with Amazon S3 managed keys (SSE-S3).

* General purpose buckets - For information about the bucket default encryption feature, see Amazon S3 Bucket Default Encryption in the Amazon S3 User Guide.
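Reading the configuration with this client (bucket name is a placeholder):

require "aws-sdk-s3"

s3 = Aws::S3::Client.new(region: "us-east-1")

resp = s3.get_bucket_encryption(bucket: "amzn-s3-demo-bucket")
rule = resp.server_side_encryption_configuration.rules.first
# For example "AES256" (SSE-S3) or "aws:kms" (SSE-KMS).
puts rule.apply_server_side_encryption_by_default.sse_algorithm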

Permissions

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is s3express-control.region-code.amazonaws.com.

The following operations are related to GetBucketEncryption:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 6482
def get_bucket_encryption(params = {}, options = {})
  req = build_request(:get_bucket_encryption, params)
  req.send_request(options)
end

#get_bucket_intelligent_tiering_configuration(params = {}) ⇒ Types::GetBucketIntelligentTieringConfigurationOutput

This operation is not supported for directory buckets.

Gets the S3 Intelligent-Tiering configuration from the specified bucket.

The S3 Intelligent-Tiering storage class is designed to optimize storage costs by automatically moving data to the most cost-effective storage access tier, without performance impact or operational overhead. S3 Intelligent-Tiering delivers automatic cost savings in three low latency and high throughput access tiers. To get the lowest storage cost on data that can be accessed in minutes to hours, you can choose to activate additional archiving capabilities.

The S3 Intelligent-Tiering storage class is the ideal storage class for data with unknown, changing, or unpredictable access patterns, independent of object size or retention period. If the size of an object is less than 128 KB, it is not monitored and not eligible for auto-tiering. Smaller objects can be stored, but they are always charged at the Frequent Access tier rates in the S3 Intelligent-Tiering storage class.

For more information, see Storage class for automatically optimizing frequently and infrequently accessed objects.

Operations related to GetBucketIntelligentTieringConfiguration include:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 6566
def get_bucket_intelligent_tiering_configuration(params = {}, options = {})
  req = build_request(:get_bucket_intelligent_tiering_configuration, params)
  req.send_request(options)
end

#get_bucket_inventory_configuration(params = {}) ⇒ Types::GetBucketInventoryConfigurationOutput

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 6649
def get_bucket_inventory_configuration(params = {}, options = {})
  req = build_request(:get_bucket_inventory_configuration, params)
  req.send_request(options)
end

#get_bucket_lifecycle(params = {}) ⇒ Types::GetBucketLifecycleOutput

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 6767
def get_bucket_lifecycle(params = {}, options = {})
  req = build_request(:get_bucket_lifecycle, params)
  req.send_request(options)
end

#get_bucket_lifecycle_configuration(params = {}) ⇒ Types::GetBucketLifecycleConfigurationOutput

Returns the lifecycle configuration information set on the bucket. For information about lifecycle configuration, see Object Lifecycle Management.

Bucket lifecycle configuration now supports specifying a lifecycle rule using an object key name prefix, one or more object tags, object size, or any combination of these. Accordingly, this section describes the latest API, which is compatible with the new functionality. The previous version of the API supported filtering based only on an object key name prefix, which is supported for general purpose buckets for backward compatibility. For the related API description, see GetBucketLifecycle.

Lifecycle configurations for directory buckets only support expiring objects and cancelling multipart uploads. Expiring of versioned objects, transitions, and tag filters are not supported.
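Fetching the rules with this client (bucket name is a placeholder):

require "aws-sdk-s3"

s3 = Aws::S3::Client.new(region: "us-east-1")

resp = s3.get_bucket_lifecycle_configuration(bucket: "amzn-s3-demo-bucket")
resp.rules.each do |rule|
  puts "#{rule.id}: #{rule.status}"
end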

Permissions

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is s3express-control.region.amazonaws.com.

GetBucketLifecycleConfiguration has the following special error:

The following operations are related to GetBucketLifecycleConfiguration:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 6953
def get_bucket_lifecycle_configuration(params = {}, options = {})
  req = build_request(:get_bucket_lifecycle_configuration, params)
  req.send_request(options)
end

#get_bucket_location(params = {}) ⇒ Types::GetBucketLocationOutput

This operation is not supported for directory buckets.

Returns the Region the bucket resides in. You set the bucket's Region using the LocationConstraint request parameter in a CreateBucket request. For more information, see CreateBucket.

When you use this API operation with an access point, provide the alias of the access point in place of the bucket name.

When you use this API operation with an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. If the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError is returned. For more information about InvalidAccessPointAliasError, see List of Error Codes.

We recommend that you use HeadBucket to return the Region that a bucket resides in. For backward compatibility, Amazon S3 continues to support GetBucketLocation.

The following operations are related to GetBucketLocation:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 7050
def get_bucket_location(params = {}, options = {})
  req = build_request(:get_bucket_location, params)
  req.send_request(options)
end

#get_bucket_logging(params = {}) ⇒ Types::GetBucketLoggingOutput

This operation is not supported for directory buckets.

Returns the logging status of a bucket and the permissions users have to view and modify that status.

The following operations are related to GetBucketLogging:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 7109
def get_bucket_logging(params = {}, options = {})
  req = build_request(:get_bucket_logging, params)
  req.send_request(options)
end

#get_bucket_metadata_table_configuration(params = {}) ⇒ Types::GetBucketMetadataTableConfigurationOutput

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 7172
def get_bucket_metadata_table_configuration(params = {}, options = {})
  req = build_request(:get_bucket_metadata_table_configuration, params)
  req.send_request(options)
end

#get_bucket_metrics_configuration(params = {}) ⇒ Types::GetBucketMetricsConfigurationOutput

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 7258
def get_bucket_metrics_configuration(params = {}, options = {})
  req = build_request(:get_bucket_metrics_configuration, params)
  req.send_request(options)
end

#get_bucket_notification(params = {}) ⇒ Types::NotificationConfigurationDeprecated

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 7389
def get_bucket_notification(params = {}, options = {})
  req = build_request(:get_bucket_notification, params)
  req.send_request(options)
end

#get_bucket_notification_configuration(params = {}) ⇒ Types::NotificationConfiguration

This operation is not supported for directory buckets.

Returns the notification configuration of a bucket.

If notifications are not enabled on the bucket, the action returns an empty NotificationConfiguration element.

By default, you must be the bucket owner to read the notification configuration of a bucket. However, the bucket owner can use a bucket policy to grant permission to other users to read this configuration with the s3:GetBucketNotification permission.

When you use this API operation with an access point, provide the alias of the access point in place of the bucket name.

When you use this API operation with an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. If the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError is returned. For more information about InvalidAccessPointAliasError, see List of Error Codes.

For more information about setting and reading the notification configuration on a bucket, see Setting Up Notification of Bucket Events. For more information about bucket policies, see Using Bucket Policies.

The following action is related to GetBucketNotification:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 7504
def get_bucket_notification_configuration(params = {}, options = {})
  req = build_request(:get_bucket_notification_configuration, params)
  req.send_request(options)
end

#get_bucket_ownership_controls(params = {}) ⇒ Types::GetBucketOwnershipControlsOutput

This operation is not supported for directory buckets.

Retrieves OwnershipControls for an Amazon S3 bucket. To use this operation, you must have the s3:GetBucketOwnershipControls permission. For more information about Amazon S3 permissions, see Specifying permissions in a policy.

For information about Amazon S3 Object Ownership, see Using Object Ownership.

The following operations are related to GetBucketOwnershipControls:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 7561
def get_bucket_ownership_controls(params = {}, options = {})
  req = build_request(:get_bucket_ownership_controls, params)
  req.send_request(options)
end

#get_bucket_policy(params = {}) ⇒ Types::GetBucketPolicyOutput

Returns the policy of a specified bucket.

Directory buckets - For directory buckets, you must make requests for this API operation to the Regional endpoint. These endpoints support path-style requests in the format https://s3express-control.region-code.amazonaws.com/bucket-name. Virtual-hosted-style requests aren't supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

Permissions

If you are using an identity other than the root user of the Amazon Web Services account that owns the bucket, the calling identity must both have the GetBucketPolicy permissions on the specified bucket and belong to the bucket owner's account in order to use this operation.

If you don't have GetBucketPolicy permissions, Amazon S3 returns a 403 Access Denied error. If you have the correct permissions, but you're not using an identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not Allowed error.

To ensure that bucket owners don't inadvertently lock themselves out of their own buckets, the root principal in a bucket owner's Amazon Web Services account can perform the GetBucketPolicy,PutBucketPolicy, and DeleteBucketPolicy API actions, even if their bucket policy explicitly denies the root principal's access. Bucket owner root principals can only be blocked from performing these API actions by VPC endpoint policies and Amazon Web Services Organizations policies.
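With this client the policy document is returned as an IO-like object (bucket name is a placeholder):

require "aws-sdk-s3"

s3 = Aws::S3::Client.new(region: "us-east-1")

resp = s3.get_bucket_policy(bucket: "amzn-s3-demo-bucket")
policy_json = resp.policy.read # the bucket policy as a JSON string
puts policy_json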

Example bucket policies

General purpose buckets example bucket policies - See Bucket policy examples in the Amazon S3 User Guide.

Directory bucket example bucket policies - See Example bucket policies for S3 Express One Zone in the Amazon S3 User Guide.

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is s3express-control.region-code.amazonaws.com.

The following action is related to GetBucketPolicy:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 7726
def get_bucket_policy(params = {}, options = {}, &block)
  req = build_request(:get_bucket_policy, params)
  req.send_request(options, &block)
end

#get_bucket_policy_status(params = {}) ⇒ Types::GetBucketPolicyStatusOutput

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 7791
def get_bucket_policy_status(params = {}, options = {})
  req = build_request(:get_bucket_policy_status, params)
  req.send_request(options)
end

#get_bucket_replication(params = {}) ⇒ Types::GetBucketReplicationOutput

This operation is not supported for directory buckets.

Returns the replication configuration of a bucket.

It can take a while for a put or delete of a replication configuration to propagate to all Amazon S3 systems. Therefore, a get request made soon after a put or delete can return a stale result.

For information about replication configuration, see Replication in the Amazon S3 User Guide.

This action requires permissions for the s3:GetReplicationConfiguration action. For more information about permissions, see Using Bucket Policies and User Policies.

If you include the Filter element in a replication configuration, you must also include the DeleteMarkerReplication and Priority elements. The response also returns those elements.

For information about GetBucketReplication errors, see List of replication-related error codes

The following operations are related to GetBucketReplication:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 7914
def get_bucket_replication(params = {}, options = {})
  req = build_request(:get_bucket_replication, params)
  req.send_request(options)
end

#get_bucket_request_payment(params = {}) ⇒ Types::GetBucketRequestPaymentOutput

This operation is not supported for directory buckets.

Returns the request payment configuration of a bucket. To use this version of the operation, you must be the bucket owner. For more information, see Requester Pays Buckets.

The following operations are related to GetBucketRequestPayment:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 7980
def get_bucket_request_payment(params = {}, options = {})
  req = build_request(:get_bucket_request_payment, params)
  req.send_request(options)
end

#get_bucket_tagging(params = {}) ⇒ Types::GetBucketTaggingOutput

This operation is not supported for directory buckets.

Returns the tag set associated with the bucket.

To use this operation, you must have permission to perform the s3:GetBucketTagging action. By default, the bucket owner has this permission and can grant this permission to others.

GetBucketTagging has the following special error:

The following operations are related to GetBucketTagging:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 8066
def get_bucket_tagging(params = {}, options = {})
  req = build_request(:get_bucket_tagging, params)
  req.send_request(options)
end

#get_bucket_versioning(params = {}) ⇒ Types::GetBucketVersioningOutput

This operation is not supported for directory buckets.

Returns the versioning state of a bucket.

To retrieve the versioning state of a bucket, you must be the bucket owner.

This implementation also returns the MFA Delete status of the versioning state. If the MFA Delete status is enabled, the bucket owner must use an authentication device to change the versioning state of the bucket.
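A sketch of reading both values with this client (bucket name is a placeholder):

require "aws-sdk-s3"

s3 = Aws::S3::Client.new(region: "us-east-1")

resp = s3.get_bucket_versioning(bucket: "amzn-s3-demo-bucket")
puts resp.status     # "Enabled" or "Suspended"; nil if versioning was never configured
puts resp.mfa_delete # "Enabled" or "Disabled"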

The following operations are related to GetBucketVersioning:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 8143
def get_bucket_versioning(params = {}, options = {})
  req = build_request(:get_bucket_versioning, params)
  req.send_request(options)
end

#get_bucket_website(params = {}) ⇒ Types::GetBucketWebsiteOutput

This operation is not supported for directory buckets.

Returns the website configuration for a bucket. To host a website on Amazon S3, you can configure a bucket as a website by adding a website configuration. For more information about hosting websites, see Hosting Websites on Amazon S3.

This GET action requires the S3:GetBucketWebsite permission. By default, only the bucket owner can read the bucket website configuration. However, bucket owners can allow other users to read the website configuration by writing a bucket policy granting them the S3:GetBucketWebsite permission.

The following operations are related to GetBucketWebsite:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 8235
def get_bucket_website(params = {}, options = {})
  req = build_request(:get_bucket_website, params)
  req.send_request(options)
end

#get_object(params = {}) ⇒ Types::GetObjectOutput

Retrieves an object from Amazon S3.

In the GetObject request, specify the full key name for the object.

General purpose buckets - Both the virtual-hosted-style requests and the path-style requests are supported. For a virtual hosted-style request example, if you have the object photos/2006/February/sample.jpg, specify the object key name as /photos/2006/February/sample.jpg. For a path-style request example, if you have the object photos/2006/February/sample.jpg in the bucket named examplebucket, specify the object key name as /examplebucket/photos/2006/February/sample.jpg. For more information about request types, see HTTP Host Header Bucket Specification in the Amazon S3 User Guide.

Directory buckets - Only virtual-hosted-style requests are supported. For a virtual hosted-style request example, if you have the object photos/2006/February/sample.jpg in the bucket named amzn-s3-demo-bucket--usw2-az1--x-s3, specify the object key name as /photos/2006/February/sample.jpg. Also, when you make requests to this API operation, your requests are sent to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket-name.s3express-zone-id.region-code.amazonaws.com/key-name. Path-style requests are not supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

Permissions

Storage classes

If the object you are retrieving is stored in the S3 Glacier Flexible Retrieval storage class, the S3 Glacier Deep Archive storage class, the S3 Intelligent-Tiering Archive Access tier, or the S3 Intelligent-Tiering Deep Archive Access tier, before you can retrieve the object you must first restore a copy using RestoreObject. Otherwise, this operation returns an InvalidObjectState error. For information about restoring archived objects, see Restoring Archived Objects in the Amazon S3 User Guide.

Directory buckets - Directory buckets only support EXPRESS_ONEZONE (the S3 Express One Zone storage class) in Availability Zones and ONEZONE_IA (the S3 One Zone-Infrequent Access storage class) in Dedicated Local Zones. Unsupported storage class values won't write a destination object and will respond with the HTTP status code 400 Bad Request.

Encryption

Encryption request headers, like x-amz-server-side-encryption, should not be sent for GetObject requests if your object uses server-side encryption with Amazon S3 managed encryption keys (SSE-S3), server-side encryption with Key Management Service (KMS) keys (SSE-KMS), or dual-layer server-side encryption with Amazon Web Services KMS keys (DSSE-KMS). If you include the header in your GetObject requests for the object that uses these types of keys, you’ll get an HTTP 400 Bad Request error.

Directory buckets - For directory buckets, there are only two supported options for server-side encryption: SSE-S3 and SSE-KMS. SSE-C isn't supported. For more information, see Protecting data with server-side encryption in the Amazon S3 User Guide.

Overriding response header values through the request

There are times when you want to override certain response header values of a GetObject response. For example, you might override the Content-Disposition response header value through your GetObject request.

You can override values for a set of response headers. These modified response header values are included only in a successful response, that is, when the HTTP status code 200 OK is returned. The headers you can override using the following query parameters in the request are a subset of the headers that Amazon S3 accepts when you create an object.

The response headers that you can override for the GetObject response are Cache-Control, Content-Disposition, Content-Encoding, Content-Language, Content-Type, and Expires.

To override values for a set of response headers in the GetObject response, you can use the following query parameters in the request.

When you use these parameters, you must sign the request by using either an Authorization header or a presigned URL. These parameters cannot be used with an unsigned (anonymous) request.
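As a sketch, the override query parameters map to the response_* request options on this client; the bucket and key below are hypothetical:

require 'aws-sdk-s3'

s3 = Aws::S3::Client.new
resp = s3.get_object(
  bucket: 'amzn-s3-demo-bucket',
  key: 'reports/summary.csv',
  response_content_type: 'text/csv',
  response_content_disposition: 'attachment; filename="summary.csv"'
)
puts resp.content_type  # reflects the overridden value on a 200 OK response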

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

The following operations are related to GetObject:

8903 8904 8905 8906 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 8903 def get_object(params = {}, options = {}, &block) req = build_request(:get_object, params) req.send_request(options, &block) end
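A minimal sketch of the two common retrieval patterns, reading the body into memory or streaming it to disk (bucket, key, and local path are hypothetical):

require 'aws-sdk-s3'

s3 = Aws::S3::Client.new

# Read the object body into memory
resp = s3.get_object(bucket: 'amzn-s3-demo-bucket', key: 'photos/2006/February/sample.jpg')
data = resp.body.read

# Stream directly to a local file to avoid buffering large objects in memory
s3.get_object(
  response_target: '/tmp/sample.jpg',
  bucket: 'amzn-s3-demo-bucket',
  key: 'photos/2006/February/sample.jpg'
)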

#get_object_acl(params = {}) ⇒ Types::GetObjectAclOutput

This operation is not supported for directory buckets.

Returns the access control list (ACL) of an object. To use this operation, you must have s3:GetObjectAcl permissions or READ_ACP access to the object. For more information, see Mapping of ACL permissions and access policy permissions in the Amazon S3 User Guide.

This functionality is not supported for Amazon S3 on Outposts.

By default, GET returns ACL information about the current version of an object. To return ACL information about a different version, use the versionId subresource.

If your bucket uses the bucket owner enforced setting for S3 Object Ownership, requests to read ACLs are still supported and return the bucket-owner-full-control ACL with the owner being the account that created the bucket. For more information, see Controlling object ownership and disabling ACLs in the Amazon S3 User Guide.

The following operations are related to GetObjectAcl:

9089 9090 9091 9092 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 9089 def get_object_acl(params = {}, options = {}) req = build_request(:get_object_acl, params) req.send_request(options) end
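A minimal sketch of inspecting an object's ACL, assuming a hypothetical bucket and key:

require 'aws-sdk-s3'

s3 = Aws::S3::Client.new
resp = s3.get_object_acl(bucket: 'amzn-s3-demo-bucket', key: 'sample.jpg')
puts "owner: #{resp.owner.display_name}"
resp.grants.each do |grant|
  puts "#{grant.grantee.type}: #{grant.permission}"
end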

#get_object_attributes(params = {}) ⇒ Types::GetObjectAttributesOutput

Retrieves all the metadata from an object without returning the object itself. This operation is useful if you're interested only in an object's metadata.

GetObjectAttributes combines the functionality of HeadObject and ListParts. All of the data returned with each of those individual calls can be returned with a single call to GetObjectAttributes.

Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://amzn-s3-demo-bucket.s3express-zone-id.region-code.amazonaws.com/key-name. Path-style requests are not supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

Permissions

Encryption

Encryption request headers, like x-amz-server-side-encryption, should not be sent for HEAD requests if your object uses server-side encryption with Key Management Service (KMS) keys (SSE-KMS), dual-layer server-side encryption with Amazon Web Services KMS keys (DSSE-KMS), or server-side encryption with Amazon S3 managed encryption keys (SSE-S3). The x-amz-server-side-encryption header is used when you PUT an object to S3 and want to specify the encryption method. If you include this header in a GET request for an object that uses these types of keys, you’ll get an HTTP 400 Bad Request error. It's because the encryption method can't be changed when you retrieve the object.

If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you retrieve the metadata from the object, you must use the following headers to provide the encryption key for the server to be able to retrieve the object's metadata. The headers are:

For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys) in the Amazon S3 User Guide.

Directory bucket permissions - For directory buckets, there are only two supported options for server-side encryption: server-side encryption with Amazon S3 managed keys (SSE-S3) (AES256) and server-side encryption with KMS keys (SSE-KMS) (aws:kms). We recommend that the bucket's default encryption uses the desired encryption configuration and you don't override the bucket default encryption in your CreateSession requests or PUT object requests. Then, new objects are automatically encrypted with the desired encryption settings. For more information, see Protecting data with server-side encryption in the Amazon S3 User Guide. For more information about the encryption overriding behaviors in directory buckets, see Specifying server-side encryption with KMS for new object uploads.

Versioning

Directory buckets - S3 Versioning isn't enabled and supported for directory buckets. For this API operation, only the null value of the version ID is supported by directory buckets. You can only specify null to the versionId query parameter in the request.

Conditional request headers

Consider the following when using request headers:

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

The following actions are related to GetObjectAttributes:

9457 9458 9459 9460 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 9457 def get_object_attributes(params = {}, options = {}) req = build_request(:get_object_attributes, params) req.send_request(options) end
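A minimal sketch of requesting a few attributes for an object; object_attributes selects which fields to return, and the bucket and key are hypothetical:

require 'aws-sdk-s3'

s3 = Aws::S3::Client.new
resp = s3.get_object_attributes(
  bucket: 'amzn-s3-demo-bucket',
  key: 'sample.jpg',
  object_attributes: %w[ETag ObjectSize StorageClass]
)
puts resp.etag
puts resp.object_size
puts resp.storage_class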

#get_object_legal_hold(params = {}) ⇒ Types::GetObjectLegalHoldOutput

This operation is not supported for directory buckets.

Gets an object's current legal hold status. For more information, see Locking Objects.

This functionality is not supported for Amazon S3 on Outposts.

The following action is related to GetObjectLegalHold:


9555 9556 9557 9558 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 9555 def get_object_legal_hold(params = {}, options = {}) req = build_request(:get_object_legal_hold, params) req.send_request(options) end

#get_object_lock_configuration(params = {}) ⇒ Types::GetObjectLockConfigurationOutput

This operation is not supported for directory buckets.

Gets the Object Lock configuration for a bucket. The rule specified in the Object Lock configuration will be applied by default to every new object placed in the specified bucket. For more information, see Locking Objects.

The following action is related to GetObjectLockConfiguration:


9627 9628 9629 9630 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 9627 def get_object_lock_configuration(params = {}, options = {}) req = build_request(:get_object_lock_configuration, params) req.send_request(options) end

#get_object_retention(params = {}) ⇒ Types::GetObjectRetentionOutput

This operation is not supported for directory buckets.

Retrieves an object's retention settings. For more information, see Locking Objects.

This functionality is not supported for Amazon S3 on Outposts.

The following action is related to GetObjectRetention:


9726 9727 9728 9729 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 9726 def get_object_retention(params = {}, options = {}) req = build_request(:get_object_retention, params) req.send_request(options) end

#get_object_tagging(params = {}) ⇒ Types::GetObjectTaggingOutput

This operation is not supported for directory buckets.

Returns the tag-set of an object. You send the GET request against the tagging subresource associated with the object.

To use this operation, you must have permission to perform the s3:GetObjectTagging action. By default, the GET action returns information about the current version of an object. For a versioned bucket, you can have multiple versions of an object in your bucket. To retrieve tags of any other version, use the versionId query parameter. You also need permission for the s3:GetObjectVersionTagging action.

By default, the bucket owner has this permission and can grant this permission to others.

For information about the Amazon S3 object tagging feature, see Object Tagging.

The following actions are related to GetObjectTagging:

9897 9898 9899 9900 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 9897 def get_object_tagging(params = {}, options = {}) req = build_request(:get_object_tagging, params) req.send_request(options) end
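A minimal sketch of reading an object's tags; the bucket, key, and version ID are hypothetical:

require 'aws-sdk-s3'

s3 = Aws::S3::Client.new

# Tags on the current version
resp = s3.get_object_tagging(bucket: 'amzn-s3-demo-bucket', key: 'sample.jpg')
resp.tag_set.each { |tag| puts "#{tag.key}=#{tag.value}" }

# Tags on a specific version (requires s3:GetObjectVersionTagging)
s3.get_object_tagging(bucket: 'amzn-s3-demo-bucket', key: 'sample.jpg', version_id: 'EXAMPLE-VERSION-ID')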

#get_object_torrent(params = {}) ⇒ Types::GetObjectTorrentOutput

This operation is not supported for directory buckets.

Returns torrent files from a bucket. BitTorrent can save you bandwidth when you're distributing large files.

You can get a torrent only for objects that are less than 5 GB in size and that are not encrypted using server-side encryption with a customer-provided encryption key.

To use GET, you must have READ access to the object.

This functionality is not supported for Amazon S3 on Outposts.

The following action is related to GetObjectTorrent:


9998 9999 10000 10001 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 9998 def get_object_torrent(params = {}, options = {}, &block) req = build_request(:get_object_torrent, params) req.send_request(options, &block) end

#get_public_access_block(params = {}) ⇒ Types::GetPublicAccessBlockOutput

This operation is not supported for directory buckets.

Retrieves the PublicAccessBlock configuration for an Amazon S3 bucket. To use this operation, you must have the s3:GetBucketPublicAccessBlock permission. For more information about Amazon S3 permissions, see Specifying Permissions in a Policy.

When Amazon S3 evaluates the PublicAccessBlock configuration for a bucket or an object, it checks the PublicAccessBlock configuration for both the bucket (or the bucket that contains the object) and the bucket owner's account. If the PublicAccessBlock settings are different between the bucket and the account, Amazon S3 uses the most restrictive combination of the bucket-level and account-level settings.

For more information about when Amazon S3 considers a bucket or an object public, see The Meaning of "Public".

The following operations are related to GetPublicAccessBlock:

10073 10074 10075 10076 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 10073 def get_public_access_block(params = {}, options = {}) req = build_request(:get_public_access_block, params) req.send_request(options) end
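A minimal sketch of reading the four Block Public Access flags for a hypothetical bucket:

require 'aws-sdk-s3'

s3 = Aws::S3::Client.new
config = s3.get_public_access_block(bucket: 'amzn-s3-demo-bucket').public_access_block_configuration
puts config.block_public_acls
puts config.ignore_public_acls
puts config.block_public_policy
puts config.restrict_public_buckets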

#head_bucket(params = {}) ⇒ Types::HeadBucketOutput

You can use this operation to determine if a bucket exists and if you have permission to access it. The action returns a 200 OK if the bucket exists and you have permission to access it.

If the bucket does not exist or you do not have permission to access it, the HEAD request returns a generic 400 Bad Request, 403 Forbidden or 404 Not Found code. A message body is not included, so you cannot determine the exception beyond these HTTP response codes.

Authentication and authorization

General purpose buckets - Requests to public buckets that grant the s3:ListBucket permission publicly do not need to be signed. All other HeadBucket requests must be authenticated and signed by using IAM credentials (access key ID and secret access key for the IAM identities). All headers with the x-amz- prefix, including x-amz-copy-source, must be signed. For more information, see REST Authentication.

Directory buckets - You must use IAM credentials to authenticate and authorize your access to the HeadBucket API operation, instead of using the temporary security credentials through the CreateSession API operation.

The Amazon Web Services CLI and SDKs handle authentication and authorization on your behalf.

Permissions


HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

You must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket-name.s3express-zone-id.region-code.amazonaws.com. Path-style requests are not supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

The following waiters are defined for this operation (see #wait_until for detailed usage):

10255 10256 10257 10258 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 10255 def head_bucket(params = {}, options = {}) req = build_request(:head_bucket, params) req.send_request(options) end
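A sketch of distinguishing the possible outcomes by rescuing errors. The error class names shown (NotFound, Forbidden) are how the Ruby SDK typically surfaces the empty-body HEAD responses, but treat them as assumptions; the bucket name is hypothetical. The bucket_exists waiter shown in the comment is one of the waiters defined for this operation.

require 'aws-sdk-s3'

s3 = Aws::S3::Client.new
begin
  s3.head_bucket(bucket: 'amzn-s3-demo-bucket')
  puts 'bucket exists and is accessible'
rescue Aws::S3::Errors::NotFound
  puts 'bucket does not exist'
rescue Aws::S3::Errors::Forbidden
  puts 'bucket exists but access is denied'
end

# Or poll until the bucket exists:
# s3.wait_until(:bucket_exists, bucket: 'amzn-s3-demo-bucket')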

#head_object(params = {}) ⇒ Types::HeadObjectOutput

The HEAD operation retrieves metadata from an object without returning the object itself. This operation is useful if you're interested only in an object's metadata.

A HEAD request has the same options as a GET operation on an object. The response is identical to the GET response except that there is no response body. Because of this, if the HEAD request generates an error, it returns a generic code, such as 400 Bad Request, 403 Forbidden, 404 Not Found, 405 Method Not Allowed, 412 Precondition Failed, or 304 Not Modified. It's not possible to retrieve the exact exception of these error codes.

Request headers are limited to 8 KB in size. For more information, see Common Request Headers.

Permissions


Encryption

Encryption request headers, like x-amz-server-side-encryption, should not be sent for HEAD requests if your object uses server-side encryption with Key Management Service (KMS) keys (SSE-KMS), dual-layer server-side encryption with Amazon Web Services KMS keys (DSSE-KMS), or server-side encryption with Amazon S3 managed encryption keys (SSE-S3). The x-amz-server-side-encryption header is used when you PUT an object to S3 and want to specify the encryption method. If you include this header in a HEAD request for an object that uses these types of keys, you’ll get an HTTP 400 Bad Request error. It's because the encryption method can't be changed when you retrieve the object.

If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you retrieve the metadata from the object, you must use the following headers to provide the encryption key for the server to be able to retrieve the object's metadata. The headers are:

For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys) in the Amazon S3 User Guide.

Directory bucket - For directory buckets, there are only two supported options for server-side encryption: SSE-S3 and SSE-KMS. SSE-C isn't supported. For more information, see Protecting data with server-side encryption in the Amazon S3 User Guide.

Versioning

* Directory buckets - Delete marker is not supported for directory buckets.

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://amzn-s3-demo-bucket.s3express-zone-id.region-code.amazonaws.com/key-name. Path-style requests are not supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

The following actions are related to HeadObject:

The following waiters are defined for this operation (see #wait_until for detailed usage):

10789 10790 10791 10792 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 10789 def head_object(params = {}, options = {}) req = build_request(:head_object, params) req.send_request(options) end
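A minimal sketch of reading an object's metadata without downloading the body; bucket and key are hypothetical:

require 'aws-sdk-s3'

s3 = Aws::S3::Client.new
resp = s3.head_object(bucket: 'amzn-s3-demo-bucket', key: 'sample.jpg')
puts resp.content_length  # object size in bytes
puts resp.content_type
puts resp.last_modified
puts resp.etag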

#list_bucket_analytics_configurations(params = {}) ⇒ Types::ListBucketAnalyticsConfigurationsOutput

This operation is not supported for directory buckets.

Lists the analytics configurations for the bucket. You can have up to 1,000 analytics configurations per bucket.

This action supports list pagination and does not return more than 100 configurations at a time. You should always check the IsTruncated element in the response. If there are no more configurations to list, IsTruncated is set to false. If there are more configurations to list, IsTruncated is set to true, and there will be a value in NextContinuationToken. You use the NextContinuationToken value to continue the pagination of the list by passing the value in continuation-token in the request to GET the next page.

To use this operation, you must have permissions to perform the s3:GetAnalyticsConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.

For information about the Amazon S3 analytics feature, see Amazon S3 Analytics – Storage Class Analysis.

The following operations are related to ListBucketAnalyticsConfigurations:

10890 10891 10892 10893 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 10890 def list_bucket_analytics_configurations(params = {}, options = {}) req = build_request(:list_bucket_analytics_configurations, params) req.send_request(options) end

#list_bucket_intelligent_tiering_configurations(params = {}) ⇒ Types::ListBucketIntelligentTieringConfigurationsOutput

This operation is not supported for directory buckets.

Lists the S3 Intelligent-Tiering configuration from the specified bucket.

The S3 Intelligent-Tiering storage class is designed to optimize storage costs by automatically moving data to the most cost-effective storage access tier, without performance impact or operational overhead. S3 Intelligent-Tiering delivers automatic cost savings in three low latency and high throughput access tiers. To get the lowest storage cost on data that can be accessed in minutes to hours, you can choose to activate additional archiving capabilities.

The S3 Intelligent-Tiering storage class is the ideal storage class for data with unknown, changing, or unpredictable access patterns, independent of object size or retention period. If the size of an object is less than 128 KB, it is not monitored and not eligible for auto-tiering. Smaller objects can be stored, but they are always charged at the Frequent Access tier rates in the S3 Intelligent-Tiering storage class.

For more information, see Storage class for automatically optimizing frequently and infrequently accessed objects.

Operations related to ListBucketIntelligentTieringConfigurations include:

10982 10983 10984 10985 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 10982 def list_bucket_intelligent_tiering_configurations(params = {}, options = {}) req = build_request(:list_bucket_intelligent_tiering_configurations, params) req.send_request(options) end

#list_bucket_inventory_configurations(params = {}) ⇒ Types::ListBucketInventoryConfigurationsOutput

This operation is not supported for directory buckets.

Returns a list of inventory configurations for the bucket. You can have up to 1,000 inventory configurations per bucket.

This action supports list pagination and does not return more than 100 configurations at a time. Always check the IsTruncated element in the response. If there are no more configurations to list, IsTruncated is set to false. If there are more configurations to list, IsTruncated is set to true, and there is a value in NextContinuationToken. You use the NextContinuationToken value to continue the pagination of the list by passing the value in continuation-token in the request to GET the next page.

To use this operation, you must have permissions to perform the s3:GetInventoryConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.

For information about the Amazon S3 inventory feature, see Amazon S3 Inventory.

The following operations are related to ListBucketInventoryConfigurations:

11084 11085 11086 11087 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 11084 def list_bucket_inventory_configurations(params = {}, options = {}) req = build_request(:list_bucket_inventory_configurations, params) req.send_request(options) end

#list_bucket_metrics_configurations(params = {}) ⇒ Types::ListBucketMetricsConfigurationsOutput

This operation is not supported for directory buckets.

Lists the metrics configurations for the bucket. The metrics configurations are only for the request metrics of the bucket and do not provide information on daily storage metrics. You can have up to 1,000 configurations per bucket.

This action supports list pagination and does not return more than 100 configurations at a time. Always check the IsTruncated element in the response. If there are no more configurations to list, IsTruncated is set to false. If there are more configurations to list, IsTruncated is set to true, and there is a value in NextContinuationToken. You use the NextContinuationToken value to continue the pagination of the list by passing the value in continuation-token in the request to GET the next page.

To use this operation, you must have permissions to perform the s3:GetMetricsConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.

For more information about metrics configurations and CloudWatch request metrics, see Monitoring Metrics with Amazon CloudWatch.

The following operations are related to ListBucketMetricsConfigurations:

11186 11187 11188 11189 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 11186 def list_bucket_metrics_configurations(params = {}, options = {}) req = build_request(:list_bucket_metrics_configurations, params) req.send_request(options) end

#list_buckets(params = {}) ⇒ Types::ListBucketsOutput

This operation is not supported for directory buckets.

Returns a list of all buckets owned by the authenticated sender of the request. To grant IAM permission to use this operation, you must add the s3:ListAllMyBuckets policy action.

For information about Amazon S3 buckets, see Creating, configuring, and working with Amazon S3 buckets.

We strongly recommend using only paginated ListBuckets requests. Unpaginated ListBuckets requests are only supported for Amazon Web Services accounts set to the default general purpose bucket quota of 10,000. If you have an approved general purpose bucket quota above 10,000, you must send paginated ListBuckets requests to list your account’s buckets. All unpaginated ListBuckets requests will be rejected for Amazon Web Services accounts with a general purpose bucket quota greater than 10,000.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

11324 11325 11326 11327 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 11324 def list_buckets(params = {}, options = {}) req = build_request(:list_buckets, params) req.send_request(options) end
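A minimal sketch that walks the pageable response; each page exposes a buckets collection:

require 'aws-sdk-s3'

s3 = Aws::S3::Client.new
s3.list_buckets.each do |page|
  page.buckets.each { |b| puts "#{b.name} (created #{b.creation_date})" }
end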

#list_directory_buckets(params = {}) ⇒ Types::ListDirectoryBucketsOutput

Returns a list of all Amazon S3 directory buckets owned by the authenticated sender of the request. For more information about directory buckets, see Directory buckets in the Amazon S3 User Guide.

Directory buckets - For directory buckets, you must make requests for this API operation to the Regional endpoint. These endpoints support path-style requests in the format https://s3express-control.region-code.amazonaws.com/bucket-name. Virtual-hosted-style requests aren't supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

Permissions

You must have the s3express:ListAllMyDirectoryBuckets permission in an IAM identity-based policy instead of a bucket policy. Cross-account access to this API operation isn't supported. This operation can only be performed by the Amazon Web Services account that owns the resource. For more information about directory bucket policies and permissions, see Amazon Web Services Identity and Access Management (IAM) for S3 Express One Zone in the Amazon S3 User Guide.

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is s3express-control.region.amazonaws.com.

The BucketRegion response element is not part of the ListDirectoryBuckets Response Syntax.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

11412 11413 11414 11415 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 11412 def list_directory_buckets(params = {}, options = {}) req = build_request(:list_directory_buckets, params) req.send_request(options) end

#list_multipart_uploads(params = {}) ⇒ Types::ListMultipartUploadsOutput

This operation lists in-progress multipart uploads in a bucket. An in-progress multipart upload is a multipart upload that has been initiated by the CreateMultipartUpload request, but has not yet been completed or aborted.

Directory buckets - If multipart uploads in a directory bucket are in progress, you can't delete the bucket until all the in-progress multipart uploads are aborted or completed. To delete these in-progress multipart uploads, use the ListMultipartUploads operation to list the in-progress multipart uploads in the bucket and use the AbortMultipartUpload operation to abort all the in-progress multipart uploads.

The ListMultipartUploads operation returns a maximum of 1,000 multipart uploads in the response. The limit of 1,000 multipart uploads is also the default value. You can further limit the number of uploads in a response by specifying the max-uploads request parameter. If there are more than 1,000 multipart uploads that satisfy your ListMultipartUploads request, the response returns an IsTruncated element with the value of true, a NextKeyMarker element, and a NextUploadIdMarker element. To list the remaining multipart uploads, you need to make subsequent ListMultipartUploads requests. In these requests, include two query parameters: key-marker and upload-id-marker. Set the value of key-marker to the NextKeyMarker value from the previous response. Similarly, set the value of upload-id-marker to the NextUploadIdMarker value from the previous response.

Directory buckets - The upload-id-marker element and the NextUploadIdMarker element aren't supported by directory buckets. To list the additional multipart uploads, you only need to set the value of key-marker to the NextKeyMarker value from the previous response.

For more information about multipart uploads, see Uploading Objects Using Multipart Upload in the Amazon S3 User Guide.

Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://amzn-s3-demo-bucket.s3express-zone-id.region-code.amazonaws.com/key-name. Path-style requests are not supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

Permissions

Sorting of multipart uploads in response

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

The following operations are related to ListMultipartUploads:

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

11858 11859 11860 11861 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 11858 def list_multipart_uploads(params = {}, options = {}) req = build_request(:list_multipart_uploads, params) req.send_request(options) end

#list_object_versions(params = {}) ⇒ Types::ListObjectVersionsOutput

This operation is not supported for directory buckets.

Returns metadata about all versions of the objects in a bucket. You can also use request parameters as selection criteria to return metadata about a subset of all the object versions.

To use this operation, you must have permission to perform the s3:ListBucketVersions action. Be aware of the name difference.

A 200 OK response can contain valid or invalid XML. Make sure to design your application to parse the contents of the response and handle it appropriately.

To use this operation, you must have READ access to the bucket.

The following operations are related to ListObjectVersions:

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

12098 12099 12100 12101 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 12098 def list_object_versions(params = {}, options = {}) req = build_request(:list_object_versions, params) req.send_request(options) end
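A minimal sketch of listing versions and delete markers under a hypothetical prefix:

require 'aws-sdk-s3'

s3 = Aws::S3::Client.new
resp = s3.list_object_versions(bucket: 'amzn-s3-demo-bucket', prefix: 'photos/')
resp.versions.each do |v|
  puts "#{v.key} #{v.version_id} latest=#{v.is_latest}"
end
resp.delete_markers.each { |m| puts "delete marker: #{m.key} #{m.version_id}" }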

#list_objects(params = {}) ⇒ Types::ListObjectsOutput

This operation is not supported for directory buckets.

Returns some or all (up to 1,000) of the objects in a bucket. You can use the request parameters as selection criteria to return a subset of the objects in a bucket. A 200 OK response can contain valid or invalid XML. Be sure to design your application to parse the contents of the response and handle it appropriately.

This action has been revised. We recommend that you use the newer version, ListObjectsV2, when developing applications. For backward compatibility, Amazon S3 continues to support ListObjects.

The following operations are related to ListObjects:

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

12334 12335 12336 12337 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 12334 def list_objects(params = {}, options = {}) req = build_request(:list_objects, params) req.send_request(options) end

#list_objects_v2(params = {}) ⇒ Types::ListObjectsV2Output

Returns some or all (up to 1,000) of the objects in a bucket with each request. You can use the request parameters as selection criteria to return a subset of the objects in a bucket. A 200 OK response can contain valid or invalid XML. Make sure to design your application to parse the contents of the response and handle it appropriately. For more information about listing objects, see Listing object keys programmatically in the Amazon S3 User Guide. To get a list of your buckets, see ListBuckets.

* General purpose bucket - For general purpose buckets, ListObjectsV2 doesn't return prefixes that are related only to in-progress multipart uploads.

Permissions

Sorting order of returned objects

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

This section describes the latest revision of this action. We recommend that you use this revised API operation for application development. For backward compatibility, Amazon S3 continues to support the prior version of this API operation, ListObjects.

The following operations are related to ListObjectsV2:

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

12681 12682 12683 12684 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 12681 def list_objects_v2(params = {}, options = {}) req = build_request(:list_objects_v2, params) req.send_request(options) end
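A minimal sketch of listing keys under a hypothetical prefix; the pageable response's each method transparently issues follow-up requests with the continuation token until all matching keys are returned:

require 'aws-sdk-s3'

s3 = Aws::S3::Client.new
s3.list_objects_v2(bucket: 'amzn-s3-demo-bucket', prefix: 'photos/').each do |page|
  page.contents.each { |obj| puts "#{obj.key} (#{obj.size} bytes)" }
end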

#list_parts(params = {}) ⇒ Types::ListPartsOutput

Lists the parts that have been uploaded for a specific multipart upload.

To use this operation, you must provide the upload ID in the request. You obtain this upload ID by sending the initiate multipart upload request through CreateMultipartUpload.

The ListParts request returns a maximum of 1,000 uploaded parts. The limit of 1,000 parts is also the default value. You can restrict the number of parts in a response by specifying the max-parts request parameter. If your multipart upload consists of more than 1,000 parts, the response returns an IsTruncated field with the value of true, and a NextPartNumberMarker element. To list remaining uploaded parts, in subsequent ListParts requests, include the part-number-marker query string parameter and set its value to the NextPartNumberMarker field value from the previous response.

For more information on multipart uploads, see Uploading Objects Using Multipart Upload in the Amazon S3 User Guide.

Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://amzn-s3-demo-bucket.s3express-zone-id.region-code.amazonaws.com/key-name. Path-style requests are not supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

Permissions

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

The following operations are related to ListParts:

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

13010 13011 13012 13013 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 13010 def list_parts(params = {}, options = {}) req = build_request(:list_parts, params) req.send_request(options) end
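A minimal sketch of listing parts for an in-progress multipart upload; the bucket, key, and upload ID are hypothetical, with the upload ID coming from an earlier create_multipart_upload call:

require 'aws-sdk-s3'

s3 = Aws::S3::Client.new
resp = s3.list_parts(
  bucket: 'amzn-s3-demo-bucket',
  key: 'backups/archive.tar.gz',
  upload_id: 'EXAMPLE-UPLOAD-ID' # hypothetical; returned by create_multipart_upload
)
resp.parts.each { |part| puts "part #{part.part_number}: #{part.etag} (#{part.size} bytes)" }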

#put_bucket_accelerate_configuration(params = {}) ⇒ Struct

This operation is not supported for directory buckets.

Sets the accelerate configuration of an existing bucket. Amazon S3 Transfer Acceleration is a bucket-level feature that enables you to perform faster data transfers to Amazon S3.

To use this operation, you must have permission to perform the s3:PutAccelerateConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.

The Transfer Acceleration state of a bucket can be set to one of the following two values:

The GetBucketAccelerateConfiguration action returns the transfer acceleration state of a bucket.

After setting the Transfer Acceleration state of a bucket to Enabled, it might take up to thirty minutes before the data transfer rates to the bucket increase.

The name of the bucket used for Transfer Acceleration must be DNS-compliant and must not contain periods (".").

For more information about transfer acceleration, see Transfer Acceleration.

The following operations are related toPutBucketAccelerateConfiguration:

13109 13110 13111 13112 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 13109 def put_bucket_accelerate_configuration(params = {}, options = {}) req = build_request(:put_bucket_accelerate_configuration, params) req.send_request(options) end

#put_bucket_acl(params = {}) ⇒ Struct

This operation is not supported for directory buckets.

Sets the permissions on an existing bucket using access control lists (ACL). For more information, see Using ACLs. To set the ACL of a bucket, you must have the WRITE_ACP permission.

You can use one of the following two ways to set a bucket's permissions:

You cannot specify access permission using both the body and the request headers.

Depending on your application needs, you may choose to set the ACL on a bucket using either the request body or the headers. For example, if you have an existing application that updates a bucket ACL using the request body, then you can continue to use that approach.

If your bucket uses the bucket owner enforced setting for S3 Object Ownership, ACLs are disabled and no longer affect permissions. You must use policies to grant access to your bucket and the objects in it. Requests to set ACLs or update ACLs fail and return the AccessControlListNotSupported error code. Requests to read ACLs are still supported. For more information, see Controlling object ownership in the Amazon S3 User Guide.

Permissions

You can set access permissions by using one of the following methods:

You can use either a canned ACL or specify access permissions explicitly. You cannot do both.

Grantee Values

You can specify the person (grantee) to whom you're assigning access rights (using request elements) in the following ways:

The following operations are related to PutBucketAcl:

13403 13404 13405 13406 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 13403 def put_bucket_acl(params = {}, options = {}) req = build_request(:put_bucket_acl, params) req.send_request(options) end

#put_bucket_analytics_configuration(params = {}) ⇒ Struct

This operation is not supported for directory buckets.

Sets an analytics configuration for the bucket (specified by the analytics configuration ID). You can have up to 1,000 analytics configurations per bucket.

You can choose to have storage class analysis export analysis reports sent to a comma-separated values (CSV) flat file. See the DataExport request element. Reports are updated daily and are based on the object filters that you configure. When selecting data export, you specify a destination bucket and an optional destination prefix where the file is written. You can export the data to a destination bucket in a different account. However, the destination bucket must be in the same Region as the bucket that you are making the PUT analytics configuration to. For more information, see Amazon S3 Analytics – Storage Class Analysis.

You must create a bucket policy on the destination bucket where the exported file is written to grant permissions to Amazon S3 to write objects to the bucket. For an example policy, see Granting Permissions for Amazon S3 Inventory and Storage Class Analysis.

To use this operation, you must have permissions to perform the s3:PutAnalyticsConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.

PutBucketAnalyticsConfiguration has the following special errors:

The following operations are related to PutBucketAnalyticsConfiguration:

13539 13540 13541 13542 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 13539 def put_bucket_analytics_configuration(params = {}, options = {}) req = build_request(:put_bucket_analytics_configuration, params) req.send_request(options) end

#put_bucket_cors(params = {}) ⇒ Struct

This operation is not supported for directory buckets.

Sets the cors configuration for your bucket. If the configuration exists, Amazon S3 replaces it.

To use this operation, you must be allowed to perform thes3:PutBucketCORS action. By default, the bucket owner has this permission and can grant it to others.

You set this configuration on a bucket so that the bucket can service cross-origin requests. For example, you might want to enable a request whose origin is http://www.example.com to access your Amazon S3 bucket at my.example.bucket.com by using the browser's XMLHttpRequest capability.

To enable cross-origin resource sharing (CORS) on a bucket, you add the cors subresource to the bucket. The cors subresource is an XML document in which you configure rules that identify origins and the HTTP methods that can be executed on your bucket. The document is limited to 64 KB in size.

When Amazon S3 receives a cross-origin request (or a pre-flight OPTIONS request) against a bucket, it evaluates the cors configuration on the bucket and uses the first CORSRule rule that matches the incoming browser request to enable a cross-origin request. For a rule to match, the following conditions must be met:

For more information about CORS, go to Enabling Cross-Origin Resource Sharing in the Amazon S3 User Guide.

The following operations are related to PutBucketCors:

13719 13720 13721 13722 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 13719 def put_bucket_cors(params = {}, options = {}) req = build_request(:put_bucket_cors, params) req.send_request(options) end
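A minimal sketch of replacing a bucket's CORS configuration with a single rule; the bucket name and allowed origin are hypothetical:

require 'aws-sdk-s3'

s3 = Aws::S3::Client.new
s3.put_bucket_cors(
  bucket: 'amzn-s3-demo-bucket',
  cors_configuration: {
    cors_rules: [
      {
        allowed_origins: ['http://www.example.com'],
        allowed_methods: ['GET', 'PUT'],
        allowed_headers: ['*'],
        max_age_seconds: 3000
      }
    ]
  }
)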

#put_bucket_encryption(params = {}) ⇒ Struct

This operation configures default encryption and Amazon S3 Bucket Keys for an existing bucket.

Directory buckets - For directory buckets, you must make requests for this API operation to the Regional endpoint. These endpoints support path-style requests in the format https://s3express-control.region-code.amazonaws.com/bucket-name. Virtual-hosted-style requests aren't supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

By default, all buckets have a default encryption configuration that uses server-side encryption with Amazon S3 managed keys (SSE-S3).

* General purpose buckets

If you're specifying a customer managed KMS key, we recommend using a fully qualified KMS key ARN. If you use a KMS key alias instead, then KMS resolves the key within the requester’s account. This behavior can result in data that's encrypted with a KMS key that belongs to the requester, and not the bucket owner.

Also, this action requires Amazon Web Services Signature Version 4. For more information, see Authenticating Requests (Amazon Web Services Signature Version 4).

Permissions

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is s3express-control.region-code.amazonaws.com.

The following operations are related to PutBucketEncryption:

13953 13954 13955 13956 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 13953 def put_bucket_encryption(params = {}, options = {}) req = build_request(:put_bucket_encryption, params) req.send_request(options) end
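A minimal sketch of setting SSE-KMS default encryption with Bucket Keys enabled; the bucket name and the KMS key ARN are hypothetical, and a fully qualified key ARN is used to avoid the alias-resolution behavior noted above:

require 'aws-sdk-s3'

s3 = Aws::S3::Client.new
s3.put_bucket_encryption(
  bucket: 'amzn-s3-demo-bucket',
  server_side_encryption_configuration: {
    rules: [
      {
        apply_server_side_encryption_by_default: {
          sse_algorithm: 'aws:kms',
          kms_master_key_id: 'arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID' # hypothetical ARN
        },
        bucket_key_enabled: true
      }
    ]
  }
)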

#put_bucket_intelligent_tiering_configuration(params = {}) ⇒ Struct

This operation is not supported for directory buckets.

Puts a S3 Intelligent-Tiering configuration to the specified bucket. You can have up to 1,000 S3 Intelligent-Tiering configurations per bucket.

The S3 Intelligent-Tiering storage class is designed to optimize storage costs by automatically moving data to the most cost-effective storage access tier, without performance impact or operational overhead. S3 Intelligent-Tiering delivers automatic cost savings in three low latency and high throughput access tiers. To get the lowest storage cost on data that can be accessed in minutes to hours, you can choose to activate additional archiving capabilities.

The S3 Intelligent-Tiering storage class is the ideal storage class for data with unknown, changing, or unpredictable access patterns, independent of object size or retention period. If the size of an object is less than 128 KB, it is not monitored and not eligible for auto-tiering. Smaller objects can be stored, but they are always charged at the Frequent Access tier rates in the S3 Intelligent-Tiering storage class.

For more information, see Storage class for automatically optimizing frequently and infrequently accessed objects.

Operations related to PutBucketIntelligentTieringConfiguration include:

You only need S3 Intelligent-Tiering enabled on a bucket if you want to automatically move objects stored in the S3 Intelligent-Tiering storage class to the Archive Access or Deep Archive Access tier.

PutBucketIntelligentTieringConfiguration has the following special errors:

HTTP 400 Bad Request Error

Code: InvalidArgument

Cause: Invalid Argument

HTTP 400 Bad Request Error

Code: TooManyConfigurations

Cause: You are attempting to create a new configuration but have already reached the 1,000-configuration limit.

HTTP 403 Forbidden Error

Cause: You are not the owner of the specified bucket, or you do not have the s3:PutIntelligentTieringConfiguration bucket permission to set the configuration on the bucket.

14078 14079 14080 14081 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 14078 def put_bucket_intelligent_tiering_configuration(params = {}, options = {}) req = build_request(:put_bucket_intelligent_tiering_configuration, params) req.send_request(options) end

#put_bucket_inventory_configuration(params = {}) ⇒ Struct

This operation is not supported for directory buckets.

This implementation of the PUT action adds an inventory configuration (identified by the inventory ID) to the bucket. You can have up to 1,000 inventory configurations per bucket.

Amazon S3 inventory generates inventories of the objects in the bucket on a daily or weekly basis, and the results are published to a flat file. The bucket that is inventoried is called the source bucket, and the bucket where the inventory flat file is stored is called the destination bucket. The destination bucket must be in the same Amazon Web Services Region as the source bucket.

When you configure an inventory for a source bucket, you specify the destination bucket where you want the inventory to be stored, and whether to generate the inventory daily or weekly. You can also configure what object metadata to include and whether to inventory all object versions or only current versions. For more information, see Amazon S3 Inventory in the Amazon S3 User Guide.

You must create a bucket policy on the destination bucket to grant permissions to Amazon S3 to write objects to the bucket in the defined location. For an example policy, see Granting Permissions for Amazon S3 Inventory and Storage Class Analysis.

Permissions

To use this operation, you must have permission to perform the s3:PutInventoryConfiguration action. The bucket owner has this permission by default and can grant this permission to others.

The s3:PutInventoryConfiguration permission allows a user to create an S3 Inventory report that includes all object metadata fields available and to specify the destination bucket to store the inventory. A user with read access to objects in the destination bucket can also access all object metadata fields that are available in the inventory report.

To restrict access to an inventory report, see Restricting access to an Amazon S3 Inventory report in the Amazon S3 User Guide. For more information about the metadata fields available in S3 Inventory, see Amazon S3 Inventory lists in the Amazon S3 User Guide. For more information about permissions, see Permissions related to bucket subresource operations and Identity and access management in Amazon S3 in the Amazon S3 User Guide.

PutBucketInventoryConfiguration has the following special errors:

HTTP 400 Bad Request Error

Code: InvalidArgument

Cause: Invalid Argument

HTTP 400 Bad Request Error

Code: TooManyConfigurations

Cause: You are attempting to create a new configuration but have already reached the 1,000-configuration limit.

HTTP 403 Forbidden Error

Cause: You are not the owner of the specified bucket, or you do not have the s3:PutInventoryConfiguration bucket permission to set the configuration on the bucket.

The following operations are related to PutBucketInventoryConfiguration:

14230 14231 14232 14233 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 14230 def put_bucket_inventory_configuration(params = {}, options = {}) req = build_request(:put_bucket_inventory_configuration, params) req.send_request(options) end

#put_bucket_lifecycle(params = {}) ⇒ Struct

This operation is not supported for directory buckets.

For an updated version of this API, see PutBucketLifecycleConfiguration. This version has been deprecated. Existing lifecycle configurations will work. For new lifecycle configurations, use the updated API.

Creates a new lifecycle configuration for the bucket or replaces an existing lifecycle configuration. For information about lifecycle configuration, see Object Lifecycle Management in the Amazon S3 User Guide.

By default, all Amazon S3 resources, including buckets, objects, and related subresources (for example, lifecycle configuration and website configuration) are private. Only the resource owner, the Amazon Web Services account that created the resource, can access it. The resource owner can optionally grant access permissions to others by writing an access policy. For this operation, users must get the s3:PutLifecycleConfiguration permission.

You can also explicitly deny permissions. Explicit denial also supersedes any other permissions. If you want to prevent users or accounts from removing or deleting objects from your bucket, you must deny them permissions for the following actions:

For more information about permissions, see Managing Access Permissions to your Amazon S3 Resources in the Amazon S3 User Guide.

For more examples of transitioning objects to storage classes such as STANDARD_IA or ONEZONE_IA, see Examples of Lifecycle Configuration.

The following operations are related to PutBucketLifecycle:

14385 14386 14387 14388 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 14385 def put_bucket_lifecycle(params = {}, options = {}) req = build_request(:put_bucket_lifecycle, params) req.send_request(options) end

#put_bucket_lifecycle_configuration(params = {}) ⇒ Types::PutBucketLifecycleConfigurationOutput

Creates a new lifecycle configuration for the bucket or replaces an existing lifecycle configuration. Keep in mind that this will overwrite an existing lifecycle configuration, so if you want to retain any configuration details, they must be included in the new lifecycle configuration. For information about lifecycle configuration, see Managing your storage lifecycle.

Bucket lifecycle configuration now supports specifying a lifecycle rule using an object key name prefix, one or more object tags, object size, or any combination of these. Accordingly, this section describes the latest API. The previous version of the API supported filtering based only on an object key name prefix, which is supported for backward compatibility. For the related API description, see PutBucketLifecycle.

Rules

Permissions

HTTP Host header syntax

You specify the lifecycle configuration in your request body. The lifecycle configuration is specified as XML consisting of one or more rules. An Amazon S3 Lifecycle configuration can have up to 1,000 rules. This limit is not adjustable.

Bucket lifecycle configuration supports specifying a lifecycle rule using an object key name prefix, one or more object tags, object size, or any combination of these. Accordingly, this section describes the latest API. The previous version of the API supported filtering based only on an object key name prefix, which is supported for backward compatibility for general purpose buckets. For the related API description, see PutBucketLifecycle.

Lifecycle configurations for directory buckets only support expiring objects and cancelling multipart uploads. Expiring of versioned objects, transitions, and tag filters are not supported.

A lifecycle rule consists of the following:

For more information, see Object Lifecycle Management and Lifecycle Configuration Elements.

Directory buckets - The HTTP Host header syntax is s3express-control.region.amazonaws.com.

The following operations are related to PutBucketLifecycleConfiguration:

14679 14680 14681 14682 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 14679 def put_bucket_lifecycle_configuration(params = {}, options = {}) req = build_request(:put_bucket_lifecycle_configuration, params) req.send_request(options) end
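A minimal sketch of a single prefix-filtered rule that transitions objects and later expires them; the bucket name, rule ID, prefix, and day counts are all hypothetical, and this call replaces any existing lifecycle configuration:

require 'aws-sdk-s3'

s3 = Aws::S3::Client.new
s3.put_bucket_lifecycle_configuration(
  bucket: 'amzn-s3-demo-bucket',
  lifecycle_configuration: {
    rules: [
      {
        id: 'archive-then-expire-logs',  # hypothetical rule ID
        status: 'Enabled',
        filter: { prefix: 'logs/' },
        transitions: [{ days: 30, storage_class: 'STANDARD_IA' }],
        expiration: { days: 365 }
      }
    ]
  }
)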

#put_bucket_logging(params = {}) ⇒ Struct

This operation is not supported for directory buckets.

Set the logging parameters for a bucket and specify permissions for who can view and modify the logging parameters. All logs are saved to buckets in the same Amazon Web Services Region as the source bucket. To set the logging status of a bucket, you must be the bucket owner.

The bucket owner is automatically granted FULL_CONTROL to all logs. You use the Grantee request element to grant access to other people. The Permissions request element specifies the kind of access the grantee has to the logs.

If the target bucket for log delivery uses the bucket owner enforced setting for S3 Object Ownership, you can't use the Grantee request element to grant access to others. Permissions can only be granted using policies. For more information, see Permissions for server access log delivery in the Amazon S3 User Guide.

Grantee Values

You can specify the person (grantee) to whom you're assigning access rights (by using request elements) in the following ways:

To enable logging, you use LoggingEnabled and its children request elements. To disable logging, you use an empty BucketLoggingStatus request element:

<BucketLoggingStatus xmlns="http://doc.s3.amazonaws.com/2006-03-01" />
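
A minimal sketch of enabling and then disabling logging with the Ruby client (assuming s3 is an Aws::S3::Client; bucket names and prefix are placeholders):

# Enable access logging to a hypothetical target bucket.
s3.put_bucket_logging(
  bucket: "amzn-s3-demo-source-bucket",
  bucket_logging_status: {
    logging_enabled: {
      target_bucket: "amzn-s3-demo-log-bucket",
      target_prefix: "access-logs/"
    }
  }
)

# An empty status disables logging again.
s3.put_bucket_logging(
  bucket: "amzn-s3-demo-source-bucket",
  bucket_logging_status: {}
)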

For more information about server access logging, see Server Access Logging in the Amazon S3 User Guide.

For more information about creating a bucket, see CreateBucket. For more information about returning the logging status of a bucket, see GetBucketLogging.

The following operations are related to PutBucketLogging:

14862 14863 14864 14865 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 14862 def put_bucket_logging(params = {}, options = {}) req = build_request(:put_bucket_logging, params) req.send_request(options) end

#put_bucket_metrics_configuration(params = {}) ⇒ Struct

This operation is not supported for directory buckets.

Sets a metrics configuration (specified by the metrics configuration ID) for the bucket. You can have up to 1,000 metrics configurations per bucket. If you're updating an existing metrics configuration, note that this is a full replacement of the existing metrics configuration. If you don't include the elements you want to keep, they are erased.

To use this operation, you must have permissions to perform the s3:PutMetricsConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.

For information about CloudWatch request metrics for Amazon S3, see Monitoring Metrics with Amazon CloudWatch.
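
A minimal sketch (assuming s3 is an Aws::S3::Client; the bucket name and configuration ID are placeholders); remember that this call fully replaces any configuration stored under the same ID:

# Publish request metrics for the whole bucket under the ID "EntireBucket".
s3.put_bucket_metrics_configuration(
  bucket: "amzn-s3-demo-bucket",
  id: "EntireBucket",
  metrics_configuration: { id: "EntireBucket" }
)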

The following operations are related to PutBucketMetricsConfiguration:

PutBucketMetricsConfiguration has the following special error:

14966 14967 14968 14969 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 14966 def put_bucket_metrics_configuration(params = {}, options = {}) req = build_request(:put_bucket_metrics_configuration, params) req.send_request(options) end

#put_bucket_notification(params = {}) ⇒ Struct

15052 15053 15054 15055 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 15052 def put_bucket_notification(params = {}, options = {}) req = build_request(:put_bucket_notification, params) req.send_request(options) end

#put_bucket_notification_configuration(params = {}) ⇒ Struct

This operation is not supported for directory buckets.

Enables notifications of specified events for a bucket. For more information about event notifications, see Configuring Event Notifications.

Using this API, you can replace an existing notification configuration. The configuration is an XML file that defines the event types that you want Amazon S3 to publish and the destination where you want Amazon S3 to publish an event notification when it detects an event of the specified type.

By default, your bucket has no event notifications configured. That is, the notification configuration will be an empty NotificationConfiguration.

<NotificationConfiguration>

</NotificationConfiguration>

This action replaces the existing notification configuration with the configuration you include in the request body.

After Amazon S3 receives this request, it first verifies that any Amazon Simple Notification Service (Amazon SNS) or Amazon Simple Queue Service (Amazon SQS) destination exists, and that the bucket owner has permission to publish to it by sending a test notification. In the case of Lambda destinations, Amazon S3 verifies that the Lambda function permissions grant Amazon S3 permission to invoke the function from the Amazon S3 bucket. For more information, see Configuring Notifications for Amazon S3 Events.

You can disable notifications by adding the empty NotificationConfiguration element.

For more information about the number of event notification configurations that you can create per bucket, see Amazon S3 service quotas in Amazon Web Services General Reference.

By default, only the bucket owner can configure notifications on a bucket. However, bucket owners can use a bucket policy to grant permission to other users to set this configuration with the required s3:PutBucketNotification permission.

The PUT notification is an atomic operation. For example, suppose your notification configuration includes SNS topic, SQS queue, and Lambda function configurations. When you send a PUT request with this configuration, Amazon S3 sends test messages to your SNS topic. If the message fails, the entire PUT action will fail, and Amazon S3 will not add the configuration to your bucket.
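
A minimal sketch (assuming s3 is an Aws::S3::Client; bucket name and queue ARN are placeholders) that replaces the bucket's notification configuration with a single SQS destination; Amazon S3 sends the test notification described above before accepting it:

# Publish all ObjectCreated events to a hypothetical SQS queue.
s3.put_bucket_notification_configuration(
  bucket: "amzn-s3-demo-bucket",
  notification_configuration: {
    queue_configurations: [
      {
        queue_arn: "arn:aws:sqs:us-east-1:111122223333:example-queue",
        events: ["s3:ObjectCreated:*"]
      }
    ]
  }
)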

If the configuration in the request body includes only one TopicConfiguration specifying only the s3:ReducedRedundancyLostObject event type, the response will also include the x-amz-sns-test-message-id header containing the message ID of the test notification sent to the topic.

The following action is related to PutBucketNotificationConfiguration:

15236 15237 15238 15239 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 15236 def put_bucket_notification_configuration(params = {}, options = {}) req = build_request(:put_bucket_notification_configuration, params) req.send_request(options) end

#put_bucket_ownership_controls(params = {}) ⇒ Struct

This operation is not supported for directory buckets.

Creates or modifies OwnershipControls for an Amazon S3 bucket. To use this operation, you must have the s3:PutBucketOwnershipControls permission. For more information about Amazon S3 permissions, see Specifying permissions in a policy.

For information about Amazon S3 Object Ownership, see Using object ownership.
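
A minimal sketch (assuming s3 is an Aws::S3::Client; the bucket name is a placeholder) that enforces bucket-owner ownership, which disables ACLs on the bucket:

s3.put_bucket_ownership_controls(
  bucket: "amzn-s3-demo-bucket",
  ownership_controls: {
    rules: [{ object_ownership: "BucketOwnerEnforced" }]
  }
)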

The following operations are related to PutBucketOwnershipControls:

15305 15306 15307 15308 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 15305 def put_bucket_ownership_controls(params = {}, options = {}) req = build_request(:put_bucket_ownership_controls, params) req.send_request(options) end

#put_bucket_policy(params = {}) ⇒ Struct

Applies an Amazon S3 bucket policy to an Amazon S3 bucket.

Directory buckets - For directory buckets, you must make requests for this API operation to the Regional endpoint. These endpoints support path-style requests in the format https://s3express-control.region-code.amazonaws.com/bucket-name. Virtual-hosted-style requests aren't supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

Permissions

If you are using an identity other than the root user of the Amazon Web Services account that owns the bucket, the calling identity must both have the PutBucketPolicy permissions on the specified bucket and belong to the bucket owner's account in order to use this operation.

If you don't have PutBucketPolicy permissions, Amazon S3 returns a 403 Access Denied error. If you have the correct permissions, but you're not using an identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not Allowed error.

To ensure that bucket owners don't inadvertently lock themselves out of their own buckets, the root principal in a bucket owner's Amazon Web Services account can perform the GetBucketPolicy,PutBucketPolicy, and DeleteBucketPolicy API actions, even if their bucket policy explicitly denies the root principal's access. Bucket owner root principals can only be blocked from performing these API actions by VPC endpoint policies and Amazon Web Services Organizations policies.

Example bucket policies

General purpose buckets example bucket policies - See Bucket policy examples in the Amazon S3 User Guide.

Directory bucket example bucket policies - See Example bucket policies for S3 Express One Zone in the Amazon S3 User Guide.
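
A minimal sketch (assuming s3 is an Aws::S3::Client; the account ID and bucket name are placeholders) that builds a policy document in Ruby and passes it as a JSON string:

require "json"

policy = {
  "Version" => "2012-10-17",
  "Statement" => [{
    "Effect" => "Allow",
    "Principal" => { "AWS" => "arn:aws:iam::111122223333:root" },
    "Action" => "s3:GetObject",
    "Resource" => "arn:aws:s3:::amzn-s3-demo-bucket/*"
  }]
}

# The policy parameter expects the policy as a JSON string.
s3.put_bucket_policy(bucket: "amzn-s3-demo-bucket", policy: policy.to_json)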

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is s3express-control.region-code.amazonaws.com.

The following operations are related to PutBucketPolicy:

15512 15513 15514 15515 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 15512 def put_bucket_policy(params = {}, options = {}) req = build_request(:put_bucket_policy, params) req.send_request(options) end

#put_bucket_replication(params = {}) ⇒ Struct

This operation is not supported for directory buckets.

Creates a replication configuration or replaces an existing one. For more information, see Replication in the Amazon S3 User Guide.

Specify the replication configuration in the request body. In the replication configuration, you provide the name of the destination bucket or buckets where you want Amazon S3 to replicate objects, the IAM role that Amazon S3 can assume to replicate objects on your behalf, and other relevant information. You can invoke this request for a specific Amazon Web Services Region by using the aws:RequestedRegion condition key.

A replication configuration must include at least one rule and can contain a maximum of 1,000 rules. Each rule identifies a subset of objects to replicate by filtering the objects in the source bucket. To choose additional subsets of objects to replicate, add a rule for each subset.

To specify a subset of the objects in the source bucket to apply a replication rule to, add the Filter element as a child of the Rule element. You can filter objects based on an object key prefix, one or more object tags, or both. When you add the Filter element in the configuration, you must also add the following elements: DeleteMarkerReplication, Status, and Priority.
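
A minimal sketch (assuming s3 is an Aws::S3::Client; bucket names, role ARN, and prefix are placeholders) of a single rule that uses Filter together with the required DeleteMarkerReplication, Status, and Priority elements:

s3.put_bucket_replication(
  bucket: "amzn-s3-demo-source-bucket",
  replication_configuration: {
    role: "arn:aws:iam::111122223333:role/example-replication-role",
    rules: [
      {
        id: "ReplicateLogs",
        status: "Enabled",
        priority: 1,
        filter: { prefix: "logs/" },
        delete_marker_replication: { status: "Disabled" },
        destination: { bucket: "arn:aws:s3:::amzn-s3-demo-destination-bucket" }
      }
    ]
  }
)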

If you are using an earlier version of the replication configuration, Amazon S3 handles replication of delete markers differently. For more information, see Backward Compatibility.

For information about enabling versioning on a bucket, see Using Versioning.

Handling Replication of Encrypted Objects

By default, Amazon S3 doesn't replicate objects that are stored at rest using server-side encryption with KMS keys. To replicate Amazon Web Services KMS-encrypted objects, add the following: SourceSelectionCriteria, SseKmsEncryptedObjects, Status, EncryptionConfiguration, and ReplicaKmsKeyID. For information about replication configuration, see Replicating Objects Created with SSE Using KMS keys.

For information on PutBucketReplication errors, see List of replication-related error codes.

Permissions

To create a PutBucketReplication request, you must have s3:PutReplicationConfiguration permissions for the bucket.

By default, a resource owner, in this case the Amazon Web Services account that created the bucket, can perform this operation. The resource owner can also grant others permissions to perform the operation. For more information about permissions, see Specifying Permissions in a Policy and Managing Access Permissions to Your Amazon S3 Resources.

To perform this operation, the user or role performing the action must have the iam:PassRole permission.

The following operations are related to PutBucketReplication:

15750 15751 15752 15753 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 15750 def put_bucket_replication(params = {}, options = {}) req = build_request(:put_bucket_replication, params) req.send_request(options) end

#put_bucket_request_payment(params = {}) ⇒ Struct

This operation is not supported for directory buckets.

Sets the request payment configuration for a bucket. By default, the bucket owner pays for downloads from the bucket. This configuration parameter enables the bucket owner (only) to specify that the person requesting the download will be charged for the download. For more information, see Requester Pays Buckets.
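
A minimal sketch (assuming s3 is an Aws::S3::Client; the bucket name is a placeholder):

# Make requesters pay for downloads from the bucket.
s3.put_bucket_request_payment(
  bucket: "amzn-s3-demo-bucket",
  request_payment_configuration: { payer: "Requester" }
)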

The following operations are related to PutBucketRequestPayment:

15847 15848 15849 15850 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 15847 def put_bucket_request_payment(params = {}, options = {}) req = build_request(:put_bucket_request_payment, params) req.send_request(options) end

#put_bucket_tagging(params = {}) ⇒ Struct

This operation is not supported for directory buckets.

Sets the tags for a bucket.

Use tags to organize your Amazon Web Services bill to reflect your own cost structure. To do this, sign up to get your Amazon Web Services account bill with tag key values included. Then, to see the cost of combined resources, organize your billing information according to resources with the same tag key values. For example, you can tag several resources with a specific application name, and then organize your billing information to see the total cost of that application across several services. For more information, see Cost Allocation and Tagging and Using Cost Allocation in Amazon S3 Bucket Tags.

When this operation sets the tags for a bucket, it will overwrite any current tags the bucket already has. You cannot use this operation to add tags to an existing list of tags.
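
A minimal sketch (assuming s3 is an Aws::S3::Client; the bucket name and tags are placeholders); note that the tag set shown replaces whatever tags the bucket already has:

s3.put_bucket_tagging(
  bucket: "amzn-s3-demo-bucket",
  tagging: {
    tag_set: [
      { key: "Application", value: "reporting" },
      { key: "CostCenter", value: "1234" }
    ]
  }
)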

To use this operation, you must have permissions to perform the s3:PutBucketTagging action. The bucket owner has this permission by default and can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.

PutBucketTagging has the following special errors. For more Amazon S3 errors, see Error Responses.

The following operations are related to PutBucketTagging:

15998 15999 16000 16001 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 15998 def put_bucket_tagging(params = {}, options = {}) req = build_request(:put_bucket_tagging, params) req.send_request(options) end

#put_bucket_versioning(params = {}) ⇒ Struct

This operation is not supported for directory buckets.

When you enable versioning on a bucket for the first time, it might take a short amount of time for the change to be fully propagated. While this change is propagating, you might encounter intermittent HTTP 404 NoSuchKey errors for requests to objects created or updated after enabling versioning. We recommend that you wait for 15 minutes after enabling versioning before issuing write operations (PUT or DELETE) on objects in the bucket.

Sets the versioning state of an existing bucket.

You can set the versioning state with one of the following values:

Enabled—Enables versioning for the objects in the bucket. All objects added to the bucket receive a unique version ID.

Suspended—Disables versioning for the objects in the bucket. All objects added to the bucket receive the version ID null.

If the versioning state has never been set on a bucket, it has no versioning state; a GetBucketVersioning request does not return a versioning state value.

In order to enable MFA Delete, you must be the bucket owner. If you are the bucket owner and want to enable MFA Delete in the bucket versioning configuration, you must include the x-amz-mfa request header and the Status and the MfaDelete request elements in a request to set the versioning state of the bucket.

If you have an object expiration lifecycle configuration in your non-versioned bucket and you want to maintain the same permanent delete behavior when you enable versioning, you must add a noncurrent expiration policy. The noncurrent expiration lifecycle configuration will manage the deletes of the noncurrent object versions in the version-enabled bucket. (A version-enabled bucket maintains one current and zero or more noncurrent object versions.) For more information, see Lifecycle and Versioning.
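
A minimal sketch (assuming s3 is an Aws::S3::Client; the bucket name is a placeholder) that enables versioning; pass status: "Suspended" instead to suspend it later:

s3.put_bucket_versioning(
  bucket: "amzn-s3-demo-bucket",
  versioning_configuration: { status: "Enabled" }
)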

The following operations are related to PutBucketVersioning:

16139 16140 16141 16142 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 16139 def put_bucket_versioning(params = {}, options = {}) req = build_request(:put_bucket_versioning, params) req.send_request(options) end

#put_bucket_website(params = {}) ⇒ Struct

This operation is not supported for directory buckets.

Sets the configuration of the website that is specified in the website subresource. To configure a bucket as a website, you can add this subresource on the bucket with website configuration information such as the file name of the index document and any redirect rules. For more information, see Hosting Websites on Amazon S3.

This PUT action requires the S3:PutBucketWebsite permission. By default, only the bucket owner can configure the website attached to a bucket; however, bucket owners can allow other users to set the website configuration by writing a bucket policy that grants them the S3:PutBucketWebsite permission.

To redirect all website requests sent to the bucket's website endpoint, you add a website configuration with the following elements. Because all requests are sent to another website, you don't need to provide an index document name for the bucket.

If you want granular control over redirects, you can use the following elements to add routing rules that describe conditions for redirecting requests and information about the redirect destination. In this case, the website configuration must provide an index document for the bucket, because some requests might not be redirected.

Amazon S3 has a limitation of 50 routing rules per website configuration. If you require more than 50 routing rules, you can use object redirect. For more information, see Configuring an Object Redirect in the Amazon S3 User Guide.

The maximum request length is limited to 128 KB.
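
A minimal sketch (assuming s3 is an Aws::S3::Client; the bucket name and document names are placeholders):

s3.put_bucket_website(
  bucket: "amzn-s3-demo-bucket",
  website_configuration: {
    index_document: { suffix: "index.html" },
    error_document: { key: "error.html" }
  }
)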

16323 16324 16325 16326 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 16323 def put_bucket_website(params = {}, options = {}) req = build_request(:put_bucket_website, params) req.send_request(options) end

#put_object(params = {}) ⇒ Types::PutObjectOutput

Adds an object to a bucket.

* Amazon S3 never adds partial objects; if you receive a success response, Amazon S3 added the entire object to the bucket. You cannot use PutObject to only update a single piece of metadata for an existing object. You must put the entire object with updated metadata if you want to update some values.

Amazon S3 is a distributed system. If it receives multiple write requests for the same object simultaneously, it overwrites all but the last object written. However, Amazon S3 provides features that can modify this behavior:

Permissions

Data integrity with Content-MD5

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.
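
A minimal sketch (assuming s3 is an Aws::S3::Client; the bucket name, key, and file name are placeholders) that uploads a local file in a single request:

File.open("report.csv", "rb") do |file|
  s3.put_object(
    bucket: "amzn-s3-demo-bucket",
    key: "reports/2024/report.csv",
    body: file,
    content_type: "text/csv"
  )
end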

For more information about related Amazon S3 APIs, see the following:

17345 17346 17347 17348 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 17345 def put_object(params = {}, options = {}) req = build_request(:put_object, params) req.send_request(options) end

#put_object_acl(params = {}) ⇒ Types::PutObjectAclOutput

This operation is not supported for directory buckets.

Uses the acl subresource to set the access control list (ACL) permissions for a new or existing object in an S3 bucket. You must have the WRITE_ACP permission to set the ACL of an object. For more information, see What permissions can I grant? in the Amazon S3 User Guide.

This functionality is not supported for Amazon S3 on Outposts.

Depending on your application needs, you can choose to set the ACL on an object using either the request body or the headers. For example, if you have an existing application that updates a bucket ACL using the request body, you can continue to use that approach. For more information, see Access Control List (ACL) Overview in the Amazon S3 User Guide.

If your bucket uses the bucket owner enforced setting for S3 Object Ownership, ACLs are disabled and no longer affect permissions. You must use policies to grant access to your bucket and the objects in it. Requests to set ACLs or update ACLs fail and return the AccessControlListNotSupported error code. Requests to read ACLs are still supported. For more information, see Controlling object ownership in the Amazon S3 User Guide.

Permissions

You can set access permissions using one of the following methods:

You can use either a canned ACL or specify access permissions explicitly. You cannot do both.

Grantee Values

You can specify the person (grantee) to whom you're assigning access rights (using request elements) in the following ways:

Versioning

The ACL of an object is set at the object version level. By default, PUT sets the ACL of the current version of an object. To set the ACL of a different version, use the versionId subresource.
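
A minimal sketch (assuming s3 is an Aws::S3::Client; the bucket name and key are placeholders) that applies a canned ACL to the current object version:

s3.put_object_acl(
  bucket: "amzn-s3-demo-bucket",
  key: "reports/2024/report.csv",
  acl: "bucket-owner-full-control"
)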

The following operations are related to PutObjectAcl:

17716 17717 17718 17719 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 17716 def put_object_acl(params = {}, options = {}) req = build_request(:put_object_acl, params) req.send_request(options) end

#put_object_legal_hold(params = {}) ⇒ Types::PutObjectLegalHoldOutput

This operation is not supported for directory buckets.

Applies a legal hold configuration to the specified object. For more information, see Locking Objects.

This functionality is not supported for Amazon S3 on Outposts.
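
A minimal sketch (assuming s3 is an Aws::S3::Client; the bucket name and key are placeholders):

# Place a legal hold on a specific object.
s3.put_object_legal_hold(
  bucket: "amzn-s3-demo-bucket",
  key: "records/invoice.pdf",
  legal_hold: { status: "ON" }
)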

17837 17838 17839 17840 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 17837 def put_object_legal_hold(params = {}, options = {}) req = build_request(:put_object_legal_hold, params) req.send_request(options) end

#put_object_lock_configuration(params = {}) ⇒ Types::PutObjectLockConfigurationOutput

This operation is not supported for directory buckets.

Places an Object Lock configuration on the specified bucket. The rule specified in the Object Lock configuration will be applied by default to every new object placed in the specified bucket. For more information, see Locking Objects.

* The DefaultRetention settings require both a mode and a period.
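
A minimal sketch (assuming s3 is an Aws::S3::Client; the bucket name is a placeholder) that sets a default retention rule with both a mode and a period:

s3.put_object_lock_configuration(
  bucket: "amzn-s3-demo-bucket",
  object_lock_configuration: {
    object_lock_enabled: "Enabled",
    rule: { default_retention: { mode: "COMPLIANCE", days: 365 } }
  }
)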

17956 17957 17958 17959 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 17956 def put_object_lock_configuration(params = {}, options = {}) req = build_request(:put_object_lock_configuration, params) req.send_request(options) end

#put_object_retention(params = {}) ⇒ Types::PutObjectRetentionOutput

This operation is not supported for directory buckets.

Places an Object Retention configuration on an object. For more information, see Locking Objects. Users or accounts require the s3:PutObjectRetention permission in order to place an Object Retention configuration on objects. Bypassing a Governance Retention configuration requires the s3:BypassGovernanceRetention permission.

This functionality is not supported for Amazon S3 on Outposts.
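
A minimal sketch (assuming s3 is an Aws::S3::Client; the bucket name and key are placeholders) that retains an object in GOVERNANCE mode for roughly 30 days:

s3.put_object_retention(
  bucket: "amzn-s3-demo-bucket",
  key: "records/invoice.pdf",
  retention: {
    mode: "GOVERNANCE",
    retain_until_date: Time.now + (30 * 24 * 60 * 60)
  }
)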

18087 18088 18089 18090 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 18087 def put_object_retention(params = {}, options = {}) req = build_request(:put_object_retention, params) req.send_request(options) end

#put_object_tagging(params = {}) ⇒ Types::PutObjectTaggingOutput

This operation is not supported for directory buckets.

Sets the supplied tag-set to an object that already exists in a bucket. A tag is a key-value pair. For more information, see Object Tagging.

You can associate tags with an object by sending a PUT request against the tagging subresource that is associated with the object. You can retrieve tags by sending a GET request. For more information, see GetObjectTagging.

For tagging-related restrictions related to characters and encodings, see Tag Restrictions. Note that Amazon S3 limits the maximum number of tags to 10 tags per object.

To use this operation, you must have permission to perform the s3:PutObjectTagging action. By default, the bucket owner has this permission and can grant this permission to others.

To put tags of any other version, use the versionId query parameter. You also need permission for the s3:PutObjectVersionTagging action.
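
A minimal sketch (assuming s3 is an Aws::S3::Client; the bucket name and key are placeholders) that replaces the tag set on the current object version:

s3.put_object_tagging(
  bucket: "amzn-s3-demo-bucket",
  key: "reports/2024/report.csv",
  tagging: { tag_set: [{ key: "classification", value: "internal" }] }
)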

PutObjectTagging has the following special errors. For more Amazon S3 errors, see Error Responses.

The following operations are related to PutObjectTagging:

18288 18289 18290 18291 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 18288 def put_object_tagging(params = {}, options = {}) req = build_request(:put_object_tagging, params) req.send_request(options) end

#put_public_access_block(params = {}) ⇒ Struct

This operation is not supported for directory buckets.

Creates or modifies the PublicAccessBlock configuration for an Amazon S3 bucket. To use this operation, you must have the s3:PutBucketPublicAccessBlock permission. For more information about Amazon S3 permissions, see Specifying Permissions in a Policy.

When Amazon S3 evaluates the PublicAccessBlock configuration for a bucket or an object, it checks the PublicAccessBlock configuration for both the bucket (or the bucket that contains the object) and the bucket owner's account. If the PublicAccessBlock configurations are different between the bucket and the account, Amazon S3 uses the most restrictive combination of the bucket-level and account-level settings.

For more information about when Amazon S3 considers a bucket or an object public, see The Meaning of "Public".
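
A minimal sketch (assuming s3 is an Aws::S3::Client; the bucket name is a placeholder) that applies the most restrictive settings:

s3.put_public_access_block(
  bucket: "amzn-s3-demo-bucket",
  public_access_block_configuration: {
    block_public_acls: true,
    ignore_public_acls: true,
    block_public_policy: true,
    restrict_public_buckets: true
  }
)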

The following operations are related to PutPublicAccessBlock:

18396 18397 18398 18399 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 18396 def put_public_access_block(params = {}, options = {}) req = build_request(:put_public_access_block, params) req.send_request(options) end

#restore_object(params = {}) ⇒ Types::RestoreObjectOutput

This operation is not supported for directory buckets.

Restores an archived copy of an object back into Amazon S3.

This functionality is not supported for Amazon S3 on Outposts.

This action performs the following types of requests:

For more information about the S3 structure in the request body, see the following:

Permissions

To use this operation, you must have permissions to perform the s3:RestoreObject action. The bucket owner has this permission by default and can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources in the Amazon S3 User Guide.

Restoring objects

Objects that you archive to the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage class, and S3 Intelligent-Tiering Archive or S3 Intelligent-Tiering Deep Archive tiers, are not accessible in real time. For objects in the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage classes, you must first initiate a restore request, and then wait until a temporary copy of the object is available. If you want a permanent copy of the object, create a copy of it in the Amazon S3 Standard storage class in your S3 bucket. To access an archived object, you must restore the object for the duration (number of days) that you specify. For objects in the Archive Access or Deep Archive Access tiers of S3 Intelligent-Tiering, you must first initiate a restore request, and then wait until the object is moved into the Frequent Access tier.

To restore a specific object version, you can provide a version ID. If you don't provide a version ID, Amazon S3 restores the current version.

When restoring an archived object, you can specify one of the following data access tier options in the Tier element of the request body:

For more information about archive retrieval options and provisioned capacity for Expedited data access, see Restoring Archived Objects in the Amazon S3 User Guide.

You can use Amazon S3 restore speed upgrade to change the restore speed to a faster speed while it is in progress. For more information, see Upgrading the speed of an in-progress restorein the Amazon S3 User Guide.

To get the status of object restoration, you can send a HEADrequest. Operations return the x-amz-restore header, which provides information about the restoration status, in the response. You can use Amazon S3 event notifications to notify you when a restore is initiated or completed. For more information, seeConfiguring Amazon S3 Event Notifications in the Amazon S3 User Guide.

After restoring an archived object, you can update the restoration period by reissuing the request with a new period. Amazon S3 updates the restoration period relative to the current time and charges only for the request; there are no data transfer charges. You cannot update the restoration period when Amazon S3 is actively processing your current restore request for the object.

If your bucket has a lifecycle configuration with a rule that includes an expiration action, the object expiration overrides the life span that you specify in a restore request. For example, if you restore an object copy for 10 days, but the object is scheduled to expire in 3 days, Amazon S3 deletes the object in 3 days. For more information about lifecycle configuration, see PutBucketLifecycleConfiguration and Object Lifecycle Management in the Amazon S3 User Guide.
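
A minimal sketch (assuming s3 is an Aws::S3::Client; the bucket name and key are placeholders) that restores an archived object for 10 days using the Standard retrieval tier:

s3.restore_object(
  bucket: "amzn-s3-demo-bucket",
  key: "archive/backup-2023.tar",
  restore_request: {
    days: 10,
    glacier_job_parameters: { tier: "Standard" }
  }
)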

Responses

A successful action returns either the 200 OK or 202 Accepted status code.

The following operations are related to RestoreObject:

18780 18781 18782 18783 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 18780 def restore_object(params = {}, options = {}) req = build_request(:restore_object, params) req.send_request(options) end

#select_object_content(params = {}) ⇒ Types::SelectObjectContentOutput

This operation is not supported for directory buckets.

This action filters the contents of an Amazon S3 object based on a simple structured query language (SQL) statement. In the request, along with the SQL expression, you must also specify a data serialization format (JSON, CSV, or Apache Parquet) of the object. Amazon S3 uses this format to parse object data into records, and returns only records that match the specified SQL expression. You must also specify the data serialization format for the response.

This functionality is not supported for Amazon S3 on Outposts.

For more information about Amazon S3 Select, see Selecting Content from Objects and SELECT Command in the Amazon S3 User Guide.

Permissions

You must have the s3:GetObject permission for this operation. Amazon S3 Select does not support anonymous access. For more information about permissions, see Specifying Permissions in a Policy in the Amazon S3 User Guide.

Object Data Formats

You can use Amazon S3 Select to query objects that have the following format properties:

Working with the Response Body

Because the response size is unknown, Amazon S3 Select streams the response as a series of messages and includes a Transfer-Encoding header with chunked as its value in the response. For more information, see Appendix: SelectObjectContent Response.
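
A minimal sketch (assuming s3 is an Aws::S3::Client; the bucket name, key, and query are placeholders) that streams matching CSV records through the event handler block:

s3.select_object_content(
  bucket: "amzn-s3-demo-bucket",
  key: "data/records.csv",
  expression_type: "SQL",
  expression: "SELECT s.name FROM S3Object s WHERE s.status = 'active'",
  input_serialization: { csv: { file_header_info: "USE" } },
  output_serialization: { csv: {} }
) do |stream|
  # Each records event carries a chunk of the query results.
  stream.on_records_event do |event|
    print event.payload.read
  end
end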

GetObject Support

The SelectObjectContent action does not support the following GetObject functionality. For more information, see GetObject.

Special Errors

For a list of special errors for this operation, see List of SELECT Object Content Error Codes.

The following operations are related to SelectObjectContent:

19183 19184 19185 19186 19187 19188 19189 19190 19191 19192 19193 19194 19195 19196 19197 19198 19199 19200 19201 19202 19203 19204 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 19183 def select_object_content(params = {}, options = {}, &block) params = params.dup event_stream_handler = case handler = params.delete(:event_stream_handler) when EventStreams::SelectObjectContentEventStream then handler when Proc then EventStreams::SelectObjectContentEventStream.new.tap(&handler) when nil then EventStreams::SelectObjectContentEventStream.new else msg = "expected :event_stream_handler to be a block or "\ "instance of Aws::S3::EventStreams::SelectObjectContentEventStream"\ ", got `#{handler.inspect}` instead" raise ArgumentError, msg end yield(event_stream_handler) if block_given? req = build_request(:select_object_content, params) req.context[:event_stream_handler] = event_stream_handler req.handlers.add(Aws::Binary::DecodeHandler, priority: 95) req.send_request(options, &block) end

#upload_part(params = {}) ⇒ Types::UploadPartOutput

Uploads a part in a multipart upload.

In this operation, you provide new data as a part of an object in your request. However, you have an option to specify your existing Amazon S3 object as a data source for the part you are uploading. To upload a part from an existing object, you use the UploadPartCopy operation.

You must initiate a multipart upload (see CreateMultipartUpload) before you can upload any part. In response to your initiate request, Amazon S3 returns an upload ID, a unique identifier that you must include in your upload part request.

Part numbers can be any number from 1 to 10,000, inclusive. A part number uniquely identifies a part and also defines its position within the object being created. If you upload a new part using the same part number that was used with a previous part, the previously uploaded part is overwritten.

For information about maximum and minimum part sizes and other multipart upload specifications, see Multipart upload limits in the Amazon S3 User Guide.
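
A minimal sketch (assuming s3 is an Aws::S3::Client; the bucket name, key, and file name are placeholders) that initiates an upload and then uploads a single part; keep each returned ETag and part number for the CompleteMultipartUpload call:

create = s3.create_multipart_upload(bucket: "amzn-s3-demo-bucket", key: "large-file.bin")

part = s3.upload_part(
  bucket: "amzn-s3-demo-bucket",
  key: "large-file.bin",
  upload_id: create.upload_id,
  part_number: 1,
  body: File.read("large-file.part1", mode: "rb")
)
# part.etag and part number 1 go into the parts list when completing the upload.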

After you initiate a multipart upload and upload one or more parts, you must either complete or abort the multipart upload in order to stop getting charged for storage of the uploaded parts. Only after you either complete or abort a multipart upload does Amazon S3 free up the parts storage and stop charging you for it.

For more information on multipart uploads, go to Multipart Upload Overview in the Amazon S3 User Guide.

Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://amzn-s3-demo-bucket.s3express-zone-id.region-code.amazonaws.com/key-name. Path-style requests are not supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

Permissions

Data integrity

General purpose bucket - To ensure that data is not corrupted traversing the network, specify the Content-MD5 header in the upload part request. Amazon S3 checks the part data against the provided MD5 value. If they do not match, Amazon S3 returns an error. If the upload request is signed with Signature Version 4, then Amazon S3 uses the x-amz-content-sha256 header as a checksum instead of Content-MD5. For more information, see Authenticating Requests: Using the Authorization Header (Amazon Web Services Signature Version 4).

Directory buckets - MD5 is not supported by directory buckets. You can use checksum algorithms to check object integrity.

Encryption

Special errors

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

The following operations are related to UploadPart:

19672 19673 19674 19675 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 19672 def upload_part(params = {}, options = {}) req = build_request(:upload_part, params) req.send_request(options) end

#upload_part_copy(params = {}) ⇒ Types::UploadPartCopyOutput

Uploads a part by copying data from an existing object as data source. To specify the data source, you add the request header x-amz-copy-source in your request. To specify a byte range, you add the request header x-amz-copy-source-range in your request.

For information about maximum and minimum part sizes and other multipart upload specifications, see Multipart upload limits in the Amazon S3 User Guide.

Instead of copying data from an existing object as part data, you might use the UploadPart action to upload new data as a part of an object in your request.

You must initiate a multipart upload before you can upload any part. In response to your initiate request, Amazon S3 returns the upload ID, a unique identifier that you must include in your upload part request.
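
A minimal sketch (assuming s3 is an Aws::S3::Client; the bucket names, key, and upload_id are placeholders) that copies the first 5 MiB of an existing object as part 1:

part = s3.upload_part_copy(
  bucket: "amzn-s3-demo-destination-bucket",
  key: "assembled-object",
  upload_id: upload_id, # returned by a prior CreateMultipartUpload call
  part_number: 1,
  copy_source: "amzn-s3-demo-source-bucket/source-object",
  copy_source_range: "bytes=0-5242879"
)
# part.copy_part_result.etag is needed to complete the multipart upload.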

For conceptual information about multipart uploads, see Uploading Objects Using Multipart Upload in the Amazon S3 User Guide. For information about copying objects using a single atomic action vs. a multipart upload, see Operations on Objects in the Amazon S3 User Guide.

Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://amzn-s3-demo-bucket.s3express-zone-id.region-code.amazonaws.com/key-name. Path-style requests are not supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

Authentication and authorization

All UploadPartCopy requests must be authenticated and signed by using IAM credentials (access key ID and secret access key for the IAM identities). All headers with the x-amz- prefix, including x-amz-copy-source, must be signed. For more information, see REST Authentication.

Directory buckets - You must use IAM credentials to authenticate and authorize your access to the UploadPartCopy API operation, instead of using the temporary security credentials through the CreateSession API operation.

The Amazon Web Services CLI and SDKs handle authentication and authorization on your behalf.

Permissions

You must have READ access to the source object and WRITE access to the destination bucket.

Encryption

Special errors

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

The following operations are related to UploadPartCopy:

20253 20254 20255 20256 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 20253 def upload_part_copy(params = {}, options = {}) req = build_request(:upload_part_copy, params) req.send_request(options) end

#wait_until(waiter_name, params = {}, options = {}) {|w.waiter| ... } ⇒ Boolean

Polls an API operation until a resource enters a desired state.

Basic Usage

A waiter will call an API operation until:

In between attempts, the waiter will sleep.

# polls in a loop, sleeping between attempts
client.wait_until(waiter_name, params)

Configuration

You can configure the maximum number of polling attempts, and the delay (in seconds) between each polling attempt. You can pass configuration as the final arguments hash.

# poll for ~25 seconds
client.wait_until(waiter_name, params, {
  max_attempts: 5,
  delay: 5,
})

Callbacks

You can be notified before each polling attempt and before each delay. If you throw :success or :failure from these callbacks, it will terminate the waiter.

started_at = Time.now
client.wait_until(waiter_name, params, {

  # disable max attempts
  max_attempts: nil,

  # poll for 1 hour, instead of a number of attempts
  before_wait: -> (attempts, response) do
    throw :failure if Time.now - started_at > 3600
  end
})

Handling Errors

When a waiter is unsuccessful, it will raise an error. All of the failure errors extend from Waiters::Errors::WaiterFailed.

begin
  client.wait_until(...)
rescue Aws::Waiters::Errors::WaiterFailed
  # resource did not enter the desired state in time
end

Valid Waiters

The following table lists the valid waiter names, the operations they call, and the default :delay and :max_attempts values.

waiter_name | params | :delay | :max_attempts
bucket_exists | #head_bucket | 5 | 20
bucket_not_exists | #head_bucket | 5 | 20
object_exists | #head_object | 5 | 20
object_not_exists | #head_object | 5 | 20
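
As a concrete S3 example (the bucket name and key are placeholders), the following sketch polls the object_exists waiter with explicit configuration:

client.wait_until(:object_exists,
  { bucket: "amzn-s3-demo-bucket", key: "reports/2024/report.csv" },
  { max_attempts: 10, delay: 2 })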
20779 20780 20781 20782 20783 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 20779 def wait_until(waiter_name, params = {}, options = {}) w = waiter(waiter_name, options) yield(w.waiter) if block_given? w.wait(params) end

#write_get_object_response(params = {}) ⇒ Struct

This operation is not supported for directory buckets.

Passes transformed objects to a GetObject operation when using Object Lambda access points. For information about Object Lambda access points, see Transforming objects with Object Lambda access points in the Amazon S3 User Guide.

This operation supports metadata that can be returned by GetObject, in addition to RequestRoute, RequestToken, StatusCode, ErrorCode, and ErrorMessage. The GetObject response metadata is supported so that the WriteGetObjectResponse caller, typically a Lambda function, can provide the same metadata when it internally invokes GetObject. When WriteGetObjectResponse is called by a customer-owned Lambda function, the metadata returned to the end user GetObject call might differ from what Amazon S3 would normally return.

You can include any number of metadata headers. When including a metadata header, it should be prefaced with x-amz-meta. For example, x-amz-meta-my-custom-header: MyCustomValue. The primary use case for this is to forward GetObject metadata.
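
A minimal sketch of the call an Object Lambda function might make (assuming s3 is an Aws::S3::Client); the route, token, and body values would come from the invoking event and are placeholders here:

s3.write_get_object_response(
  request_route: output_route,   # placeholder: route from the Object Lambda event
  request_token: output_token,   # placeholder: token from the Object Lambda event
  status_code: 200,
  body: transformed_body         # placeholder: the transformed object content
)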

Amazon Web Services provides some prebuilt Lambda functions that you can use with S3 Object Lambda to detect and redact personally identifiable information (PII) and decompress S3 objects. These Lambda functions are available in the Amazon Web Services Serverless Application Repository, and can be selected through the Amazon Web Services Management Console when you create your Object Lambda access point.

Example 1: PII Access Control - This Lambda function uses Amazon Comprehend, a natural language processing (NLP) service using machine learning to find insights and relationships in text. It automatically detects personally identifiable information (PII) such as names, addresses, dates, credit card numbers, and social security numbers from documents in your Amazon S3 bucket.

Example 2: PII Redaction - This Lambda function uses Amazon Comprehend, a natural language processing (NLP) service using machine learning to find insights and relationships in text. It automatically redacts personally identifiable information (PII) such as names, addresses, dates, credit card numbers, and social security numbers from documents in your Amazon S3 bucket.

Example 3: Decompression - The Lambda function S3ObjectLambdaDecompression is equipped to decompress objects stored in S3 in one of six compressed file formats, including bzip2, gzip, snappy, zlib, zstandard, and ZIP.

For information on how to view and use these functions, see Using Amazon Web Services built Lambda functions in the Amazon S3 User Guide.

20661 20662 20663 20664 # File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 20661 def write_get_object_response(params = {}, options = {}) req = build_request(:write_get_object_response, params) req.send_request(options) end