Client — AWS SDK for Ruby V3
Constructor Details
#initialize(options) ⇒ Client
Returns a new instance of Client.
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 505

def initialize(*args)
  super
end
```
Instance Method Details
#batch_execute_statement(params = {}) ⇒ Types::BatchExecuteStatementOutput
This operation allows you to perform batch reads or writes on data stored in DynamoDB, using PartiQL. Each read statement in a BatchExecuteStatement must specify an equality condition on all key attributes. This enforces that each SELECT statement in a batch returns at most a single item. For more information, see Running batch operations with PartiQL for DynamoDB.
The entire batch must consist of either read statements or write statements; you cannot mix both in one batch.
An HTTP 200 response does not mean that all statements in the BatchExecuteStatement succeeded. Error details for individual statements can be found under the Error field of the BatchStatementResponse for each statement.
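For illustration, a minimal sketch of a PartiQL batch read, assuming a hypothetical "Music" table keyed on Artist and SongTitle; note that a 200 response must still be checked statement by statement:

```ruby
require "aws-sdk-dynamodb"

client = Aws::DynamoDB::Client.new(region: "us-east-1")

resp = client.batch_execute_statement(
  statements: [
    {
      statement: 'SELECT * FROM "Music" WHERE "Artist" = ? AND "SongTitle" = ?',
      parameters: ["Acme Band", "Happy Day"],
    },
  ]
)

# An HTTP 200 can still carry per-statement failures; check each entry.
resp.responses.each do |r|
  r.error ? puts("#{r.error.code}: #{r.error.message}") : puts(r.item.inspect)
end
```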
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 603

def batch_execute_statement(params = {}, options = {})
  req = build_request(:batch_execute_statement, params)
  req.send_request(options)
end
```
#batch_get_item(params = {}) ⇒ Types::BatchGetItemOutput
The BatchGetItem
operation returns the attributes of one or more items from one or more tables. You identify requested items by primary key.
A single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. BatchGetItem
returns a partial result if the response size limit is exceeded, the table's provisioned throughput is exceeded, more than 1 MB per partition is requested, or an internal processing failure occurs. If a partial result is returned, the operation returns a value for UnprocessedKeys
. You can use this value to retry the operation starting with the next item to get.
If you request more than 100 items, BatchGetItem
returns a ValidationException
with the message "Too many items requested for the BatchGetItem call."
For example, if you ask to retrieve 100 items, but each individual item is 300 KB in size, the system returns 52 items (so as not to exceed the 16 MB limit). It also returns an appropriate UnprocessedKeys
value so you can get the next page of results. If desired, your application can include its own logic to assemble the pages of results into one dataset.
If none of the items can be processed due to insufficient provisioned throughput on all of the tables in the request, then BatchGetItem
returns a ProvisionedThroughputExceededException
. If at least one of the items is successfully processed, then BatchGetItem
completes successfully, while returning the keys of the unread items in UnprocessedKeys
.
If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed.
For more information, see Batch Operations and Error Handling in the Amazon DynamoDB Developer Guide.
By default, BatchGetItem
performs eventually consistent reads on every table in the request. If you want strongly consistent reads instead, you can set ConsistentRead
to true
for any or all tables.
In order to minimize response latency, BatchGetItem
may retrieve items in parallel.
When designing your application, keep in mind that DynamoDB does not return items in any particular order. To help parse the response by item, include the primary key values for the items in your request in the ProjectionExpression
parameter.
If a requested item does not exist, it is not returned in the result. Requests for nonexistent items consume the minimum read capacity units according to the type of read. For more information, see Working with Tables in the Amazon DynamoDB Developer Guide.
BatchGetItem
will result in a ValidationException
if the same key is specified multiple times.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
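As a sketch of the retry guidance above, the following loop re-submits UnprocessedKeys with exponential backoff. The "Music" table and its Artist/SongTitle keys are hypothetical, and client is assumed to be an Aws::DynamoDB::Client as in the earlier sketch:

```ruby
request_items = {
  "Music" => {
    keys: [
      { "Artist" => "No One You Know", "SongTitle" => "Call Me Today" },
      { "Artist" => "Acme Band", "SongTitle" => "Happy Day" },
    ],
    consistent_read: true,   # strongly consistent reads for this table
  },
}

items = []
attempt = 0
until request_items.empty?
  resp = client.batch_get_item(request_items: request_items)
  items.concat(resp.responses["Music"] || [])
  request_items = resp.unprocessed_keys   # empty hash once everything is read
  break if request_items.empty?
  attempt += 1
  sleep([0.05 * 2**attempt, 1.0].min)     # exponential backoff before retrying
end
```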
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 887

def batch_get_item(params = {}, options = {})
  req = build_request(:batch_get_item, params)
  req.send_request(options)
end
```
#batch_write_item(params = {}) ⇒ Types::BatchWriteItemOutput
The BatchWriteItem
operation puts or deletes multiple items in one or more tables. A single call to BatchWriteItem
can transmit up to 16 MB of data over the network, consisting of up to 25 item put or delete operations. While individual items can be up to 400 KB once stored, it's important to note that an item's representation might be greater than 400 KB while being sent in DynamoDB's JSON format for the API call. For more details on this distinction, see Naming Rules and Data Types.
BatchWriteItem
cannot update items. If you perform a BatchWriteItem
operation on an existing item, that item's values will be overwritten by the operation and it will appear like it was updated. To update items, we recommend you use the UpdateItem
action.
The individual PutItem
and DeleteItem
operations specified in BatchWriteItem
are atomic; however BatchWriteItem
as a whole is not. If any requested operations fail because the table's provisioned throughput is exceeded or an internal processing failure occurs, the failed operations are returned in the UnprocessedItems
response parameter. You can investigate and optionally resend the requests. Typically, you would call BatchWriteItem
in a loop. Each iteration would check for unprocessed items and submit a new BatchWriteItem
request with those unprocessed items until all items have been processed.
For tables and indexes with provisioned capacity, if none of the items can be processed due to insufficient provisioned throughput on all of the tables in the request, then BatchWriteItem
returns a ProvisionedThroughputExceededException
. For all tables and indexes, if none of the items can be processed due to other throttling scenarios (such as exceeding partition level limits), then BatchWriteItem
returns a ThrottlingException
.
If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed.
For more information, see Batch Operations and Error Handling in the Amazon DynamoDB Developer Guide.
With BatchWriteItem
, you can efficiently write or delete large amounts of data, such as from Amazon EMR, or copy data from another database into DynamoDB. In order to improve performance with these large-scale operations, BatchWriteItem
does not behave in the same way as individual PutItem
and DeleteItem
calls would. For example, you cannot specify conditions on individual put and delete requests, and BatchWriteItem
does not return deleted items in the response.
If you use a programming language that supports concurrency, you can use threads to write items in parallel. Your application must include the necessary logic to manage the threads. With languages that don't support threading, you must update or delete the specified items one at a time. In both situations, BatchWriteItem
performs the specified put and delete operations in parallel, giving you the power of the thread pool approach without having to introduce complexity into your application.
Parallel processing reduces latency, but each specified put and delete request consumes the same number of write capacity units whether it is processed in parallel or not. Delete operations on nonexistent items consume one write capacity unit.
If one or more of the following is true, DynamoDB rejects the entire batch write operation:
- One or more tables specified in the BatchWriteItem request does not exist.
- Primary key attributes specified on an item in the request do not match those in the corresponding table's primary key schema.
- You try to perform multiple operations on the same item in the same BatchWriteItem request. For example, you cannot put and delete the same item in the same BatchWriteItem request.
- Your request contains at least two items with identical hash and range keys (which essentially is two put operations).
- There are more than 25 requests in the batch.
- Any individual item in a batch exceeds 400 KB.
- The total request size exceeds 16 MB.
- Any individual item has a key exceeding the key length limits. For a partition key, the limit is 2048 bytes and for a sort key, the limit is 1024 bytes.
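A minimal sketch of a mixed put/delete batch with the recommended retry of UnprocessedItems (hypothetical "Music" table; client as constructed earlier):

```ruby
request_items = {
  "Music" => [
    { put_request: { item: { "Artist" => "Acme Band", "SongTitle" => "Happy Day" } } },
    { delete_request: { key: { "Artist" => "No One You Know", "SongTitle" => "Call Me Today" } } },
  ],
}

attempt = 0
until request_items.empty?
  resp = client.batch_write_item(request_items: request_items)
  request_items = resp.unprocessed_items   # empty hash once everything is written
  break if request_items.empty?
  attempt += 1
  sleep([0.05 * 2**attempt, 1.0].min)      # exponential backoff, as recommended above
end
```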
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 1153

def batch_write_item(params = {}, options = {})
  req = build_request(:batch_write_item, params)
  req.send_request(options)
end
```
#create_backup(params = {}) ⇒ Types::CreateBackupOutput
Creates a backup for an existing table.
Each time you create an on-demand backup, the entire table data is backed up. There is no limit to the number of on-demand backups that can be taken.
When you create an on-demand backup, a time marker of the request is cataloged, and the backup is created asynchronously, by applying all changes until the time of the request to the last full table snapshot. Backup requests are processed instantaneously and become available for restore within minutes.
You can call CreateBackup
at a maximum rate of 50 times per second.
All backups in DynamoDB work without consuming any provisioned throughput on the table.
If you submit a backup request on 2018-12-14 at 14:25:00, the backup is guaranteed to contain all data committed to the table up to 14:24:00, and data committed after 14:26:00 will not be. The backup might contain data modifications made between 14:24:00 and 14:26:00. On-demand backup does not support causal consistency.
Along with data, the following are also included on the backups:
- Global secondary indexes (GSIs)
- Local secondary indexes (LSIs)
- Streams
- Provisioned read and write capacity
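A minimal sketch (hypothetical table and backup names; client as above):

```ruby
resp = client.create_backup(
  table_name: "Music",
  backup_name: "Music-2024-01-01"
)
puts resp.backup_details.backup_arn   # keep the ARN for describe/restore/delete calls
```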
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 1223

def create_backup(params = {}, options = {})
  req = build_request(:create_backup, params)
  req.send_request(options)
end
```
#create_global_table(params = {}) ⇒ Types::CreateGlobalTableOutput
Creates a global table from an existing table. A global table creates a replication relationship between two or more DynamoDB tables with the same table name in the provided Regions.
This documentation is for version 2017.11.29 (Legacy) of global tables, which should be avoided for new global tables. Customers should use Global Tables version 2019.11.21 (Current) when possible, because it provides greater flexibility, higher efficiency, and consumes less write capacity than 2017.11.29 (Legacy).
To determine which version you're using, see Determining the global table version you are using. To update existing global tables from version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Upgrading global tables.
If you want to add a new replica table to a global table, each of the following conditions must be true:
- The table must have the same primary key as all of the other replicas.
- The table must have the same name as all of the other replicas.
- The table must have DynamoDB Streams enabled, with the stream containing both the new and the old images of the item.
- None of the replica tables in the global table can contain any data.
If global secondary indexes are specified, then the following conditions must also be met:
- The global secondary indexes must have the same name.
- The global secondary indexes must have the same hash key and sort key (if present).
If local secondary indexes are specified, then the following conditions must also be met:
- The local secondary indexes must have the same name.
- The local secondary indexes must have the same hash key and sort key (if present).
Write capacity settings should be set consistently across your replica tables and secondary indexes. DynamoDB strongly recommends enabling auto scaling to manage the write capacity settings for all of your global tables replicas and indexes.
If you prefer to manage write capacity settings manually, you should provision equal replicated write capacity units to your replica tables. You should also provision equal replicated write capacity units to matching secondary indexes across your global table.
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 1341

def create_global_table(params = {}, options = {})
  req = build_request(:create_global_table, params)
  req.send_request(options)
end
```
#create_table(params = {}) ⇒ Types::CreateTableOutput
The CreateTable
operation adds a new table to your account. In an Amazon Web Services account, table names must be unique within each Region. That is, you can have two tables with the same name if you create the tables in different Regions.
CreateTable
is an asynchronous operation. Upon receiving a CreateTable
request, DynamoDB immediately returns a response with a TableStatus
of CREATING
. After the table is created, DynamoDB sets the TableStatus
to ACTIVE
. You can perform read and write operations only on an ACTIVE
table.
You can optionally define secondary indexes on the new table, as part of the CreateTable
operation. If you want to create multiple tables with secondary indexes on them, you must create the tables sequentially. Only one table with secondary indexes can be in the CREATING
state at any given time.
You can use the DescribeTable
action to check the table status.
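A minimal sketch that creates a table with a composite primary key and then blocks until its TableStatus is ACTIVE, using the table_exists waiter; the table and attribute names are hypothetical:

```ruby
client.create_table(
  table_name: "Music",
  attribute_definitions: [
    { attribute_name: "Artist",    attribute_type: "S" },
    { attribute_name: "SongTitle", attribute_type: "S" },
  ],
  key_schema: [
    { attribute_name: "Artist",    key_type: "HASH"  },  # partition key
    { attribute_name: "SongTitle", key_type: "RANGE" },  # sort key
  ],
  billing_mode: "PAY_PER_REQUEST",
)

# CreateTable is asynchronous; wait until the table is ACTIVE before using it.
client.wait_until(:table_exists, table_name: "Music")
```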
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 1883

def create_table(params = {}, options = {})
  req = build_request(:create_table, params)
  req.send_request(options)
end
```
#delete_backup(params = {}) ⇒ Types::DeleteBackupOutput
Deletes an existing backup of a table.
You can call DeleteBackup
at a maximum rate of 10 times per second.
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 1961

def delete_backup(params = {}, options = {})
  req = build_request(:delete_backup, params)
  req.send_request(options)
end
```
#delete_item(params = {}) ⇒ Types::DeleteItemOutput
Deletes a single item in a table by primary key. You can perform a conditional delete operation that deletes the item if it exists, or if it has an expected attribute value.
In addition to deleting an item, you can also return the item's attribute values in the same operation, using the ReturnValues
parameter.
Unless you specify conditions, the DeleteItem
is an idempotent operation; running it multiple times on the same item or attribute does not result in an error response.
Conditional deletes are useful for deleting items only if specific conditions are met. If those conditions are met, DynamoDB performs the delete. Otherwise, the item is not deleted.
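A sketch of a conditional delete that also returns the deleted attributes; the Rating attribute is hypothetical, and client is as constructed earlier:

```ruby
begin
  resp = client.delete_item(
    table_name: "Music",
    key: { "Artist" => "Acme Band", "SongTitle" => "Happy Day" },
    condition_expression: "Rating <= :low",
    expression_attribute_values: { ":low" => 3 },
    return_values: "ALL_OLD",               # return the item as it was before deletion
  )
  puts resp.attributes.inspect
rescue Aws::DynamoDB::Errors::ConditionalCheckFailedException
  puts "Condition not met; item was not deleted."
end
```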
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 2247

def delete_item(params = {}, options = {})
  req = build_request(:delete_item, params)
  req.send_request(options)
end
```
#delete_resource_policy(params = {}) ⇒ Types::DeleteResourcePolicyOutput
Deletes the resource-based policy attached to the resource, which can be a table or stream.
DeleteResourcePolicy
is an idempotent operation; running it multiple times on the same resource doesn't result in an error response, unless you specify an ExpectedRevisionId
, which will then return a PolicyNotFoundException
.
To make sure that you don't inadvertently lock yourself out of your own resources, the root principal in your Amazon Web Services account can perform DeleteResourcePolicy
requests, even if your resource-based policy explicitly denies the root principal's access.
DeleteResourcePolicy
is an asynchronous operation. If you issue a GetResourcePolicy
request immediately after running the DeleteResourcePolicy
request, DynamoDB might still return the deleted policy. This is because the policy for your resource might not have been deleted yet. Wait for a few seconds, and then try the GetResourcePolicy
request again.
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 2308

def delete_resource_policy(params = {}, options = {})
  req = build_request(:delete_resource_policy, params)
  req.send_request(options)
end
```
#delete_table(params = {}) ⇒ Types::DeleteTableOutput
The DeleteTable
operation deletes a table and all of its items. After a DeleteTable
request, the specified table is in the DELETING state until DynamoDB completes the deletion. If the table is in the ACTIVE state, you can delete it. If a table is in the CREATING or UPDATING states, then DynamoDB returns a ResourceInUseException. If the specified table does not exist, DynamoDB returns a ResourceNotFoundException. If the table is already in the DELETING state, no error is returned.
For global tables, this operation only applies to global tables using Version 2019.11.21 (Current version).
DynamoDB might continue to accept data read and write operations, such as GetItem
and PutItem
, on a table in the DELETING
state until the table deletion is complete. For the full list of table states, see TableStatus.
When you delete a table, any indexes on that table are also deleted.
If you have DynamoDB Streams enabled on the table, then the corresponding stream on that table goes into the DISABLED
state, and the stream is automatically deleted after 24 hours.
Use the DescribeTable
action to check the status of the table.
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 2489

def delete_table(params = {}, options = {})
  req = build_request(:delete_table, params)
  req.send_request(options)
end
```
#describe_backup(params = {}) ⇒ Types::DescribeBackupOutput
Describes an existing backup of a table.
You can call DescribeBackup
at a maximum rate of 10 times per second.
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 2568

def describe_backup(params = {}, options = {})
  req = build_request(:describe_backup, params)
  req.send_request(options)
end
```
#describe_continuous_backups(params = {}) ⇒ Types::DescribeContinuousBackupsOutput
Checks the status of continuous backups and point in time recovery on the specified table. Continuous backups are ENABLED
on all tables at table creation. If point in time recovery is enabled, PointInTimeRecoveryStatus
will be set to ENABLED.
After continuous backups and point in time recovery are enabled, you can restore to any point in time within EarliestRestorableDateTime
and LatestRestorableDateTime
.
LatestRestorableDateTime
is typically 5 minutes before the current time. You can restore your table to any point in time in the last 35 days. You can set the recovery period to any value between 1 and 35 days.
You can call DescribeContinuousBackups
at a maximum rate of 10 times per second.
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 2619

def describe_continuous_backups(params = {}, options = {})
  req = build_request(:describe_continuous_backups, params)
  req.send_request(options)
end
```
#describe_contributor_insights(params = {}) ⇒ Types::DescribeContributorInsightsOutput
Returns information about contributor insights for a given table or global secondary index.
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 2665

def describe_contributor_insights(params = {}, options = {})
  req = build_request(:describe_contributor_insights, params)
  req.send_request(options)
end
```
#describe_endpoints(params = {}) ⇒ Types::DescribeEndpointsResponse
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 2691

def describe_endpoints(params = {}, options = {})
  req = build_request(:describe_endpoints, params)
  req.send_request(options)
end
```
#describe_export(params = {}) ⇒ Types::DescribeExportOutput
Describes an existing table export.
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 2741

def describe_export(params = {}, options = {})
  req = build_request(:describe_export, params)
  req.send_request(options)
end
```
#describe_global_table(params = {}) ⇒ Types::DescribeGlobalTableOutput
Returns information about the specified global table.
This documentation is for version 2017.11.29 (Legacy) of global tables, which should be avoided for new global tables. Customers should use Global Tables version 2019.11.21 (Current) when possible, because it provides greater flexibility, higher efficiency, and consumes less write capacity than 2017.11.29 (Legacy).
To determine which version you're using, see Determining the global table version you are using. To update existing global tables from version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Upgrading global tables.
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 2810

def describe_global_table(params = {}, options = {})
  req = build_request(:describe_global_table, params)
  req.send_request(options)
end
```
#describe_global_table_settings(params = {}) ⇒ Types::DescribeGlobalTableSettingsOutput
Describes Region-specific settings for a global table.
This documentation is for version 2017.11.29 (Legacy) of global tables, which should be avoided for new global tables. Customers should use Global Tables version 2019.11.21 (Current) when possible, because it provides greater flexibility, higher efficiency, and consumes less write capacity than 2017.11.29 (Legacy).
To determine which version you're using, see Determining the global table version you are using. To update existing global tables from version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Upgrading global tables.
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 2910

def describe_global_table_settings(params = {}, options = {})
  req = build_request(:describe_global_table_settings, params)
  req.send_request(options)
end
```
#describe_import(params = {}) ⇒ Types::DescribeImportOutput
Represents the properties of the import.
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 2989

def describe_import(params = {}, options = {})
  req = build_request(:describe_import, params)
  req.send_request(options)
end
```
#describe_kinesis_streaming_destination(params = {}) ⇒ Types::DescribeKinesisStreamingDestinationOutput
Returns information about the status of Kinesis streaming.
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 3024

def describe_kinesis_streaming_destination(params = {}, options = {})
  req = build_request(:describe_kinesis_streaming_destination, params)
  req.send_request(options)
end
```
#describe_limits(params = {}) ⇒ Types::DescribeLimitsOutput
Returns the current provisioned-capacity quotas for your Amazon Web Services account in a Region, both for the Region as a whole and for any one DynamoDB table that you create there.
When you establish an Amazon Web Services account, the account has initial quotas on the maximum read capacity units and write capacity units that you can provision across all of your DynamoDB tables in a given Region. Also, there are per-table quotas that apply when you create a table there. For more information, see the Service, Account, and Table Quotas page in the Amazon DynamoDB Developer Guide.
Although you can increase these quotas by filing a case at Amazon Web Services Support Center, obtaining the increase is not instantaneous. The DescribeLimits
action lets you write code to compare the capacity you are currently using to those quotas imposed by your account so that you have enough time to apply for an increase before you hit a quota.
For example, you could use one of the Amazon Web Services SDKs to do the following:
- Call DescribeLimits for a particular Region to obtain your current account quotas on provisioned capacity there.
- Create a variable to hold the aggregate read capacity units provisioned for all your tables in that Region, and one to hold the aggregate write capacity units. Zero them both.
- Call ListTables to obtain a list of all your DynamoDB tables.
- For each table name listed by ListTables, do the following:
  - Call DescribeTable with the table name.
  - Use the data returned by DescribeTable to add the read capacity units and write capacity units provisioned for the table itself to your variables.
  - If the table has one or more global secondary indexes (GSIs), loop over these GSIs and add their provisioned capacity values to your variables as well.
- Report the account quotas for that Region returned by DescribeLimits, along with the total current provisioned capacity levels you have calculated (a Ruby sketch of this procedure appears below).
This will let you see whether you are getting close to your account-level quotas.
The per-table quotas apply only when you are creating a new table. They restrict the sum of the provisioned capacity of the new table itself and all its global secondary indexes.
For existing tables and their GSIs, DynamoDB doesn't let you increase provisioned capacity extremely rapidly, but the only quota that applies is that the aggregate provisioned capacity over all your tables and GSIs cannot exceed either of the per-account quotas.
DescribeLimits
should only be called periodically. You can expect throttling errors if you call it more than once in a minute.
The DescribeLimits
Request element has no content.
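A sketch of the bookkeeping described above, comparing aggregate provisioned capacity against the account-level quotas; on-demand tables report zero provisioned units and simply add nothing:

```ruby
limits = client.describe_limits
read_used = write_used = 0

client.list_tables.each_page do |page|
  page.table_names.each do |name|
    table = client.describe_table(table_name: name).table
    pt = table.provisioned_throughput
    next unless pt                       # skip if throughput info is absent
    read_used  += pt.read_capacity_units
    write_used += pt.write_capacity_units
    (table.global_secondary_indexes || []).each do |gsi|
      g = gsi.provisioned_throughput
      read_used  += g.read_capacity_units  if g
      write_used += g.write_capacity_units if g
    end
  end
end

puts "Reads:  #{read_used} of #{limits.account_max_read_capacity_units} RCU"
puts "Writes: #{write_used} of #{limits.account_max_write_capacity_units} WCU"
```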
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 3133

def describe_limits(params = {}, options = {})
  req = build_request(:describe_limits, params)
  req.send_request(options)
end
```
#describe_table(params = {}) ⇒ Types::DescribeTableOutput
Returns information about the table, including the current status of the table, when it was created, the primary key schema, and any indexes on the table.
For global tables, this operation only applies to global tables using Version 2019.11.21 (Current version).
If you issue a DescribeTable request immediately after a CreateTable request, DynamoDB might return a ResourceNotFoundException. This is because DescribeTable uses an eventually consistent query, and the metadata for your table might not be available at that moment. Wait for a few seconds, and then try the DescribeTable request again.
The following waiters are defined for this operation (see #wait_until for detailed usage):
- table_exists
- table_not_exists
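For example, combining the table_exists waiter with a describe call (hypothetical table name; client as constructed earlier):

```ruby
client.wait_until(:table_exists, table_name: "Music")   # retries through ResourceNotFoundException
table = client.describe_table(table_name: "Music").table
puts "#{table.table_name}: #{table.table_status}, #{table.item_count} items"
```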
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 3326

def describe_table(params = {}, options = {})
  req = build_request(:describe_table, params)
  req.send_request(options)
end
```
#describe_table_replica_auto_scaling(params = {}) ⇒ Types::DescribeTableReplicaAutoScalingOutput
Describes auto scaling settings across replicas of the global table at once.
For global tables, this operation only applies to global tables using Version 2019.11.21 (Current version).
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 3406

def describe_table_replica_auto_scaling(params = {}, options = {})
  req = build_request(:describe_table_replica_auto_scaling, params)
  req.send_request(options)
end
```
#describe_time_to_live(params = {}) ⇒ Types::DescribeTimeToLiveOutput
Gives a description of the Time to Live (TTL) status on the specified table.
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 3437

def describe_time_to_live(params = {}, options = {})
  req = build_request(:describe_time_to_live, params)
  req.send_request(options)
end
```
#disable_kinesis_streaming_destination(params = {}) ⇒ Types::KinesisStreamingDestinationOutput
Stops replication from the DynamoDB table to the Kinesis data stream. This is done without deleting either of the resources.
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 3484

def disable_kinesis_streaming_destination(params = {}, options = {})
  req = build_request(:disable_kinesis_streaming_destination, params)
  req.send_request(options)
end
```
#enable_kinesis_streaming_destination(params = {}) ⇒ Types::KinesisStreamingDestinationOutput
Starts table data replication to the specified Kinesis data stream at a timestamp chosen during the enable workflow. If this operation doesn't return results immediately, use DescribeKinesisStreamingDestination to check if streaming to the Kinesis data stream is ACTIVE.
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 3534

def enable_kinesis_streaming_destination(params = {}, options = {})
  req = build_request(:enable_kinesis_streaming_destination, params)
  req.send_request(options)
end
```
#execute_statement(params = {}) ⇒ Types::ExecuteStatementOutput
This operation allows you to perform reads and singleton writes on data stored in DynamoDB, using PartiQL.
For PartiQL reads (SELECT
statement), if the total number of processed items exceeds the maximum dataset size limit of 1 MB, the read stops and results are returned to the user as a LastEvaluatedKey value to continue the read in a subsequent operation. If the filter criteria in the WHERE clause does not match any data, the read will return an empty result set.
A single SELECT
statement response can return up to the maximum number of items (if using the Limit parameter) or a maximum of 1 MB of data (and then apply any filtering to the results using the WHERE
clause). If LastEvaluatedKey
is present in the response, you need to paginate the result set. If NextToken
is present, you need to paginate the result set and include NextToken
.
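A sketch of draining a paginated PartiQL SELECT by passing NextToken back in (hypothetical "Music" table; client as constructed earlier):

```ruby
items = []
next_token = nil
loop do
  resp = client.execute_statement(
    statement: 'SELECT * FROM "Music" WHERE "Artist" = ?',
    parameters: ["Acme Band"],
    next_token: next_token,
  )
  items.concat(resp.items)
  next_token = resp.next_token
  break if next_token.nil?   # no token means the result set is complete
end
puts "Fetched #{items.size} items"
```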
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 3655

def execute_statement(params = {}, options = {})
  req = build_request(:execute_statement, params)
  req.send_request(options)
end
```
#execute_transaction(params = {}) ⇒ Types::ExecuteTransactionOutput
This operation allows you to perform transactional reads or writes on data stored in DynamoDB, using PartiQL.
The entire transaction must consist of either read statements or write statements; you cannot mix both in one transaction. The EXISTS function is an exception and can be used to check the condition of specific attributes of the item in a similar manner to ConditionCheck
in the TransactWriteItems API.
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 3740

def execute_transaction(params = {}, options = {})
  req = build_request(:execute_transaction, params)
  req.send_request(options)
end
```
#export_table_to_point_in_time(params = {}) ⇒ Types::ExportTableToPointInTimeOutput
Exports table data to an S3 bucket. The table must have point in time recovery enabled, and you can export data from any time within the point in time recovery window.
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 3871

def export_table_to_point_in_time(params = {}, options = {})
  req = build_request(:export_table_to_point_in_time, params)
  req.send_request(options)
end
```
#get_item(params = {}) ⇒ Types::GetItemOutput
The GetItem
operation returns a set of attributes for the item with the given primary key. If there is no matching item, GetItem
does not return any data and there will be no Item
element in the response.
GetItem
provides an eventually consistent read by default. If your application requires a strongly consistent read, set ConsistentRead
to true
. Although a strongly consistent read might take more time than an eventually consistent read, it always returns the last updated value.
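A minimal sketch; with the SDK's default simple_attributes behavior the item comes back as a plain Ruby hash, and resp.item is nil when no matching item exists (the table and AlbumTitle attribute are hypothetical):

```ruby
resp = client.get_item(
  table_name: "Music",
  key: { "Artist" => "Acme Band", "SongTitle" => "Happy Day" },
  consistent_read: true,   # opt in to a strongly consistent read
)
puts resp.item ? resp.item["AlbumTitle"] : "not found"
```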
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 4066

def get_item(params = {}, options = {})
  req = build_request(:get_item, params)
  req.send_request(options)
end
```
#get_resource_policy(params = {}) ⇒ Types::GetResourcePolicyOutput
Returns the resource-based policy document attached to the resource, which can be a table or stream, in JSON format.
GetResourcePolicy
follows an eventually consistent model. The following list describes the outcomes when you issue the GetResourcePolicy
request immediately after issuing another request:
- If you issue a GetResourcePolicy request immediately after a PutResourcePolicy request, DynamoDB might return a PolicyNotFoundException.
- If you issue a GetResourcePolicy request immediately after a DeleteResourcePolicy request, DynamoDB might return the policy that was present before the deletion request.
- If you issue a GetResourcePolicy request immediately after a CreateTable request, which includes a resource-based policy, DynamoDB might return a ResourceNotFoundException or a PolicyNotFoundException.
Because GetResourcePolicy
uses an eventually consistent query, the metadata for your policy or table might not be available at that moment. Wait for a few seconds, and then retry the GetResourcePolicy
request.
After a GetResourcePolicy
request returns a policy created using the PutResourcePolicy
request, the policy will be applied in the authorization of requests to the resource. Because this process is eventually consistent, it will take some time to apply the policy to all requests to a resource. Policies that you attach while creating a table using the CreateTable
request will always be applied to all requests for that table.
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 4133

def get_resource_policy(params = {}, options = {})
  req = build_request(:get_resource_policy, params)
  req.send_request(options)
end
```
#import_table(params = {}) ⇒ Types::ImportTableOutput
Imports table data from an S3 bucket.
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 4312

def import_table(params = {}, options = {})
  req = build_request(:import_table, params)
  req.send_request(options)
end
```
#list_backups(params = {}) ⇒ Types::ListBackupsOutput
List DynamoDB backups that are associated with an Amazon Web Services account and weren't made with Amazon Web Services Backup. To list these backups for a given table, specify TableName
. ListBackups
returns a paginated list of results with at most 1 MB worth of items in a page. You can also specify a maximum number of entries to be returned in a page.
In the request, start time is inclusive, but end time is exclusive. Note that these boundaries are for the time at which the original backup was requested.
You can call ListBackups
a maximum of five times per second.
If you want to retrieve the complete list of backups made with Amazon Web Services Backup, use the Amazon Web Services Backup list API.
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 4407

def list_backups(params = {}, options = {})
  req = build_request(:list_backups, params)
  req.send_request(options)
end
```
#list_contributor_insights(params = {}) ⇒ Types::ListContributorInsightsOutput
Returns a list of ContributorInsightsSummary for a table and all its global secondary indexes.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 4452

def list_contributor_insights(params = {}, options = {})
  req = build_request(:list_contributor_insights, params)
  req.send_request(options)
end
```
#list_exports(params = {}) ⇒ Types::ListExportsOutput
Lists completed exports within the past 90 days.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 4497

def list_exports(params = {}, options = {})
  req = build_request(:list_exports, params)
  req.send_request(options)
end
```
#list_global_tables(params = {}) ⇒ Types::ListGlobalTablesOutput
Lists all global tables that have a replica in the specified Region.
This documentation is for version 2017.11.29 (Legacy) of global tables, which should be avoided for new global tables. Customers should use Global Tables version 2019.11.21 (Current) when possible, because it provides greater flexibility, higher efficiency, and consumes less write capacity than 2017.11.29 (Legacy).
To determine which version you're using, see Determining the global table version you are using. To update existing global tables from version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Upgrading global tables.
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 4562

def list_global_tables(params = {}, options = {})
  req = build_request(:list_global_tables, params)
  req.send_request(options)
end
```
#list_imports(params = {}) ⇒ Types::ListImportsOutput
Lists completed imports within the past 90 days.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 4615

def list_imports(params = {}, options = {})
  req = build_request(:list_imports, params)
  req.send_request(options)
end
```
#list_tables(params = {}) ⇒ Types::ListTablesOutput
Returns an array of table names associated with the current account and endpoint. The output from ListTables
is paginated, with each page returning a maximum of 100 table names.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
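A sketch of enumerating every table name across pages via the pageable response (client as constructed earlier):

```ruby
client.list_tables(limit: 100).each_page do |page|
  page.table_names.each { |name| puts name }
end
```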
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 4675

def list_tables(params = {}, options = {})
  req = build_request(:list_tables, params)
  req.send_request(options)
end
```
#list_tags_of_resource(params = {}) ⇒ Types::ListTagsOfResourceOutput
List all tags on an Amazon DynamoDB resource. You can call ListTagsOfResource up to 10 times per second, per account.
For an overview on tagging DynamoDB resources, see Tagging for DynamoDB in the Amazon DynamoDB Developer Guide.
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 4722

def list_tags_of_resource(params = {}, options = {})
  req = build_request(:list_tags_of_resource, params)
  req.send_request(options)
end
```
#put_item(params = {}) ⇒ Types::PutItemOutput
Creates a new item, or replaces an old item with a new item. If an item that has the same primary key as the new item already exists in the specified table, the new item completely replaces the existing item. You can perform a conditional put operation (add a new item if one with the specified primary key doesn't exist), or replace an existing item if it has certain attribute values. You can return the item's attribute values in the same operation, using the ReturnValues
parameter.
When you add an item, the primary key attributes are the only required attributes.
Empty String and Binary attribute values are allowed. Attribute values of type String and Binary must have a length greater than zero if the attribute is used as a key attribute for a table or index. Set type attributes cannot be empty.
Invalid requests with empty values will be rejected with a ValidationException.
To prevent a new item from replacing an existing item, use a conditional expression that contains the attribute_not_exists
function with the name of the attribute being used as the partition key for the table. Since every record must contain that attribute, the attribute_not_exists
function will only succeed if no matching item exists.
For more information about PutItem
, see Working with Items in the Amazon DynamoDB Developer Guide.
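A sketch of the conditional put described above; the write fails with ConditionalCheckFailedException if an item with this key already exists (table and attributes hypothetical; client as constructed earlier):

```ruby
begin
  client.put_item(
    table_name: "Music",
    item: {
      "Artist"     => "Acme Band",          # partition key
      "SongTitle"  => "Happy Day",          # sort key
      "AlbumTitle" => "Songs About Life",
    },
    condition_expression: "attribute_not_exists(Artist)",
  )
rescue Aws::DynamoDB::Errors::ConditionalCheckFailedException
  puts "Item already exists; not overwritten."
end
```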
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 5051

def put_item(params = {}, options = {})
  req = build_request(:put_item, params)
  req.send_request(options)
end
```
#put_resource_policy(params = {}) ⇒ Types::PutResourcePolicyOutput
Attaches a resource-based policy document to the resource, which can be a table or stream. When you attach a resource-based policy using this API, the policy application is eventually consistent.
PutResourcePolicy
is an idempotent operation; running it multiple times on the same resource using the same policy document will return the same revision ID. If you specify an ExpectedRevisionId
that doesn't match the current policy's RevisionId
, the PolicyNotFoundException
will be returned.
PutResourcePolicy
is an asynchronous operation. If you issue a GetResourcePolicy
request immediately after a PutResourcePolicy
request, DynamoDB might return your previous policy, if there was one, or return the PolicyNotFoundException
. This is because GetResourcePolicy
uses an eventually consistent query, and the metadata for your policy or table might not be available at that moment. Wait for a few seconds, and then try the GetResourcePolicy
request again.
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 5152

def put_resource_policy(params = {}, options = {})
  req = build_request(:put_resource_policy, params)
  req.send_request(options)
end
```
#query(params = {}) ⇒ Types::QueryOutput
You must provide the name of the partition key attribute and a single value for that attribute. Query
returns all items with that partition key value. Optionally, you can provide a sort key attribute and use a comparison operator to refine the search results.
Use the KeyConditionExpression
parameter to provide a specific value for the partition key. The Query
operation will return all of the items from the table or index with that partition key value. You can optionally narrow the scope of the Query
operation by specifying a sort key value and a comparison operator in KeyConditionExpression
. To further refine the Query
results, you can optionally provide a FilterExpression
. A FilterExpression
determines which items within the results should be returned to you. All of the other results are discarded.
A Query
operation always returns a result set. If no matching items are found, the result set will be empty. Queries that do not return results consume the minimum number of read capacity units for that type of read operation.
DynamoDB calculates the number of read capacity units consumed based on item size, not on the amount of data that is returned to an application. The number of capacity units consumed will be the same whether you request all of the attributes (the default behavior) or just some of them (using a projection expression). The number will also be the same whether or not you use a FilterExpression
.
Query
results are always sorted by the sort key value. If the data type of the sort key is Number, the results are returned in numeric order; otherwise, the results are returned in order of UTF-8 bytes. By default, the sort order is ascending. To reverse the order, set the ScanIndexForward
parameter to false.
A single Query
operation will read up to the maximum number of items set (if using the Limit
parameter) or a maximum of 1 MB of data and then apply any filtering to the results using FilterExpression
. If LastEvaluatedKey
is present in the response, you will need to paginate the result set. For more information, see Paginating the Results in the Amazon DynamoDB Developer Guide.
FilterExpression
is applied after a Query
finishes, but before the results are returned. A FilterExpression
cannot contain partition key or sort key attributes. You need to specify those attributes in the KeyConditionExpression
.
A Query
operation can return an empty result set and a LastEvaluatedKey
if all the items read for the page of results are filtered out.
You can query a table, a local secondary index, or a global secondary index. For a query on a table or on a local secondary index, you can set the ConsistentRead
parameter to true
and obtain a strongly consistent result. Global secondary indexes support eventually consistent reads only, so do not specify ConsistentRead
when querying a global secondary index.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
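A sketch tying these pieces together: a key condition on the partition and sort keys, a post-read filter, descending order, and page-by-page enumeration (table and attributes hypothetical; client as constructed earlier):

```ruby
resp = client.query(
  table_name: "Music",
  key_condition_expression: "Artist = :artist AND begins_with(SongTitle, :prefix)",
  filter_expression: "Rating >= :min",      # applied after the key-based read
  expression_attribute_values: {
    ":artist" => "Acme Band",
    ":prefix" => "Happy",
    ":min"    => 4,
  },
  scan_index_forward: false,                # descending sort-key order
)
resp.each_page do |page|
  page.items.each { |item| puts item["SongTitle"] }
end
```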
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 5690

def query(params = {}, options = {})
  req = build_request(:query, params)
  req.send_request(options)
end
```
#restore_table_from_backup(params = {}) ⇒ Types::RestoreTableFromBackupOutput
Creates a new table from an existing backup. Any number of users can execute up to 50 concurrent restores (any type of restore) in a given account.
You can call RestoreTableFromBackup
at a maximum rate of 10 times per second.
You must manually set up the following on the restored table:
- Auto scaling policies
- IAM policies
- Amazon CloudWatch metrics and alarms
- Tags
- Stream settings
- Time to Live (TTL) settings
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 5920

def restore_table_from_backup(params = {}, options = {})
  req = build_request(:restore_table_from_backup, params)
  req.send_request(options)
end
```
#restore_table_to_point_in_time(params = {}) ⇒ Types::RestoreTableToPointInTimeOutput
Restores the specified table to the specified point in time within EarliestRestorableDateTime
and LatestRestorableDateTime
. You can restore your table to any point in time in the last 35 days. You can set the recovery period to any value between 1 and 35 days. Any number of users can execute up to 50 concurrent restores (any type of restore) in a given account.
When you restore using point in time recovery, DynamoDB restores your table data to the state based on the selected date and time (day:hour:minute:second) to a new table.
Along with data, the following are also included on the new restored table using point in time recovery:
- Global secondary indexes (GSIs)
- Local secondary indexes (LSIs)
- Provisioned read and write capacity
- Encryption settings
All these settings come from the current settings of the source table at the time of restore.
You must manually set up the following on the restored table:
- Auto scaling policies
- IAM policies
- Amazon CloudWatch metrics and alarms
- Tags
- Stream settings
- Time to Live (TTL) settings
- Point in time recovery settings
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 6185

def restore_table_to_point_in_time(params = {}, options = {})
  req = build_request(:restore_table_to_point_in_time, params)
  req.send_request(options)
end
```
#scan(params = {}) ⇒ Types::ScanOutput
The Scan
operation returns one or more items and item attributes by accessing every item in a table or a secondary index. To have DynamoDB return fewer items, you can provide a FilterExpression
operation.
If the total size of scanned items exceeds the maximum dataset size limit of 1 MB, the scan completes and results are returned to the user. The LastEvaluatedKey
value is also returned and the requestor can use the LastEvaluatedKey
to continue the scan in a subsequent operation. Each scan response also includes the number of items that were scanned (ScannedCount) as part of the request. If using a FilterExpression, a scan result can result in no items meeting the criteria and the Count will result in zero. If you did not use a FilterExpression in the scan request, then Count is the same as ScannedCount.
Count
and ScannedCount
only return the count of items specific to a single scan request and, unless the table is less than 1 MB, do not represent the total number of items in the table.
A single Scan
operation first reads up to the maximum number of items set (if using the Limit
parameter) or a maximum of 1 MB of data and then applies any filtering to the results if a FilterExpression
is provided. If LastEvaluatedKey
is present in the response, pagination is required to complete the full table scan. For more information, see Paginating the Results in the Amazon DynamoDB Developer Guide.
Scan
operations proceed sequentially; however, for faster performance on a large table or secondary index, applications can request a parallel Scan
operation by providing the Segment
and TotalSegments
parameters. For more information, see Parallel Scan in the Amazon DynamoDB Developer Guide.
By default, a Scan
uses eventually consistent reads when accessing the items in a table. Therefore, the results from an eventually consistent Scan
may not include the latest item changes at the time the scan iterates through each item in the table. If you require a strongly consistent read of each item as the scan iterates through the items in the table, you can set the ConsistentRead
parameter to true. Strong consistency only relates to the consistency of the read at the item level.
DynamoDB does not provide snapshot isolation for a scan operation when the ConsistentRead
parameter is set to true. Thus, a DynamoDB scan operation does not guarantee that all reads in a scan see a consistent snapshot of the table when the scan operation was requested.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
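A sketch of a parallel scan using Segment and TotalSegments, one thread per segment; the segment count and table name are hypothetical, client is as constructed earlier, and the SDK client is safe to share across threads:

```ruby
total_segments = 4
threads = (0...total_segments).map do |segment|
  Thread.new do
    client.scan(
      table_name: "Music",
      segment: segment,
      total_segments: total_segments,
    ).flat_map(&:items)   # the pageable response yields every page of this segment
  end
end
items = threads.flat_map(&:value)
puts "Scanned #{items.size} items"
```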
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 6653

def scan(params = {}, options = {})
  req = build_request(:scan, params)
  req.send_request(options)
end
```
#stub_data(operation_name, data = {}) ⇒ Object
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/customizations/client.rb', line 7

def stub_data(operation_name, data = {})
  if config.simple_attributes
    rules = config.api.operation(operation_name).output
    translator = Plugins::SimpleAttributes::ValueTranslator
    data = translator.apply(rules, :marshal, data)
    data = super(operation_name, data)
    translator.apply(rules, :unmarshal, data)
  else
    super
  end
end
```
#tag_resource(params = {}) ⇒ Struct
Associate a set of tags with an Amazon DynamoDB resource. You can then activate these user-defined tags so that they appear on the Billing and Cost Management console for cost allocation tracking. You can call TagResource up to five times per second, per account.
TagResource
is an asynchronous operation. If you issue a ListTagsOfResource request immediately after a TagResource request, DynamoDB might return your previous tag set, if there was one, or an empty tag set. This is because ListTagsOfResource uses an eventually consistent query, and the metadata for your tags or table might not be available at that moment. Wait for a few seconds, and then try the ListTagsOfResource request again.
- The application or removal of tags using the TagResource and UntagResource APIs is eventually consistent. The ListTagsOfResource API will only reflect the changes after a few seconds.
For an overview on tagging DynamoDB resources, see Tagging for DynamoDB in the Amazon DynamoDB Developer Guide.
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 6707

def tag_resource(params = {}, options = {})
  req = build_request(:tag_resource, params)
  req.send_request(options)
end
```
#transact_get_items(params = {}) ⇒ Types::TransactGetItemsOutput
TransactGetItems
is a synchronous operation that atomically retrieves multiple items from one or more tables (but not from indexes) in a single account and Region. A TransactGetItems
call can contain up to 100 TransactGetItem
objects, each of which contains a Get
structure that specifies an item to retrieve from a table in the account and Region. A call to TransactGetItems
cannot retrieve items from tables in more than one Amazon Web Services account or Region. The aggregate size of the items in the transaction cannot exceed 4 MB.
DynamoDB rejects the entire TransactGetItems
request if any of the following is true:
- A conflicting operation is in the process of updating an item to be read.
- There is insufficient provisioned capacity for the transaction to be completed.
- There is a user error, such as an invalid data format.
- The aggregate size of the items in the transaction exceeded 4 MB.
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 6794

def transact_get_items(params = {}, options = {})
  req = build_request(:transact_get_items, params)
  req.send_request(options)
end
```
#transact_write_items(params = {}) ⇒ Types::TransactWriteItemsOutput
TransactWriteItems
is a synchronous write operation that groups up to 100 action requests. These actions can target items in different tables, but not in different Amazon Web Services accounts or Regions, and no two actions can target the same item. For example, you cannot both ConditionCheck
and Update
the same item. The aggregate size of the items in the transaction cannot exceed 4 MB.
The actions are completed atomically so that either all of them succeed, or all of them fail. They are defined by the following objects:
- Put — Initiates a PutItem operation to write a new item. This structure specifies the primary key of the item to be written, the name of the table to write it in, an optional condition expression that must be satisfied for the write to succeed, a list of the item's attributes, and a field indicating whether to retrieve the item's attributes if the condition is not met.
- Update — Initiates an UpdateItem operation to update an existing item. This structure specifies the primary key of the item to be updated, the name of the table where it resides, an optional condition expression that must be satisfied for the update to succeed, an expression that defines one or more attributes to be updated, and a field indicating whether to retrieve the item's attributes if the condition is not met.
- Delete — Initiates a DeleteItem operation to delete an existing item. This structure specifies the primary key of the item to be deleted, the name of the table where it resides, an optional condition expression that must be satisfied for the deletion to succeed, and a field indicating whether to retrieve the item's attributes if the condition is not met.
- ConditionCheck — Applies a condition to an item that is not being modified by the transaction. This structure specifies the primary key of the item to be checked, the name of the table where it resides, a condition expression that must be satisfied for the transaction to succeed, and a field indicating whether to retrieve the item's attributes if the condition is not met.
DynamoDB rejects the entire TransactWriteItems
request if any of the following is true:
- A condition in one of the condition expressions is not met.
- An ongoing operation is in the process of updating the same item.
- There is insufficient provisioned capacity for the transaction to be completed.
- An item size becomes too large (bigger than 400 KB), a local secondary index (LSI) becomes too large, or a similar validation error occurs because of changes made by the transaction.
- The aggregate size of the items in the transaction exceeds 4 MB.
- There is a user error, such as an invalid data format.
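A sketch of an atomic pair of actions, a conditional Put plus an Update on a second table; the table names and the client_request_token (which makes retries of the whole transaction idempotent) are hypothetical, and client is as constructed earlier:

```ruby
begin
  client.transact_write_items(
    client_request_token: "release-new-single-001",   # hypothetical idempotency token
    transact_items: [
      {
        put: {
          table_name: "Music",
          item: { "Artist" => "Acme Band", "SongTitle" => "New Single" },
          condition_expression: "attribute_not_exists(Artist)",
        },
      },
      {
        update: {
          table_name: "Artists",
          key: { "Artist" => "Acme Band" },
          update_expression: "SET SongCount = SongCount + :one",
          expression_attribute_values: { ":one" => 1 },
        },
      },
    ]
  )
rescue Aws::DynamoDB::Errors::TransactionCanceledException => e
  puts "Transaction cancelled: #{e.message}"   # e.g. a condition failed or capacity was insufficient
end
```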
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 7016

def transact_write_items(params = {}, options = {})
  req = build_request(:transact_write_items, params)
  req.send_request(options)
end
```
#untag_resource(params = {}) ⇒ Struct
Removes the association of tags from an Amazon DynamoDB resource. You can call UntagResource
up to five times per second, per account.
UntagResource
is an asynchronous operation. If you issue a ListTagsOfResource request immediately after an UntagResource request, DynamoDB might return your previous tag set, if there was one, or an empty tag set. This is because ListTagsOfResource uses an eventually consistent query, and the metadata for your tags or table might not be available at that moment. Wait for a few seconds, and then try the ListTagsOfResource request again.
- The application or removal of tags using the TagResource and UntagResource APIs is eventually consistent. The ListTagsOfResource API will only reflect the changes after a few seconds.
For an overview on tagging DynamoDB resources, see Tagging for DynamoDB in the Amazon DynamoDB Developer Guide.
```ruby
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 7064

def untag_resource(params = {}, options = {})
  req = build_request(:untag_resource, params)
  req.send_request(options)
end
```
#update_continuous_backups(params = {}) ⇒ Types::UpdateContinuousBackupsOutput
UpdateContinuousBackups enables or disables point in time recovery for the specified table. A successful UpdateContinuousBackups call returns the current ContinuousBackupsDescription. Continuous backups are ENABLED on all tables at table creation. If point in time recovery is enabled, PointInTimeRecoveryStatus will be set to ENABLED.
Once continuous backups and point in time recovery are enabled, you can restore to any point in time within EarliestRestorableDateTime and LatestRestorableDateTime.
LatestRestorableDateTime is typically 5 minutes before the current time. You can restore your table to any point in time in the last 35 days. You can set the RecoveryPeriodInDays to any value between 1 and 35 days.
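A short sketch that enables point in time recovery with a custom recovery window, assuming a hypothetical table named "Orders".
require "aws-sdk-dynamodb"

client = Aws::DynamoDB::Client.new(region: "us-east-1")

resp = client.update_continuous_backups(
  table_name: "Orders", # hypothetical table
  point_in_time_recovery_specification: {
    point_in_time_recovery_enabled: true,
    recovery_period_in_days: 14, # any value from 1 to 35
  },
)

# Inspect the returned ContinuousBackupsDescription.
resp.continuous_backups_description
    .point_in_time_recovery_description
    .point_in_time_recovery_status # => "ENABLED"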
7118 7119 7120 7121 | # File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 7118 def update_continuous_backups(params = {}, options = {}) req = build_request(:update_continuous_backups, params) req.send_request(options) end |
---|
#update_contributor_insights(params = {}) ⇒ Types::UpdateContributorInsightsOutput
Updates the status for contributor insights for a specific table or index. CloudWatch Contributor Insights for DynamoDB graphs display the partition key and (if applicable) sort key of frequently accessed items and frequently throttled items in plaintext. If you require the use of Amazon Web Services Key Management Service (KMS) to encrypt this table’s partition key and sort key data with an Amazon Web Services managed key or customer managed key, you should not enable CloudWatch Contributor Insights for DynamoDB for this table.
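A minimal sketch that enables Contributor Insights for a table; the table name is hypothetical, and passing index_name instead targets a specific global secondary index.
require "aws-sdk-dynamodb"

client = Aws::DynamoDB::Client.new(region: "us-east-1")

resp = client.update_contributor_insights(
  table_name: "Orders",                  # hypothetical table
  contributor_insights_action: "ENABLE", # or "DISABLE"
)
resp.contributor_insights_status # => "ENABLING"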
7166 7167 7168 7169 | # File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 7166 def update_contributor_insights(params = {}, options = {}) req = build_request(:update_contributor_insights, params) req.send_request(options) end |
---|
#update_global_table(params = {}) ⇒ Types::UpdateGlobalTableOutput
Adds or removes replicas in the specified global table. The global table must already exist to be able to use this operation. Any replica to be added must be empty, have the same name as the global table, have the same key schema, have DynamoDB Streams enabled, and have the same provisioned and maximum write capacity units.
This documentation is for version 2017.11.29 (Legacy) of global tables, which should be avoided for new global tables. Customers should use Global Tables version 2019.11.21 (Current) when possible, because it provides greater flexibility, higher efficiency, and consumes less write capacity than 2017.11.29 (Legacy).
To determine which version you're using, see Determining the global table version you are using. To update existing global tables from version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Upgrading global tables.
For global tables, this operation only applies to global tables using Version 2019.11.21 (Current version). If you are using global tables Version 2019.11.21 you can use UpdateTable instead.
Although you can use UpdateGlobalTable to add replicas and remove replicas in a single request, for simplicity we recommend that you issue separate requests for adding or removing replicas.
If global secondary indexes are specified, then the following conditions must also be met:
- The global secondary indexes must have the same name.
- The global secondary indexes must have the same hash key and sort key (if present).
- The global secondary indexes must have the same provisioned and maximum write capacity units.
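Following the recommendation above to keep adds and removes in separate requests, here is a minimal sketch that adds a single replica; the global table name and Region are hypothetical.
require "aws-sdk-dynamodb"

client = Aws::DynamoDB::Client.new(region: "us-east-1")

# Adds one replica; a removal would use { delete: { region_name: ... } } instead.
client.update_global_table(
  global_table_name: "Orders", # hypothetical global table
  replica_updates: [
    { create: { region_name: "eu-west-1" } },
  ],
)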
7275 7276 7277 7278 | # File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 7275 def update_global_table(params = {}, options = {}) req = build_request(:update_global_table, params) req.send_request(options) end |
---|
#update_global_table_settings(params = {}) ⇒ Types::UpdateGlobalTableSettingsOutput
Updates settings for a global table.
This documentation is for version 2017.11.29 (Legacy) of global tables, which should be avoided for new global tables. Customers should use Global Tables version 2019.11.21 (Current) when possible, because it provides greater flexibility, higher efficiency, and consumes less write capacity than 2017.11.29 (Legacy).
To determine which version you're using, see Determining the global table version you are using. To update existing global tables from version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Upgrading global tables.
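A minimal sketch that raises the provisioned write capacity across the global table; the table name is hypothetical.
require "aws-sdk-dynamodb"

client = Aws::DynamoDB::Client.new(region: "us-east-1")

client.update_global_table_settings(
  global_table_name: "Orders", # hypothetical global table
  global_table_provisioned_write_capacity_units: 10,
)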
7490 7491 7492 7493 | # File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 7490 def update_global_table_settings(params = {}, options = {}) req = build_request(:update_global_table_settings, params) req.send_request(options) end |
---|
#update_item(params = {}) ⇒ Types::UpdateItemOutput
Edits an existing item's attributes, or adds a new item to the table if it does not already exist. You can put, delete, or add attribute values. You can also perform a conditional update on an existing item (insert a new attribute name-value pair if it doesn't exist, or replace an existing name-value pair if it has certain expected attribute values).
You can also return the item's attribute values in the same UpdateItem operation using the ReturnValues parameter.
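A minimal sketch of a conditional update that also returns the modified item; the table, key, and attribute names are hypothetical.
require "aws-sdk-dynamodb"

client = Aws::DynamoDB::Client.new(region: "us-east-1")

# Moves an order to SHIPPED only if it is still NEW, then returns the item.
resp = client.update_item(
  table_name: "Orders",              # hypothetical table
  key: { "OrderId" => "o-100" },
  update_expression: "SET #s = :shipped",
  condition_expression: "#s = :new", # otherwise raises ConditionalCheckFailedException
  expression_attribute_names: { "#s" => "Status" },
  expression_attribute_values: { ":shipped" => "SHIPPED", ":new" => "NEW" },
  return_values: "ALL_NEW",
)
resp.attributes # => every attribute of the item after the update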
7891 7892 7893 7894 | # File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 7891 def update_item(params = {}, options = {}) req = build_request(:update_item, params) req.send_request(options) end |
---|
#update_kinesis_streaming_destination(params = {}) ⇒ Types::UpdateKinesisStreamingDestinationOutput
Updates the Kinesis streaming destination for the specified table.
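A minimal sketch that changes the timestamp precision of an existing destination; the table name and stream ARN are placeholders.
require "aws-sdk-dynamodb"

client = Aws::DynamoDB::Client.new(region: "us-east-1")

client.update_kinesis_streaming_destination(
  table_name: "Orders", # hypothetical table
  stream_arn: "arn:aws:kinesis:us-east-1:123456789012:stream/orders-stream", # placeholder ARN
  update_kinesis_streaming_configuration: {
    approximate_creation_date_time_precision: "MILLISECOND", # or "MICROSECOND"
  },
)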
7936 7937 7938 7939 | # File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 7936 def update_kinesis_streaming_destination(params = {}, options = {}) req = build_request(:update_kinesis_streaming_destination, params) req.send_request(options) end |
---|
#update_table(params = {}) ⇒ Types::UpdateTableOutput
Modifies the provisioned throughput settings, global secondary indexes, or DynamoDB Streams settings for a given table.
For global tables, this operation only applies to global tables using Version 2019.11.21 (Current version).
You can only perform one of the following operations at once:
- Modify the provisioned throughput settings of the table.
- Remove a global secondary index from the table.
- Create a new global secondary index on the table. After the index begins backfilling, you can use UpdateTable to perform other operations.
UpdateTable is an asynchronous operation; while it's executing, the table status changes from ACTIVE to UPDATING. While it's UPDATING, you can't issue another UpdateTable request. When the table returns to the ACTIVE state, the UpdateTable operation is complete.
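A minimal sketch that performs one of the allowed changes, raising the provisioned throughput, and then waits for the table to leave the UPDATING state; the table name is hypothetical.
require "aws-sdk-dynamodb"

client = Aws::DynamoDB::Client.new(region: "us-east-1")

# One class of change per request: here, provisioned throughput only.
client.update_table(
  table_name: "Orders", # hypothetical table
  provisioned_throughput: {
    read_capacity_units: 10,
    write_capacity_units: 5,
  },
)

# The table status is now UPDATING; the table_exists waiter polls
# #describe_table until the status returns to ACTIVE.
client.wait_until(:table_exists, table_name: "Orders")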
8382 8383 8384 8385 | # File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 8382 def update_table(params = {}, options = {}) req = build_request(:update_table, params) req.send_request(options) end |
---|
#update_table_replica_auto_scaling(params = {}) ⇒ Types::UpdateTableReplicaAutoScalingOutput
Updates auto scaling settings on your global tables at once.
For global tables, this operation only applies to global tables using Version 2019.11.21 (Current version).
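A hedged sketch of one possible call shape, updating target-tracking write auto scaling for the base table; the table name and scaling numbers are hypothetical.
require "aws-sdk-dynamodb"

client = Aws::DynamoDB::Client.new(region: "us-east-1")

client.update_table_replica_auto_scaling(
  table_name: "Orders", # hypothetical table
  provisioned_write_capacity_auto_scaling_update: {
    minimum_units: 5,
    maximum_units: 100,
    scaling_policy_update: {
      target_tracking_scaling_policy_configuration: {
        target_value: 70.0, # aim for 70% write utilization
      },
    },
  },
)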
8548 8549 8550 8551 | # File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 8548 def update_table_replica_auto_scaling(params = {}, options = {}) req = build_request(:update_table_replica_auto_scaling, params) req.send_request(options) end |
---|
#update_time_to_live(params = {}) ⇒ Types::UpdateTimeToLiveOutput
The UpdateTimeToLive method enables or disables Time to Live (TTL) for the specified table. A successful UpdateTimeToLive call returns the current TimeToLiveSpecification. It can take up to one hour for the change to fully process. Any additional UpdateTimeToLive calls for the same table during this one hour duration result in a ValidationException.
TTL compares the current time in epoch time format to the time stored in the TTL attribute of an item. If the epoch time value stored in the attribute is less than the current time, the item is marked as expired and subsequently deleted.
The epoch time format is the number of seconds elapsed since 12:00:00 AM January 1, 1970 UTC.
DynamoDB deletes expired items on a best-effort basis to ensure availability of throughput for other data operations.
DynamoDB typically deletes expired items within two days of expiration. The exact duration within which an item gets deleted after expiration is specific to the nature of the workload. Items that have expired and not been deleted will still show up in reads, queries, and scans.
As items are deleted, they are removed from any local secondary index and global secondary index immediately in the same eventually consistent way as a standard delete operation.
For more information, see Time To Live in the Amazon DynamoDB Developer Guide.
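A minimal sketch that enables TTL and writes an item carrying its expiry as epoch seconds, per the format described above; the table and attribute names are hypothetical.
require "aws-sdk-dynamodb"

client = Aws::DynamoDB::Client.new(region: "us-east-1")

# Enable TTL, naming the attribute that holds each item's expiry time.
client.update_time_to_live(
  table_name: "Sessions", # hypothetical table
  time_to_live_specification: {
    enabled: true,
    attribute_name: "ExpiresAt",
  },
)

# Items store the expiry as epoch seconds; this one expires in 7 days.
client.put_item(
  table_name: "Sessions",
  item: {
    "SessionId" => "s-1",
    "ExpiresAt" => (Time.now + 7 * 24 * 3600).to_i,
  },
)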
8621 8622 8623 8624 | # File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 8621 def update_time_to_live(params = {}, options = {}) req = build_request(:update_time_to_live, params) req.send_request(options) end |
---|
#wait_until(waiter_name, params = {}, options = {}) {|w.waiter| ... } ⇒ Boolean
Polls an API operation until a resource enters a desired state.
Basic Usage
A waiter will call an API operation until:
- It is successful
- It enters a terminal state
- It makes the maximum number of attempts
In between attempts, the waiter will sleep.
# polls in a loop, sleeping between attempts
client.wait_until(waiter_name, params)
Configuration
You can configure the maximum number of polling attempts, and the delay (in seconds) between each polling attempt. You can pass configuration as the final arguments hash.
# poll for ~25 seconds
client.wait_until(waiter_name, params, {
max_attempts: 5,
delay: 5,
})
Callbacks
You can be notified before each polling attempt and before each delay. If you throw :success or :failure from these callbacks, it will terminate the waiter.
started_at = Time.now
client.wait_until(waiter_name, params, {
# disable max attempts
max_attempts: nil,
# poll for 1 hour, instead of a number of attempts
before_wait: -> (attempts, response) do
throw :failure if Time.now - started_at > 3600
end
})
Handling Errors
When a waiter is unsuccessful, it will raise an error. All of the failure errors extend from Waiters::Errors::WaiterFailed.
begin
client.wait_until(...)
rescue Aws::Waiters::Errors::WaiterFailed
# resource did not enter the desired state in time
end
Valid Waiters
The following table lists the valid waiter names, the operations they call, and the default :delay and :max_attempts values.
waiter_name | params | :delay | :max_attempts |
---|---|---|---|
table_exists | #describe_table | 20 | 25 |
table_not_exists | #describe_table | 20 | 25 |
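For example, a sketch that blocks until a newly created table becomes active, with a tighter polling schedule than the defaults shown above; the table name is hypothetical.
begin
  # Polls #describe_table every 5 seconds, up to 60 attempts.
  client.wait_until(:table_exists, { table_name: "Orders" }, {
    delay: 5,
    max_attempts: 60,
  })
rescue Aws::Waiters::Errors::WaiterFailed
  # the table did not become active in time
end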
8737 8738 8739 8740 8741 | # File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 8737 def wait_until(waiter_name, params = {}, options = {}) w = waiter(waiter_name, options) yield(w.waiter) if block_given? w.wait(params) end |
---|