docs(client-dynamodb): Documentation updates for DynamoDB.

awstools committed Mar 3, 2023
1 parent bc4b706 commit 9407fe1

Showing 47 changed files with 3,196 additions and 3,883 deletions.
2 changes: 0 additions & 2 deletions clients/client-dynamodb/README.md
@@ -16,13 +16,11 @@ and predictable performance with seamless scalability. DynamoDB lets you
offload the administrative burdens of operating and scaling a distributed database, so
that you don't have to worry about hardware provisioning, setup and configuration,
replication, software patching, or cluster scaling.</p>
<p>With DynamoDB, you can create database tables that can store and retrieve
any amount of data, and serve any level of request traffic. You can scale up or scale
down your tables' throughput capacity without downtime or performance degradation, and
use the Amazon Web Services Management Console to monitor resource utilization and performance
metrics.</p>
<p>DynamoDB automatically spreads the data and traffic for your tables over
a sufficient number of servers to handle your throughput and storage requirements, while
maintaining consistent and fast performance. All of your data is stored on solid state
564 changes: 314 additions & 250 deletions clients/client-dynamodb/src/DynamoDB.ts

Large diffs are not rendered by default.

9 changes: 3 additions & 6 deletions clients/client-dynamodb/src/DynamoDBClient.ts
@@ -440,20 +440,17 @@ export interface DynamoDBClientResolvedConfig extends DynamoDBClientResolvedConf

/**
* <fullname>Amazon DynamoDB</fullname>
* <p>Amazon DynamoDB is a fully managed NoSQL database service that provides fast
* and predictable performance with seamless scalability. DynamoDB lets you
* offload the administrative burdens of operating and scaling a distributed database, so
* that you don't have to worry about hardware provisioning, setup and configuration,
* replication, software patching, or cluster scaling.</p>
* <p>With DynamoDB, you can create database tables that can store and retrieve
* any amount of data, and serve any level of request traffic. You can scale up or scale
* down your tables' throughput capacity without downtime or performance degradation, and
* use the Amazon Web Services Management Console to monitor resource utilization and performance
* metrics.</p>
* <p>DynamoDB automatically spreads the data and traffic for your tables over
* a sufficient number of servers to handle your throughput and storage requirements, while
* maintaining consistent and fast performance. All of your data is stored on solid state
* disks (SSDs) and automatically replicated across multiple Availability Zones in an
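A minimal sketch of instantiating the client documented above, assuming the v3 `@aws-sdk/client-dynamodb` package in an ESM context (top-level await); the region and the `ListTablesCommand` call are illustrative, not part of this commit:

```ts
import { DynamoDBClient, ListTablesCommand } from "@aws-sdk/client-dynamodb";

// Region is illustrative; credentials resolve from the default provider chain.
const client = new DynamoDBClient({ region: "us-west-2" });

// A bare-bones call to verify the client is wired up.
const { TableNames } = await client.send(new ListTablesCommand({}));
console.log(TableNames);
```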
clients/client-dynamodb/src/commands/BatchExecuteStatementCommand.ts
@@ -39,15 +39,15 @@ export interface BatchExecuteStatementCommandOutput extends BatchExecuteStatemen
* using PartiQL. Each read statement in a <code>BatchExecuteStatement</code> must specify
* an equality condition on all key attributes. This enforces that each <code>SELECT</code>
* statement in a batch returns at most a single item.</p>
* <note>
* <p>The entire batch must consist of either read statements or write statements; you
* cannot mix both in one batch.</p>
* </note>
* <important>
* <p>An HTTP 200 response does not mean that all statements in the BatchExecuteStatement
* succeeded. Error details for individual statements can be found under the <a href="https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchStatementResponse.html#DDB-Type-BatchStatementResponse-Error">Error</a> field of the <code>BatchStatementResponse</code> for each
* statement.</p>
* </important>
* @example
* Use a bare-bones client and the command you need to make an API call.
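A hedged sketch of the per-statement error handling described above, assuming the v3 `@aws-sdk/client-dynamodb` package in an ESM context; the `Music` table, its `Artist`/`SongTitle` key attributes, the values, and the region are hypothetical:

```ts
import { BatchExecuteStatementCommand, DynamoDBClient } from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({ region: "us-west-2" }); // region is illustrative

// Each SELECT pins down the full primary key (Artist + SongTitle here),
// so every statement returns at most one item.
const out = await client.send(
  new BatchExecuteStatementCommand({
    Statements: [
      {
        Statement: 'SELECT * FROM "Music" WHERE Artist = ? AND SongTitle = ?',
        Parameters: [{ S: "No One You Know" }, { S: "Call Me Today" }],
      },
    ],
  })
);

// An HTTP 200 can still carry per-statement failures; inspect each response.
for (const response of out.Responses ?? []) {
  if (response.Error) console.error(response.Error.Code, response.Error.Message);
  else console.log(response.Item);
}
```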
22 changes: 11 additions & 11 deletions clients/client-dynamodb/src/commands/BatchGetItemCommand.ts
@@ -37,30 +37,30 @@ export interface BatchGetItemCommandOutput extends BatchGetItemOutput, __Metadat
/**
* <p>The <code>BatchGetItem</code> operation returns the attributes of one or more items
* from one or more tables. You identify requested items by primary key.</p>
* <p>A single operation can retrieve up to 16 MB of data, which can contain as many as 100
* items. <code>BatchGetItem</code> returns a partial result if the response size limit is
* exceeded, the table's provisioned throughput is exceeded, or an internal processing
* failure occurs. If a partial result is returned, the operation returns a value for
* <code>UnprocessedKeys</code>. You can use this value to retry the operation starting
* with the next item to get.</p>
* <important>
* <p>If you request more than 100 items, <code>BatchGetItem</code> returns a
* <code>ValidationException</code> with the message "Too many items requested for
* the BatchGetItem call."</p>
* </important>
* <p>For example, if you ask to retrieve 100 items, but each individual item is 300 KB in
* size, the system returns 52 items (so as not to exceed the 16 MB limit). It also returns
* an appropriate <code>UnprocessedKeys</code> value so you can get the next page of
* results. If desired, your application can include its own logic to assemble the pages of
* results into one dataset.</p>
* <p>If <i>none</i> of the items can be processed due to insufficient
* provisioned throughput on all of the tables in the request, then
* <code>BatchGetItem</code> returns a
* <code>ProvisionedThroughputExceededException</code>. If <i>at least
* one</i> of the items is successfully processed, then
* <code>BatchGetItem</code> completes successfully, while returning the keys of the
* unread items in <code>UnprocessedKeys</code>.</p>
* <important>
* <p>If DynamoDB returns any unprocessed items, you should retry the batch operation on
* those items. However, <i>we strongly recommend that you use an exponential
* backoff algorithm</i>. If you retry the batch operation immediately, the
@@ -69,16 +69,16 @@ export interface BatchGetItemCommandOutput extends BatchGetItemOutput, __Metadat
* requests in the batch are much more likely to succeed.</p>
* <p>For more information, see <a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ErrorHandling.html#BatchOperations">Batch Operations and Error Handling</a> in the <i>Amazon DynamoDB
* Developer Guide</i>.</p>
* </important>
* <p>By default, <code>BatchGetItem</code> performs eventually consistent reads on every
* table in the request. If you want strongly consistent reads instead, you can set
* <code>ConsistentRead</code> to <code>true</code> for any or all tables.</p>
* <p>In order to minimize response latency, <code>BatchGetItem</code> retrieves items in
* parallel.</p>
* <p>When designing your application, keep in mind that DynamoDB does not return items in
* any particular order. To help parse the response by item, include the primary key values
* for the items in your request in the <code>ProjectionExpression</code> parameter.</p>
* <p>If a requested item does not exist, it is not returned in the result. Requests for
* nonexistent items consume the minimum read capacity units according to the type of read.
* For more information, see <a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithTables.html#CapacityUnitCalculations">Working with Tables</a> in the <i>Amazon DynamoDB Developer
* Guide</i>.</p>
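The exponential-backoff guidance above translates into a small retry loop. A hedged sketch, assuming the v3 `@aws-sdk/client-dynamodb` package in an ESM context; the `Music` table, its `Artist`/`SongTitle` keys, the region, and the backoff constants are hypothetical:

```ts
import { BatchGetItemCommand, DynamoDBClient, KeysAndAttributes } from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({ region: "us-west-2" });
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// Retries UnprocessedKeys with exponential backoff until everything is read.
async function batchGetAll(requestItems: Record<string, KeysAndAttributes>) {
  const items: Record<string, unknown>[] = [];
  let pending: Record<string, KeysAndAttributes> | undefined = requestItems;
  for (let attempt = 0; pending && Object.keys(pending).length > 0; attempt++) {
    const out = await client.send(new BatchGetItemCommand({ RequestItems: pending }));
    for (const tableItems of Object.values(out.Responses ?? {})) items.push(...tableItems);
    pending = out.UnprocessedKeys;
    if (pending && Object.keys(pending).length > 0) {
      await sleep(Math.min(100 * 2 ** attempt, 5_000)); // capped exponential backoff
    }
  }
  return items;
}

// Hypothetical usage: the full primary key (Artist + SongTitle) per requested item.
const items = await batchGetAll({
  Music: { Keys: [{ Artist: { S: "No One You Know" }, SongTitle: { S: "Call Me Today" } }] },
});
console.log(items.length);
```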
39 changes: 19 additions & 20 deletions clients/client-dynamodb/src/commands/BatchWriteItemCommand.ts
@@ -41,14 +41,14 @@ export interface BatchWriteItemCommandOutput extends BatchWriteItemOutput, __Met
* individual items can be up to 400 KB once stored, it's important to note that an item's
* representation might be greater than 400 KB while being sent in DynamoDB's JSON format
* for the API call. For more details on this distinction, see <a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.NamingRulesDataTypes.html">Naming Rules and Data Types</a>.</p>
* <note>
* <p>
* <code>BatchWriteItem</code> cannot update items. If you perform a <code>BatchWriteItem</code>
* operation on an existing item, that item's values will be overwritten by the
* operation and it will appear like it was updated. To update items, we recommend you
* use the <code>UpdateItem</code> action.</p>
* </note>
* <p>The individual <code>PutItem</code> and <code>DeleteItem</code> operations specified
* in <code>BatchWriteItem</code> are atomic; however, <code>BatchWriteItem</code> as a
* whole is not. If any requested operations fail because the table's provisioned
* throughput is exceeded or an internal processing failure occurs, the failed operations
@@ -57,11 +57,11 @@ export interface BatchWriteItemCommandOutput extends BatchWriteItemOutput, __Met
* <code>BatchWriteItem</code> in a loop. Each iteration would check for unprocessed
* items and submit a new <code>BatchWriteItem</code> request with those unprocessed items
* until all items have been processed.</p>
* <p>If <i>none</i> of the items can be processed due to insufficient
* provisioned throughput on all of the tables in the request, then
* <code>BatchWriteItem</code> returns a
* <code>ProvisionedThroughputExceededException</code>.</p>
* <important>
* <p>If DynamoDB returns any unprocessed items, you should retry the batch operation on
* those items. However, <i>we strongly recommend that you use an exponential
* backoff algorithm</i>. If you retry the batch operation immediately, the
@@ -70,52 +70,51 @@ export interface BatchWriteItemCommandOutput extends BatchWriteItemOutput, __Met
* requests in the batch are much more likely to succeed.</p>
* <p>For more information, see <a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ErrorHandling.html#Programming.Errors.BatchOperations">Batch Operations and Error Handling</a> in the <i>Amazon DynamoDB
* Developer Guide</i>.</p>
* </important>
* <p>With <code>BatchWriteItem</code>, you can efficiently write or delete large amounts of
* data, such as from Amazon EMR, or copy data from another database into DynamoDB. In
* order to improve performance with these large-scale operations,
* <code>BatchWriteItem</code> does not behave in the same way as individual
* <code>PutItem</code> and <code>DeleteItem</code> calls would. For example, you
* cannot specify conditions on individual put and delete requests, and
* <code>BatchWriteItem</code> does not return deleted items in the response.</p>
* <p>If you use a programming language that supports concurrency, you can use threads to
* write items in parallel. Your application must include the necessary logic to manage the
* threads. With languages that don't support threading, you must update or delete the
* specified items one at a time. In both situations, <code>BatchWriteItem</code> performs
* the specified put and delete operations in parallel, giving you the power of the thread
* pool approach without having to introduce complexity into your application.</p>
* <p>Parallel processing reduces latency, but each specified put and delete request
* consumes the same number of write capacity units whether it is processed in parallel or
* not. Delete operations on nonexistent items consume one write capacity unit.</p>
* <p>If one or more of the following is true, DynamoDB rejects the entire batch write
* operation:</p>
* <ul>
* <li>
* <p>One or more tables specified in the <code>BatchWriteItem</code> request do
* not exist.</p>
* </li>
* <li>
* <p>Primary key attributes specified on an item in the request do not match those
* in the corresponding table's primary key schema.</p>
* </li>
* <li>
* <p>You try to perform multiple operations on the same item in the same
* <code>BatchWriteItem</code> request. For example, you cannot put and delete
* the same item in the same <code>BatchWriteItem</code> request.</p>
* </li>
* <li>
* <p>Your request contains at least two items with identical hash and range keys
* (which essentially is two put operations).</p>
* </li>
* <li>
* <p>There are more than 25 requests in the batch.</p>
* </li>
* <li>
* <p>Any individual item in a batch exceeds 400 KB.</p>
* </li>
* <li>
* <p>The total request size exceeds 16 MB.</p>
* </li>
* </ul>
* @example
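A hedged sketch of the unprocessed-items loop described above, assuming the v3 `@aws-sdk/client-dynamodb` package in an ESM context; the `Music` table, its key attributes, the region, and the backoff constants are hypothetical:

```ts
import { BatchWriteItemCommand, DynamoDBClient, WriteRequest } from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({ region: "us-west-2" });
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// One put and one delete against a hypothetical "Music" table.
let pending: Record<string, WriteRequest[]> | undefined = {
  Music: [
    { PutRequest: { Item: { Artist: { S: "No One You Know" }, SongTitle: { S: "Call Me Today" } } } },
    { DeleteRequest: { Key: { Artist: { S: "No One You Know" }, SongTitle: { S: "My Dog Spot" } } } },
  ],
};

// Resubmit UnprocessedItems with exponential backoff until the batch drains.
for (let attempt = 0; pending && Object.keys(pending).length > 0; attempt++) {
  const out = await client.send(new BatchWriteItemCommand({ RequestItems: pending }));
  pending = out.UnprocessedItems;
  if (pending && Object.keys(pending).length > 0) {
    await sleep(Math.min(100 * 2 ** attempt, 5_000)); // capped exponential backoff
  }
}
```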
22 changes: 11 additions & 11 deletions clients/client-dynamodb/src/commands/CreateBackupCommand.ts
@@ -36,33 +36,33 @@ export interface CreateBackupCommandOutput extends CreateBackupOutput, __Metadat

/**
* <p>Creates a backup for an existing table.</p>
* <p>Each time you create an on-demand backup, the entire table data is backed up. There
* is no limit to the number of on-demand backups that can be taken.</p>
* <p>When you create an on-demand backup, a time marker of the request is cataloged, and
* the backup is created asynchronously, by applying all changes until the time of the
* request to the last full table snapshot. Backup requests are processed instantaneously
* and become available for restore within minutes.</p>
* <p>You can call <code>CreateBackup</code> at a maximum rate of 50 times per
* second.</p>
* <p>All backups in DynamoDB work without consuming any provisioned throughput on the
* table.</p>
* <p>If you submit a backup request on 2018-12-14 at 14:25:00, the backup is guaranteed to
* contain all data committed to the table up to 14:24:00, and data committed after
* 14:26:00 will not be included. The backup might contain data modifications made between 14:24:00
* and 14:26:00. On-demand backup does not support causal consistency.</p>
* <p>Along with data, the following are also included in the backups:</p>
* <ul>
* <li>
* <p>Global secondary indexes (GSIs)</p>
* </li>
* <li>
* <p>Local secondary indexes (LSIs)</p>
* </li>
* <li>
* <p>Streams</p>
* </li>
* <li>
* <p>Provisioned read and write capacity</p>
* </li>
* </ul>
* @example
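A hedged usage sketch for the command above, assuming the v3 `@aws-sdk/client-dynamodb` package in an ESM context; the table name, backup name, and region are hypothetical:

```ts
import { CreateBackupCommand, DynamoDBClient } from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({ region: "us-west-2" });

// Request an on-demand backup; the backup itself is created asynchronously.
const { BackupDetails } = await client.send(
  new CreateBackupCommand({ TableName: "Music", BackupName: "Music-backup-2023-03-03" })
);
console.log(BackupDetails?.BackupArn, BackupDetails?.BackupStatus);
```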