From 020182a8434fc9c370885696db30b481ee50bfc3 Mon Sep 17 00:00:00 2001
From: Jonathan Eskew
"CreateProject": "Creates a new project.
", + "CreateUpload": "Uploads an app or test scripts.
", + "GetDevice": "Gets information about a unique device type.
", + "GetDevicePool": "Gets information about a device pool.
", + "GetDevicePoolCompatibility": "Gets information about compatibility with a device pool.
", + "GetJob": "Gets information about a job.
", + "GetProject": "Gets information about a project.
", + "GetRun": "Gets information about a run.
", + "GetSuite": "Gets information about a suite.
", + "GetTest": "Gets information about a test.
", + "GetUpload": "Gets information about an upload.
", + "ListArtifacts": "Gets information about artifacts.
", + "ListDevicePools": "Gets information about device pools.
", + "ListDevices": "Gets information about unique device types.
", + "ListJobs": "Gets information about jobs.
", + "ListProjects": "Gets information about projects.
", + "ListRuns": "Gets information about runs.
", + "ListSamples": "Gets information about samples.
", + "ListSuites": "Gets information about suites.
", + "ListTests": "Gets information about tests.
", + "ListUniqueProblems": "Gets information about unique problems.
", + "ListUploads": "Gets information about uploads.
", + "ScheduleRun": "Schedules a run.
" + }, + "service": "AWS Device Farm is a service that enables mobile app developers to test Android and Fire OS apps on physical phones, tablets, and other devices in the cloud.
", + "shapes": { + "AmazonResourceName": { + "base": null, + "refs": { + "AmazonResourceNames$member": null, + "Artifact$arn": "The artifact's ARN.
", + "CreateDevicePoolRequest$projectArn": "The ARN of the project for the device pool.
", + "CreateUploadRequest$projectArn": "The ARN of the project for the upload.
", + "Device$arn": "The device's ARN.
", + "DevicePool$arn": "The device pool's ARN.
", + "GetDevicePoolCompatibilityRequest$devicePoolArn": "The device pool's ARN.
", + "GetDevicePoolCompatibilityRequest$appArn": "The ARN of the app that is associated with the specified device pool.
", + "GetDevicePoolRequest$arn": "The device pool's ARN.
", + "GetDeviceRequest$arn": "The device type's ARN.
", + "GetJobRequest$arn": "The job's ARN.
", + "GetProjectRequest$arn": "The project's ARN.
", + "GetRunRequest$arn": "The run's ARN.
", + "GetSuiteRequest$arn": "The suite's ARN.
", + "GetTestRequest$arn": "The test's ARN.
", + "GetUploadRequest$arn": "The upload's ARN.
", + "Job$arn": "The job's ARN.
", + "ListArtifactsRequest$arn": "The artifacts' ARNs.
", + "ListDevicePoolsRequest$arn": "The project ARN.
", + "ListDevicesRequest$arn": "The device types' ARNs.
", + "ListJobsRequest$arn": "The jobs' ARNs.
", + "ListProjectsRequest$arn": "The projects' ARNs.
", + "ListRunsRequest$arn": "The runs' ARNs.
", + "ListSamplesRequest$arn": "The samples' ARNs.
", + "ListSuitesRequest$arn": "The suites' ARNs.
", + "ListTestsRequest$arn": "The tests' ARNs.
", + "ListUniqueProblemsRequest$arn": "The unique problems' ARNs.
", + "ListUploadsRequest$arn": "The uploads' ARNs.
", + "ProblemDetail$arn": "The problem detail's ARN.
", + "Project$arn": "The project's ARN.
", + "Run$arn": "The run's ARN.
", + "Sample$arn": "The sample's ARN.
", + "ScheduleRunConfiguration$extraDataPackageArn": "The ARN of the extra data for the run. The extra data is a .zip file that AWS Device Farm will extract to external data.
", + "ScheduleRunConfiguration$networkProfileArn": "Reserved for internal use.
", + "ScheduleRunRequest$projectArn": "The ARN of the project for the run to be scheduled.
", + "ScheduleRunRequest$appArn": "The ARN of the app to schedule a run.
", + "ScheduleRunRequest$devicePoolArn": "The ARN of the device pool for the run to be scheduled.
", + "ScheduleRunTest$testPackageArn": "The ARN of the uploaded test that will be run.
", + "Suite$arn": "The suite's ARN.
", + "Test$arn": "The test's ARN.
", + "Upload$arn": "The upload's ARN.
" + } + }, + "AmazonResourceNames": { + "base": null, + "refs": { + "ScheduleRunConfiguration$auxiliaryApps": "A list of auxiliary apps for the run.
" + } + }, + "ArgumentException": { + "base": "An invalid argument was specified.
", + "refs": { + } + }, + "Artifact": { + "base": "Represents the output of a test. Examples of artifacts include logs and screenshots.
", + "refs": { + "Artifacts$member": null + } + }, + "ArtifactCategory": { + "base": null, + "refs": { + "ListArtifactsRequest$type": "The artifacts' type.
Allowed values include:
" + } + }, + "ArtifactType": { + "base": null, + "refs": { + "Artifact$type": "The artifact's type.
Allowed values include the following:
APPIUM_JAVA_OUTPUT: The Appium Java output type.
APPIUM_JAVA_XML_OUTPUT: The Appium Java XML output type.
APPIUM_SERVER_OUTPUT: The Appium server output type.
AUTOMATION_OUTPUT: The automation output type.
CALABASH_JSON_OUTPUT: The Calabash JSON output type.
CALABASH_JAVA_XML_OUTPUT: The Calabash Java XML output type.
CALABASH_PRETTY_OUTPUT: The Calabash pretty output type.
CALABASH_STANDARD_OUTPUT: The Calabash standard output type.
DEVICE_LOG: The device log type.
EXERCISER_MONKEY_OUTPUT: The artifact (log) generated by a fuzz test.
INSTRUMENTATION_OUTPUT: The instrumentation type.
MESSAGE_LOG: The message log type.
RESULT_LOG: The result log type.
SCREENSHOT: The screenshot type.
SERVICE_LOG: The service log type.
UNKNOWN: An unknown type.
" + } + }, + "Artifacts": { + "base": null, + "refs": { + "ListArtifactsResult$artifacts": "Information about the artifacts.
" + } + }, + "Boolean": { + "base": null, + "refs": { + "DevicePoolCompatibilityResult$compatible": "Whether the result was compatible with the device pool.
", + "Radios$wifi": "True if Wi-Fi is enabled at the beginning of the test; otherwise, false.
", + "Radios$bluetooth": "True if Bluetooth is enabled at the beginning of the test; otherwise, false.
", + "Radios$nfc": "True if NFC is enabled at the beginning of the test; otherwise, false.
", + "Radios$gps": "True if GPS is enabled at the beginning of the test; otherwise, false.
" + } + }, + "CPU": { + "base": "Represents the amount of CPU that an app is using on a physical device.
Note that this does not represent system-wide CPU usage.
", + "refs": { + "Device$cpu": "Information about the device's CPU.
" + } + }, + "ContentType": { + "base": null, + "refs": { + "CreateUploadRequest$contentType": "The upload's content type (for example, \"application/octet-stream\").
", + "Upload$contentType": "The upload's content type (for example, \"application/octet-stream\").
" + } + }, + "Counters": { + "base": "Represents entity counters.
", + "refs": { + "Job$counters": "The job's result counters.
", + "Run$counters": "The run's result counters.
", + "Suite$counters": "The suite's result counters.
", + "Test$counters": "The test's result counters.
" + } + }, + "CreateDevicePoolRequest": { + "base": "Represents a request to the create device pool operation.
", + "refs": { + } + }, + "CreateDevicePoolResult": { + "base": "Represents the result of a create device pool request.
", + "refs": { + } + }, + "CreateProjectRequest": { + "base": "Represents a request to the create project operation.
", + "refs": { + } + }, + "CreateProjectResult": { + "base": "Represents the result of a create project request.
", + "refs": { + } + }, + "CreateUploadRequest": { + "base": "Represents a request to the create upload operation.
", + "refs": { + } + }, + "CreateUploadResult": { + "base": "Represents the result of a create upload request.
", + "refs": { + } + }, + "DateTime": { + "base": null, + "refs": { + "Job$created": "When the job was created.
", + "Job$started": "The job's start time.
", + "Job$stopped": "The job's stop time.
", + "Project$created": "When the project was created.
", + "Run$created": "When the run was created.
", + "Run$started": "The run's start time.
", + "Run$stopped": "The run's stop time.
", + "Suite$created": "When the suite was created.
", + "Suite$started": "The suite's start time.
", + "Suite$stopped": "The suite's stop time.
", + "Test$created": "When the test was created.
", + "Test$started": "The test's start time.
", + "Test$stopped": "The test's stop time.
", + "Upload$created": "When the upload was created.
" + } + }, + "Device": { + "base": "Represents a device type that an app is tested against.
", + "refs": { + "DevicePoolCompatibilityResult$device": null, + "Devices$member": null, + "GetDeviceResult$device": null, + "Job$device": null, + "Problem$device": "Information about the associated device.
" + } + }, + "DeviceAttribute": { + "base": null, + "refs": { + "IncompatibilityMessage$type": "The type of incompatibility.
Allowed values include:
ARN: The ARN.
FORM_FACTOR: The form factor (for example, phone or tablet).
MANUFACTURER: The manufacturer.
PLATFORM: The platform.
", + "Rule$attribute": "The rule's attribute.
Allowed values include:
ARN: The ARN.
FORM_FACTOR: The form factor (for example, phone or tablet).
MANUFACTURER: The manufacturer.
PLATFORM: The platform.
" + } + }, + "DeviceFormFactor": { + "base": null, + "refs": { + "Device$formFactor": "The device's form factor.
Allowed values include:
PHONE: The phone form factor.
TABLET: The tablet form factor.
" + } + }, + "DevicePlatform": { + "base": null, + "refs": { + "Device$platform": "The device's platform.
Allowed values include:
ANDROID: The Android platform.
", + "Run$platform": "The run's platform.
Allowed values include:
ANDROID: The Android platform.
" + } + }, + "DevicePool": { + "base": "Represents a collection of device types.
", + "refs": { + "CreateDevicePoolResult$devicePool": "The newly created device pool.
", + "DevicePools$member": null, + "GetDevicePoolResult$devicePool": null + } + }, + "DevicePoolCompatibilityResult": { + "base": "Represents a device pool compatibility result.
", + "refs": { + "DevicePoolCompatibilityResults$member": null + } + }, + "DevicePoolCompatibilityResults": { + "base": null, + "refs": { + "GetDevicePoolCompatibilityResult$compatibleDevices": "Information about compatible devices.
", + "GetDevicePoolCompatibilityResult$incompatibleDevices": "Information about incompatible devices.
" + } + }, + "DevicePoolType": { + "base": null, + "refs": { + "DevicePool$type": "The device pool's type.
Allowed values include:
CURATED: A device pool that is created and managed by AWS Device Farm.
PRIVATE: A device pool that is created and managed by the device pool developer.
", + "ListDevicePoolsRequest$type": "The device pools' type.
Allowed values include:
CURATED: A device pool that is created and managed by AWS Device Farm.
PRIVATE: A device pool that is created and managed by the device pool developer.
" + } + }, + "DevicePools": { + "base": null, + "refs": { + "ListDevicePoolsResult$devicePools": "Information about the device pools.
" + } + }, + "Devices": { + "base": null, + "refs": { + "ListDevicesResult$devices": "Information about the devices.
" + } + }, + "Double": { + "base": null, + "refs": { + "CPU$clock": "The clock speed of the device's CPU, expressed in hertz (Hz). For example, a 1.2 GHz CPU is expressed as 1200000000.
", + "Location$latitude": "The latitude.
", + "Location$longitude": "The longitude.
" + } + }, + "ExecutionResult": { + "base": null, + "refs": { + "Job$result": "The job's result.
Allowed values include:
ERRORED: An error condition.
FAILED: A failed condition.
SKIPPED: A skipped condition.
STOPPED: A stopped condition.
PASSED: A passing condition.
PENDING: A pending condition.
WARNED: A warning condition.
", + "Problem$result": "The problem's result.
Allowed values include:
ERRORED: An error condition.
FAILED: A failed condition.
SKIPPED: A skipped condition.
STOPPED: A stopped condition.
PASSED: A passing condition.
PENDING: A pending condition.
WARNED: A warning condition.
", + "Run$result": "The run's result.
Allowed values include:
ERRORED: An error condition.
FAILED: A failed condition.
SKIPPED: A skipped condition.
STOPPED: A stopped condition.
PASSED: A passing condition.
PENDING: A pending condition.
WARNED: A warning condition.
", + "Suite$result": "The suite's result.
Allowed values include:
ERRORED: An error condition.
FAILED: A failed condition.
SKIPPED: A skipped condition.
STOPPED: A stopped condition.
PASSED: A passing condition.
PENDING: A pending condition.
WARNED: A warning condition.
", + "Test$result": "The test's result.
Allowed values include:
ERRORED: An error condition.
FAILED: A failed condition.
SKIPPED: A skipped condition.
STOPPED: A stopped condition.
PASSED: A passing condition.
PENDING: A pending condition.
WARNED: A warning condition.
" + } + }, + "ExecutionStatus": { + "base": null, + "refs": { + "Job$status": "The job's status.
Allowed values include:
COMPLETED: A completed status.
PENDING: A pending status.
PROCESSING: A processing status.
RUNNING: A running status.
SCHEDULING: A scheduling status.
", + "Run$status": "The run's status.
Allowed values include:
COMPLETED: A completed status.
PENDING: A pending status.
PROCESSING: A processing status.
RUNNING: A running status.
SCHEDULING: A scheduling status.
", + "Suite$status": "The suite's status.
Allowed values include:
COMPLETED: A completed status.
PENDING: A pending status.
PROCESSING: A processing status.
RUNNING: A running status.
SCHEDULING: A scheduling status.
", + "Test$status": "The test's status.
Allowed values include:
COMPLETED: A completed status.
PENDING: A pending status.
PROCESSING: A processing status.
RUNNING: A running status.
SCHEDULING: A scheduling status.
" + } + }, + "Filter": { + "base": null, + "refs": { + "ScheduleRunTest$filter": "The test's filter.
" + } + }, + "GetDevicePoolCompatibilityRequest": { + "base": "Represents a request to the get device pool compatibility operation.
", + "refs": { + } + }, + "GetDevicePoolCompatibilityResult": { + "base": "Represents the result of describe device pool compatibility request.
", + "refs": { + } + }, + "GetDevicePoolRequest": { + "base": "Represents a request to the get device pool operation.
", + "refs": { + } + }, + "GetDevicePoolResult": { + "base": "Represents the result of a get device pool request.
", + "refs": { + } + }, + "GetDeviceRequest": { + "base": "Represents a request to the get device request.
", + "refs": { + } + }, + "GetDeviceResult": { + "base": "Represents the result of a get device request.
", + "refs": { + } + }, + "GetJobRequest": { + "base": "Represents a request to the get job operation.
", + "refs": { + } + }, + "GetJobResult": { + "base": "Represents the result of a get job request.
", + "refs": { + } + }, + "GetProjectRequest": { + "base": "Represents a request to the get project operation.
", + "refs": { + } + }, + "GetProjectResult": { + "base": "Represents the result of a get project request.
", + "refs": { + } + }, + "GetRunRequest": { + "base": "Represents a request to the get run operation.
", + "refs": { + } + }, + "GetRunResult": { + "base": "Represents the result of a get run request.
", + "refs": { + } + }, + "GetSuiteRequest": { + "base": "Represents a request to the get suite operation.
", + "refs": { + } + }, + "GetSuiteResult": { + "base": "Represents the result of a get suite request.
", + "refs": { + } + }, + "GetTestRequest": { + "base": "Represents a request to the get test operation.
", + "refs": { + } + }, + "GetTestResult": { + "base": "Represents the result of a get test request.
", + "refs": { + } + }, + "GetUploadRequest": { + "base": "Represents a request to the get upload operation.
", + "refs": { + } + }, + "GetUploadResult": { + "base": "Represents the result of a get upload request.
", + "refs": { + } + }, + "IdempotencyException": { + "base": "An entity with the same name already exists.
", + "refs": { + } + }, + "IncompatibilityMessage": { + "base": "Represents information about incompatibility.
", + "refs": { + "IncompatibilityMessages$member": null + } + }, + "IncompatibilityMessages": { + "base": null, + "refs": { + "DevicePoolCompatibilityResult$incompatibilityMessages": "Information about the compatibility.
" + } + }, + "Integer": { + "base": null, + "refs": { + "Counters$total": "The total number of entities.
", + "Counters$passed": "The number of passed entities.
", + "Counters$failed": "The number of failed entities.
", + "Counters$warned": "The number of warned entities.
", + "Counters$errored": "The number of errored entities.
", + "Counters$stopped": "The number of stopped entities.
", + "Counters$skipped": "The number of skipped entities.
", + "Resolution$width": "The screen resolution's width, expressed in pixels.
", + "Resolution$height": "The screen resolution's height, expressed in pixels.
", + "Run$totalJobs": "The total number of jobs for the run.
", + "Run$completedJobs": "The total number of completed jobs.
" + } + }, + "Job": { + "base": "Represents a device.
", + "refs": { + "GetJobResult$job": null, + "Jobs$member": null + } + }, + "Jobs": { + "base": null, + "refs": { + "ListJobsResult$jobs": "Information about the jobs.
" + } + }, + "LimitExceededException": { + "base": "A limit was exceeded.
", + "refs": { + } + }, + "ListArtifactsRequest": { + "base": "Represents a request to the list artifacts operation.
", + "refs": { + } + }, + "ListArtifactsResult": { + "base": "Represents the result of a list artifacts operation.
", + "refs": { + } + }, + "ListDevicePoolsRequest": { + "base": "Represents the result of a list device pools request.
", + "refs": { + } + }, + "ListDevicePoolsResult": { + "base": "Represents the result of a list device pools request.
", + "refs": { + } + }, + "ListDevicesRequest": { + "base": "Represents the result of a list devices request.
", + "refs": { + } + }, + "ListDevicesResult": { + "base": "Represents the result of a list devices operation.
", + "refs": { + } + }, + "ListJobsRequest": { + "base": "Represents a request to the list jobs operation.
", + "refs": { + } + }, + "ListJobsResult": { + "base": "Represents the result of a list jobs request.
", + "refs": { + } + }, + "ListProjectsRequest": { + "base": "Represents a request to the list projects operation.
", + "refs": { + } + }, + "ListProjectsResult": { + "base": "Represents the result of a list projects request.
", + "refs": { + } + }, + "ListRunsRequest": { + "base": "Represents a request to the list runs operation.
", + "refs": { + } + }, + "ListRunsResult": { + "base": "Represents the result of a list runs request.
", + "refs": { + } + }, + "ListSamplesRequest": { + "base": "Represents a request to the list samples operation.
", + "refs": { + } + }, + "ListSamplesResult": { + "base": "Represents the result of a list samples request.
", + "refs": { + } + }, + "ListSuitesRequest": { + "base": "Represents a request to the list suites operation.
", + "refs": { + } + }, + "ListSuitesResult": { + "base": "Represents the result of a list suites request.
", + "refs": { + } + }, + "ListTestsRequest": { + "base": "Represents a request to the list tests operation.
", + "refs": { + } + }, + "ListTestsResult": { + "base": "Represents the result of a list tests request.
", + "refs": { + } + }, + "ListUniqueProblemsRequest": { + "base": "Represents a request to the list unique problems operation.
", + "refs": { + } + }, + "ListUniqueProblemsResult": { + "base": "Represents the result of a list unique problems request.
", + "refs": { + } + }, + "ListUploadsRequest": { + "base": "Represents a request to the list uploads operation.
", + "refs": { + } + }, + "ListUploadsResult": { + "base": "Represents the result of a list uploads request.
", + "refs": { + } + }, + "Location": { + "base": "Represents a latitude and longitude pair, expressed in geographic coordinate system degrees (for example 47.6204, -122.3491).
Elevation is currently not supported.
", + "refs": { + "ScheduleRunConfiguration$location": "Information about the location that is used for the run.
" + } + }, + "Long": { + "base": null, + "refs": { + "Device$heapSize": "The device's heap size, expressed in bytes.
", + "Device$memory": "The device's total memory size, expressed in bytes.
" + } + }, + "Message": { + "base": null, + "refs": { + "ArgumentException$message": "Any additional information about the exception.
", + "CreateDevicePoolRequest$description": "The device pool's description.
", + "DevicePool$description": "The device pool's description.
", + "IdempotencyException$message": "Any additional information about the exception.
", + "IncompatibilityMessage$message": "A message about the incompatibility.
", + "Job$message": "A message about the job's result.
", + "LimitExceededException$message": "Any additional information about the exception.
", + "NotFoundException$message": "Any additional information about the exception.
", + "Problem$message": "A message about the problem's result.
", + "Run$message": "A message about the run's result.
", + "ServiceAccountException$message": "Any additional information about the exception.
", + "Suite$message": "A message about the suite's result.
", + "Test$message": "A message about the test's result.
", + "UniqueProblem$message": "A message about the unique problems' result.
", + "Upload$message": "A message about the upload's result.
" + } + }, + "Metadata": { + "base": null, + "refs": { + "Upload$metadata": "The upload's metadata. This contains information that is parsed from the manifest and is displayed in the AWS Device Farm console after the associated app is uploaded.
" + } + }, + "Name": { + "base": null, + "refs": { + "Artifact$name": "The artifact's name.
", + "CreateDevicePoolRequest$name": "The device pool's name.
", + "CreateProjectRequest$name": "The project's name.
", + "CreateUploadRequest$name": "The upload's file name.
", + "Device$name": "The device's display name.
", + "DevicePool$name": "The device pool's name.
", + "Job$name": "The job's name.
", + "ProblemDetail$name": "The problem detail's name.
", + "Project$name": "The project's name.
", + "Run$name": "The run's name.
", + "ScheduleRunRequest$name": "The name for the run to be scheduled.
", + "Suite$name": "The suite's name.
", + "Test$name": "The test's name.
", + "Upload$name": "The upload's file name.
" + } + }, + "NotFoundException": { + "base": "The specified entity was not found.
", + "refs": { + } + }, + "PaginationToken": { + "base": null, + "refs": { + "ListArtifactsRequest$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", + "ListArtifactsResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned, which can be used in a subsequent call to this operation to return the next set of items in the list.
", + "ListDevicePoolsRequest$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", + "ListDevicePoolsResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned, which can be used in a subsequent call to this operation to return the next set of items in the list.
", + "ListDevicesRequest$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", + "ListDevicesResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned, which can be used in a subsequent call to this operation to return the next set of items in the list.
", + "ListJobsRequest$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", + "ListJobsResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned, which can be used in a subsequent call to this operation to return the next set of items in the list.
", + "ListProjectsRequest$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", + "ListProjectsResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned, which can be used in a subsequent call to this operation to return the next set of items in the list.
", + "ListRunsRequest$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", + "ListRunsResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned, which can be used in a subsequent call to this operation to return the next set of items in the list.
", + "ListSamplesRequest$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", + "ListSamplesResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned, which can be used in a subsequent call to this operation to return the next set of items in the list.
", + "ListSuitesRequest$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", + "ListSuitesResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned, which can be used in a subsequent call to this operation to return the next set of items in the list.
", + "ListTestsRequest$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", + "ListTestsResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned, which can be used in a subsequent call to this operation to return the next set of items in the list.
", + "ListUniqueProblemsRequest$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", + "ListUniqueProblemsResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned, which can be used in a subsequent call to this operation to return the next set of items in the list.
", + "ListUploadsRequest$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", + "ListUploadsResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned, which can be used in a subsequent call to this operation to return the next set of items in the list.
" + } + }, + "Problem": { + "base": "Represents a specific warning or failure.
", + "refs": { + "Problems$member": null + } + }, + "ProblemDetail": { + "base": "Information about a problem detail.
", + "refs": { + "Problem$run": "Information about the associated run.
", + "Problem$job": "Information about the associated job.
", + "Problem$suite": "Information about the associated suite.
", + "Problem$test": "Information about the associated test.
" + } + }, + "Problems": { + "base": null, + "refs": { + "UniqueProblem$problems": "Information about the problems.
" + } + }, + "Project": { + "base": "Represents an operating-system neutral workspace for running and managing tests.
", + "refs": { + "CreateProjectResult$project": "The newly created project.
", + "GetProjectResult$project": null, + "Projects$member": null + } + }, + "Projects": { + "base": null, + "refs": { + "ListProjectsResult$projects": "Information about the projects.
" + } + }, + "Radios": { + "base": "Represents the set of radios and their states on a device. Examples of radios include Wi-Fi, GPS, Bluetooth, and NFC.
", + "refs": { + "ScheduleRunConfiguration$radios": "Information about the radio states for the run.
" + } + }, + "Resolution": { + "base": "Represents the screen resolution of a device in height and width, expressed in pixels.
", + "refs": { + "Device$resolution": null + } + }, + "Rule": { + "base": "Represents a condition for a device pool.
", + "refs": { + "Rules$member": null + } + }, + "RuleOperator": { + "base": null, + "refs": { + "Rule$operator": "The rule's operator.
EQUAL: The equals operator.
GREATER_THAN: The greater-than operator.
IN: The in operator.
LESS_THAN: The less-than operator.
NOT_IN: The not-in operator.
The device pool's rules.
", + "DevicePool$rules": "Information about the device pool's rules.
" + } + }, + "Run": { + "base": "Represents an app on a set of devices with a specific test and configuration.
", + "refs": { + "GetRunResult$run": null, + "Runs$member": null, + "ScheduleRunResult$run": "Information about the scheduled run.
" + } + }, + "Runs": { + "base": null, + "refs": { + "ListRunsResult$runs": "Information about the runs.
" + } + }, + "Sample": { + "base": "Represents a sample of performance data.
", + "refs": { + "Samples$member": null + } + }, + "SampleType": { + "base": null, + "refs": { + "Sample$type": "The sample's type.
Must be one of the following values:
CPU: A CPU sample type. This is expressed as the app processing CPU time (including child processes) as reported by process, as a percentage.
MEMORY: A memory usage sample type. This is expressed as the total proportional set size of an app process, in kilobytes.
NATIVE_AVG_DRAWTIME
NATIVE_FPS
NATIVE_FRAMES
NATIVE_MAX_DRAWTIME
NATIVE_MIN_DRAWTIME
OPENGL_AVG_DRAWTIME
OPENGL_FPS
OPENGL_FRAMES
OPENGL_MAX_DRAWTIME
OPENGL_MIN_DRAWTIME
RX
RX_RATE: The total number of bytes per second (TCP and UDP) that are received, by app process.
THREADS: A threads sample type. This is expressed as the total number of threads per app process.
TX
TX_RATE: The total number of bytes per second (TCP and UDP) that are sent, by app process.
" + } + }, + "Samples": { + "base": null, + "refs": { + "ListSamplesResult$samples": "Information about the samples.
" + } + }, + "ScheduleRunConfiguration": { + "base": "Represents the settings for a run. Includes things like location, radio states, auxiliary apps, and network profiles.
", + "refs": { + "ScheduleRunRequest$configuration": "Information about the settings for the run to be scheduled.
" + } + }, + "ScheduleRunRequest": { + "base": "Represents a request to the schedule run operation.
", + "refs": { + } + }, + "ScheduleRunResult": { + "base": "Represents the result of a schedule run request.
", + "refs": { + } + }, + "ScheduleRunTest": { + "base": "Represents additional test settings.
", + "refs": { + "ScheduleRunRequest$test": "Information about the test for the run to be scheduled.
" + } + }, + "ServiceAccountException": { + "base": "There was a problem with the service account.
", + "refs": { + } + }, + "String": { + "base": null, + "refs": { + "Artifact$extension": "The artifact's file extension.
", + "CPU$frequency": "The CPU's frequency.
", + "CPU$architecture": "The CPU's architecture, for example x86 or ARM.
", + "Device$manufacturer": "The device's manufacturer name.
", + "Device$model": "The device's model name.
", + "Device$os": "The device's operating system type.
", + "Device$image": "The device's image name.
", + "Device$carrier": "The device's carrier.
", + "Device$radio": "The device's radio.
", + "Rule$value": "The rule's value.
", + "ScheduleRunConfiguration$locale": "Information about the locale that is used for the run.
", + "TestParameters$key": null, + "TestParameters$value": null + } + }, + "Suite": { + "base": "Represents a collection of one or more tests.
", + "refs": { + "GetSuiteResult$suite": null, + "Suites$member": null + } + }, + "Suites": { + "base": null, + "refs": { + "ListSuitesResult$suites": "Information about the suites.
" + } + }, + "Test": { + "base": "Represents a condition that is evaluated.
", + "refs": { + "GetTestResult$test": null, + "Tests$member": null + } + }, + "TestParameters": { + "base": null, + "refs": { + "ScheduleRunTest$parameters": "The test's parameters, such as test framework parameters and fixture settings.
" + } + }, + "TestType": { + "base": null, + "refs": { + "GetDevicePoolCompatibilityRequest$testType": "The test type for the specified device pool.
Allowed values include the following:
APPIUM_JAVA_JUNIT: The Appium Java JUnit type.
APPIUM_JAVA_TESTNG: The Appium Java TestNG type.
BUILTIN_EXPLORER: An app explorer that will traverse an app, interacting with it and capturing screenshots at the same time.
BUILTIN_FUZZ: The built-in fuzz type.
CALABASH: The Calabash type.
INSTRUMENTATION: The Instrumentation type.
UIAUTOMATOR: The uiautomator type.
", + "Job$type": "The job's type.
Allowed values include the following:
APPIUM_JAVA_JUNIT: The Appium Java JUnit type.
APPIUM_JAVA_TESTNG: The Appium Java TestNG type.
BUILTIN_EXPLORER: An app explorer that will traverse an app, interacting with it and capturing screenshots at the same time.
BUILTIN_FUZZ: The built-in fuzz type.
CALABASH: The Calabash type.
INSTRUMENTATION: The Instrumentation type.
UIAUTOMATOR: The uiautomator type.
The run's type.
Must be one of the following values:
APPIUM_JAVA_JUNIT: The Appium Java JUnit type.
APPIUM_JAVA_TESTNG: The Appium Java TestNG type.
BUILTIN_EXPLORER: An app explorer that will traverse an app, interacting with it and capturing screenshots at the same time.
BUILTIN_FUZZ: The built-in fuzz type.
CALABASH: The Calabash type.
INSTRUMENTATION: The Instrumentation type.
UIAUTOMATOR: The uiautomator type.
The test's type.
Must be one of the following values:
APPIUM_JAVA_JUNIT: The Appium Java JUnit type.
APPIUM_JAVA_TESTNG: The Appium Java TestNG type.
BUILTIN_EXPLORER: An app explorer that will traverse an app, interacting with it and capturing screenshots at the same time.
BUILTIN_FUZZ: The built-in fuzz type.
CALABASH: The Calabash type.
INSTRUMENTATION: The Instrumentation type.
UIAUTOMATOR: The uiautomator type.
The suite's type.
Must be one of the following values:
APPIUM_JAVA_JUNIT: The Appium Java JUnit type.
APPIUM_JAVA_TESTNG: The Appium Java TestNG type.
BUILTIN_EXPLORER: An app explorer that will traverse an app, interacting with it and capturing screenshots at the same time.
BUILTIN_FUZZ: The built-in fuzz type.
CALABASH: The Calabash type.
INSTRUMENTATION: The Instrumentation type.
UIAUTOMATOR: The uiautomator type.
The test's type.
Must be one of the following values:
APPIUM_JAVA_JUNIT: The Appium Java JUnit type.
APPIUM_JAVA_TESTNG: The Appium Java TestNG type.
BUILTIN_EXPLORER: An app explorer that will traverse an app, interacting with it and capturing screenshots at the same time.
BUILTIN_FUZZ: The built-in fuzz type.
CALABASH: The Calabash type.
INSTRUMENTATION: The Instrumentation type.
UIAUTOMATOR: The uiautomator type.
Information about the tests.
" + } + }, + "URL": { + "base": null, + "refs": { + "Artifact$url": "The pre-signed Amazon S3 URL that can be used with a corresponding GET request to download the artifact's file.
", + "Sample$url": "The pre-signed Amazon S3 URL that can be used with a corresponding GET request to download the sample's file.
", + "Upload$url": "The pre-signed Amazon S3 URL that was used to store a file through a corresponding PUT request.
" + } + }, + "UniqueProblem": { + "base": "A collection of one or more problems, grouped by their result.
", + "refs": { + "UniqueProblems$member": null + } + }, + "UniqueProblems": { + "base": null, + "refs": { + "UniqueProblemsByExecutionResultMap$value": null + } + }, + "UniqueProblemsByExecutionResultMap": { + "base": null, + "refs": { + "ListUniqueProblemsResult$uniqueProblems": "Information about the unique problems.
Allowed values include:
ERRORED: An error condition.
FAILED: A failed condition.
SKIPPED: A skipped condition.
STOPPED: A stopped condition.
PASSED: A passing condition.
PENDING: A pending condition.
WARNED: A warning condition.
An app or a set of one or more tests to upload or that have been uploaded.
", + "refs": { + "CreateUploadResult$upload": "The newly created upload.
", + "GetUploadResult$upload": null, + "Uploads$member": null + } + }, + "UploadStatus": { + "base": null, + "refs": { + "Upload$status": "The upload's status.
Must be one of the following values:
FAILED: A failed status.
INITIALIZED: An initialized status.
PROCESSING: A processing status.
SUCCEEDED: A succeeded status.
The upload's type.
Must be one of the following values:
ANDROID_APP: An Android upload.
APPIUM_JAVA_JUNIT_TEST_PACKAGE: An Appium Java JUnit test package upload.
APPIUM_JAVA_TESTNG_TEST_PACKAGE: An Appium Java TestNG test package upload.
CALABASH_TEST_PACKAGE: A Calabash test package upload.
EXTERNAL_DATA: An external data upload.
INSTRUMENTATION_TEST_PACKAGE: An instrumentation upload.
UIAUTOMATOR_TEST_PACKAGE: A uiautomator test package upload.
The upload's type.
Must be one of the following values:
ANDROID_APP: An Android upload.
APPIUM_JAVA_JUNIT_TEST_PACKAGE: An Appium Java JUnit test package upload.
APPIUM_JAVA_TESTNG_TEST_PACKAGE: An Appium Java TestNG test package upload.
CALABASH_TEST_PACKAGE: A Calabash test package upload.
EXTERNAL_DATA: An external data upload.
INSTRUMENTATION_TEST_PACKAGE: An instrumentation upload.
UIAUTOMATOR_TEST_PACKAGE: A uiautomator test package upload.
Information about the uploads.
" + } + } + } +} diff --git a/src/data/devicefarm/2015-06-23/paginators-1.json b/src/data/devicefarm/2015-06-23/paginators-1.json new file mode 100644 index 0000000000..dd41ad1ff1 --- /dev/null +++ b/src/data/devicefarm/2015-06-23/paginators-1.json @@ -0,0 +1,64 @@ +{ + "pagination": { + "ListArtifacts": { + "input_token": "nextToken", + "output_token": "nextToken", + "result_key": "artifacts" + }, + "ListDevicePools": { + "input_token": "nextToken", + "output_token": "nextToken", + "result_key": "devicePools" + }, + "ListDevices": { + "input_token": "nextToken", + "output_token": "nextToken", + "result_key": "devices" + }, + "ListDevices": { + "input_token": "nextToken", + "output_token": "nextToken", + "result_key": "devices" + }, + "ListJobs": { + "input_token": "nextToken", + "output_token": "nextToken", + "result_key": "jobs" + }, + "ListProjects": { + "input_token": "nextToken", + "output_token": "nextToken", + "result_key": "projects" + }, + "ListRuns": { + "input_token": "nextToken", + "output_token": "nextToken", + "result_key": "runs" + }, + "ListSamples": { + "input_token": "nextToken", + "output_token": "nextToken", + "result_key": "samples" + }, + "ListSuites": { + "input_token": "nextToken", + "output_token": "nextToken", + "result_key": "suites" + }, + "ListTests": { + "input_token": "nextToken", + "output_token": "nextToken", + "result_key": "tests" + }, + "ListUniqueProblems": { + "input_token": "nextToken", + "output_token": "nextToken", + "result_key": "uniqueProblems" + }, + "ListUploads": { + "input_token": "nextToken", + "output_token": "nextToken", + "result_key": "uploads" + } + } +} diff --git a/src/data/dynamodb/2012-08-10/api-2.json b/src/data/dynamodb/2012-08-10/api-2.json index b7a1fbcbb2..4de3546af1 100644 --- a/src/data/dynamodb/2012-08-10/api-2.json +++ b/src/data/dynamodb/2012-08-10/api-2.json @@ -569,7 +569,8 @@ "KeySchema":{"shape":"KeySchema"}, "LocalSecondaryIndexes":{"shape":"LocalSecondaryIndexList"}, 
"GlobalSecondaryIndexes":{"shape":"GlobalSecondaryIndexList"}, - "ProvisionedThroughput":{"shape":"ProvisionedThroughput"} + "ProvisionedThroughput":{"shape":"ProvisionedThroughput"}, + "StreamSpecification":{"shape":"StreamSpecification"} } }, "CreateTableOutput":{ @@ -726,7 +727,8 @@ "Backfilling":{"shape":"Backfilling"}, "ProvisionedThroughput":{"shape":"ProvisionedThroughputDescription"}, "IndexSizeBytes":{"shape":"Long"}, - "ItemCount":{"shape":"Long"} + "ItemCount":{"shape":"Long"}, + "IndexArn":{"shape":"String"} } }, "GlobalSecondaryIndexDescriptionList":{ @@ -917,7 +919,8 @@ "KeySchema":{"shape":"KeySchema"}, "Projection":{"shape":"Projection"}, "IndexSizeBytes":{"shape":"Long"}, - "ItemCount":{"shape":"Long"} + "ItemCount":{"shape":"Long"}, + "IndexArn":{"shape":"String"} } }, "LocalSecondaryIndexDescriptionList":{ @@ -1140,7 +1143,8 @@ "ProjectionExpression":{"shape":"ProjectionExpression"}, "FilterExpression":{"shape":"ConditionExpression"}, "ExpressionAttributeNames":{"shape":"ExpressionAttributeNameMap"}, - "ExpressionAttributeValues":{"shape":"ExpressionAttributeValueMap"} + "ExpressionAttributeValues":{"shape":"ExpressionAttributeValueMap"}, + "ConsistentRead":{"shape":"ConsistentRead"} } }, "ScanOutput":{ @@ -1177,6 +1181,29 @@ "COUNT" ] }, + "StreamArn":{ + "type":"string", + "min":37, + "max":1024 + }, + "StreamEnabled":{"type":"boolean"}, + "StreamSpecification":{ + "type":"structure", + "members":{ + "StreamEnabled":{"shape":"StreamEnabled"}, + "StreamViewType":{"shape":"StreamViewType"} + } + }, + "StreamViewType":{ + "type":"string", + "enum":[ + "NEW_IMAGE", + "OLD_IMAGE", + "NEW_AND_OLD_IMAGES", + "KEYS_ONLY" + ] + }, + "String":{"type":"string"}, "StringAttributeValue":{"type":"string"}, "StringSetAttributeValue":{ "type":"list", @@ -1193,8 +1220,12 @@ "ProvisionedThroughput":{"shape":"ProvisionedThroughputDescription"}, "TableSizeBytes":{"shape":"Long"}, "ItemCount":{"shape":"Long"}, + "TableArn":{"shape":"String"}, 
"LocalSecondaryIndexes":{"shape":"LocalSecondaryIndexDescriptionList"}, - "GlobalSecondaryIndexes":{"shape":"GlobalSecondaryIndexDescriptionList"} + "GlobalSecondaryIndexes":{"shape":"GlobalSecondaryIndexDescriptionList"}, + "StreamSpecification":{"shape":"StreamSpecification"}, + "LatestStreamLabel":{"shape":"String"}, + "LatestStreamArn":{"shape":"StreamArn"} } }, "TableName":{ @@ -1264,7 +1295,8 @@ "AttributeDefinitions":{"shape":"AttributeDefinitions"}, "TableName":{"shape":"TableName"}, "ProvisionedThroughput":{"shape":"ProvisionedThroughput"}, - "GlobalSecondaryIndexUpdates":{"shape":"GlobalSecondaryIndexUpdateList"} + "GlobalSecondaryIndexUpdates":{"shape":"GlobalSecondaryIndexUpdateList"}, + "StreamSpecification":{"shape":"StreamSpecification"} } }, "UpdateTableOutput":{ diff --git a/src/data/dynamodb/2012-08-10/docs-2.json b/src/data/dynamodb/2012-08-10/docs-2.json index 33d21cf683..400d816357 100644 --- a/src/data/dynamodb/2012-08-10/docs-2.json +++ b/src/data/dynamodb/2012-08-10/docs-2.json @@ -1,19 +1,19 @@ { "version": "2.0", "operations": { - "BatchGetItem": "The BatchGetItem operation returns the attributes of one or more items from one or more tables. You identify requested items by primary key.
A single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. BatchGetItem will return a partial result if the response size limit is exceeded, the table's provisioned throughput is exceeded, or an internal processing failure occurs. If a partial result is returned, the operation returns a value for UnprocessedKeys. You can use this value to retry the operation starting with the next item to get.
For example, if you ask to retrieve 100 items, but each individual item is 300 KB in size, the system returns 52 items (so as not to exceed the 16 MB limit). It also returns an appropriate UnprocessedKeys value so you can get the next page of results. If desired, your application can include its own logic to assemble the pages of results into one data set.
If none of the items can be processed due to insufficient provisioned throughput on all of the tables in the request, then BatchGetItem will return a ProvisionedThroughputExceededException. If at least one of the items is successfully processed, then BatchGetItem completes successfully, while returning the keys of the unread items in UnprocessedKeys.
If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed.
For more information, see Batch Operations and Error Handling in the Amazon DynamoDB Developer Guide.
By default, BatchGetItem performs eventually consistent reads on every table in the request. If you want strongly consistent reads instead, you can set ConsistentRead to true
for any or all tables.
In order to minimize response latency, BatchGetItem retrieves items in parallel.
When designing your application, keep in mind that DynamoDB does not return attributes in any particular order. To help parse the response by item, include the primary key values for the items in your request in the AttributesToGet parameter.
If a requested item does not exist, it is not returned in the result. Requests for nonexistent items consume the minimum read capacity units according to the type of read. For more information, see Capacity Units Calculations in the Amazon DynamoDB Developer Guide.
", - "BatchWriteItem": "The BatchWriteItem operation puts or deletes multiple items in one or more tables. A single call to BatchWriteItem can write up to 16 MB of data, which can comprise as many as 25 put or delete requests. Individual items to be written can be as large as 400 KB.
BatchWriteItem cannot update items. To update items, use the UpdateItem API.
The individual PutItem and DeleteItem operations specified in BatchWriteItem are atomic; however BatchWriteItem as a whole is not. If any requested operations fail because the table's provisioned throughput is exceeded or an internal processing failure occurs, the failed operations are returned in the UnprocessedItems response parameter. You can investigate and optionally resend the requests. Typically, you would call BatchWriteItem in a loop. Each iteration would check for unprocessed items and submit a new BatchWriteItem request with those unprocessed items until all items have been processed.
Note that if none of the items can be processed due to insufficient provisioned throughput on all of the tables in the request, then BatchWriteItem will return a ProvisionedThroughputExceededException.
If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed.
For more information, see Batch Operations and Error Handling in the Amazon DynamoDB Developer Guide.
With BatchWriteItem, you can efficiently write or delete large amounts of data, such as from Amazon Elastic MapReduce (EMR), or copy data from another database into DynamoDB. In order to improve performance with these large-scale operations, BatchWriteItem does not behave in the same way as individual PutItem and DeleteItem calls would. For example, you cannot specify conditions on individual put and delete requests, and BatchWriteItem does not return deleted items in the response.
If you use a programming language that supports concurrency, such as Java, you can use threads to write items in parallel. Your application must include the necessary logic to manage the threads. With languages that don't support threading, such as PHP, you must update or delete the specified items one at a time. In both situations, BatchWriteItem provides an alternative where the API performs the specified put and delete operations in parallel, giving you the power of the thread pool approach without having to introduce complexity into your application.
Parallel processing reduces latency, but each specified put and delete request consumes the same number of write capacity units whether it is processed in parallel or not. Delete operations on nonexistent items consume one write capacity unit.
If one or more of the following is true, DynamoDB rejects the entire batch write operation:
One or more tables specified in the BatchWriteItem request does not exist.
Primary key attributes specified on an item in the request do not match those in the corresponding table's primary key schema.
You try to perform multiple operations on the same item in the same BatchWriteItem request. For example, you cannot put and delete the same item in the same BatchWriteItem request.
There are more than 25 requests in the batch.
Any individual item in a batch exceeds 400 KB.
The total request size exceeds 16 MB.
The BatchGetItem operation returns the attributes of one or more items from one or more tables. You identify requested items by primary key.
A single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. BatchGetItem will return a partial result if the response size limit is exceeded, the table's provisioned throughput is exceeded, or an internal processing failure occurs. If a partial result is returned, the operation returns a value for UnprocessedKeys. You can use this value to retry the operation starting with the next item to get.
If you request more than 100 items, BatchGetItem will return a ValidationException with the message \"Too many items requested for the BatchGetItem call\".
For example, if you ask to retrieve 100 items, but each individual item is 300 KB in size, the system returns 52 items (so as not to exceed the 16 MB limit). It also returns an appropriate UnprocessedKeys value so you can get the next page of results. If desired, your application can include its own logic to assemble the pages of results into one data set.
If none of the items can be processed due to insufficient provisioned throughput on all of the tables in the request, then BatchGetItem will return a ProvisionedThroughputExceededException. If at least one of the items is successfully processed, then BatchGetItem completes successfully, while returning the keys of the unread items in UnprocessedKeys.
If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed.
For more information, see Batch Operations and Error Handling in the Amazon DynamoDB Developer Guide.
By default, BatchGetItem performs eventually consistent reads on every table in the request. If you want strongly consistent reads instead, you can set ConsistentRead to true
for any or all tables.
In order to minimize response latency, BatchGetItem retrieves items in parallel.
When designing your application, keep in mind that DynamoDB does not return attributes in any particular order. To help parse the response by item, include the primary key values for the items in your request in the AttributesToGet parameter.
If a requested item does not exist, it is not returned in the result. Requests for nonexistent items consume the minimum read capacity units according to the type of read. For more information, see Capacity Units Calculations in the Amazon DynamoDB Developer Guide.
", + "BatchWriteItem": "The BatchWriteItem operation puts or deletes multiple items in one or more tables. A single call to BatchWriteItem can write up to 16 MB of data, which can comprise as many as 25 put or delete requests. Individual items to be written can be as large as 400 KB.
BatchWriteItem cannot update items. To update items, use the UpdateItem API.
The individual PutItem and DeleteItem operations specified in BatchWriteItem are atomic; however BatchWriteItem as a whole is not. If any requested operations fail because the table's provisioned throughput is exceeded or an internal processing failure occurs, the failed operations are returned in the UnprocessedItems response parameter. You can investigate and optionally resend the requests. Typically, you would call BatchWriteItem in a loop. Each iteration would check for unprocessed items and submit a new BatchWriteItem request with those unprocessed items until all items have been processed.
Note that if none of the items can be processed due to insufficient provisioned throughput on all of the tables in the request, then BatchWriteItem will return a ProvisionedThroughputExceededException.
If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed.
For more information, see Batch Operations and Error Handling in the Amazon DynamoDB Developer Guide.
With BatchWriteItem, you can efficiently write or delete large amounts of data, such as from Amazon Elastic MapReduce (EMR), or copy data from another database into DynamoDB. In order to improve performance with these large-scale operations, BatchWriteItem does not behave in the same way as individual PutItem and DeleteItem calls would. For example, you cannot specify conditions on individual put and delete requests, and BatchWriteItem does not return deleted items in the response.
If you use a programming language that supports concurrency, you can use threads to write items in parallel. Your application must include the necessary logic to manage the threads. With languages that don't support threading, you must update or delete the specified items one at a time. In both situations, BatchWriteItem provides an alternative where the API performs the specified put and delete operations in parallel, giving you the power of the thread pool approach without having to introduce complexity into your application.
Parallel processing reduces latency, but each specified put and delete request consumes the same number of write capacity units whether it is processed in parallel or not. Delete operations on nonexistent items consume one write capacity unit.
If one or more of the following is true, DynamoDB rejects the entire batch write operation:
One or more tables specified in the BatchWriteItem request does not exist.
Primary key attributes specified on an item in the request do not match those in the corresponding table's primary key schema.
You try to perform multiple operations on the same item in the same BatchWriteItem request. For example, you cannot put and delete the same item in the same BatchWriteItem request.
There are more than 25 requests in the batch.
Any individual item in a batch exceeds 400 KB.
The total request size exceeds 16 MB.
The CreateTable operation adds a new table to your account. In an AWS account, table names must be unique within each region. That is, you can have two tables with same name if you create the tables in different regions.
CreateTable is an asynchronous operation. Upon receiving a CreateTable request, DynamoDB immediately returns a response with a TableStatus of CREATING
. After the table is created, DynamoDB sets the TableStatus to ACTIVE
. You can perform read and write operations only on an ACTIVE
table.
You can optionally define secondary indexes on the new table, as part of the CreateTable operation. If you want to create multiple tables with secondary indexes on them, you must create the tables sequentially. Only one table with secondary indexes can be in the CREATING
state at any given time.
You can use the DescribeTable API to check the table status.
", "DeleteItem": "Deletes a single item in a table by primary key. You can perform a conditional delete operation that deletes the item if it exists, or if it has an expected attribute value.
In addition to deleting an item, you can also return the item's attribute values in the same operation, using the ReturnValues parameter.
Unless you specify conditions, the DeleteItem is an idempotent operation; running it multiple times on the same item or attribute does not result in an error response.
Conditional deletes are useful for deleting items only if specific conditions are met. If those conditions are met, DynamoDB performs the delete. Otherwise, the item is not deleted.
", - "DeleteTable": "The DeleteTable operation deletes a table and all of its items. After a DeleteTable request, the specified table is in the DELETING
state until DynamoDB completes the deletion. If the table is in the ACTIVE
state, you can delete it. If a table is in CREATING
or UPDATING
states, then DynamoDB returns a ResourceInUseException. If the specified table does not exist, DynamoDB returns a ResourceNotFoundException. If table is already in the DELETING
state, no error is returned.
DynamoDB might continue to accept data read and write operations, such as GetItem and PutItem, on a table in the DELETING
state until the table deletion is complete.
When you delete a table, any indexes on that table are also deleted.
Use the DescribeTable API to check the status of the table.
", + "DeleteTable": "The DeleteTable operation deletes a table and all of its items. After a DeleteTable request, the specified table is in the DELETING
state until DynamoDB completes the deletion. If the table is in the ACTIVE
state, you can delete it. If a table is in CREATING
or UPDATING
states, then DynamoDB returns a ResourceInUseException. If the specified table does not exist, DynamoDB returns a ResourceNotFoundException. If the table is already in the DELETING
state, no error is returned.
DynamoDB might continue to accept data read and write operations, such as GetItem and PutItem, on a table in the DELETING
state until the table deletion is complete.
When you delete a table, any indexes on that table are also deleted.
If you have DynamoDB Streams enabled on the table, then the corresponding stream on that table goes into the DISABLED
state, and the stream is automatically deleted after 24 hours.
Use the DescribeTable API to check the status of the table.
", "DescribeTable": "Returns information about the table, including the current status of the table, when it was created, the primary key schema, and any indexes on the table.
If you issue a DescribeTable request immediately after a CreateTable request, DynamoDB might return a ResourceNotFoundException. This is because DescribeTable uses an eventually consistent query, and the metadata for your table might not be available at that moment. Wait for a few seconds, and then try the DescribeTable request again.
The GetItem operation returns a set of attributes for the item with the given primary key. If there is no matching item, GetItem does not return any data.
GetItem provides an eventually consistent read by default. If your application requires a strongly consistent read, set ConsistentRead to true
. Although a strongly consistent read might take more time than an eventually consistent read, it always returns the last updated value.
Returns an array of table names associated with the current account and endpoint. The output from ListTables is paginated, with each page returning a maximum of 100 table names.
", "PutItem": "Creates a new item, or replaces an old item with a new item. If an item that has the same primary key as the new item already exists in the specified table, the new item completely replaces the existing item. You can perform a conditional put operation (add a new item if one with the specified primary key doesn't exist), or replace an existing item if it has certain attribute values.
In addition to putting an item, you can also return the item's attribute values in the same operation, using the ReturnValues parameter.
When you add an item, the primary key attribute(s) are the only required attributes. Attribute values cannot be null. String and Binary type attributes must have lengths greater than zero. Set type attributes cannot be empty. Requests with empty values will be rejected with a ValidationException exception.
You can request that PutItem return either a copy of the original item (before the update) or a copy of the updated item (after the update). For more information, see the ReturnValues description below.
To prevent a new item from replacing an existing item, use a conditional put operation with ComparisonOperator set to NULL
for the primary key attribute, or attributes.
For more information about using this API, see Working with Items in the Amazon DynamoDB Developer Guide.
", - "Query": "A Query operation uses the primary key of a table or a secondary index to directly access items from that table or index.
Use the KeyConditionExpression parameter to provide a specific hash key value. The Query operation will return all of the items from the table or index with that hash key value. You can optionally narrow the scope of the Query by specifying a range key value and a comparison operator in the KeyConditionExpression. You can use the ScanIndexForward parameter to get results in forward or reverse order, by range key or by index key.
Queries that do not return results consume the minimum number of read capacity units for that type of read operation.
If the total number of items meeting the query criteria exceeds the result set size limit of 1 MB, the query stops and results are returned to the user with LastEvaluatedKey to continue the query in a subsequent operation. Unlike a Scan operation, a Query operation never returns both an empty result set and a LastEvaluatedKey. The LastEvaluatedKey is only provided if the results exceed 1 MB, or if you have used Limit.
You can query a table, a local secondary index, or a global secondary index. For a query on a table or on a local secondary index, you can set ConsistentRead to true and obtain a strongly consistent result. Global secondary indexes support eventually consistent reads only, so do not specify ConsistentRead when querying a global secondary index.
", - "Scan": "The Scan operation returns one or more items and item attributes by accessing every item in a table or a secondary index. To have DynamoDB return fewer items, you can provide a ScanFilter operation.
If the total number of scanned items exceeds the maximum data set size limit of 1 MB, the scan stops and results are returned to the user as a LastEvaluatedKey value to continue the scan in a subsequent operation. The results also include the number of items exceeding the limit. A scan can result in no table data meeting the filter criteria.
The result set is eventually consistent.
By default, Scan operations proceed sequentially; however, for faster performance on a large table or secondary index, applications can request a parallel Scan operation by providing the Segment and TotalSegments parameters. For more information, see Parallel Scan in the Amazon DynamoDB Developer Guide.
", + "Query": "A Query operation uses the primary key of a table or a secondary index to directly access items from that table or index.
Use the KeyConditionExpression parameter to provide a specific hash key value. The Query operation will return all of the items from the table or index with that hash key value. You can optionally narrow the scope of the Query operation by specifying a range key value and a comparison operator in KeyConditionExpression. You can use the ScanIndexForward parameter to get results in forward or reverse order, by range key or by index key.
Queries that do not return results consume the minimum number of read capacity units for that type of read operation.
If the total number of items meeting the query criteria exceeds the result set size limit of 1 MB, the query stops and results are returned to the user with the LastEvaluatedKey element to continue the query in a subsequent operation. Unlike a Scan operation, a Query operation never returns both an empty result set and a LastEvaluatedKey value. LastEvaluatedKey is only provided if the results exceed 1 MB, or if you have used the Limit parameter.
You can query a table, a local secondary index, or a global secondary index. For a query on a table or on a local secondary index, you can set the ConsistentRead parameter to true
and obtain a strongly consistent result. Global secondary indexes support eventually consistent reads only, so do not specify ConsistentRead when querying a global secondary index.
The Scan operation returns one or more items and item attributes by accessing every item in a table or a secondary index. To have DynamoDB return fewer items, you can provide a ScanFilter operation.
If the total number of scanned items exceeds the maximum data set size limit of 1 MB, the scan stops and results are returned to the user as a LastEvaluatedKey value to continue the scan in a subsequent operation. The results also include the number of items exceeding the limit. A scan can result in no table data meeting the filter criteria.
By default, Scan operations proceed sequentially; however, for faster performance on a large table or secondary index, applications can request a parallel Scan operation by providing the Segment and TotalSegments parameters. For more information, see Parallel Scan in the Amazon DynamoDB Developer Guide.
By default, Scan uses eventually consistent reads when acessing the data in the table or local secondary index. However, you can use strongly consistent reads instead by setting the ConsistentRead parameter to true.
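The Query parameters described above can be sketched as a request body. This is illustrative only: the Orders table, the CustomerId hash key, and the helper name are hypothetical, and the dict mirrors the wire-level shape a low-level client (for example, boto3's DynamoDB client) would accept.

```python
# Sketch of a paginated Query request; table and attribute names are
# hypothetical. Pass the resulting dict to a low-level DynamoDB client.
def build_query_request(table_name, hash_key_value, exclusive_start_key=None):
    request = {
        "TableName": table_name,
        # "#id" dereferences the hash key attribute name.
        "KeyConditionExpression": "#id = :v",
        "ExpressionAttributeNames": {"#id": "CustomerId"},
        "ExpressionAttributeValues": {":v": {"S": hash_key_value}},
        "ScanIndexForward": False,  # descending order by range key
        "ConsistentRead": True,     # tables and local secondary indexes only
    }
    if exclusive_start_key is not None:
        # Resume from the LastEvaluatedKey of the previous response.
        request["ExclusiveStartKey"] = exclusive_start_key
    return request
```

When a response includes LastEvaluatedKey, feeding it back as ExclusiveStartKey continues the query where the previous 1 MB page stopped.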
", "UpdateItem": "Edits an existing item's attributes, or adds a new item to the table if it does not already exist. You can put, delete, or add attribute values. You can also perform a conditional update on an existing item (insert a new attribute name-value pair if it doesn't exist, or replace an existing name-value pair if it has certain expected attribute values). If conditions are specified and the item does not exist, then the operation fails and a new item is not created.
You can also return the item's attribute values in the same UpdateItem operation using the ReturnValues parameter.
", - "UpdateTable": "Updates the provisioned throughput for the given table, or manages the global secondary indexes on the table.
You can increase or decrease the table's provisioned throughput values within the maximums and minimums listed in the Limits section in the Amazon DynamoDB Developer Guide.
In addition, you can use UpdateTable to add, modify or delete global secondary indexes on the table. For more information, see Managing Global Secondary Indexes in the Amazon DynamoDB Developer Guide.
The table must be in the ACTIVE
state for UpdateTable to succeed. UpdateTable is an asynchronous operation; while executing the operation, the table is in the UPDATING
state. While the table is in the UPDATING
state, the table still has the provisioned throughput from before the call. The table's new provisioned throughput settings go into effect when the table returns to the ACTIVE
state; at that point, the UpdateTable operation is complete.
Modifies the provisioned throughput settings, global secondary indexes, or DynamoDB Streams settings for a given table.
You can only perform one of the following operations at a time:
Modify the provisioned throughput settings of the table.
Enable or disable Streams on the table.
Remove a global secondary index from the table.
Create a new global secondary index on the table. Once the index begins backfilling, you can use UpdateTable to perform other operations.
UpdateTable is an asynchronous operation; while it is executing, the table status changes from ACTIVE
to UPDATING
. While it is UPDATING
, you cannot issue another UpdateTable request. When the table returns to the ACTIVE
state, the UpdateTable operation is complete.
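Because each UpdateTable call makes only one kind of change, separate requests are needed for throughput and for Streams. The request bodies below are a sketch; the Music table name and the specific values are hypothetical.

```python
# One UpdateTable request per kind of change (hypothetical table name).

# Change only the table's provisioned throughput.
raise_throughput = {
    "TableName": "Music",
    "ProvisionedThroughput": {
        "ReadCapacityUnits": 10,
        "WriteCapacityUnits": 5,
    },
}

# A separate request to enable Streams on the same table.
enable_streams = {
    "TableName": "Music",
    "StreamSpecification": {
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
}
```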
Overview
This is the Amazon DynamoDB API Reference. This guide provides descriptions and samples of the low-level DynamoDB API. For information about DynamoDB application development, see the Amazon DynamoDB Developer Guide.
Instead of making the requests to the low-level DynamoDB API directly from your application, we recommend that you use the AWS Software Development Kits (SDKs). The easy-to-use libraries in the AWS SDKs make it unnecessary to call the low-level DynamoDB API directly from your application. The libraries take care of request authentication, serialization, and connection management. For more information, see Using the AWS SDKs with DynamoDB in the Amazon DynamoDB Developer Guide.
If you decide to code against the low-level DynamoDB API directly, you will need to write the necessary code to authenticate your requests. For more information on signing your requests, see Using the DynamoDB API in the Amazon DynamoDB Developer Guide.
The following are short descriptions of each low-level API action, organized by function.
Managing Tables
CreateTable - Creates a table with user-specified provisioned throughput settings. You must designate one attribute as the hash primary key for the table; you can optionally designate a second attribute as the range primary key. DynamoDB creates indexes on these key attributes for fast data access. Optionally, you can create one or more secondary indexes, which provide fast data access using non-key attributes.
DescribeTable - Returns metadata for a table, such as table size, status, and index information.
UpdateTable - Modifies the provisioned throughput settings for a table. Optionally, you can modify the provisioned throughput settings for global secondary indexes on the table.
ListTables - Returns a list of all tables associated with the current AWS account and endpoint.
DeleteTable - Deletes a table and all of its indexes.
For conceptual information about managing tables, see Working with Tables in the Amazon DynamoDB Developer Guide.
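As a sketch of the CreateTable request described above, with a hash-and-range primary key and user-specified provisioned throughput (the Music table and its attributes are hypothetical):

```python
# Sketch of a CreateTable request; table and attribute names are hypothetical.
create_music_table = {
    "TableName": "Music",
    "AttributeDefinitions": [
        {"AttributeName": "Artist", "AttributeType": "S"},
        {"AttributeName": "SongTitle", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "Artist", "KeyType": "HASH"},      # hash primary key
        {"AttributeName": "SongTitle", "KeyType": "RANGE"},  # range primary key
    ],
    "ProvisionedThroughput": {
        "ReadCapacityUnits": 5,
        "WriteCapacityUnits": 5,
    },
}
```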
Reading Data
GetItem - Returns a set of attributes for the item that has a given primary key. By default, GetItem performs an eventually consistent read; however, applications can request a strongly consistent read instead.
BatchGetItem - Performs multiple GetItem requests for data items using their primary keys, from one table or multiple tables. The response from BatchGetItem has a size limit of 16 MB and returns a maximum of 100 items. Both eventually consistent and strongly consistent reads can be used.
Query - Returns one or more items from a table or a secondary index. You must provide a specific hash key value. You can narrow the scope of the query using comparison operators against a range key value, or on the index key. Query supports either eventual or strong consistency. A single response has a size limit of 1 MB.
Scan - Reads every item in a table; the result set is eventually consistent. You can limit the number of items returned by filtering the data attributes, using conditional expressions. Scan can be used to enable ad-hoc querying of a table against non-key attributes; however, since this is a full table scan without using an index, Scan should not be used for any application query use case that requires predictable performance.
For conceptual information about reading data, see Working with Items and Query and Scan Operations in the Amazon DynamoDB Developer Guide.
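A GetItem request for the read path described above might look like the following sketch; the table, key attributes, and projected attribute are hypothetical.

```python
# Sketch of a GetItem request with a strongly consistent read
# (hypothetical table and attribute names).
get_item_request = {
    "TableName": "Music",
    "Key": {
        "Artist": {"S": "No One You Know"},
        "SongTitle": {"S": "Call Me Today"},
    },
    "ConsistentRead": True,            # default is eventually consistent
    "ProjectionExpression": "AlbumTitle",  # retrieve only this attribute
}
```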
Modifying Data
PutItem - Creates a new item, or replaces an existing item with a new item (including all the attributes). By default, if an item in the table already exists with the same primary key, the new item completely replaces the existing item. You can use conditional operators to replace an item only if its attribute values match certain conditions, or to insert a new item only if that item doesn't already exist.
UpdateItem - Modifies the attributes of an existing item. You can also use conditional operators to perform an update only if the item's attribute values match certain conditions.
DeleteItem - Deletes an item in a table by primary key. You can use conditional operators to delete an item only if the item's attribute values match certain conditions.
BatchWriteItem - Performs multiple PutItem and DeleteItem requests across multiple tables in a single request. A failure of any request(s) in the batch will not cause the entire BatchWriteItem operation to fail. Supports batches of up to 25 items to put or delete, with a maximum total request size of 16 MB.
For conceptual information about modifying data, see Working with Items and Query and Scan Operations in the Amazon DynamoDB Developer Guide.
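A conditional UpdateItem of the kind described above can be sketched as follows; the Music table, the Plays attribute, and the increment are hypothetical.

```python
# Sketch of a conditional UpdateItem request (hypothetical names):
# increment Plays only if the attribute already exists on the item.
update_request = {
    "TableName": "Music",
    "Key": {
        "Artist": {"S": "No One You Know"},
        "SongTitle": {"S": "Call Me Today"},
    },
    "UpdateExpression": "SET Plays = Plays + :incr",
    "ConditionExpression": "attribute_exists(Plays)",
    "ExpressionAttributeValues": {":incr": {"N": "1"}},
    "ReturnValues": "UPDATED_NEW",  # return the updated attribute values
}
```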
", "shapes": { @@ -125,7 +125,7 @@ "BatchGetRequestMap": { "base": null, "refs": { - "BatchGetItemInput$RequestItems": "A map of one or more table names and, for each table, a map that describes one or more items to retrieve from that table. Each table name can be used only once per BatchGetItem request.
Each element in the map of items to retrieve consists of the following:
ConsistentRead - If true
, a strongly consistent read is used; if false
(the default), an eventually consistent read is used.
ExpressionAttributeNames - One or more substitution tokens for attribute names in the ProjectionExpression parameter. The following are some use cases for using ExpressionAttributeNames:
To access an attribute whose name conflicts with a DynamoDB reserved word.
To create a placeholder for repeating occurrences of an attribute name in an expression.
To prevent special characters in an attribute name from being misinterpreted in an expression.
Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:
Percentile
The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:
{\"#P\":\"Percentile\"}
You could then use this substitution in an expression, as in this example:
#P = :val
Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.
For more information on expression attribute names, see Accessing Item Attributes in the Amazon DynamoDB Developer Guide.
Keys - An array of primary key attribute values that define specific items in the table. For each primary key, you must provide all of the key attributes. For example, with a hash type primary key, you only need to provide the hash attribute. For a hash-and-range type primary key, you must provide both the hash attribute and the range attribute.
ProjectionExpression - A string that identifies one or more attributes to retrieve from the table. These attributes can include scalars, sets, or elements of a JSON document. The attributes in the expression must be separated by commas.
If no attribute names are specified, then all attributes will be returned. If any of the requested attributes are not found, they will not appear in the result.
For more information, see Accessing Item Attributes in the Amazon DynamoDB Developer Guide.
AttributesToGet -
This is a legacy parameter, for backward compatibility. New applications should use ProjectionExpression instead. Do not combine legacy parameters and expression parameters in a single API call; otherwise, DynamoDB will return a ValidationException.
This parameter allows you to retrieve attributes of type List or Map; however, it cannot retrieve individual elements within a List or a Map.
The names of one or more attributes to retrieve. If no attribute names are provided, then all attributes will be returned. If any of the requested attributes are not found, they will not appear in the result.
Note that AttributesToGet has no effect on provisioned throughput consumption. DynamoDB determines capacity units consumed based on item size, not on the amount of data that is returned to an application.
A map of one or more table names and, for each table, a map that describes one or more items to retrieve from that table. Each table name can be used only once per BatchGetItem request.
Each element in the map of items to retrieve consists of the following:
ConsistentRead - If true
, a strongly consistent read is used; if false
(the default), an eventually consistent read is used.
ExpressionAttributeNames - One or more substitution tokens for attribute names in the ProjectionExpression parameter. The following are some use cases for using ExpressionAttributeNames:
To access an attribute whose name conflicts with a DynamoDB reserved word.
To create a placeholder for repeating occurrences of an attribute name in an expression.
To prevent special characters in an attribute name from being misinterpreted in an expression.
Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:
Percentile
The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:
{\"#P\":\"Percentile\"}
You could then use this substitution in an expression, as in this example:
#P = :val
Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.
For more information on expression attribute names, see Accessing Item Attributes in the Amazon DynamoDB Developer Guide.
Keys - An array of primary key attribute values that define specific items in the table. For each primary key, you must provide all of the key attributes. For example, with a hash type primary key, you only need to provide the hash attribute. For a hash-and-range type primary key, you must provide both the hash attribute and the range attribute.
ProjectionExpression - A string that identifies one or more attributes to retrieve from the table. These attributes can include scalars, sets, or elements of a JSON document. The attributes in the expression must be separated by commas.
If no attribute names are specified, then all attributes will be returned. If any of the requested attributes are not found, they will not appear in the result.
For more information, see Accessing Item Attributes in the Amazon DynamoDB Developer Guide.
AttributesToGet -
This is a legacy parameter, for backward compatibility. New applications should use ProjectionExpression instead. Do not combine legacy parameters and expression parameters in a single API call; otherwise, DynamoDB will return a ValidationException.
This parameter allows you to retrieve attributes of type List or Map; however, it cannot retrieve individual elements within a List or a Map.
The names of one or more attributes to retrieve. If no attribute names are provided, then all attributes will be returned. If any of the requested attributes are not found, they will not appear in the result.
Note that AttributesToGet has no effect on provisioned throughput consumption. DynamoDB determines capacity units consumed based on item size, not on the amount of data that is returned to an application.
A map of tables and their respective keys that were not processed with the current response. The UnprocessedKeys value is in the same form as RequestItems, so the value can be provided directly to a subsequent BatchGetItem operation. For more information, see RequestItems in the Request Parameters section.
Each element consists of:
Keys - An array of primary key attribute values that define specific items in the table.
AttributesToGet - One or more attributes to be retrieved from the table or index. By default, all attributes are returned. If a requested attribute is not found, it does not appear in the result.
ConsistentRead - The consistency of a read operation. If set to true
, then a strongly consistent read is used; otherwise, an eventually consistent read is used.
If there are no unprocessed keys remaining, the response contains an empty UnprocessedKeys map.
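Because UnprocessedKeys has the same shape as RequestItems, a caller can drain a BatchGetItem request by re-submitting it until the map is empty. The driver below is a sketch; it takes the batch-get call as a parameter rather than assuming any particular client.

```python
def batch_get_all(call_batch_get, request_items):
    """Drain a BatchGetItem request, re-submitting UnprocessedKeys.

    call_batch_get: callable taking a RequestItems map and returning a
    response dict with "Responses" and (optionally) "UnprocessedKeys".
    """
    responses = {}
    while request_items:
        result = call_batch_get(request_items)
        for table, items in result.get("Responses", {}).items():
            responses.setdefault(table, []).extend(items)
        # Same shape as RequestItems, so it can be fed straight back in.
        request_items = result.get("UnprocessedKeys", {})
    return responses
```

In production code this loop would typically add exponential backoff between retries, since unprocessed keys usually indicate throttling.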
" } }, @@ -175,7 +175,7 @@ "base": null, "refs": { "ExpectedAttributeValue$Exists": "Causes DynamoDB to evaluate the value before attempting a conditional operation:
If Exists is true
, DynamoDB will check to see if that attribute value already exists in the table. If it is found, then the operation succeeds. If it is not found, the operation fails with a ConditionalCheckFailedException.
If Exists is false
, DynamoDB assumes that the attribute value does not exist in the table. If in fact the value does not exist, then the assumption is valid and the operation succeeds. If the value is found, despite the assumption that it does not exist, the operation fails with a ConditionalCheckFailedException.
The default setting for Exists is true
. If you supply a Value all by itself, DynamoDB assumes the attribute exists: You don't have to set Exists to true
, because it is implied.
DynamoDB returns a ValidationException if:
Exists is true
but there is no Value to check. (You expect a value to exist, but don't specify what that value is.)
Exists is false
but you also provide a Value. (You cannot expect an attribute to have a value, while also expecting it not to exist.)
A value that specifies ascending (true) or descending (false) traversal of the index. DynamoDB returns results reflecting the requested order determined by the range key. If the data type is Number, the results are returned in numeric order. For type String, the results are returned in order of ASCII character code values. For type Binary, DynamoDB treats each byte of the binary data as unsigned when it compares binary values.
If ScanIndexForward is not specified, the results are returned in ascending order.
" + "QueryInput$ScanIndexForward": "Specifies the order in which to return the query results - either ascending (true
) or descending (false
).
Items with the same hash key are stored in sorted order by range key. If the range key data type is Number, the results are stored in numeric order. For type String, the results are returned in order of ASCII character code values. For type Binary, DynamoDB treats each byte of the binary data as unsigned.
If ScanIndexForward is true
, DynamoDB returns the results in order, by range key. This is the default behavior.
If ScanIndexForward is false
, DynamoDB sorts the results in descending order by range key, and then returns the results to the client.
A condition that must be satisfied in order for a conditional DeleteItem to succeed.
An expression can contain any of the following:
Boolean functions: attribute_exists | attribute_not_exists | contains | begins_with
These function names are case-sensitive.
Comparison operators: = | <> | < | > | <= | >= | BETWEEN | IN
Logical operators: AND | OR | NOT
For more information on condition expressions, see Specifying Conditions in the Amazon DynamoDB Developer Guide.
ConditionExpression replaces the legacy ConditionalOperator and Expected parameters.
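A conditional DeleteItem combining the Boolean functions, comparison operators, and logical operators listed above might be sketched like this (the table, key, and Plays threshold are hypothetical):

```python
# Sketch of a conditional DeleteItem request (hypothetical names):
# delete only if Plays is missing or at most the given limit.
delete_request = {
    "TableName": "Music",
    "Key": {
        "Artist": {"S": "No One You Know"},
        "SongTitle": {"S": "Call Me Today"},
    },
    "ConditionExpression": "attribute_not_exists(Plays) OR Plays <= :limit",
    "ExpressionAttributeValues": {":limit": {"N": "10"}},
}
```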
A condition that must be satisfied in order for a conditional PutItem operation to succeed.
An expression can contain any of the following:
Boolean functions: attribute_exists | attribute_not_exists | contains | begins_with
These function names are case-sensitive.
Comparison operators: = | <> | < | > | <= | >= | BETWEEN | IN
Logical operators: AND | OR | NOT
For more information on condition expressions, see Specifying Conditions in the Amazon DynamoDB Developer Guide.
ConditionExpression replaces the legacy ConditionalOperator and Expected parameters.
A condition that must be satisfied in order for a conditional DeleteItem to succeed.
An expression can contain any of the following:
Functions: attribute_exists | attribute_not_exists | attribute_type | contains | begins_with | size
These function names are case-sensitive.
Comparison operators: = | <> | < | > | <= | >= | BETWEEN | IN
Logical operators: AND | OR | NOT
For more information on condition expressions, see Specifying Conditions in the Amazon DynamoDB Developer Guide.
ConditionExpression replaces the legacy ConditionalOperator and Expected parameters.
A condition that must be satisfied in order for a conditional PutItem operation to succeed.
An expression can contain any of the following:
Functions: attribute_exists | attribute_not_exists | attribute_type | contains | begins_with | size
These function names are case-sensitive.
Comparison operators: = | <> | < | > | <= | >= | BETWEEN | IN
Logical operators: AND | OR | NOT
For more information on condition expressions, see Specifying Conditions in the Amazon DynamoDB Developer Guide.
ConditionExpression replaces the legacy ConditionalOperator and Expected parameters.
A string that contains conditions that DynamoDB applies after the Query operation, but before the data is returned to you. Items that do not satisfy the FilterExpression criteria are not returned.
A FilterExpression is applied after the items have already been read; the process of filtering does not consume any additional read capacity units.
For more information, see Filter Expressions in the Amazon DynamoDB Developer Guide.
FilterExpression replaces the legacy QueryFilter and ConditionalOperator parameters.
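A Query that combines a key condition with a FilterExpression might be sketched as below; the table, attributes, and values are hypothetical. Note that, as the text states, the filter runs after the read, so it reduces the items returned but not the read capacity consumed.

```python
# Sketch of a Query request with a post-read filter (hypothetical names).
query_with_filter = {
    "TableName": "Music",
    "KeyConditionExpression": "Artist = :a",
    # Applied after the items are read; no extra read capacity consumed.
    "FilterExpression": "AlbumTitle <> :skip",
    "ExpressionAttributeValues": {
        ":a": {"S": "No One You Know"},
        ":skip": {"S": "Greatest Hits"},
    },
}
```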
A string that contains conditions that DynamoDB applies after the Scan operation, but before the data is returned to you. Items that do not satisfy the FilterExpression criteria are not returned.
A FilterExpression is applied after the items have already been read; the process of filtering does not consume any additional read capacity units.
For more information, see Filter Expressions in the Amazon DynamoDB Developer Guide.
FilterExpression replaces the legacy ScanFilter and ConditionalOperator parameters.
A condition that must be satisfied in order for a conditional update to succeed.
An expression can contain any of the following:
Boolean functions: attribute_exists | attribute_not_exists | contains | begins_with
These function names are case-sensitive.
Comparison operators: = | <> | < | > | <= | >= | BETWEEN | IN
Logical operators: AND | OR | NOT
For more information on condition expressions, see Specifying Conditions in the Amazon DynamoDB Developer Guide.
ConditionExpression replaces the legacy ConditionalOperator and Expected parameters.
A condition that must be satisfied in order for a conditional update to succeed.
An expression can contain any of the following:
Functions: attribute_exists | attribute_not_exists | attribute_type | contains | begins_with | size
These function names are case-sensitive.
Comparison operators: = | <> | < | > | <= | >= | BETWEEN | IN
Logical operators: AND | OR | NOT
For more information on condition expressions, see Specifying Conditions in the Amazon DynamoDB Developer Guide.
ConditionExpression replaces the legacy ConditionalOperator and Expected parameters.
A value that if set to true
, then the operation uses strongly consistent reads; otherwise, eventually consistent reads are used.
Determines the read consistency model: If set to true
, then the operation uses strongly consistent reads; otherwise, the operation uses eventually consistent reads.
The consistency of a read operation. If set to true
, then a strongly consistent read is used; otherwise, an eventually consistent read is used.
A value that if set to true
, then the operation uses strongly consistent reads; otherwise, eventually consistent reads are used.
Strongly consistent reads are not supported on global secondary indexes. If you query a global secondary index with ConsistentRead set to true
, you will receive an error message.
Determines the read consistency model: If set to true
, then the operation uses strongly consistent reads; otherwise, the operation uses eventually consistent reads.
Strongly consistent reads are not supported on global secondary indexes. If you query a global secondary index with ConsistentRead set to true
, you will receive a ValidationException.
A Boolean value that determines the read consistency model during the scan:
If ConsistentRead is false
, then Scan will use eventually consistent reads. The data returned from Scan might not contain the results of other recently completed write operations (PutItem, UpdateItem or DeleteItem). The Scan response might include some stale data.
If ConsistentRead is true
, then Scan will use strongly consistent reads. All of the write operations that completed before the Scan began are guaranteed to be contained in the Scan response.
The default setting for ConsistentRead is false
, meaning that eventually consistent reads will be used.
Strongly consistent reads are not supported on global secondary indexes. If you scan a global secondary index with ConsistentRead set to true, you will receive a ValidationException.
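The Segment/TotalSegments parallel-scan mechanism and the ConsistentRead flag described above can be combined in one request builder. This is a sketch (hypothetical helper and table name); each returned request would be issued by a separate worker.

```python
def build_scan_segments(table_name, total_segments, consistent=False):
    """One Scan request per worker; Segment/TotalSegments partition the table."""
    return [
        {
            "TableName": table_name,
            "Segment": segment,            # this worker's partition, 0-based
            "TotalSegments": total_segments,
            "ConsistentRead": consistent,  # default False: eventually consistent
        }
        for segment in range(total_segments)
    ]
```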
" } }, "ConsumedCapacity": { @@ -353,13 +354,13 @@ "ExpressionAttributeNameMap": { "base": null, "refs": { - "DeleteItemInput$ExpressionAttributeNames": "One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:
To access an attribute whose name conflicts with a DynamoDB reserved word.
To create a placeholder for repeating occurrences of an attribute name in an expression.
To prevent special characters in an attribute name from being misinterpreted in an expression.
Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:
Percentile
The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:
{\"#P\":\"Percentile\"}
You could then use this substitution in an expression, as in this example:
#P = :val
Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.
For more information on expression attribute names, see Using Placeholders for Attribute Names and Values in the Amazon DynamoDB Developer Guide.
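The Percentile/#P substitution described above looks like this in a full request body; the sketch uses a hypothetical table and key, with a DeleteItem-shaped condition.

```python
# Sketch of ExpressionAttributeNames in use (hypothetical table and key):
# "Percentile" is a reserved word, so the expression dereferences it as "#P".
conditional_delete = {
    "TableName": "Stats",
    "Key": {"RecordId": {"S": "abc-123"}},
    "ConditionExpression": "#P = :val",
    "ExpressionAttributeNames": {"#P": "Percentile"},
    # ":val" is an expression attribute value, a placeholder for the
    # actual value at runtime.
    "ExpressionAttributeValues": {":val": {"N": "50"}},
}
```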
", - "GetItemInput$ExpressionAttributeNames": "One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:
To access an attribute whose name conflicts with a DynamoDB reserved word.
To create a placeholder for repeating occurrences of an attribute name in an expression.
To prevent special characters in an attribute name from being misinterpreted in an expression.
Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:
Percentile
The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:
{\"#P\":\"Percentile\"}
You could then use this substitution in an expression, as in this example:
#P = :val
Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.
For more information on expression attribute names, see Using Placeholders for Attribute Names and Values in the Amazon DynamoDB Developer Guide.
", - "KeysAndAttributes$ExpressionAttributeNames": "One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:
To access an attribute whose name conflicts with a DynamoDB reserved word.
To create a placeholder for repeating occurrences of an attribute name in an expression.
To prevent special characters in an attribute name from being misinterpreted in an expression.
Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:
Percentile
The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:
{\"#P\":\"Percentile\"}
You could then use this substitution in an expression, as in this example:
#P = :val
Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.
For more information on expression attribute names, see Using Placeholders for Attribute Names and Values in the Amazon DynamoDB Developer Guide.
", - "PutItemInput$ExpressionAttributeNames": "One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:
To access an attribute whose name conflicts with a DynamoDB reserved word.
To create a placeholder for repeating occurrences of an attribute name in an expression.
To prevent special characters in an attribute name from being misinterpreted in an expression.
Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:
Percentile
The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:
{\"#P\":\"Percentile\"}
You could then use this substitution in an expression, as in this example:
#P = :val
Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.
For more information on expression attribute names, see Using Placeholders for Attribute Names and Values in the Amazon DynamoDB Developer Guide.
", - "QueryInput$ExpressionAttributeNames": "One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:
To access an attribute whose name conflicts with a DynamoDB reserved word.
To create a placeholder for repeating occurrences of an attribute name in an expression.
To prevent special characters in an attribute name from being misinterpreted in an expression.
Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:
Percentile
The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:
{\"#P\":\"Percentile\"}
You could then use this substitution in an expression, as in this example:
#P = :val
Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.
For more information on expression attribute names, see Using Placeholders for Attribute Names and Values in the Amazon DynamoDB Developer Guide.
", - "ScanInput$ExpressionAttributeNames": "One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:
To access an attribute whose name conflicts with a DynamoDB reserved word.
To create a placeholder for repeating occurrences of an attribute name in an expression.
To prevent special characters in an attribute name from being misinterpreted in an expression.
Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:
Percentile
The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:
{\"#P\":\"Percentile\"}
You could then use this substitution in an expression, as in this example:
#P = :val
Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.
For more information on expression attribute names, see Using Placeholders for Attribute Names and Values in the Amazon DynamoDB Developer Guide.
", - "UpdateItemInput$ExpressionAttributeNames": "One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:
To access an attribute whose name conflicts with a DynamoDB reserved word.
To create a placeholder for repeating occurrences of an attribute name in an expression.
To prevent special characters in an attribute name from being misinterpreted in an expression.
Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:
Percentile
The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:
{\"#P\":\"Percentile\"}
You could then use this substitution in an expression, as in this example:
#P = :val
Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.
For more information on expression attribute names, see Using Placeholders for Attribute Names and Values in the Amazon DynamoDB Developer Guide.
" + "DeleteItemInput$ExpressionAttributeNames": "One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:
To access an attribute whose name conflicts with a DynamoDB reserved word.
To create a placeholder for repeating occurrences of an attribute name in an expression.
To prevent special characters in an attribute name from being misinterpreted in an expression.
Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:
Percentile
The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:
{\"#P\":\"Percentile\"}
You could then use this substitution in an expression, as in this example:
#P = :val
Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.
For more information on expression attribute names, see Accessing Item Attributes in the Amazon DynamoDB Developer Guide.
", + "GetItemInput$ExpressionAttributeNames": "One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:
To access an attribute whose name conflicts with a DynamoDB reserved word.
To create a placeholder for repeating occurrences of an attribute name in an expression.
To prevent special characters in an attribute name from being misinterpreted in an expression.
Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:
Percentile
The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:
{\"#P\":\"Percentile\"}
You could then use this substitution in an expression, as in this example:
#P = :val
Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.
For more information on expression attribute names, see Accessing Item Attributes in the Amazon DynamoDB Developer Guide.
", + "KeysAndAttributes$ExpressionAttributeNames": "One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:
To access an attribute whose name conflicts with a DynamoDB reserved word.
To create a placeholder for repeating occurrences of an attribute name in an expression.
To prevent special characters in an attribute name from being misinterpreted in an expression.
Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:
Percentile
The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:
{\"#P\":\"Percentile\"}
You could then use this substitution in an expression, as in this example:
#P = :val
Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.
For more information on expression attribute names, see Accessing Item Attributes in the Amazon DynamoDB Developer Guide.
", + "PutItemInput$ExpressionAttributeNames": "One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:
To access an attribute whose name conflicts with a DynamoDB reserved word.
To create a placeholder for repeating occurrences of an attribute name in an expression.
To prevent special characters in an attribute name from being misinterpreted in an expression.
Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:
Percentile
The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:
{\"#P\":\"Percentile\"}
You could then use this substitution in an expression, as in this example:
#P = :val
Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.
For more information on expression attribute names, see Accessing Item Attributes in the Amazon DynamoDB Developer Guide.
", + "QueryInput$ExpressionAttributeNames": "One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:
To access an attribute whose name conflicts with a DynamoDB reserved word.
To create a placeholder for repeating occurrences of an attribute name in an expression.
To prevent special characters in an attribute name from being misinterpreted in an expression.
Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:
Percentile
The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:
{\"#P\":\"Percentile\"}
You could then use this substitution in an expression, as in this example:
#P = :val
Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.
For more information on expression attribute names, see Accessing Item Attributes in the Amazon DynamoDB Developer Guide.
", + "ScanInput$ExpressionAttributeNames": "One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:
To access an attribute whose name conflicts with a DynamoDB reserved word.
To create a placeholder for repeating occurrences of an attribute name in an expression.
To prevent special characters in an attribute name from being misinterpreted in an expression.
Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:
Percentile
The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:
{\"#P\":\"Percentile\"}
You could then use this substitution in an expression, as in this example:
#P = :val
Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.
For more information on expression attribute names, see Accessing Item Attributes in the Amazon DynamoDB Developer Guide.
", + "UpdateItemInput$ExpressionAttributeNames": "One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:
To access an attribute whose name conflicts with a DynamoDB reserved word.
To create a placeholder for repeating occurrences of an attribute name in an expression.
To prevent special characters in an attribute name from being misinterpreted in an expression.
Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:
Percentile
The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:
{\"#P\":\"Percentile\"}
You could then use this substitution in an expression, as in this example:
#P = :val
Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.
For more information on expression attribute names, see Accessing Item Attributes in the Amazon DynamoDB Developer Guide.
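The substitution described above can be sketched client-side. This is illustrative only — the real substitution happens inside DynamoDB, and the resolver below is a hypothetical helper, not part of any SDK:

```python
# Sketch: how "#" name placeholders and ":" value placeholders in an
# expression map back to real attribute names and typed values.
# Illustrative only; DynamoDB performs this substitution server-side.

def resolve_expression(expression, names=None, values=None):
    """Replace #name tokens and :value tokens with their targets."""
    for token, attr_name in (names or {}).items():
        expression = expression.replace(token, attr_name)
    for token, attr_value in (values or {}).items():
        # DynamoDB attribute values are typed maps such as {"N": "90"}.
        expression = expression.replace(token, repr(attr_value))
    return expression

# "Percentile" is a reserved word, so the expression refers to it as "#P".
resolved = resolve_expression(
    "#P = :val",
    names={"#P": "Percentile"},
    values={":val": {"N": "90"}},
)
print(resolved)  # Percentile = {'N': '90'}
```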
" } }, "ExpressionAttributeNameVariable": { @@ -371,11 +372,11 @@ "ExpressionAttributeValueMap": { "base": null, "refs": { - "DeleteItemInput$ExpressionAttributeValues": "One or more values that can be substituted in an expression.
Use the : (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the ProductStatus attribute was one of the following:
Available | Backordered | Discontinued
You would first need to specify ExpressionAttributeValues as follows:
{ \":avail\":{\"S\":\"Available\"}, \":back\":{\"S\":\"Backordered\"}, \":disc\":{\"S\":\"Discontinued\"} }
You could then use these values in an expression, such as this:
ProductStatus IN (:avail, :back, :disc)
For more information on expression attribute values, see Using Placeholders for Attribute Names and Values in the Amazon DynamoDB Developer Guide.
", - "PutItemInput$ExpressionAttributeValues": "One or more values that can be substituted in an expression.
Use the : (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the ProductStatus attribute was one of the following:
Available | Backordered | Discontinued
You would first need to specify ExpressionAttributeValues as follows:
{ \":avail\":{\"S\":\"Available\"}, \":back\":{\"S\":\"Backordered\"}, \":disc\":{\"S\":\"Discontinued\"} }
You could then use these values in an expression, such as this:
ProductStatus IN (:avail, :back, :disc)
For more information on expression attribute values, see Using Placeholders for Attribute Names and Values in the Amazon DynamoDB Developer Guide.
", - "QueryInput$ExpressionAttributeValues": "One or more values that can be substituted in an expression.
Use the : (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the ProductStatus attribute was one of the following:
Available | Backordered | Discontinued
You would first need to specify ExpressionAttributeValues as follows:
{ \":avail\":{\"S\":\"Available\"}, \":back\":{\"S\":\"Backordered\"}, \":disc\":{\"S\":\"Discontinued\"} }
You could then use these values in an expression, such as this:
ProductStatus IN (:avail, :back, :disc)
For more information on expression attribute values, see Using Placeholders for Attribute Names and Values in the Amazon DynamoDB Developer Guide.
", - "ScanInput$ExpressionAttributeValues": "One or more values that can be substituted in an expression.
Use the : (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the ProductStatus attribute was one of the following:
Available | Backordered | Discontinued
You would first need to specify ExpressionAttributeValues as follows:
{ \":avail\":{\"S\":\"Available\"}, \":back\":{\"S\":\"Backordered\"}, \":disc\":{\"S\":\"Discontinued\"} }
You could then use these values in an expression, such as this:
ProductStatus IN (:avail, :back, :disc)
For more information on expression attribute values, see Using Placeholders for Attribute Names and Values in the Amazon DynamoDB Developer Guide.
", - "UpdateItemInput$ExpressionAttributeValues": "One or more values that can be substituted in an expression.
Use the : (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the ProductStatus attribute was one of the following:
Available | Backordered | Discontinued
You would first need to specify ExpressionAttributeValues as follows:
{ \":avail\":{\"S\":\"Available\"}, \":back\":{\"S\":\"Backordered\"}, \":disc\":{\"S\":\"Discontinued\"} }
You could then use these values in an expression, such as this:
ProductStatus IN (:avail, :back, :disc)
For more information on expression attribute values, see Using Placeholders for Attribute Names and Values in the Amazon DynamoDB Developer Guide.
" + "DeleteItemInput$ExpressionAttributeValues": "One or more values that can be substituted in an expression.
Use the : (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the ProductStatus attribute was one of the following:
Available | Backordered | Discontinued
You would first need to specify ExpressionAttributeValues as follows:
{ \":avail\":{\"S\":\"Available\"}, \":back\":{\"S\":\"Backordered\"}, \":disc\":{\"S\":\"Discontinued\"} }
You could then use these values in an expression, such as this:
ProductStatus IN (:avail, :back, :disc)
For more information on expression attribute values, see Specifying Conditions in the Amazon DynamoDB Developer Guide.
", + "PutItemInput$ExpressionAttributeValues": "One or more values that can be substituted in an expression.
Use the : (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the ProductStatus attribute was one of the following:
Available | Backordered | Discontinued
You would first need to specify ExpressionAttributeValues as follows:
{ \":avail\":{\"S\":\"Available\"}, \":back\":{\"S\":\"Backordered\"}, \":disc\":{\"S\":\"Discontinued\"} }
You could then use these values in an expression, such as this:
ProductStatus IN (:avail, :back, :disc)
For more information on expression attribute values, see Specifying Conditions in the Amazon DynamoDB Developer Guide.
", + "QueryInput$ExpressionAttributeValues": "One or more values that can be substituted in an expression.
Use the : (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the ProductStatus attribute was one of the following:
Available | Backordered | Discontinued
You would first need to specify ExpressionAttributeValues as follows:
{ \":avail\":{\"S\":\"Available\"}, \":back\":{\"S\":\"Backordered\"}, \":disc\":{\"S\":\"Discontinued\"} }
You could then use these values in an expression, such as this:
ProductStatus IN (:avail, :back, :disc)
For more information on expression attribute values, see Specifying Conditions in the Amazon DynamoDB Developer Guide.
", + "ScanInput$ExpressionAttributeValues": "One or more values that can be substituted in an expression.
Use the : (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the ProductStatus attribute was one of the following:
Available | Backordered | Discontinued
You would first need to specify ExpressionAttributeValues as follows:
{ \":avail\":{\"S\":\"Available\"}, \":back\":{\"S\":\"Backordered\"}, \":disc\":{\"S\":\"Discontinued\"} }
You could then use these values in an expression, such as this:
ProductStatus IN (:avail, :back, :disc)
For more information on expression attribute values, see Specifying Conditions in the Amazon DynamoDB Developer Guide.
", + "UpdateItemInput$ExpressionAttributeValues": "One or more values that can be substituted in an expression.
Use the : (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the ProductStatus attribute was one of the following:
Available | Backordered | Discontinued
You would first need to specify ExpressionAttributeValues as follows:
{ \":avail\":{\"S\":\"Available\"}, \":back\":{\"S\":\"Backordered\"}, \":disc\":{\"S\":\"Discontinued\"} }
You could then use these values in an expression, such as this:
ProductStatus IN (:avail, :back, :disc)
For more information on expression attribute values, see Specifying Conditions in the Amazon DynamoDB Developer Guide.
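The ProductStatus example above can be written out as a request shape. The table name is hypothetical and the client call itself (for example, a DynamoDB Scan) is omitted:

```python
# Sketch: ExpressionAttributeValues supplies typed values for the ":"
# tokens used in a FilterExpression. "ProductCatalog" is a hypothetical
# table name; no client call is made here.

scan_request = {
    "TableName": "ProductCatalog",
    "FilterExpression": "ProductStatus IN (:avail, :back, :disc)",
    "ExpressionAttributeValues": {
        ":avail": {"S": "Available"},
        ":back": {"S": "Backordered"},
        ":disc": {"S": "Discontinued"},
    },
}

# Every ":" token referenced in the expression must be defined.
expr = scan_request["FilterExpression"]
tokens = [t.strip("(),") for t in expr.split() if t.lstrip("(").startswith(":")]
missing = [t for t in tokens if t not in scan_request["ExpressionAttributeValues"]]
print(missing)  # []
```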
" } }, "ExpressionAttributeValueVariable": { @@ -387,7 +388,7 @@ "FilterConditionMap": { "base": null, "refs": { - "QueryInput$QueryFilter": "This is a legacy parameter, for backward compatibility. New applications should use FilterExpression instead. Do not combine legacy parameters and expression parameters in a single API call; otherwise, DynamoDB will return a ValidationException exception.
A condition that evaluates the query results after the items are read and returns only the desired values.
This parameter does not support attributes of type List or Map.
A QueryFilter is applied after the items have already been read; the process of filtering does not consume any additional read capacity units.
If you provide more than one condition in the QueryFilter map, then by default all of the conditions must evaluate to true. In other words, the conditions are ANDed together. (You can use the ConditionalOperator parameter to OR the conditions instead. If you do this, then at least one of the conditions must evaluate to true, rather than all of them.)
Note that QueryFilter does not allow key attributes. You cannot define a filter condition on a hash key or range key.
Each QueryFilter element consists of an attribute name to compare, along with the following:
AttributeValueList - One or more values to evaluate against the supplied attribute. The number of values in the list depends on the operator specified in ComparisonOperator.
For type Number, value comparisons are numeric.
String value comparisons for greater than, equals, or less than are based on ASCII character code values. For example, a
is greater than A
, and a
is greater than B
. For a list of code values, see http://en.wikipedia.org/wiki/ASCII#ASCII_printable_characters.
For type Binary, DynamoDB treats each byte of the binary data as unsigned when it compares binary values.
For information on specifying data types in JSON, see JSON Data Format in the Amazon DynamoDB Developer Guide.
ComparisonOperator - A comparator for evaluating attributes. For example, equals, greater than, less than, etc.
The following comparison operators are available:
EQ | NE | LE | LT | GE | GT | NOT_NULL | NULL | CONTAINS | NOT_CONTAINS | BEGINS_WITH | IN | BETWEEN
For complete descriptions of all comparison operators, see the Condition data type.
This is a legacy parameter, for backward compatibility. New applications should use FilterExpression instead. Do not combine legacy parameters and expression parameters in a single API call; otherwise, DynamoDB will return a ValidationException exception.
A condition that evaluates the query results after the items are read and returns only the desired values.
This parameter does not support attributes of type List or Map.
A QueryFilter is applied after the items have already been read; the process of filtering does not consume any additional read capacity units.
If you provide more than one condition in the QueryFilter map, then by default all of the conditions must evaluate to true. In other words, the conditions are ANDed together. (You can use the ConditionalOperator parameter to OR the conditions instead. If you do this, then at least one of the conditions must evaluate to true, rather than all of them.)
Note that QueryFilter does not allow key attributes. You cannot define a filter condition on a hash key or range key.
Each QueryFilter element consists of an attribute name to compare, along with the following:
AttributeValueList - One or more values to evaluate against the supplied attribute. The number of values in the list depends on the operator specified in ComparisonOperator.
For type Number, value comparisons are numeric.
String value comparisons for greater than, equals, or less than are based on ASCII character code values. For example, a
is greater than A
, and a
is greater than B
. For a list of code values, see http://en.wikipedia.org/wiki/ASCII#ASCII_printable_characters.
For type Binary, DynamoDB treats each byte of the binary data as unsigned when it compares binary values.
For information on specifying data types in JSON, see JSON Data Format in the Amazon DynamoDB Developer Guide.
ComparisonOperator - A comparator for evaluating attributes. For example, equals, greater than, less than, etc.
The following comparison operators are available:
EQ | NE | LE | LT | GE | GT | NOT_NULL | NULL | CONTAINS | NOT_CONTAINS | BEGINS_WITH | IN | BETWEEN
For complete descriptions of all comparison operators, see the Condition data type.
This is a legacy parameter, for backward compatibility. New applications should use FilterExpression instead. Do not combine legacy parameters and expression parameters in a single API call; otherwise, DynamoDB will return a ValidationException exception.
A condition that evaluates the scan results and returns only the desired values.
This parameter does not support attributes of type List or Map.
If you specify more than one condition in the ScanFilter map, then by default all of the conditions must evaluate to true. In other words, the conditions are ANDed together. (You can use the ConditionalOperator parameter to OR the conditions instead. If you do this, then at least one of the conditions must evaluate to true, rather than all of them.)
Each ScanFilter element consists of an attribute name to compare, along with the following:
AttributeValueList - One or more values to evaluate against the supplied attribute. The number of values in the list depends on the operator specified in ComparisonOperator .
For type Number, value comparisons are numeric.
String value comparisons for greater than, equals, or less than are based on ASCII character code values. For example, a
is greater than A
, and a
is greater than B
. For a list of code values, see http://en.wikipedia.org/wiki/ASCII#ASCII_printable_characters.
For Binary, DynamoDB treats each byte of the binary data as unsigned when it compares binary values.
For information on specifying data types in JSON, see JSON Data Format in the Amazon DynamoDB Developer Guide.
ComparisonOperator - A comparator for evaluating attributes. For example, equals, greater than, less than, etc.
The following comparison operators are available:
EQ | NE | LE | LT | GE | GT | NOT_NULL | NULL | CONTAINS | NOT_CONTAINS | BEGINS_WITH | IN | BETWEEN
For complete descriptions of all comparison operators, see Condition.
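The legacy filter map described above has the following shape; attribute names and values are hypothetical, and new applications should use FilterExpression instead:

```python
# Sketch of the legacy ScanFilter map: each entry pairs an attribute name
# with a ComparisonOperator and an AttributeValueList.

scan_filter = {
    "Price": {
        "ComparisonOperator": "BETWEEN",
        # BETWEEN takes exactly two values: the low and high end of the range.
        "AttributeValueList": [{"N": "10"}, {"N": "20"}],
    },
    "ProductCategory": {
        "ComparisonOperator": "EQ",
        # EQ takes exactly one value to compare against.
        "AttributeValueList": [{"S": "Book"}],
    },
}

# With two conditions and no ConditionalOperator, both are ANDed together.
print(len(scan_filter))  # 2
```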
An array of one or more global secondary indexes for the table. For each index in the array, you can request one action:
Create - add a new global secondary index to the table.
Update - modify the provisioned throughput settings of an existing global secondary index.
Delete - remove a global secondary index from the table.
An array of one or more global secondary indexes for the table. For each index in the array, you can request one action:
Create - add a new global secondary index to the table.
Update - modify the provisioned throughput settings of an existing global secondary index.
Delete - remove a global secondary index from the table.
For more information, see Managing Global Secondary Indexes in the Amazon DynamoDB Developer Guide.
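The one-action-per-element rule above can be sketched as an UpdateTable request fragment; the index names are hypothetical:

```python
# Sketch of the GlobalSecondaryIndexUpdates array: each element carries
# exactly one action (Create, Update, or Delete).

gsi_updates = [
    {"Update": {
        "IndexName": "OrderDateIndex",
        "ProvisionedThroughput": {"ReadCapacityUnits": 10,
                                  "WriteCapacityUnits": 5},
    }},
    {"Delete": {"IndexName": "LegacyIndex"}},
]

# Sanity check: one action key per element.
actions = [next(iter(u)) for u in gsi_updates]
print(actions)  # ['Update', 'Delete']
```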
" } }, "IndexName": { @@ -547,7 +548,7 @@ "KeyExpression": { "base": null, "refs": { - "QueryInput$KeyConditionExpression": "The condition that specifies the key value(s) for items to be retrieved by the Query action.
The condition must perform an equality test on a single hash key value. The condition can also test for one or more range key values. A Query can use KeyConditionExpression to retrieve a single item with a given hash and range key value, or several items that have the same hash key value but different range key values.
The hash key equality test is required, and must be specified in the following format:
hashAttributeName
= :hashval
If you also want to provide a range key condition, it must be combined using AND with the hash key condition. Following is an example, using the = comparison operator for the range key:
hashAttributeName
= :hashval
AND rangeAttributeName
= :rangeval
Valid comparisons for the range key condition are as follows:
rangeAttributeName
= :rangeval
- true if the range key is equal to :rangeval
.
rangeAttributeName
< :rangeval
- true if the range key is less than :rangeval
.
rangeAttributeName
<= :rangeval
- true if the range key is less than or equal to :rangeval
.
rangeAttributeName
> :rangeval
- true if the range key is greater than :rangeval
.
rangeAttributeName
>= :rangeval
- true if the range key is greater than or equal to :rangeval
.
rangeAttributeName
BETWEEN :rangeval1
AND :rangeval2
- true if the range key is greater than or equal to :rangeval1
, and less than or equal to :rangeval2
.
begins_with (rangeAttributeName
, :rangeval
) - true if the range key begins with a particular operand. Note that the function name begins_with
is case-sensitive.
Use the ExpressionAttributeValues parameter to replace tokens such as :hashval
and :rangeval
with actual values at runtime.
You can optionally use the ExpressionAttributeNames parameter to replace the names of the hash and range attributes with placeholder tokens. This might be necessary if an attribute name conflicts with a DynamoDB reserved word. For example, the following KeyConditionExpression causes an error because Size is a reserved word:
Size = :myval
To work around this, define a placeholder (such as #S
) to represent the attribute name Size. KeyConditionExpression then is as follows:
#S = :myval
For a list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide.
For more information on ExpressionAttributeNames and ExpressionAttributeValues, see Using Placeholders for Attribute Names and Values in the Amazon DynamoDB Developer Guide.
KeyConditionExpression replaces the legacy KeyConditions parameter.
The condition that specifies the key value(s) for items to be retrieved by the Query action.
The condition must perform an equality test on a single hash key value. The condition can also perform one of several comparison tests on a single range key value. Query can use KeyConditionExpression to retrieve one item with a given hash and range key value, or several items that have the same hash key value but different range key values.
The hash key equality test is required, and must be specified in the following format:
hashAttributeName
= :hashval
If you also want to provide a range key condition, it must be combined using AND with the hash key condition. Following is an example, using the = comparison operator for the range key:
hashAttributeName
= :hashval
AND rangeAttributeName
= :rangeval
Valid comparisons for the range key condition are as follows:
rangeAttributeName
= :rangeval
- true if the range key is equal to :rangeval
.
rangeAttributeName
< :rangeval
- true if the range key is less than :rangeval
.
rangeAttributeName
<= :rangeval
- true if the range key is less than or equal to :rangeval
.
rangeAttributeName
> :rangeval
- true if the range key is greater than :rangeval
.
rangeAttributeName
>= :rangeval
- true if the range key is greater than or equal to :rangeval
.
rangeAttributeName
BETWEEN :rangeval1
AND :rangeval2
- true if the range key is greater than or equal to :rangeval1
, and less than or equal to :rangeval2
.
begins_with (rangeAttributeName
, :rangeval
) - true if the range key begins with a particular operand. (You cannot use this function with a range key that is of type Number.) Note that the function name begins_with
is case-sensitive.
Use the ExpressionAttributeValues parameter to replace tokens such as :hashval
and :rangeval
with actual values at runtime.
You can optionally use the ExpressionAttributeNames parameter to replace the names of the hash and range attributes with placeholder tokens. This option might be necessary if an attribute name conflicts with a DynamoDB reserved word. For example, the following KeyConditionExpression parameter causes an error because Size is a reserved word:
Size = :myval
To work around this, define a placeholder (such as #S
) to represent the attribute name Size. KeyConditionExpression then is as follows:
#S = :myval
For a list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide.
For more information on ExpressionAttributeNames and ExpressionAttributeValues, see Using Placeholders for Attribute Names and Values in the Amazon DynamoDB Developer Guide.
KeyConditionExpression replaces the legacy KeyConditions parameter.
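Putting the pieces above together, a Query request combining the required hash key equality test with a range key condition looks like the sketch below. The table, key names, and values are hypothetical, and the client call itself is omitted; `#S` stands in for the reserved word `Size`:

```python
# Sketch: KeyConditionExpression with an equality test on the hash key
# and a ">=" condition on the range key, using placeholder tokens.

query_request = {
    "TableName": "ProductCatalog",
    "KeyConditionExpression": "ProductId = :hashval AND #S >= :rangeval",
    "ExpressionAttributeNames": {"#S": "Size"},   # "Size" is reserved
    "ExpressionAttributeValues": {
        ":hashval": {"S": "P-100"},
        ":rangeval": {"N": "42"},
    },
}

print(sorted(query_request["ExpressionAttributeValues"]))
```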
The request rate is too high, or the request is too large, for the available throughput to accommodate. The AWS SDKs automatically retry requests that receive this exception; therefore, your request will eventually succeed, unless the request is too large or your retry queue is too large to finish. Reduce the frequency of requests by using the strategies listed in Error Retries and Exponential Backoff in the Amazon DynamoDB Developer Guide.
", + "base": "Your request rate is too high. The AWS SDKs for DynamoDB automatically retry requests that receive this exception. Your request is eventually successful, unless your retry queue is too large to finish. Reduce the frequency of requests and use exponential backoff. For more information, go to Error Retries and Exponential Backoff in the Amazon DynamoDB Developer Guide.
", "refs": { } }, @@ -800,7 +801,7 @@ } }, "ReturnConsumedCapacity": { - "base": "A value that if set to TOTAL
, the response includes ConsumedCapacity data for tables and indexes. If set to INDEXES
, the response includes ConsumedCapacity for indexes. If set to NONE
(the default), ConsumedCapacity is not included in the response.
Determines the level of detail about provisioned throughput consumption that is returned in the response:
INDEXES - The response includes the aggregate ConsumedCapacity for the operation, together with ConsumedCapacity for each table and secondary index that was accessed.
Note that some operations, such as GetItem and BatchGetItem, do not access any indexes at all. In these cases, specifying INDEXES will only return ConsumedCapacity information for table(s).
TOTAL - The response includes only the aggregate ConsumedCapacity for the operation.
NONE - No ConsumedCapacity details are included in the response.
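The three levels above are passed in the ReturnConsumedCapacity request member. A minimal sketch, with a hypothetical table and key and no actual client call:

```python
# Sketch: requesting aggregate ConsumedCapacity on a GetItem-shaped request.

VALID_LEVELS = {"INDEXES", "TOTAL", "NONE"}

get_request = {
    "TableName": "ProductCatalog",
    "Key": {"ProductId": {"S": "P-100"}},
    # TOTAL: only the aggregate ConsumedCapacity comes back in the response.
    "ReturnConsumedCapacity": "TOTAL",
}

assert get_request["ReturnConsumedCapacity"] in VALID_LEVELS
```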
A value that if set to SIZE
, the response includes statistics about item collections, if any, that were modified during the operation. If set to NONE
(the default), no statistics are returned.
A value that if set to SIZE
, the response includes statistics about item collections, if any, that were modified during the operation. If set to NONE
(the default), no statistics are returned.
A value that if set to SIZE
, the response includes statistics about item collections, if any, that were modified during the operation. If set to NONE
(the default), no statistics are returned.
A value that if set to SIZE
, the response includes statistics about item collections, if any, that were modified during the operation. If set to NONE
(the default), no statistics are returned.
Determines whether item collection metrics are returned. If set to SIZE
, the response includes statistics about item collections, if any, that were modified during the operation. If set to NONE
(the default), no statistics are returned.
Determines whether item collection metrics are returned. If set to SIZE
, the response includes statistics about item collections, if any, that were modified during the operation are returned in the response. If set to NONE
(the default), no statistics are returned.
Determines whether item collection metrics are returned. If set to SIZE
, the response includes statistics about item collections, if any, that were modified during the operation are returned in the response. If set to NONE
(the default), no statistics are returned.
Determines whether item collection metrics are returned. If set to SIZE
, the response includes statistics about item collections, if any, that were modified during the operation are returned in the response. If set to NONE
(the default), no statistics are returned.
Use ReturnValues if you want to get the item attributes as they appeared before they were deleted. For DeleteItem, the valid values are:
NONE
- If ReturnValues is not specified, or if its value is NONE
, then nothing is returned. (This setting is the default for ReturnValues.)
ALL_OLD
- The content of the old item is returned.
Use ReturnValues if you want to get the item attributes as they appeared before they were updated with the PutItem request. For PutItem, the valid values are:
NONE
- If ReturnValues is not specified, or if its value is NONE
, then nothing is returned. (This setting is the default for ReturnValues.)
ALL_OLD
- If PutItem overwrote an attribute name-value pair, then the content of the old item is returned.
Use ReturnValues if you want to get the item attributes as they appeared before they were updated with the PutItem request. For PutItem, the valid values are:
NONE
- If ReturnValues is not specified, or if its value is NONE
, then nothing is returned. (This setting is the default for ReturnValues.)
ALL_OLD
- If PutItem overwrote an attribute name-value pair, then the content of the old item is returned.
Other \"Valid Values\" are not relevant to PutItem.
Use ReturnValues if you want to get the item attributes as they appeared either before or after they were updated. For UpdateItem, the valid values are:
NONE
- If ReturnValues is not specified, or if its value is NONE
, then nothing is returned. (This setting is the default for ReturnValues.)
ALL_OLD
- If UpdateItem overwrote an attribute name-value pair, then the content of the old item is returned.
UPDATED_OLD
- The old versions of only the updated attributes are returned.
ALL_NEW
- All of the attributes of the new version of the item are returned.
UPDATED_NEW
- The new versions of only the updated attributes are returned.
The attributes to be returned in the result. You can retrieve all item attributes, specific item attributes, or the count of matching items.
ALL_ATTRIBUTES
- Returns all of the item attributes.
COUNT
- Returns the number of matching items, rather than the matching items themselves.
SPECIFIC_ATTRIBUTES
- Returns only the attributes listed in AttributesToGet. This return value is equivalent to specifying AttributesToGet without specifying any value for Select.
If neither Select nor AttributesToGet are specified, DynamoDB defaults to ALL_ATTRIBUTES
. You cannot use both AttributesToGet and Select together in a single request, unless the value for Select is SPECIFIC_ATTRIBUTES
. (This usage is equivalent to specifying AttributesToGet without any value for Select.)
The Amazon Resource Name (ARN) that uniquely identifies the latest stream for this table.
" + } + }, + "StreamEnabled": { + "base": null, + "refs": { + "StreamSpecification$StreamEnabled": "Indicates whether DynamoDB Streams is enabled (true) or disabled (false) on the table.
" + } + }, + "StreamSpecification": { + "base": "Represents the DynamoDB Streams configuration for a table in DynamoDB.
", + "refs": { + "CreateTableInput$StreamSpecification": "The settings for DynamoDB Streams on the table. These settings consist of:
StreamEnabled - Indicates whether Streams is to be enabled (true) or disabled (false).
StreamViewType - When an item in the table is modified, StreamViewType determines what information is written to the table's stream. Valid values for StreamViewType are:
KEYS_ONLY - Only the key attributes of the modified item are written to the stream.
NEW_IMAGE - The entire item, as it appears after it was modified, is written to the stream.
OLD_IMAGE - The entire item, as it appeared before it was modified, is written to the stream.
NEW_AND_OLD_IMAGES - Both the new and the old item images of the item are written to the stream.
The current DynamoDB Streams configuration for the table.
", + "UpdateTableInput$StreamSpecification": "Represents the DynamoDB Streams configuration for the table.
You will receive a ResourceInUseException if you attempt to enable a stream on a table that already has a stream, or if you attempt to disable a stream on a table which does not have a stream.
The DynamoDB Streams settings for the table. These settings consist of:
StreamEnabled - Indicates whether DynamoDB Streams is enabled (true) or disabled (false) on the table.
StreamViewType - When an item in the table is modified, StreamViewType determines what information is written to the stream for this table. Valid values for StreamViewType are:
KEYS_ONLY - Only the key attributes of the modified item are written to the stream.
NEW_IMAGE - The entire item, as it appears after it was modified, is written to the stream.
OLD_IMAGE - The entire item, as it appeared before it was modified, is written to the stream.
NEW_AND_OLD_IMAGES - Both the new and the old item images of the item are written to the stream.
The Amazon Resource Name (ARN) that uniquely identifies the index.
", + "LocalSecondaryIndexDescription$IndexArn": "The Amazon Resource Name (ARN) that uniquely identifies the index.
", + "TableDescription$TableArn": "The Amazon Resource Name (ARN) that uniquely identifies the table.
", + "TableDescription$LatestStreamLabel": "A timestamp, in ISO 8601 format, for this stream.
Note that LatestStreamLabel is not a unique identifier for the stream, because it is possible that a stream from another table might have the same timestamp. However, the combination of the following three elements is guaranteed to be unique:
the AWS customer ID.
the table name.
the StreamLabel.
An expression that defines one or more attributes to be updated, the action to be performed on them, and new value(s) for them.
The following action values are available for UpdateExpression.
SET
- Adds one or more attributes and values to an item. If any of these attribute already exist, they are replaced by the new values. You can also use SET
to add or subtract from an attribute that is of type Number.
SET
supports the following functions:
if_not_exists (path, operand)
- if the item does not contain an attribute at the specified path, then if_not_exists
evaluates to operand; otherwise, it evaluates to path. You can use this function to avoid overwriting an attribute that may already be present in the item.
list_append (operand, operand)
- evaluates to a list with a new element added to it. You can append the new element to the start or the end of the list by reversing the order of the operands.
These function names are case-sensitive.
REMOVE
- Removes one or more attributes from an item.
ADD
- Adds the specified value to the item, if the attribute does not already exist. If the attribute does exist, then the behavior of ADD
depends on the data type of the attribute:
If the existing attribute is a number, and if Value is also a number, then Value is mathematically added to the existing attribute. If Value is a negative number, then it is subtracted from the existing attribute.
If you use ADD
to increment or decrement a number value for an item that doesn't exist before the update, DynamoDB uses 0
as the initial value.
Similarly, if you use ADD
for an existing item to increment or decrement an attribute value that doesn't exist before the update, DynamoDB uses 0
as the initial value. For example, suppose that the item you want to update doesn't have an attribute named itemcount, but you decide to ADD
the number 3
to this attribute anyway. DynamoDB will create the itemcount attribute, set its initial value to 0
, and finally add 3
to it. The result will be a new itemcount attribute in the item, with a value of 3
.
If the existing data type is a set and if Value is also a set, then Value is added to the existing set. For example, if the attribute value is the set [1,2]
, and the ADD
action specified [3]
, then the final attribute value is [1,2,3]
. An error occurs if an ADD
action is specified for a set attribute and the attribute type specified does not match the existing set type.
Both sets must have the same primitive data type. For example, if the existing data type is a set of strings, the Value must also be a set of strings.
The ADD
action only supports Number and set data types. In addition, ADD
can only be used on top-level attributes, not nested attributes.
DELETE
- Deletes an element from a set.
If a set of values is specified, then those values are subtracted from the old set. For example, if the attribute value was the set [a,b,c]
and the DELETE
action specifies [a,c]
, then the final attribute value is [b]
. Specifying an empty set is an error.
The DELETE
action only supports Number and set data types. In addition, DELETE
can only be used on top-level attributes, not nested attributes.
You can have many actions in a single expression, such as the following: SET a=:value1, b=:value2 DELETE :value3, :value4, :value5
For more information on update expressions, see Modifying Items and Attributes in the Amazon DynamoDB Developer Guide.
UpdateExpression replaces the legacy AttributeUpdates parameter.
An expression that defines one or more attributes to be updated, the action to be performed on them, and new value(s) for them.
The following action values are available for UpdateExpression.
SET
- Adds one or more attributes and values to an item. If any of these attributes already exist, they are replaced by the new values. You can also use SET
to add or subtract from an attribute that is of type Number. For example: SET myNum = myNum + :val
SET
supports the following functions:
if_not_exists (path, operand)
- if the item does not contain an attribute at the specified path, then if_not_exists
evaluates to operand; otherwise, it evaluates to path. You can use this function to avoid overwriting an attribute that may already be present in the item.
list_append (operand, operand)
- evaluates to a list with a new element added to it. You can append the new element to the start or the end of the list by reversing the order of the operands.
These function names are case-sensitive.
REMOVE
- Removes one or more attributes from an item.
ADD
- Adds the specified value to the item, if the attribute does not already exist. If the attribute does exist, then the behavior of ADD
depends on the data type of the attribute:
If the existing attribute is a number, and if Value is also a number, then Value is mathematically added to the existing attribute. If Value is a negative number, then it is subtracted from the existing attribute.
If you use ADD
to increment or decrement a number value for an item that doesn't exist before the update, DynamoDB uses 0
as the initial value.
Similarly, if you use ADD
for an existing item to increment or decrement an attribute value that doesn't exist before the update, DynamoDB uses 0
as the initial value. For example, suppose that the item you want to update doesn't have an attribute named itemcount, but you decide to ADD
the number 3
to this attribute anyway. DynamoDB will create the itemcount attribute, set its initial value to 0
, and finally add 3
to it. The result will be a new itemcount attribute in the item, with a value of 3
.
If the existing data type is a set and if Value is also a set, then Value is added to the existing set. For example, if the attribute value is the set [1,2]
, and the ADD
action specified [3]
, then the final attribute value is [1,2,3]
. An error occurs if an ADD
action is specified for a set attribute and the attribute type specified does not match the existing set type.
Both sets must have the same primitive data type. For example, if the existing data type is a set of strings, the Value must also be a set of strings.
The ADD
action only supports Number and set data types. In addition, ADD
can only be used on top-level attributes, not nested attributes.
DELETE
- Deletes an element from a set.
If a set of values is specified, then those values are subtracted from the old set. For example, if the attribute value was the set [a,b,c]
and the DELETE
action specifies [a,c]
, then the final attribute value is [b]
. Specifying an empty set is an error.
The DELETE
action only supports set data types. In addition, DELETE
can only be used on top-level attributes, not nested attributes.
You can have many actions in a single expression, such as the following: SET a=:value1, b=:value2 DELETE :value3, :value4, :value5
For more information on update expressions, see Modifying Items and Attributes in the Amazon DynamoDB Developer Guide.
UpdateExpression replaces the legacy AttributeUpdates parameter.
Returns information about a stream, including the current status of the stream, its Amazon Resource Name (ARN), the composition of its shards, and its corresponding DynamoDB table.
You can call DescribeStream at a maximum rate of 10 times per second.
Each shard in the stream has a SequenceNumberRange
associated with it. If the SequenceNumberRange
has a StartingSequenceNumber
but no EndingSequenceNumber
, then the shard is still open (able to receive more stream records). If both StartingSequenceNumber
and EndingSequenceNumber
are present, then that shard is closed and can no longer receive more data.
Retrieves the stream records from a given shard.
Specify a shard iterator using the ShardIterator
parameter. The shard iterator specifies the position in the shard from which you want to start reading stream records sequentially. If there are no stream records available in the portion of the shard that the iterator points to, GetRecords
returns an empty list. Note that it might take multiple calls to get to a portion of the shard that contains stream records.
Returns a shard iterator. A shard iterator provides information about how to retrieve the stream records from within a shard. Use the shard iterator in a subsequent GetRecords
request to read the stream records from the shard.
A shard iterator expires 15 minutes after it is returned to the requester.
Returns an array of stream ARNs associated with the current account and endpoint. If the TableName
parameter is present, then ListStreams will return only the stream ARNs for that table.
You can call ListStreams at a maximum rate of 5 times per second.
This is the Amazon DynamoDB Streams API Reference. This guide describes the low-level API actions for accessing streams and processing stream records. For information about application development with DynamoDB Streams, see the Amazon DynamoDB Developer Guide.
Note that this document is intended for use with the following DynamoDB documentation:
The following are short descriptions of each low-level DynamoDB Streams API action, organized by function.
DescribeStream - Returns detailed information about a particular stream.
GetRecords - Retrieves the stream records from within a shard.
GetShardIterator - Returns information on how to retrieve the stream records from a shard with a given shard ID.
ListStreams - Returns a list of all the streams associated with the current AWS account and endpoint.
The primary key attribute(s) for the DynamoDB item that was modified.
", + "StreamRecord$NewImage": "The item in the DynamoDB table as it appeared after it was modified.
", + "StreamRecord$OldImage": "The item in the DynamoDB table as it appeared before it was modified.
" + } + }, + "AttributeName": { + "base": null, + "refs": { + "AttributeMap$key": null, + "MapAttributeValue$key": null + } + }, + "AttributeValue": { + "base": "Represents the data for an attribute. You can set one, and only one, of the elements.
Each attribute in an item is a name-value pair. An attribute can be single-valued or a multi-valued set. For example, a book item can have title and authors attributes. Each book has one title but can have many authors. The multi-valued attribute is a set; duplicate values are not allowed.
", + "refs": { + "AttributeMap$value": null, + "ListAttributeValue$member": null, + "MapAttributeValue$value": null + } + }, + "BinaryAttributeValue": { + "base": null, + "refs": { + "AttributeValue$B": "A Binary data type.
", + "BinarySetAttributeValue$member": null + } + }, + "BinarySetAttributeValue": { + "base": null, + "refs": { + "AttributeValue$BS": "A Binary Set data type.
" + } + }, + "BooleanAttributeValue": { + "base": null, + "refs": { + "AttributeValue$BOOL": "A Boolean data type.
" + } + }, + "Date": { + "base": null, + "refs": { + "StreamDescription$CreationRequestDateTime": "The date and time when the request to create this stream was issued.
" + } + }, + "DescribeStreamInput": { + "base": "Represents the input of a DescribeStream operation.
", + "refs": { + } + }, + "DescribeStreamOutput": { + "base": "Represents the output of a DescribeStream operation.
", + "refs": { + } + }, + "ErrorMessage": { + "base": null, + "refs": { + "ExpiredIteratorException$message": "The provided iterator exceeds the maximum age allowed.
", + "InternalServerError$message": "The server encountered an internal error trying to fulfill the request.
", + "LimitExceededException$message": "Too many operations for a given subscriber.
", + "ResourceNotFoundException$message": "The resource which is being requested does not exist.
", + "TrimmedDataAccessException$message": "The data you are trying to access has been trimmed.
" + } + }, + "ExpiredIteratorException": { + "base": "The shard iterator has expired and can no longer be used to retrieve stream records. A shard iterator expires 15 minutes after it is retrieved using the GetShardIterator action.
", + "refs": { + } + }, + "GetRecordsInput": { + "base": "Represents the input of a GetRecords operation.
", + "refs": { + } + }, + "GetRecordsOutput": { + "base": "Represents the output of a GetRecords operation.
", + "refs": { + } + }, + "GetShardIteratorInput": { + "base": "Represents the input of a GetShardIterator operation.
", + "refs": { + } + }, + "GetShardIteratorOutput": { + "base": "Represents the output of a GetShardIterator operation.
", + "refs": { + } + }, + "InternalServerError": { + "base": "An error occurred on the server side.
", + "refs": { + } + }, + "KeySchema": { + "base": null, + "refs": { + "StreamDescription$KeySchema": "The key attribute(s) of the stream's DynamoDB table.
" + } + }, + "KeySchemaAttributeName": { + "base": null, + "refs": { + "KeySchemaElement$AttributeName": "The name of a key attribute.
" + } + }, + "KeySchemaElement": { + "base": "Represents a single element of a key schema. A key schema specifies the attributes that make up the primary key of a table, or the key attributes of an index.
A KeySchemaElement represents exactly one attribute of the primary key. For example, a hash type primary key would be represented by one KeySchemaElement. A hash-and-range type primary key would require one KeySchemaElement for the hash attribute, and another KeySchemaElement for the range attribute.
", + "refs": { + "KeySchema$member": null + } + }, + "KeyType": { + "base": null, + "refs": { + "KeySchemaElement$KeyType": "The role that this key attribute will assume. Valid values are HASH and RANGE.
" + } + }, + "LimitExceededException": { + "base": "Your request rate is too high. The AWS SDKs for DynamoDB automatically retry requests that receive this exception. Your request is eventually successful, unless your retry queue is too large to finish. Reduce the frequency of requests and use exponential backoff. For more information, go to Error Retries and Exponential Backoff in the Amazon DynamoDB Developer Guide.
", + "refs": { + } + }, + "ListAttributeValue": { + "base": null, + "refs": { + "AttributeValue$L": "A List data type.
" + } + }, + "ListStreamsInput": { + "base": "Represents the input of a ListStreams operation.
", + "refs": { + } + }, + "ListStreamsOutput": { + "base": "Represents the output of a ListStreams operation.
", + "refs": { + } + }, + "MapAttributeValue": { + "base": null, + "refs": { + "AttributeValue$M": "A Map data type.
" + } + }, + "NullAttributeValue": { + "base": null, + "refs": { + "AttributeValue$NULL": "A Null data type.
" + } + }, + "NumberAttributeValue": { + "base": null, + "refs": { + "AttributeValue$N": "A Number data type.
", + "NumberSetAttributeValue$member": null + } + }, + "NumberSetAttributeValue": { + "base": null, + "refs": { + "AttributeValue$NS": "A Number Set data type.
" + } + }, + "OperationType": { + "base": null, + "refs": { + "Record$eventName": "The type of data modification that was performed on the DynamoDB table:
INSERT
- a new item was added to the table.
MODIFY
- one or more of the item's attributes were updated.
REMOVE
- the item was deleted from the table.
The maximum number of shard objects to return. The upper limit is 100.
", + "GetRecordsInput$Limit": "The maximum number of records to return from the shard. The upper limit is 1000.
", + "ListStreamsInput$Limit": "The maximum number of streams to return. The upper limit is 100.
" + } + }, + "PositiveLongObject": { + "base": null, + "refs": { + "StreamRecord$SizeBytes": "The size of the stream record, in bytes.
" + } + }, + "Record": { + "base": "A description of a unique event within a stream.
", + "refs": { + "RecordList$member": null + } + }, + "RecordList": { + "base": null, + "refs": { + "GetRecordsOutput$Records": "The stream records from the shard, which were retrieved using the shard iterator.
" + } + }, + "ResourceNotFoundException": { + "base": "The operation tried to access a nonexistent stream.
", + "refs": { + } + }, + "SequenceNumber": { + "base": null, + "refs": { + "GetShardIteratorInput$SequenceNumber": "The sequence number of a stream record in the shard from which to start reading.
", + "SequenceNumberRange$StartingSequenceNumber": "The first sequence number.
", + "SequenceNumberRange$EndingSequenceNumber": "The last sequence number.
", + "StreamRecord$SequenceNumber": "The sequence number of the stream record.
" + } + }, + "SequenceNumberRange": { + "base": "The beginning and ending sequence numbers for the stream records contained within a shard.
", + "refs": { + "Shard$SequenceNumberRange": "The range of possible sequence numbers for the shard.
" + } + }, + "Shard": { + "base": "A uniquely identified group of stream records within a stream.
", + "refs": { + "ShardDescriptionList$member": null + } + }, + "ShardDescriptionList": { + "base": null, + "refs": { + "StreamDescription$Shards": "The shards that comprise the stream.
" + } + }, + "ShardId": { + "base": null, + "refs": { + "DescribeStreamInput$ExclusiveStartShardId": "The shard ID of the first item that this operation will evaluate. Use the value that was returned for LastEvaluatedShardId
in the previous operation.
The identifier of the shard. The iterator will be returned for this shard ID.
", + "Shard$ShardId": "The system-generated identifier for this shard.
", + "Shard$ParentShardId": "The shard ID of the current shard's parent.
", + "StreamDescription$LastEvaluatedShardId": "The shard ID of the item where the operation stopped, inclusive of the previous result set. Use this value to start a new operation, excluding this value in the new request.
If LastEvaluatedShardId
is empty, then the \"last page\" of results has been processed and there is currently no more data to be retrieved.
If LastEvaluatedShardId
is not empty, it does not necessarily mean that there is more data in the result set. The only way to know when you have reached the end of the result set is when LastEvaluatedShardId
is empty.
A shard iterator that was retrieved from a previous GetShardIterator operation. This iterator can be used to access the stream records in this shard.
", + "GetRecordsOutput$NextShardIterator": "The next position in the shard from which to start sequentially reading stream records. If set to null
, the shard has been closed and the requested iterator will not return any more data.
The position in the shard from which to start reading stream records sequentially. A shard iterator specifies this position using the sequence number of a stream record in a shard.
" + } + }, + "ShardIteratorType": { + "base": null, + "refs": { + "GetShardIteratorInput$ShardIteratorType": "Determines how the shard iterator is used to start reading stream records from the shard:
AT_SEQUENCE_NUMBER
- Start reading exactly from the position denoted by a specific sequence number.
AFTER_SEQUENCE_NUMBER
- Start reading right after the position denoted by a specific sequence number.
TRIM_HORIZON
- Start reading at the last (untrimmed) stream record, which is the oldest record in the shard. In DynamoDB Streams, there is a 24 hour limit on data retention. Stream records whose age exceeds this limit are subject to removal (trimming) from the stream.
LATEST
- Start reading just after the most recent stream record in the shard, so that you always read the most recent data in the shard.
Represents all of the data describing a particular stream.
", + "refs": { + "StreamList$member": null + } + }, + "StreamArn": { + "base": null, + "refs": { + "DescribeStreamInput$StreamArn": "The Amazon Resource Name (ARN) for the stream.
", + "GetShardIteratorInput$StreamArn": "The Amazon Resource Name (ARN) for the stream.
", + "ListStreamsInput$ExclusiveStartStreamArn": "The ARN (Amazon Resource Name) of the first item that this operation will evaluate. Use the value that was returned for LastEvaluatedStreamArn
in the previous operation.
The stream ARN of the item where the operation stopped, inclusive of the previous result set. Use this value to start a new operation, excluding this value in the new request.
If LastEvaluatedStreamArn
is empty, then the \"last page\" of results has been processed and there is no more data to be retrieved.
If LastEvaluatedStreamArn
is not empty, it does not necessarily mean that there is more data in the result set. The only way to know when you have reached the end of the result set is when LastEvaluatedStreamArn
is empty.
The Amazon Resource Name (ARN) for the stream.
", + "StreamDescription$StreamArn": "The Amazon Resource Name (ARN) for the stream.
" + } + }, + "StreamDescription": { + "base": "Represents all of the data describing a particular stream.
", + "refs": { + "DescribeStreamOutput$StreamDescription": "A complete description of the stream, including its creation date and time, the DynamoDB table associated with the stream, the shard IDs within the stream, and the beginning and ending sequence numbers of stream records within the shards.
" + } + }, + "StreamList": { + "base": null, + "refs": { + "ListStreamsOutput$Streams": "A list of stream descriptors associated with the current account and endpoint.
" + } + }, + "StreamRecord": { + "base": "A description of a single data modification that was performed on an item in a DynamoDB table.
", + "refs": { + "Record$dynamodb": "The main body of the stream record, containing all of the DynamoDB-specific fields.
" + } + }, + "StreamStatus": { + "base": null, + "refs": { + "StreamDescription$StreamStatus": "Indicates the current status of the stream:
ENABLING
- Streams is currently being enabled on the DynamoDB table.
ENABLED
- the stream is enabled.
DISABLING
- Streams is currently being disabled on the DynamoDB table.
DISABLED
- the stream is disabled.
Indicates the format of the records within this stream:
KEYS_ONLY
- only the key attributes of items that were modified in the DynamoDB table.
NEW_IMAGE
- the entire item from the table, as it appeared after it was modified.
OLD_IMAGE
- the entire item from the table, as it appeared before it was modified.
NEW_AND_OLD_IMAGES
- both the new and the old images of the items from the table.
The type of data from the modified DynamoDB item that was captured in this stream record:
KEYS_ONLY
- only the key attributes of the modified item.
NEW_IMAGE
- the entire item, as it appears after it was modified.
OLD_IMAGE
- the entire item, as it appeared before it was modified.
NEW_AND_OLD_IMAGES
- both the new and the old item images of the item.
A globally unique identifier for the event that was recorded in this stream record.
", + "Record$eventVersion": "The version number of the stream record format. Currently, this is 1.0.
", + "Record$eventSource": "The AWS service from which the stream record originated. For DynamoDB Streams, this is aws:dynamodb.
", + "Record$awsRegion": "The region in which the GetRecords request was received.
", + "Stream$StreamLabel": "A timestamp, in ISO 8601 format, for this stream.
Note that LatestStreamLabel is not a unique identifier for the stream, because it is possible that a stream from another table might have the same timestamp. However, the combination of the following three elements is guaranteed to be unique:
the AWS customer ID.
the table name
the StreamLabel
A timestamp, in ISO 8601 format, for this stream.
Note that LatestStreamLabel is not a unique identifier for the stream, because it is possible that a stream from another table might have the same timestamp. However, the combination of the following three elements is guaranteed to be unique:
the AWS customer ID.
the table name
the StreamLabel
A String data type.
", + "StringSetAttributeValue$member": null + } + }, + "StringSetAttributeValue": { + "base": null, + "refs": { + "AttributeValue$SS": "A String Set data type.
" + } + }, + "TableName": { + "base": null, + "refs": { + "ListStreamsInput$TableName": "If this parameter is provided, then only the streams associated with this table name are returned.
", + "Stream$TableName": "The DynamoDB table with which the stream is associated.
", + "StreamDescription$TableName": "The DynamoDB table with which the stream is associated.
" + } + }, + "TrimmedDataAccessException": { + "base": "The operation attempted to read past the oldest stream record in a shard.
In DynamoDB Streams, there is a 24 hour limit on data retention. Stream records whose age exceeds this limit are subject to removal (trimming) from the stream. You might receive a TrimmedDataAccessException if: