diff --git a/src/DirectoryService/DirectoryServiceClient.php b/src/DirectoryService/DirectoryServiceClient.php new file mode 100644 index 0000000000..b9816e7ffe --- /dev/null +++ b/src/DirectoryService/DirectoryServiceClient.php @@ -0,0 +1,9 @@
diff --git a/src/data/cloudformation/2010-05-15/docs-2.json b/src/data/cloudformation/2010-05-15/docs-2.json
+Cancels an update on the specified stack. If the call completes successfully, the stack will roll back the update and revert to the previous stack configuration.

Only stacks that are in the UPDATE_IN_PROGRESS state can be canceled.", "CreateStack": "

Creates a stack as specified in the template. After the call completes successfully, the stack creation starts. You can check the status of the stack via the DescribeStacks API.
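For reference, the StackCreateComplete waiter added by the waiters-2.json file later in this diff can drive this flow from the PHP SDK. A minimal sketch, with a placeholder stack name and template URL:

    <?php
    require 'vendor/autoload.php';

    use Aws\CloudFormation\CloudFormationClient;

    $cfn = new CloudFormationClient([
        'region'  => 'us-east-1',
        'version' => '2010-05-15',
    ]);

    // CAPABILITY_IAM is only needed when the template declares the IAM
    // resources listed further down in this file.
    $cfn->createStack([
        'StackName'    => 'my-stack',
        'TemplateURL'  => 'https://s3.amazonaws.com/my-bucket/template.json',
        'Capabilities' => ['CAPABILITY_IAM'],
    ]);

    // Polls DescribeStacks every 30s (up to 50 attempts) until CREATE_COMPLETE.
    $cfn->waitUntil('StackCreateComplete', ['StackName' => 'my-stack']);
    echo $cfn->describeStacks(['StackName' => 'my-stack'])['Stacks'][0]['StackStatus'], "\n";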

", @@ -20,6 +21,18 @@ }, "service": "AWS CloudFormation

AWS CloudFormation enables you to create and manage AWS infrastructure deployments predictably and repeatedly. AWS CloudFormation helps you leverage AWS products such as Amazon EC2, EBS, Amazon SNS, ELB, and Auto Scaling to build highly reliable, highly scalable, cost-effective applications without worrying about creating and configuring the underlying AWS infrastructure.

With AWS CloudFormation, you declare all of your resources and dependencies in a template file. The template defines a collection of resources as a single unit called a stack. AWS CloudFormation creates and deletes all member resources of the stack together and manages all dependencies between the resources for you.

For more information about this product, go to the CloudFormation Product Page.

AWS CloudFormation makes use of other AWS products. If you need additional technical information about a specific AWS product, you can find the product's technical documentation at http://aws.amazon.com/documentation/.

", "shapes": { + "AllowedValue": { + "base": null, + "refs": { + "AllowedValues$member": null + } + }, + "AllowedValues": { + "base": null, + "refs": { + "ParameterConstraints$AllowedValues": "

A list of values that are permitted for a parameter.

" + } + }, "AlreadyExistsException": { "base": "

Resource with the name requested already exists.

", "refs": { @@ -33,18 +46,18 @@ "Capabilities": { "base": null, "refs": { - "CreateStackInput$Capabilities": "

A list of capabilities that you must specify before AWS CloudFormation can create or update certain stacks. Some stack templates might include resources that can affect permissions in your AWS account. For those stacks, you must explicitly acknowledge their capabilities by specifying this parameter.

Currently, the only valid value is CAPABILITY_IAM, which is required for the following resources: AWS::CloudFormation::Stack, AWS::IAM::AccessKey, AWS::IAM::Group, AWS::IAM::InstanceProfile, AWS::IAM::Policy, AWS::IAM::Role, AWS::IAM::User, and AWS::IAM::UserToGroupAddition. If your stack template contains these resources, we recommend that you review any permissions associated with them. If you don't specify this parameter, this action returns an InsufficientCapabilities error.

", + "CreateStackInput$Capabilities": "

A list of capabilities that you must specify before AWS CloudFormation can create or update certain stacks. Some stack templates might include resources that can affect permissions in your AWS account. For those stacks, you must explicitly acknowledge their capabilities by specifying this parameter.

Currently, the only valid value is CAPABILITY_IAM, which is required for the following resources: AWS::IAM::AccessKey, AWS::IAM::Group, AWS::IAM::InstanceProfile, AWS::IAM::Policy, AWS::IAM::Role, AWS::IAM::User, and AWS::IAM::UserToGroupAddition. If your stack template contains these resources, we recommend that you review any permissions associated with them. If you don't specify this parameter, this action returns an InsufficientCapabilities error.

", "GetTemplateSummaryOutput$Capabilities": "

The capabilities found within the template. Currently, AWS CloudFormation supports only the CAPABILITY_IAM capability. If your template contains IAM resources, you must specify the CAPABILITY_IAM value for this parameter when you use the CreateStack or UpdateStack actions with your template; otherwise, those actions return an InsufficientCapabilities error.

", "Stack$Capabilities": "

The capabilities allowed in the stack.

", - "UpdateStackInput$Capabilities": "

A list of capabilities that you must specify before AWS CloudFormation can create or update certain stacks. Some stack templates might include resources that can affect permissions in your AWS account. For those stacks, you must explicitly acknowledge their capabilities by specifying this parameter. Currently, the only valid value is CAPABILITY_IAM, which is required for the following resources: AWS::CloudFormation::Stack, AWS::IAM::AccessKey, AWS::IAM::Group, AWS::IAM::InstanceProfile, AWS::IAM::Policy, AWS::IAM::Role, AWS::IAM::User, and AWS::IAM::UserToGroupAddition. If your stack template contains these resources, we recommend that you review any permissions associated with them. If you don't specify this parameter, this action returns an InsufficientCapabilities error.

", + "UpdateStackInput$Capabilities": "

A list of capabilities that you must specify before AWS CloudFormation can create or update certain stacks. Some stack templates might include resources that can affect permissions in your AWS account. For those stacks, you must explicitly acknowledge their capabilities by specifying this parameter. Currently, the only valid value is CAPABILITY_IAM, which is required for the following resources: AWS::IAM::AccessKey, AWS::IAM::Group, AWS::IAM::InstanceProfile, AWS::IAM::Policy, AWS::IAM::Role, AWS::IAM::User, and AWS::IAM::UserToGroupAddition. If your stack template contains these resources, we recommend that you review any permissions associated with them. If you don't specify this parameter, this action returns an InsufficientCapabilities error.

", "ValidateTemplateOutput$Capabilities": "

The capabilities found within the template. Currently, AWS CloudFormation supports only the CAPABILITY_IAM capability. If your template contains IAM resources, you must specify the CAPABILITY_IAM value for this parameter when you use the CreateStack or UpdateStack actions with your template; otherwise, those actions return an InsufficientCapabilities error.

" } }, "CapabilitiesReason": { "base": null, "refs": { - "GetTemplateSummaryOutput$CapabilitiesReason": "

The capabilities reason found within the template.

", - "ValidateTemplateOutput$CapabilitiesReason": "

The capabilities reason found within the template.

" + "GetTemplateSummaryOutput$CapabilitiesReason": "

The list of resources that generated the values in the Capabilities response element.

", + "ValidateTemplateOutput$CapabilitiesReason": "

The list of resources that generated the values in the Capabilities response element.

" } }, "Capability": { @@ -239,6 +252,7 @@ "Metadata": { "base": null, "refs": { + "GetTemplateSummaryOutput$Metadata": "

The value that is defined for the Metadata property of the template.

", "StackResourceDetail$Metadata": "

The JSON format content of the Metadata attribute declared for the resource. For more information, see Metadata Attribute in the AWS CloudFormation User Guide.

" } }, @@ -312,6 +326,12 @@ "Parameters$member": null } }, + "ParameterConstraints": { + "base": "

A set of criteria that AWS CloudFormation uses to validate parameter values. Although other constraints might be defined in the stack template, AWS CloudFormation returns only the AllowedValues property.

", + "refs": { + "ParameterDeclaration$ParameterConstraints": "

The criteria that AWS CloudFormation uses to validate parameter values.

" + } + }, "ParameterDeclaration": { "base": "

The ParameterDeclaration data type.

", "refs": { @@ -327,7 +347,7 @@ "ParameterKey": { "base": null, "refs": { - "Parameter$ParameterKey": "

The key associated with the parameter.

", + "Parameter$ParameterKey": "

The key associated with the parameter. If you don't specify a key and value for a particular parameter, AWS CloudFormation uses the default value that is specified in your template.

", "ParameterDeclaration$ParameterKey": "

The name that is associated with the parameter.

", "TemplateParameter$ParameterKey": "

The name associated with the parameter.

" } @@ -352,7 +372,7 @@ "CreateStackInput$Parameters": "

A list of Parameter structures that specify input parameters for the stack.

", "EstimateTemplateCostInput$Parameters": "

A list of Parameter structures that specify input parameters.

", "Stack$Parameters": "

A list of Parameter structures.

", - "UpdateStackInput$Parameters": "

A list of Parameter structures that specify input parameters for the stack.

" + "UpdateStackInput$Parameters": "

A list of Parameter structures that specify input parameters for the stack. For more information, see the Parameter data type.

" } }, "PhysicalResourceId": { @@ -634,11 +654,11 @@ "TemplateURL": { "base": null, "refs": { - "CreateStackInput$TemplateURL": "

Location of file containing the template body. The URL must point to a template (max size: 307,200 bytes) located in an S3 bucket in the same region as the stack. For more information, go to the Template Anatomy in the AWS CloudFormation User Guide.

Conditional: You must specify either the TemplateBody or the TemplateURL parameter, but not both.

", + "CreateStackInput$TemplateURL": "

Location of file containing the template body. The URL must point to a template (max size: 460,800 bytes) located in an S3 bucket in the same region as the stack. For more information, go to the Template Anatomy in the AWS CloudFormation User Guide.

Conditional: You must specify either the TemplateBody or the TemplateURL parameter, but not both.

", "EstimateTemplateCostInput$TemplateURL": "

Location of file containing the template body. The URL must point to a template located in an S3 bucket in the same region as the stack. For more information, go to Template Anatomy in the AWS CloudFormation User Guide.

Conditional: You must pass TemplateURL or TemplateBody. If both are passed, only TemplateBody is used.

", - "GetTemplateSummaryInput$TemplateURL": "

Location of file containing the template body. The URL must point to a template (max size: 307,200 bytes) located in an Amazon S3 bucket. For more information about templates, see Template Anatomy in the AWS CloudFormation User Guide.

Conditional: You must specify only one of the following parameters: StackName, TemplateBody, or TemplateURL.

", + "GetTemplateSummaryInput$TemplateURL": "

Location of file containing the template body. The URL must point to a template (max size: 460,800 bytes) located in an Amazon S3 bucket. For more information about templates, see Template Anatomy in the AWS CloudFormation User Guide.

Conditional: You must specify only one of the following parameters: StackName, TemplateBody, or TemplateURL.

", "UpdateStackInput$TemplateURL": "

Location of file containing the template body. The URL must point to a template located in an S3 bucket in the same region as the stack. For more information, go to Template Anatomy in the AWS CloudFormation User Guide.

Conditional: You must specify either the TemplateBody or the TemplateURL parameter, but not both.

", - "ValidateTemplateInput$TemplateURL": "

Location of file containing the template body. The URL must point to a template (max size: 307,200 bytes) located in an S3 bucket in the same region as the stack. For more information, go to Template Anatomy in the AWS CloudFormation User Guide.

Conditional: You must pass TemplateURL or TemplateBody. If both are passed, only TemplateBody is used.

" + "ValidateTemplateInput$TemplateURL": "

Location of file containing the template body. The URL must point to a template (max size: 460,800 bytes) located in an S3 bucket in the same region as the stack. For more information, go to Template Anatomy in the AWS CloudFormation User Guide.

Conditional: You must pass TemplateURL or TemplateBody. If both are passed, only TemplateBody is used.

" } }, "TimeoutMinutes": { @@ -682,7 +702,7 @@ "UsePreviousValue": { "base": null, "refs": { - "Parameter$UsePreviousValue": "

During a stack update, use the existing parameter value that is being used for the stack.

" + "Parameter$UsePreviousValue": "

During a stack update, use the existing parameter value that the stack is using for a given parameter key. If you specify true, do not specify a parameter value.
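As a sketch of how UsePreviousValue pairs with UsePreviousTemplate on an UpdateStack call from the PHP SDK (the stack and parameter names below are placeholders):

    <?php
    require 'vendor/autoload.php';

    use Aws\CloudFormation\CloudFormationClient;

    $cfn = new CloudFormationClient(['region' => 'us-east-1', 'version' => '2010-05-15']);

    $cfn->updateStack([
        'StackName'           => 'my-stack',
        'UsePreviousTemplate' => true,          // keep the stored template
        'Parameters'          => [
            // Override one parameter, keep the stack's current value for another.
            ['ParameterKey' => 'InstanceType', 'ParameterValue' => 'm3.large'],
            ['ParameterKey' => 'KeyName', 'UsePreviousValue' => true],
        ],
    ]);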

" } }, "ValidateTemplateInput": { diff --git a/src/data/cloudformation/2010-05-15/waiters-2.json b/src/data/cloudformation/2010-05-15/waiters-2.json new file mode 100644 index 0000000000..8daeb7c762 --- /dev/null +++ b/src/data/cloudformation/2010-05-15/waiters-2.json @@ -0,0 +1,70 @@ +{ + "version": 2, + "waiters": { + "StackCreateComplete": { + "delay": 30, + "operation": "DescribeStacks", + "maxAttempts": 50, + "description": "Wait until stack status is CREATE_COMPLETE.", + "acceptors": [ + { + "expected": "CREATE_COMPLETE", + "matcher": "pathAll", + "state": "success", + "argument": "Stacks[].StackStatus" + }, + { + "expected": "CREATE_FAILED", + "matcher": "pathAny", + "state": "failure", + "argument": "Stacks[].StackStatus" + } + ] + }, + "StackDeleteComplete": { + "delay": 30, + "operation": "DescribeStacks", + "maxAttempts": 25, + "description": "Wait until stack status is DELETE_COMPLETE.", + "acceptors": [ + { + "expected": "DELETE_COMPLETE", + "matcher": "pathAll", + "state": "success", + "argument": "Stacks[].StackStatus" + }, + { + "expected": "ValidationError", + "matcher": "error", + "state": "success" + }, + { + "expected": "DELETE_FAILED", + "matcher": "pathAny", + "state": "failure", + "argument": "Stacks[].StackStatus" + } + ] + }, + "StackUpdateComplete": { + "delay": 30, + "operation": "DescribeStacks", + "maxAttempts": 5, + "description": "Wait until stack status is UPDATE_COMPLETE.", + "acceptors": [ + { + "expected": "UPDATE_COMPLETE", + "matcher": "pathAll", + "state": "success", + "argument": "Stacks[].StackStatus" + }, + { + "expected": "UPDATE_FAILED", + "matcher": "pathAny", + "state": "failure", + "argument": "Stacks[].StackStatus" + } + ] + } + } +} diff --git a/src/data/ds/2015-04-16/api-2.json b/src/data/ds/2015-04-16/api-2.json new file mode 100644 index 0000000000..ee7759fb45 --- /dev/null +++ b/src/data/ds/2015-04-16/api-2.json @@ -0,0 +1,1260 @@ +{ + "version":"2.0", + "metadata":{ + "apiVersion":"2015-04-16", + "endpointPrefix":"ds", + "jsonVersion":"1.1", + "serviceAbbreviation":"Directory Service", + "serviceFullName":"AWS Directory Service", + "signatureVersion":"v4", + "targetPrefix":"DirectoryService_20150416", + "protocol":"json" + }, + "operations":{ + "ConnectDirectory":{ + "name":"ConnectDirectory", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ConnectDirectoryRequest"}, + "output":{"shape":"ConnectDirectoryResult"}, + "errors":[ + { + "shape":"DirectoryLimitExceededException", + "exception":true + }, + { + "shape":"InvalidParameterException", + "exception":true + }, + { + "shape":"ClientException", + "exception":true + }, + { + "shape":"ServiceException", + "exception":true, + "fault":true + } + ] + }, + "CreateAlias":{ + "name":"CreateAlias", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateAliasRequest"}, + "output":{"shape":"CreateAliasResult"}, + "errors":[ + { + "shape":"EntityAlreadyExistsException", + "exception":true + }, + { + "shape":"EntityDoesNotExistException", + "exception":true + }, + { + "shape":"InvalidParameterException", + "exception":true + }, + { + "shape":"ClientException", + "exception":true + }, + { + "shape":"ServiceException", + "exception":true, + "fault":true + } + ] + }, + "CreateComputer":{ + "name":"CreateComputer", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateComputerRequest"}, + "output":{"shape":"CreateComputerResult"}, + "errors":[ + { + "shape":"AuthenticationFailedException", + "exception":true + }, + { + 
"shape":"DirectoryUnavailableException", + "exception":true + }, + { + "shape":"EntityAlreadyExistsException", + "exception":true + }, + { + "shape":"EntityDoesNotExistException", + "exception":true + }, + { + "shape":"InvalidParameterException", + "exception":true + }, + { + "shape":"UnsupportedOperationException", + "exception":true + }, + { + "shape":"ClientException", + "exception":true + }, + { + "shape":"ServiceException", + "exception":true, + "fault":true + } + ] + }, + "CreateDirectory":{ + "name":"CreateDirectory", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateDirectoryRequest"}, + "output":{"shape":"CreateDirectoryResult"}, + "errors":[ + { + "shape":"DirectoryLimitExceededException", + "exception":true + }, + { + "shape":"InvalidParameterException", + "exception":true + }, + { + "shape":"ClientException", + "exception":true + }, + { + "shape":"ServiceException", + "exception":true, + "fault":true + } + ] + }, + "CreateSnapshot":{ + "name":"CreateSnapshot", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateSnapshotRequest"}, + "output":{"shape":"CreateSnapshotResult"}, + "errors":[ + { + "shape":"EntityDoesNotExistException", + "exception":true + }, + { + "shape":"InvalidParameterException", + "exception":true + }, + { + "shape":"SnapshotLimitExceededException", + "exception":true + }, + { + "shape":"ClientException", + "exception":true + }, + { + "shape":"ServiceException", + "exception":true, + "fault":true + } + ] + }, + "DeleteDirectory":{ + "name":"DeleteDirectory", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteDirectoryRequest"}, + "output":{"shape":"DeleteDirectoryResult"}, + "errors":[ + { + "shape":"EntityDoesNotExistException", + "exception":true + }, + { + "shape":"ClientException", + "exception":true + }, + { + "shape":"ServiceException", + "exception":true, + "fault":true + } + ] + }, + "DeleteSnapshot":{ + "name":"DeleteSnapshot", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteSnapshotRequest"}, + "output":{"shape":"DeleteSnapshotResult"}, + "errors":[ + { + "shape":"EntityDoesNotExistException", + "exception":true + }, + { + "shape":"InvalidParameterException", + "exception":true + }, + { + "shape":"ClientException", + "exception":true + }, + { + "shape":"ServiceException", + "exception":true, + "fault":true + } + ] + }, + "DescribeDirectories":{ + "name":"DescribeDirectories", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeDirectoriesRequest"}, + "output":{"shape":"DescribeDirectoriesResult"}, + "errors":[ + { + "shape":"EntityDoesNotExistException", + "exception":true + }, + { + "shape":"InvalidParameterException", + "exception":true + }, + { + "shape":"InvalidNextTokenException", + "exception":true + }, + { + "shape":"ClientException", + "exception":true + }, + { + "shape":"ServiceException", + "exception":true, + "fault":true + } + ] + }, + "DescribeSnapshots":{ + "name":"DescribeSnapshots", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeSnapshotsRequest"}, + "output":{"shape":"DescribeSnapshotsResult"}, + "errors":[ + { + "shape":"EntityDoesNotExistException", + "exception":true + }, + { + "shape":"InvalidParameterException", + "exception":true + }, + { + "shape":"InvalidNextTokenException", + "exception":true + }, + { + "shape":"ClientException", + "exception":true + }, + { + "shape":"ServiceException", + "exception":true, + "fault":true + } + ] + }, + 
"DisableRadius":{ + "name":"DisableRadius", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DisableRadiusRequest"}, + "output":{"shape":"DisableRadiusResult"}, + "errors":[ + { + "shape":"EntityDoesNotExistException", + "exception":true + }, + { + "shape":"ClientException", + "exception":true + }, + { + "shape":"ServiceException", + "exception":true, + "fault":true + } + ] + }, + "DisableSso":{ + "name":"DisableSso", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DisableSsoRequest"}, + "output":{"shape":"DisableSsoResult"}, + "errors":[ + { + "shape":"EntityDoesNotExistException", + "exception":true + }, + { + "shape":"InsufficientPermissionsException", + "exception":true + }, + { + "shape":"AuthenticationFailedException", + "exception":true + }, + { + "shape":"ClientException", + "exception":true + }, + { + "shape":"ServiceException", + "exception":true, + "fault":true + } + ] + }, + "EnableRadius":{ + "name":"EnableRadius", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"EnableRadiusRequest"}, + "output":{"shape":"EnableRadiusResult"}, + "errors":[ + { + "shape":"InvalidParameterException", + "exception":true + }, + { + "shape":"EntityAlreadyExistsException", + "exception":true + }, + { + "shape":"EntityDoesNotExistException", + "exception":true + }, + { + "shape":"ClientException", + "exception":true + }, + { + "shape":"ServiceException", + "exception":true, + "fault":true + } + ] + }, + "EnableSso":{ + "name":"EnableSso", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"EnableSsoRequest"}, + "output":{"shape":"EnableSsoResult"}, + "errors":[ + { + "shape":"EntityDoesNotExistException", + "exception":true + }, + { + "shape":"InsufficientPermissionsException", + "exception":true + }, + { + "shape":"AuthenticationFailedException", + "exception":true + }, + { + "shape":"ClientException", + "exception":true + }, + { + "shape":"ServiceException", + "exception":true, + "fault":true + } + ] + }, + "GetDirectoryLimits":{ + "name":"GetDirectoryLimits", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetDirectoryLimitsRequest"}, + "output":{"shape":"GetDirectoryLimitsResult"}, + "errors":[ + { + "shape":"EntityDoesNotExistException", + "exception":true + }, + { + "shape":"ClientException", + "exception":true + }, + { + "shape":"ServiceException", + "exception":true, + "fault":true + } + ] + }, + "GetSnapshotLimits":{ + "name":"GetSnapshotLimits", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetSnapshotLimitsRequest"}, + "output":{"shape":"GetSnapshotLimitsResult"}, + "errors":[ + { + "shape":"EntityDoesNotExistException", + "exception":true + }, + { + "shape":"ClientException", + "exception":true + }, + { + "shape":"ServiceException", + "exception":true, + "fault":true + } + ] + }, + "RestoreFromSnapshot":{ + "name":"RestoreFromSnapshot", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"RestoreFromSnapshotRequest"}, + "output":{"shape":"RestoreFromSnapshotResult"}, + "errors":[ + { + "shape":"EntityDoesNotExistException", + "exception":true + }, + { + "shape":"InvalidParameterException", + "exception":true + }, + { + "shape":"ClientException", + "exception":true + }, + { + "shape":"ServiceException", + "exception":true, + "fault":true + } + ] + }, + "UpdateRadius":{ + "name":"UpdateRadius", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateRadiusRequest"}, + 
"output":{"shape":"UpdateRadiusResult"}, + "errors":[ + { + "shape":"InvalidParameterException", + "exception":true + }, + { + "shape":"EntityDoesNotExistException", + "exception":true + }, + { + "shape":"ClientException", + "exception":true + }, + { + "shape":"ServiceException", + "exception":true, + "fault":true + } + ] + } + }, + "shapes":{ + "AccessUrl":{ + "type":"string", + "min":1, + "max":128 + }, + "AliasName":{ + "type":"string", + "min":1, + "max":62, + "pattern":"^(?!d-)([\\da-zA-Z]+)([-]*[\\da-zA-Z])*" + }, + "Attribute":{ + "type":"structure", + "members":{ + "Name":{"shape":"AttributeName"}, + "Value":{"shape":"AttributeValue"} + } + }, + "AttributeName":{ + "type":"string", + "min":1 + }, + "AttributeValue":{"type":"string"}, + "Attributes":{ + "type":"list", + "member":{"shape":"Attribute"} + }, + "AuthenticationFailedException":{ + "type":"structure", + "members":{ + "Message":{"shape":"ExceptionMessage"}, + "RequestId":{"shape":"RequestId"} + }, + "exception":true + }, + "AvailabilityZone":{"type":"string"}, + "AvailabilityZones":{ + "type":"list", + "member":{"shape":"AvailabilityZone"} + }, + "ClientException":{ + "type":"structure", + "members":{ + "Message":{"shape":"ExceptionMessage"}, + "RequestId":{"shape":"RequestId"} + }, + "exception":true + }, + "CloudOnlyDirectoriesLimitReached":{"type":"boolean"}, + "Computer":{ + "type":"structure", + "members":{ + "ComputerId":{"shape":"SID"}, + "ComputerName":{"shape":"ComputerName"}, + "ComputerAttributes":{"shape":"Attributes"} + } + }, + "ComputerName":{ + "type":"string", + "min":1, + "max":15 + }, + "ComputerPassword":{ + "type":"string", + "min":8, + "max":64, + "pattern":"[\\u0020-\\u00FF]+", + "sensitive":true + }, + "ConnectDirectoryRequest":{ + "type":"structure", + "required":[ + "Name", + "Password", + "Size", + "ConnectSettings" + ], + "members":{ + "Name":{"shape":"DirectoryName"}, + "ShortName":{"shape":"DirectoryShortName"}, + "Password":{"shape":"ConnectPassword"}, + "Description":{"shape":"Description"}, + "Size":{"shape":"DirectorySize"}, + "ConnectSettings":{"shape":"DirectoryConnectSettings"} + } + }, + "ConnectDirectoryResult":{ + "type":"structure", + "members":{ + "DirectoryId":{"shape":"DirectoryId"} + } + }, + "ConnectPassword":{ + "type":"string", + "min":1, + "max":128, + "sensitive":true + }, + "ConnectedDirectoriesLimitReached":{"type":"boolean"}, + "CreateAliasRequest":{ + "type":"structure", + "required":[ + "DirectoryId", + "Alias" + ], + "members":{ + "DirectoryId":{"shape":"DirectoryId"}, + "Alias":{"shape":"AliasName"} + } + }, + "CreateAliasResult":{ + "type":"structure", + "members":{ + "DirectoryId":{"shape":"DirectoryId"}, + "Alias":{"shape":"AliasName"} + } + }, + "CreateComputerRequest":{ + "type":"structure", + "required":[ + "DirectoryId", + "ComputerName", + "Password" + ], + "members":{ + "DirectoryId":{"shape":"DirectoryId"}, + "ComputerName":{"shape":"ComputerName"}, + "Password":{"shape":"ComputerPassword"}, + "OrganizationalUnitDistinguishedName":{"shape":"OrganizationalUnitDN"}, + "ComputerAttributes":{"shape":"Attributes"} + } + }, + "CreateComputerResult":{ + "type":"structure", + "members":{ + "Computer":{"shape":"Computer"} + } + }, + "CreateDirectoryRequest":{ + "type":"structure", + "required":[ + "Name", + "Password", + "Size" + ], + "members":{ + "Name":{"shape":"DirectoryName"}, + "ShortName":{"shape":"DirectoryShortName"}, + "Password":{"shape":"Password"}, + "Description":{"shape":"Description"}, + "Size":{"shape":"DirectorySize"}, + 
"VpcSettings":{"shape":"DirectoryVpcSettings"} + } + }, + "CreateDirectoryResult":{ + "type":"structure", + "members":{ + "DirectoryId":{"shape":"DirectoryId"} + } + }, + "CreateSnapshotRequest":{ + "type":"structure", + "required":["DirectoryId"], + "members":{ + "DirectoryId":{"shape":"DirectoryId"}, + "Name":{"shape":"SnapshotName"} + } + }, + "CreateSnapshotResult":{ + "type":"structure", + "members":{ + "SnapshotId":{"shape":"SnapshotId"} + } + }, + "DeleteDirectoryRequest":{ + "type":"structure", + "required":["DirectoryId"], + "members":{ + "DirectoryId":{"shape":"DirectoryId"} + } + }, + "DeleteDirectoryResult":{ + "type":"structure", + "members":{ + "DirectoryId":{"shape":"DirectoryId"} + } + }, + "DeleteSnapshotRequest":{ + "type":"structure", + "required":["SnapshotId"], + "members":{ + "SnapshotId":{"shape":"SnapshotId"} + } + }, + "DeleteSnapshotResult":{ + "type":"structure", + "members":{ + "SnapshotId":{"shape":"SnapshotId"} + } + }, + "DescribeDirectoriesRequest":{ + "type":"structure", + "members":{ + "DirectoryIds":{"shape":"DirectoryIds"}, + "NextToken":{"shape":"NextToken"}, + "Limit":{"shape":"Limit"} + } + }, + "DescribeDirectoriesResult":{ + "type":"structure", + "members":{ + "DirectoryDescriptions":{"shape":"DirectoryDescriptions"}, + "NextToken":{"shape":"NextToken"} + } + }, + "DescribeSnapshotsRequest":{ + "type":"structure", + "members":{ + "DirectoryId":{"shape":"DirectoryId"}, + "SnapshotIds":{"shape":"SnapshotIds"}, + "NextToken":{"shape":"NextToken"}, + "Limit":{"shape":"Limit"} + } + }, + "DescribeSnapshotsResult":{ + "type":"structure", + "members":{ + "Snapshots":{"shape":"Snapshots"}, + "NextToken":{"shape":"NextToken"} + } + }, + "Description":{ + "type":"string", + "min":0, + "max":128, + "pattern":"^([a-zA-Z0-9_])[\\\\a-zA-Z0-9_@#%*+=:?./!\\s-]*$" + }, + "DirectoryConnectSettings":{ + "type":"structure", + "required":[ + "VpcId", + "SubnetIds", + "CustomerDnsIps", + "CustomerUserName" + ], + "members":{ + "VpcId":{"shape":"VpcId"}, + "SubnetIds":{"shape":"SubnetIds"}, + "CustomerDnsIps":{"shape":"DnsIpAddrs"}, + "CustomerUserName":{"shape":"UserName"} + } + }, + "DirectoryConnectSettingsDescription":{ + "type":"structure", + "members":{ + "VpcId":{"shape":"VpcId"}, + "SubnetIds":{"shape":"SubnetIds"}, + "CustomerUserName":{"shape":"UserName"}, + "SecurityGroupId":{"shape":"SecurityGroupId"}, + "AvailabilityZones":{"shape":"AvailabilityZones"}, + "ConnectIps":{"shape":"IpAddrs"} + } + }, + "DirectoryDescription":{ + "type":"structure", + "members":{ + "DirectoryId":{"shape":"DirectoryId"}, + "Name":{"shape":"DirectoryName"}, + "ShortName":{"shape":"DirectoryShortName"}, + "Size":{"shape":"DirectorySize"}, + "Alias":{"shape":"AliasName"}, + "AccessUrl":{"shape":"AccessUrl"}, + "Description":{"shape":"Description"}, + "DnsIpAddrs":{"shape":"DnsIpAddrs"}, + "Stage":{"shape":"DirectoryStage"}, + "LaunchTime":{"shape":"LaunchTime"}, + "StageLastUpdatedDateTime":{"shape":"LastUpdatedDateTime"}, + "Type":{"shape":"DirectoryType"}, + "VpcSettings":{"shape":"DirectoryVpcSettingsDescription"}, + "ConnectSettings":{"shape":"DirectoryConnectSettingsDescription"}, + "RadiusSettings":{"shape":"RadiusSettings"}, + "RadiusStatus":{"shape":"RadiusStatus"}, + "StageReason":{"shape":"StageReason"}, + "SsoEnabled":{"shape":"SsoEnabled"} + } + }, + "DirectoryDescriptions":{ + "type":"list", + "member":{"shape":"DirectoryDescription"} + }, + "DirectoryId":{ + "type":"string", + "pattern":"^d-[0-9a-f]{10}$" + }, + "DirectoryIds":{ + "type":"list", + 
"member":{"shape":"DirectoryId"} + }, + "DirectoryLimitExceededException":{ + "type":"structure", + "members":{ + "Message":{"shape":"ExceptionMessage"}, + "RequestId":{"shape":"RequestId"} + }, + "exception":true + }, + "DirectoryLimits":{ + "type":"structure", + "members":{ + "CloudOnlyDirectoriesLimit":{"shape":"Limit"}, + "CloudOnlyDirectoriesCurrentCount":{"shape":"Limit"}, + "CloudOnlyDirectoriesLimitReached":{"shape":"CloudOnlyDirectoriesLimitReached"}, + "ConnectedDirectoriesLimit":{"shape":"Limit"}, + "ConnectedDirectoriesCurrentCount":{"shape":"Limit"}, + "ConnectedDirectoriesLimitReached":{"shape":"ConnectedDirectoriesLimitReached"} + } + }, + "DirectoryName":{ + "type":"string", + "pattern":"^([a-zA-Z0-9]+[\\\\.-])+([a-zA-Z0-9])+$" + }, + "DirectoryShortName":{ + "type":"string", + "pattern":"^[^\\\\/:*?\\\"\\<\\>|.]+[^\\\\/:*?\\\"<>|]*$" + }, + "DirectorySize":{ + "type":"string", + "enum":[ + "Small", + "Large" + ] + }, + "DirectoryStage":{ + "type":"string", + "enum":[ + "Requested", + "Creating", + "Created", + "Active", + "Inoperable", + "Impaired", + "Restoring", + "RestoreFailed", + "Deleting", + "Deleted", + "Failed" + ] + }, + "DirectoryType":{ + "type":"string", + "enum":[ + "SimpleAD", + "ADConnector" + ] + }, + "DirectoryUnavailableException":{ + "type":"structure", + "members":{ + "Message":{"shape":"ExceptionMessage"}, + "RequestId":{"shape":"RequestId"} + }, + "exception":true + }, + "DirectoryVpcSettings":{ + "type":"structure", + "required":[ + "VpcId", + "SubnetIds" + ], + "members":{ + "VpcId":{"shape":"VpcId"}, + "SubnetIds":{"shape":"SubnetIds"} + } + }, + "DirectoryVpcSettingsDescription":{ + "type":"structure", + "members":{ + "VpcId":{"shape":"VpcId"}, + "SubnetIds":{"shape":"SubnetIds"}, + "SecurityGroupId":{"shape":"SecurityGroupId"}, + "AvailabilityZones":{"shape":"AvailabilityZones"} + } + }, + "DisableRadiusRequest":{ + "type":"structure", + "required":["DirectoryId"], + "members":{ + "DirectoryId":{"shape":"DirectoryId"} + } + }, + "DisableRadiusResult":{ + "type":"structure", + "members":{ + } + }, + "DisableSsoRequest":{ + "type":"structure", + "required":["DirectoryId"], + "members":{ + "DirectoryId":{"shape":"DirectoryId"}, + "UserName":{"shape":"UserName"}, + "Password":{"shape":"ConnectPassword"} + } + }, + "DisableSsoResult":{ + "type":"structure", + "members":{ + } + }, + "DnsIpAddrs":{ + "type":"list", + "member":{"shape":"IpAddr"} + }, + "EnableRadiusRequest":{ + "type":"structure", + "required":[ + "DirectoryId", + "RadiusSettings" + ], + "members":{ + "DirectoryId":{"shape":"DirectoryId"}, + "RadiusSettings":{"shape":"RadiusSettings"} + } + }, + "EnableRadiusResult":{ + "type":"structure", + "members":{ + } + }, + "EnableSsoRequest":{ + "type":"structure", + "required":["DirectoryId"], + "members":{ + "DirectoryId":{"shape":"DirectoryId"}, + "UserName":{"shape":"UserName"}, + "Password":{"shape":"ConnectPassword"} + } + }, + "EnableSsoResult":{ + "type":"structure", + "members":{ + } + }, + "EntityAlreadyExistsException":{ + "type":"structure", + "members":{ + "Message":{"shape":"ExceptionMessage"}, + "RequestId":{"shape":"RequestId"} + }, + "exception":true + }, + "EntityDoesNotExistException":{ + "type":"structure", + "members":{ + "Message":{"shape":"ExceptionMessage"}, + "RequestId":{"shape":"RequestId"} + }, + "exception":true + }, + "ExceptionMessage":{"type":"string"}, + "GetDirectoryLimitsRequest":{ + "type":"structure", + "members":{ + } + }, + "GetDirectoryLimitsResult":{ + "type":"structure", + "members":{ + 
"DirectoryLimits":{"shape":"DirectoryLimits"} + } + }, + "GetSnapshotLimitsRequest":{ + "type":"structure", + "required":["DirectoryId"], + "members":{ + "DirectoryId":{"shape":"DirectoryId"} + } + }, + "GetSnapshotLimitsResult":{ + "type":"structure", + "members":{ + "SnapshotLimits":{"shape":"SnapshotLimits"} + } + }, + "InsufficientPermissionsException":{ + "type":"structure", + "members":{ + "Message":{"shape":"ExceptionMessage"}, + "RequestId":{"shape":"RequestId"} + }, + "exception":true + }, + "InvalidNextTokenException":{ + "type":"structure", + "members":{ + "Message":{"shape":"ExceptionMessage"}, + "RequestId":{"shape":"RequestId"} + }, + "exception":true + }, + "InvalidParameterException":{ + "type":"structure", + "members":{ + "Message":{"shape":"ExceptionMessage"}, + "RequestId":{"shape":"RequestId"} + }, + "exception":true + }, + "IpAddr":{ + "type":"string", + "pattern":"^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$" + }, + "IpAddrs":{ + "type":"list", + "member":{"shape":"IpAddr"} + }, + "LastUpdatedDateTime":{"type":"timestamp"}, + "LaunchTime":{"type":"timestamp"}, + "Limit":{ + "type":"integer", + "min":0 + }, + "ManualSnapshotsLimitReached":{"type":"boolean"}, + "NextToken":{"type":"string"}, + "OrganizationalUnitDN":{ + "type":"string", + "min":1, + "max":2000 + }, + "Password":{ + "type":"string", + "pattern":"(?=^.{8,64}$)((?=.*\\d)(?=.*[A-Z])(?=.*[a-z])|(?=.*\\d)(?=.*[^A-Za-z0-9])(?=.*[a-z])|(?=.*[^A-Za-z0-9])(?=.*[A-Z])(?=.*[a-z])|(?=.*\\d)(?=.*[A-Z])(?=.*[^A-Za-z0-9]))^.*", + "sensitive":true + }, + "PortNumber":{ + "type":"integer", + "min":1025, + "max":65535 + }, + "RadiusAuthenticationProtocol":{ + "type":"string", + "enum":[ + "PAP", + "CHAP", + "MS-CHAPv1", + "MS-CHAPv2" + ] + }, + "RadiusDisplayLabel":{ + "type":"string", + "min":1, + "max":64 + }, + "RadiusRetries":{ + "type":"integer", + "min":0, + "max":10 + }, + "RadiusSettings":{ + "type":"structure", + "members":{ + "RadiusServers":{"shape":"Servers"}, + "RadiusPort":{"shape":"PortNumber"}, + "RadiusTimeout":{"shape":"RadiusTimeout"}, + "RadiusRetries":{"shape":"RadiusRetries"}, + "SharedSecret":{"shape":"RadiusSharedSecret"}, + "AuthenticationProtocol":{"shape":"RadiusAuthenticationProtocol"}, + "DisplayLabel":{"shape":"RadiusDisplayLabel"}, + "UseSameUsername":{"shape":"UseSameUsername"} + } + }, + "RadiusSharedSecret":{ + "type":"string", + "min":8, + "max":512, + "sensitive":true + }, + "RadiusStatus":{ + "type":"string", + "enum":[ + "Creating", + "Completed", + "Failed" + ] + }, + "RadiusTimeout":{ + "type":"integer", + "min":1, + "max":20 + }, + "RequestId":{ + "type":"string", + "pattern":"^([A-Fa-f0-9]{8}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{12})$" + }, + "RestoreFromSnapshotRequest":{ + "type":"structure", + "required":["SnapshotId"], + "members":{ + "SnapshotId":{"shape":"SnapshotId"} + } + }, + "RestoreFromSnapshotResult":{ + "type":"structure", + "members":{ + } + }, + "SID":{ + "type":"string", + "min":1, + "max":256, + "pattern":"[&\\w+-.@]+" + }, + "SecurityGroupId":{ + "type":"string", + "pattern":"^(sg-[0-9a-f]{8})$" + }, + "Server":{ + "type":"string", + "min":1, + "max":256 + }, + "Servers":{ + "type":"list", + "member":{"shape":"Server"} + }, + "ServiceException":{ + "type":"structure", + "members":{ + "Message":{"shape":"ExceptionMessage"}, + "RequestId":{"shape":"RequestId"} + }, + "exception":true, + "fault":true + }, + "Snapshot":{ + "type":"structure", + "members":{ + "DirectoryId":{"shape":"DirectoryId"}, + 
"SnapshotId":{"shape":"SnapshotId"}, + "Type":{"shape":"SnapshotType"}, + "Name":{"shape":"SnapshotName"}, + "Status":{"shape":"SnapshotStatus"}, + "StartTime":{"shape":"StartTime"} + } + }, + "SnapshotId":{ + "type":"string", + "pattern":"^s-[0-9a-f]{10}$" + }, + "SnapshotIds":{ + "type":"list", + "member":{"shape":"SnapshotId"} + }, + "SnapshotLimitExceededException":{ + "type":"structure", + "members":{ + "Message":{"shape":"ExceptionMessage"}, + "RequestId":{"shape":"RequestId"} + }, + "exception":true + }, + "SnapshotLimits":{ + "type":"structure", + "members":{ + "ManualSnapshotsLimit":{"shape":"Limit"}, + "ManualSnapshotsCurrentCount":{"shape":"Limit"}, + "ManualSnapshotsLimitReached":{"shape":"ManualSnapshotsLimitReached"} + } + }, + "SnapshotName":{ + "type":"string", + "min":0, + "max":128, + "pattern":"^([a-zA-Z0-9_])[\\\\a-zA-Z0-9_@#%*+=:?./!\\s-]*$" + }, + "SnapshotStatus":{ + "type":"string", + "enum":[ + "Creating", + "Completed", + "Failed" + ] + }, + "SnapshotType":{ + "type":"string", + "enum":[ + "Auto", + "Manual" + ] + }, + "Snapshots":{ + "type":"list", + "member":{"shape":"Snapshot"} + }, + "SsoEnabled":{"type":"boolean"}, + "StageReason":{"type":"string"}, + "StartTime":{"type":"timestamp"}, + "SubnetId":{ + "type":"string", + "pattern":"^(subnet-[0-9a-f]{8})$" + }, + "SubnetIds":{ + "type":"list", + "member":{"shape":"SubnetId"} + }, + "UnsupportedOperationException":{ + "type":"structure", + "members":{ + "Message":{"shape":"ExceptionMessage"}, + "RequestId":{"shape":"RequestId"} + }, + "exception":true + }, + "UpdateRadiusRequest":{ + "type":"structure", + "required":[ + "DirectoryId", + "RadiusSettings" + ], + "members":{ + "DirectoryId":{"shape":"DirectoryId"}, + "RadiusSettings":{"shape":"RadiusSettings"} + } + }, + "UpdateRadiusResult":{ + "type":"structure", + "members":{ + } + }, + "UseSameUsername":{"type":"boolean"}, + "UserName":{ + "type":"string", + "min":1, + "pattern":"[a-zA-Z0-9._-]+" + }, + "VpcId":{ + "type":"string", + "pattern":"^(vpc-[0-9a-f]{8})$" + } + } +} diff --git a/src/data/ds/2015-04-16/docs-2.json b/src/data/ds/2015-04-16/docs-2.json new file mode 100644 index 0000000000..37b6968086 --- /dev/null +++ b/src/data/ds/2015-04-16/docs-2.json @@ -0,0 +1,753 @@ +{ + "version": "2.0", + "operations": { + "ConnectDirectory": "

Creates an AD Connector to connect an on-premises directory.
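A minimal sketch of calling this operation through the PHP SDK; the domain name, credentials, VPC, subnet, and DNS values are placeholders, and ConnectSettings mirrors the DirectoryConnectSettings shape in api-2.json above:

    <?php
    require 'vendor/autoload.php';

    use Aws\DirectoryService\DirectoryServiceClient;

    $ds = new DirectoryServiceClient(['region' => 'us-east-1', 'version' => '2015-04-16']);

    $result = $ds->connectDirectory([
        'Name'            => 'corp.example.com',   // existing on-premises domain
        'ShortName'       => 'CORP',
        'Password'        => 'on-premises-account-password',
        'Size'            => 'Small',
        'ConnectSettings' => [
            'VpcId'            => 'vpc-12345678',
            'SubnetIds'        => ['subnet-11111111', 'subnet-22222222'],
            'CustomerDnsIps'   => ['10.0.0.10', '10.0.1.10'],
            'CustomerUserName' => 'Admin',
        ],
    ]);

    echo $result['DirectoryId'], "\n";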

", + "CreateAlias": "

Creates an alias for a directory and assigns the alias to the directory. The alias is used to construct the access URL for the directory, such as http://<alias>.awsapps.com.

After an alias has been created, it cannot be deleted or reused, so this operation should only be used when absolutely necessary.

", + "CreateComputer": "

Creates a computer account in the specified directory, and joins the computer to the directory.

", + "CreateDirectory": "

Creates a Simple AD directory.
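A comparable sketch for Simple AD; all values are placeholders, and VpcSettings mirrors the DirectoryVpcSettings shape in api-2.json above:

    <?php
    require 'vendor/autoload.php';

    use Aws\DirectoryService\DirectoryServiceClient;

    $ds = new DirectoryServiceClient(['region' => 'us-east-1', 'version' => '2015-04-16']);

    $result = $ds->createDirectory([
        'Name'        => 'corp.example.com',
        'ShortName'   => 'CORP',
        'Password'    => 'Str0ngAdminPassw0rd!',   // placeholder administrator password
        'Description' => 'Example Simple AD directory',
        'Size'        => 'Small',                  // 'Small' or 'Large' per the DirectorySize enum
        'VpcSettings' => [
            'VpcId'     => 'vpc-12345678',
            'SubnetIds' => ['subnet-11111111', 'subnet-22222222'],
        ],
    ]);

    echo $result['DirectoryId'], "\n";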

", + "CreateSnapshot": "

Creates a snapshot of an existing directory.

You cannot take snapshots of extended or connected directories.

", + "DeleteDirectory": "

Deletes an AWS Directory Service directory.

", + "DeleteSnapshot": "

Deletes a directory snapshot.

", + "DescribeDirectories": "

Obtains information about the directories that belong to this account.

You can retrieve information about specific directories by passing the directory identifiers in the DirectoryIds parameter. Otherwise, all directories that belong to the current account are returned.

This operation supports pagination with the use of the NextToken request and response parameters. If more results are available, the DescribeDirectoriesResult.NextToken member contains a token that you pass in the next call to DescribeDirectories to retrieve the next set of items.

You can also specify a maximum number of return results with the Limit parameter.
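The NextToken/Limit pattern described above translates to a simple loop in the PHP SDK; a sketch:

    <?php
    require 'vendor/autoload.php';

    use Aws\DirectoryService\DirectoryServiceClient;

    $ds = new DirectoryServiceClient(['region' => 'us-east-1', 'version' => '2015-04-16']);

    $token = null;
    do {
        $args = ['Limit' => 10];
        if ($token !== null) {
            $args['NextToken'] = $token;
        }
        $page = $ds->describeDirectories($args);

        foreach ($page['DirectoryDescriptions'] as $dir) {
            printf("%s  %s  %s\n", $dir['DirectoryId'], $dir['Name'], $dir['Stage']);
        }

        $token = isset($page['NextToken']) ? $page['NextToken'] : null;
    } while ($token !== null);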

", + "DescribeSnapshots": "

Obtains information about the directory snapshots that belong to this account.

This operation supports pagination with the use of the NextToken request and response parameters. If more results are available, the DescribeSnapshots.NextToken member contains a token that you pass in the next call to DescribeSnapshots to retrieve the next set of items.

You can also specify a maximum number of return results with the Limit parameter.

", + "DisableRadius": "

Disables multi-factor authentication (MFA) with Remote Authentication Dial In User Service (RADIUS) for an AD Connector directory.

", + "DisableSso": "

Disables single sign-on for a directory.

", + "EnableRadius": "

Enables multi-factor authentication (MFA) with Remote Authentication Dial In User Service (RADIUS) for an AD Connector directory.
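A sketch of the call with placeholder RADIUS endpoint details; the RadiusSettings members and their ranges (port above 1024, timeout of 1-20 seconds, up to 10 retries) come from the RadiusSettings shape in api-2.json above:

    <?php
    require 'vendor/autoload.php';

    use Aws\DirectoryService\DirectoryServiceClient;

    $ds = new DirectoryServiceClient(['region' => 'us-east-1', 'version' => '2015-04-16']);

    $ds->enableRadius([
        'DirectoryId'    => 'd-1234567890',
        'RadiusSettings' => [
            'RadiusServers'          => ['10.0.0.100'],
            'RadiusPort'             => 1812,
            'RadiusTimeout'          => 5,
            'RadiusRetries'          => 3,
            'SharedSecret'           => 'placeholder-shared-secret',
            'AuthenticationProtocol' => 'PAP',
            'DisplayLabel'           => 'MFA',
            'UseSameUsername'        => true,
        ],
    ]);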

", + "EnableSso": "

Enables single sign-on for a directory.

", + "GetDirectoryLimits": "

Obtains directory limit information for the current region.

", + "GetSnapshotLimits": "

Obtains the manual snapshot limits for a directory.

", + "RestoreFromSnapshot": "

Restores a directory using an existing directory snapshot.

When you restore a directory from a snapshot, any changes made to the directory after the snapshot date are overwritten.

This action returns as soon as the restore operation is initiated. You can monitor the progress of the restore operation by calling the DescribeDirectories operation with the directory identifier. When the DirectoryDescription.Stage value changes to Active, the restore operation is complete.
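The polling loop that the paragraph above describes might look like this in the PHP SDK (the snapshot and directory IDs are placeholders; a real check would also bail out on the RestoreFailed stage):

    <?php
    require 'vendor/autoload.php';

    use Aws\DirectoryService\DirectoryServiceClient;

    $ds = new DirectoryServiceClient(['region' => 'us-east-1', 'version' => '2015-04-16']);

    $ds->restoreFromSnapshot(['SnapshotId' => 's-1234567890']);

    do {
        sleep(30);
        $result = $ds->describeDirectories(['DirectoryIds' => ['d-1234567890']]);
        $stage  = $result['DirectoryDescriptions'][0]['Stage'];
        echo "Current stage: $stage\n";
    } while ($stage !== 'Active');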

", + "UpdateRadius": "

Updates the Remote Authentication Dial In User Service (RADIUS) server information for an AD Connector directory.

" + }, + "service": "AWS Directory Service

This is the AWS Directory Service API Reference. This guide provides detailed information about AWS Directory Service operations, data types, parameters, and errors.
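The DirectoryServiceClient.php file added at the top of this diff is not reproduced above; in the v3 SDK a service client of this kind is usually nothing more than a thin Aws\AwsClient subclass driven by the api-2.json and docs-2.json models in this directory. A sketch of that shape, not necessarily the file's exact contents:

    <?php
    namespace Aws\DirectoryService;

    use Aws\AwsClient;

    // Operations defined in data/ds/2015-04-16/api-2.json (ConnectDirectory,
    // CreateDirectory, DescribeDirectories, ...) are exposed as magic methods,
    // e.g. $client->connectDirectory([...]).
    class DirectoryServiceClient extends AwsClient {}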

", + "shapes": { + "AccessUrl": { + "base": null, + "refs": { + "DirectoryDescription$AccessUrl": "

The access URL for the directory, such as http://<alias>.awsapps.com.

" + } + }, + "AliasName": { + "base": null, + "refs": { + "CreateAliasRequest$Alias": "

The requested alias.

The alias must be unique amongst all aliases in AWS. This operation will throw an EntityAlreadyExistsException if this alias already exists.

", + "CreateAliasResult$Alias": "

The alias for the directory.

", + "DirectoryDescription$Alias": "

The alias for the directory.

" + } + }, + "Attribute": { + "base": "

Represents a named directory attribute.

", + "refs": { + "Attributes$member": null + } + }, + "AttributeName": { + "base": null, + "refs": { + "Attribute$Name": "

The name of the attribute.

" + } + }, + "AttributeValue": { + "base": null, + "refs": { + "Attribute$Value": "

The value of the attribute.

" + } + }, + "Attributes": { + "base": null, + "refs": { + "Computer$ComputerAttributes": "

An array of Attribute objects that contain the LDAP attributes that belong to the computer account.

", + "CreateComputerRequest$ComputerAttributes": "

An array of Attribute objects that contain any LDAP attributes to apply to the computer account.

" + } + }, + "AuthenticationFailedException": { + "base": "

An authentication error occurred.

", + "refs": { + } + }, + "AvailabilityZone": { + "base": null, + "refs": { + "AvailabilityZones$member": null + } + }, + "AvailabilityZones": { + "base": null, + "refs": { + "DirectoryConnectSettingsDescription$AvailabilityZones": "

A list of the Availability Zones that the directory is in.

", + "DirectoryVpcSettingsDescription$AvailabilityZones": "

The list of Availability Zones that the directory is in.

" + } + }, + "ClientException": { + "base": "

A client exception has occurred.

", + "refs": { + } + }, + "CloudOnlyDirectoriesLimitReached": { + "base": null, + "refs": { + "DirectoryLimits$CloudOnlyDirectoriesLimitReached": "

Indicates if the cloud directory limit has been reached.

" + } + }, + "Computer": { + "base": "

Contains information about a computer account in a directory.

", + "refs": { + "CreateComputerResult$Computer": "

A Computer object that represents the computer account.

" + } + }, + "ComputerName": { + "base": null, + "refs": { + "Computer$ComputerName": "

The computer name.

", + "CreateComputerRequest$ComputerName": "

The name of the computer account.

" + } + }, + "ComputerPassword": { + "base": null, + "refs": { + "CreateComputerRequest$Password": "

A one-time password that is used to join the computer to the directory. You should generate a random, strong password to use for this parameter.
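A sketch of generating such a one-time password and creating the account through the PHP SDK; the directory ID and computer name are placeholders:

    <?php
    require 'vendor/autoload.php';

    use Aws\DirectoryService\DirectoryServiceClient;

    $ds = new DirectoryServiceClient(['region' => 'us-east-1', 'version' => '2015-04-16']);

    // 32 hex characters, well inside the 8-64 character ComputerPassword limit.
    $oneTimePassword = bin2hex(openssl_random_pseudo_bytes(16));

    $result = $ds->createComputer([
        'DirectoryId'  => 'd-1234567890',
        'ComputerName' => 'WEB01',
        'Password'     => $oneTimePassword,
    ]);

    print_r($result['Computer']);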

" + } + }, + "ConnectDirectoryRequest": { + "base": "

Contains the inputs for the ConnectDirectory operation.

", + "refs": { + } + }, + "ConnectDirectoryResult": { + "base": "

Contains the results of the ConnectDirectory operation.

", + "refs": { + } + }, + "ConnectPassword": { + "base": null, + "refs": { + "ConnectDirectoryRequest$Password": "

The password for the on-premises user account.

", + "DisableSsoRequest$Password": "

The password of an alternate account to use to disable single sign-on. This is only used for AD Connector directories. See the UserName parameter for more information.

", + "EnableSsoRequest$Password": "

The password of an alternate account to use to enable single sign-on. This is only used for AD Connector directories. See the UserName parameter for more information.

" + } + }, + "ConnectedDirectoriesLimitReached": { + "base": null, + "refs": { + "DirectoryLimits$ConnectedDirectoriesLimitReached": "

Indicates if the connected directory limit has been reached.

" + } + }, + "CreateAliasRequest": { + "base": "

Contains the inputs for the CreateAlias operation.

", + "refs": { + } + }, + "CreateAliasResult": { + "base": "

Contains the results of the CreateAlias operation.

", + "refs": { + } + }, + "CreateComputerRequest": { + "base": "

Contains the inputs for the CreateComputer operation.

", + "refs": { + } + }, + "CreateComputerResult": { + "base": "

Contains the results for the CreateComputer operation.

", + "refs": { + } + }, + "CreateDirectoryRequest": { + "base": "

Contains the inputs for the CreateDirectory operation.

", + "refs": { + } + }, + "CreateDirectoryResult": { + "base": "

Contains the results of the CreateDirectory operation.

", + "refs": { + } + }, + "CreateSnapshotRequest": { + "base": "

Contains the inputs for the CreateSnapshot operation.

", + "refs": { + } + }, + "CreateSnapshotResult": { + "base": "

Contains the results of the CreateSnapshot operation.

", + "refs": { + } + }, + "DeleteDirectoryRequest": { + "base": "

Contains the inputs for the DeleteDirectory operation.

", + "refs": { + } + }, + "DeleteDirectoryResult": { + "base": "

Contains the results of the DeleteDirectory operation.

", + "refs": { + } + }, + "DeleteSnapshotRequest": { + "base": "

Contains the inputs for the DeleteSnapshot operation.

", + "refs": { + } + }, + "DeleteSnapshotResult": { + "base": "

Contains the results of the DeleteSnapshot operation.

", + "refs": { + } + }, + "DescribeDirectoriesRequest": { + "base": "

Contains the inputs for the DescribeDirectories operation.

", + "refs": { + } + }, + "DescribeDirectoriesResult": { + "base": "

Contains the results of the DescribeDirectories operation.

", + "refs": { + } + }, + "DescribeSnapshotsRequest": { + "base": "

Contains the inputs for the DescribeSnapshots operation.

", + "refs": { + } + }, + "DescribeSnapshotsResult": { + "base": "

Contains the results of the DescribeSnapshots operation.

", + "refs": { + } + }, + "Description": { + "base": null, + "refs": { + "ConnectDirectoryRequest$Description": "

A textual description for the directory.

", + "CreateDirectoryRequest$Description": "

A textual description for the directory.

", + "DirectoryDescription$Description": "

The textual description for the directory.

" + } + }, + "DirectoryConnectSettings": { + "base": "

Contains information for the ConnectDirectory operation when an AD Connector directory is being created.

", + "refs": { + "ConnectDirectoryRequest$ConnectSettings": "

A DirectoryConnectSettings object that contains additional information for the operation.

" + } + }, + "DirectoryConnectSettingsDescription": { + "base": "

Contains information about an AD Connector directory.

", + "refs": { + "DirectoryDescription$ConnectSettings": "

A DirectoryConnectSettingsDescription object that contains additional information about an AD Connector directory. This member is only present if the directory is an AD Connector directory.

" + } + }, + "DirectoryDescription": { + "base": "

Contains information about an AWS Directory Service directory.

", + "refs": { + "DirectoryDescriptions$member": null + } + }, + "DirectoryDescriptions": { + "base": "

A list of directory descriptions.

", + "refs": { + "DescribeDirectoriesResult$DirectoryDescriptions": "

The list of DirectoryDescription objects that were retrieved.

It is possible that this list contains fewer than the number of items specified in the Limit member of the request. This occurs if there are fewer than the requested number of items left to retrieve, or if the limitations of the operation have been exceeded.

" + } + }, + "DirectoryId": { + "base": null, + "refs": { + "ConnectDirectoryResult$DirectoryId": "

The identifier of the new directory.

", + "CreateAliasRequest$DirectoryId": "

The identifier of the directory to create the alias for.

", + "CreateAliasResult$DirectoryId": "

The identifier of the directory.

", + "CreateComputerRequest$DirectoryId": "

The identifier of the directory to create the computer account in.

", + "CreateDirectoryResult$DirectoryId": "

The identifier of the directory that was created.

", + "CreateSnapshotRequest$DirectoryId": "

The identifier of the directory to take a snapshot of.

", + "DeleteDirectoryRequest$DirectoryId": "

The identifier of the directory to delete.

", + "DeleteDirectoryResult$DirectoryId": "

The directory identifier.

", + "DescribeSnapshotsRequest$DirectoryId": "

The identifier of the directory to retrieve snapshot information for.

", + "DirectoryDescription$DirectoryId": "

The directory identifier.

", + "DirectoryIds$member": null, + "DisableRadiusRequest$DirectoryId": "

The identifier of the directory to disable MFA for.

", + "DisableSsoRequest$DirectoryId": "

The identifier of the directory to disable single sign-on for.

", + "EnableRadiusRequest$DirectoryId": "

The identifier of the directory to enable MFA for.

", + "EnableSsoRequest$DirectoryId": "

The identifier of the directory to enable single sign-on for.

", + "GetSnapshotLimitsRequest$DirectoryId": "

Contains the identifier of the directory to obtain the limits for.

", + "Snapshot$DirectoryId": "

The directory identifier.

", + "UpdateRadiusRequest$DirectoryId": "

The identifier of the directory to update the RADIUS server information for.

" + } + }, + "DirectoryIds": { + "base": "

A list of directory identifiers.

", + "refs": { + "DescribeDirectoriesRequest$DirectoryIds": "

A list of identifiers of the directories to obtain the information for. If this member is null, all directories that belong to the current account are returned.

An empty list results in an InvalidParameterException being thrown.

" + } + }, + "DirectoryLimitExceededException": { + "base": "

The maximum number of directories in the region has been reached. You can use the GetDirectoryLimits operation to determine your directory limits in the region.

", + "refs": { + } + }, + "DirectoryLimits": { + "base": "

Contains directory limit information for a region.

", + "refs": { + "GetDirectoryLimitsResult$DirectoryLimits": "

A DirectoryLimits object that contains the directory limits for the current region.

" + } + }, + "DirectoryName": { + "base": null, + "refs": { + "ConnectDirectoryRequest$Name": "

The fully-qualified name of the on-premises directory, such as corp.example.com.

", + "CreateDirectoryRequest$Name": "

The fully qualified name for the directory, such as corp.example.com.

", + "DirectoryDescription$Name": "

The fully-qualified name of the directory.

" + } + }, + "DirectoryShortName": { + "base": null, + "refs": { + "ConnectDirectoryRequest$ShortName": "

The NetBIOS name of the on-premises directory, such as CORP.

", + "CreateDirectoryRequest$ShortName": "

The short name of the directory, such as CORP.

", + "DirectoryDescription$ShortName": "

The short name of the directory.

" + } + }, + "DirectorySize": { + "base": null, + "refs": { + "ConnectDirectoryRequest$Size": "

The size of the directory.

", + "CreateDirectoryRequest$Size": "

The size of the directory.

", + "DirectoryDescription$Size": "

The directory size.

" + } + }, + "DirectoryStage": { + "base": null, + "refs": { + "DirectoryDescription$Stage": "

The current stage of the directory.

" + } + }, + "DirectoryType": { + "base": null, + "refs": { + "DirectoryDescription$Type": "

The directory type.

" + } + }, + "DirectoryUnavailableException": { + "base": "

The specified directory is unavailable or could not be found.

", + "refs": { + } + }, + "DirectoryVpcSettings": { + "base": "

Contains information for the CreateDirectory operation when a Simple AD directory is being created.

", + "refs": { + "CreateDirectoryRequest$VpcSettings": "

A DirectoryVpcSettings object that contains additional information for the operation.

" + } + }, + "DirectoryVpcSettingsDescription": { + "base": "

Contains information about a Simple AD directory.

", + "refs": { + "DirectoryDescription$VpcSettings": "

A DirectoryVpcSettingsDescription object that contains additional information about a Simple AD directory. This member is only present if the directory is a Simple AD directory.

" + } + }, + "DisableRadiusRequest": { + "base": "

Contains the inputs for the DisableRadius operation.

", + "refs": { + } + }, + "DisableRadiusResult": { + "base": "

Contains the results of the DisableRadius operation.

", + "refs": { + } + }, + "DisableSsoRequest": { + "base": "

Contains the inputs for the DisableSso operation.

", + "refs": { + } + }, + "DisableSsoResult": { + "base": "

Contains the results of the DisableSso operation.

", + "refs": { + } + }, + "DnsIpAddrs": { + "base": null, + "refs": { + "DirectoryConnectSettings$CustomerDnsIps": "

A list of one or more IP addresses of DNS servers or domain controllers in the on-premises directory.

", + "DirectoryDescription$DnsIpAddrs": "

The IP addresses of the DNS servers for the directory. For a Simple AD directory, these are the IP addresses of the Simple AD directory servers. For an AD Connector directory, these are the IP addresses of the DNS servers or domain controllers in the on-premises directory that the AD Connector is connected to.

" + } + }, + "EnableRadiusRequest": { + "base": "

Contains the inputs for the EnableRadius operation.

", + "refs": { + } + }, + "EnableRadiusResult": { + "base": "

Contains the results of the EnableRadius operation.

", + "refs": { + } + }, + "EnableSsoRequest": { + "base": "

Contains the inputs for the EnableSso operation.

", + "refs": { + } + }, + "EnableSsoResult": { + "base": "

Contains the results of the EnableSso operation.

", + "refs": { + } + }, + "EntityAlreadyExistsException": { + "base": "

The specified entity already exists.

", + "refs": { + } + }, + "EntityDoesNotExistException": { + "base": "

The specified entity could not be found.

", + "refs": { + } + }, + "ExceptionMessage": { + "base": "

The descriptive message for the exception.

", + "refs": { + "AuthenticationFailedException$Message": "

The textual message for the exception.

", + "ClientException$Message": null, + "DirectoryLimitExceededException$Message": null, + "DirectoryUnavailableException$Message": null, + "EntityAlreadyExistsException$Message": null, + "EntityDoesNotExistException$Message": null, + "InsufficientPermissionsException$Message": null, + "InvalidNextTokenException$Message": null, + "InvalidParameterException$Message": null, + "ServiceException$Message": null, + "SnapshotLimitExceededException$Message": null, + "UnsupportedOperationException$Message": null + } + }, + "GetDirectoryLimitsRequest": { + "base": "

Contains the inputs for the GetDirectoryLimits operation.

", + "refs": { + } + }, + "GetDirectoryLimitsResult": { + "base": "

Contains the results of the GetDirectoryLimits operation.

", + "refs": { + } + }, + "GetSnapshotLimitsRequest": { + "base": "

Contains the inputs for the GetSnapshotLimits operation.

", + "refs": { + } + }, + "GetSnapshotLimitsResult": { + "base": "

Contains the results of the GetSnapshotLimits operation.

", + "refs": { + } + }, + "InsufficientPermissionsException": { + "base": "

The account does not have sufficient permission to perform the operation.

", + "refs": { + } + }, + "InvalidNextTokenException": { + "base": "

The NextToken value is not valid.

", + "refs": { + } + }, + "InvalidParameterException": { + "base": "

One or more parameters are not valid.

", + "refs": { + } + }, + "IpAddr": { + "base": null, + "refs": { + "DnsIpAddrs$member": null, + "IpAddrs$member": null + } + }, + "IpAddrs": { + "base": null, + "refs": { + "DirectoryConnectSettingsDescription$ConnectIps": "

The IP addresses of the AD Connector servers.

" + } + }, + "LastUpdatedDateTime": { + "base": null, + "refs": { + "DirectoryDescription$StageLastUpdatedDateTime": "

The date and time that the stage was last updated.

" + } + }, + "LaunchTime": { + "base": null, + "refs": { + "DirectoryDescription$LaunchTime": "

Specifies when the directory was created.

" + } + }, + "Limit": { + "base": null, + "refs": { + "DescribeDirectoriesRequest$Limit": "

The maximum number of items to return. If this value is zero, the maximum number of items is specified by the limitations of the operation.

", + "DescribeSnapshotsRequest$Limit": "

The maximum number of objects to return.

", + "DirectoryLimits$CloudOnlyDirectoriesLimit": "

The maximum number of cloud directories allowed in the region.

", + "DirectoryLimits$CloudOnlyDirectoriesCurrentCount": "

The current number of cloud directories in the region.

", + "DirectoryLimits$ConnectedDirectoriesLimit": "

The maximum number of connected directories allowed in the region.

", + "DirectoryLimits$ConnectedDirectoriesCurrentCount": "

The current number of connected directories in the region.

", + "SnapshotLimits$ManualSnapshotsLimit": "

The maximum number of manual snapshots allowed.

", + "SnapshotLimits$ManualSnapshotsCurrentCount": "

The current number of manual snapshots of the directory.

" + } + }, + "ManualSnapshotsLimitReached": { + "base": null, + "refs": { + "SnapshotLimits$ManualSnapshotsLimitReached": "

Indicates if the manual snapshot limit has been reached.

" + } + }, + "NextToken": { + "base": null, + "refs": { + "DescribeDirectoriesRequest$NextToken": "

The DescribeDirectoriesResult.NextToken value from a previous call to DescribeDirectories. Pass null if this is the first call.

", + "DescribeDirectoriesResult$NextToken": "

If not null, more results are available. Pass this value for the NextToken parameter in a subsequent call to DescribeDirectories to retrieve the next set of items.

", + "DescribeSnapshotsRequest$NextToken": "

The DescribeSnapshotsResult.NextToken value from a previous call to DescribeSnapshots. Pass null if this is the first call.

", + "DescribeSnapshotsResult$NextToken": "

If not null, more results are available. Pass this value in the NextToken member of a subsequent call to DescribeSnapshots.
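
The NextToken handshake described here is just a loop: omit the token on the first call, then echo back whatever the previous response returned until it comes back null. A minimal sketch with the AWS SDK for PHP; the page size and the DirectoryDescriptions result member are assumptions for illustration.

<?php
require 'vendor/autoload.php';

use Aws\DirectoryService\DirectoryServiceClient;

$client = new DirectoryServiceClient(['region' => 'us-east-1', 'version' => 'latest']);

$token = null;
do {
    $params = ['Limit' => 10];                // page size (placeholder)
    if ($token !== null) {
        $params['NextToken'] = $token;        // token from the previous page
    }
    $page = $client->describeDirectories($params);

    $directories = isset($page['DirectoryDescriptions']) ? $page['DirectoryDescriptions'] : [];
    foreach ($directories as $directory) {
        echo $directory['Name'], PHP_EOL;
    }

    $token = isset($page['NextToken']) ? $page['NextToken'] : null;   // null when no more pages
} while ($token !== null);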

" + } + }, + "OrganizationalUnitDN": { + "base": null, + "refs": { + "CreateComputerRequest$OrganizationalUnitDistinguishedName": "

The fully-qualified distinguished name of the organizational unit to place the computer account in.

" + } + }, + "Password": { + "base": null, + "refs": { + "CreateDirectoryRequest$Password": "

The password for the directory administrator. The directory creation process creates a directory administrator account with the username Administrator and this password.

" + } + }, + "PortNumber": { + "base": null, + "refs": { + "RadiusSettings$RadiusPort": "

The port that your RADIUS server is using for communications. Your on-premises network must allow inbound traffic over this port from the AWS Directory Service servers.

" + } + }, + "RadiusAuthenticationProtocol": { + "base": null, + "refs": { + "RadiusSettings$AuthenticationProtocol": "

The protocol specified for your RADIUS endpoints.

" + } + }, + "RadiusDisplayLabel": { + "base": null, + "refs": { + "RadiusSettings$DisplayLabel": "

Not currently used.

" + } + }, + "RadiusRetries": { + "base": null, + "refs": { + "RadiusSettings$RadiusRetries": "

The maximum number of times that communication with the RADIUS server is attempted.

" + } + }, + "RadiusSettings": { + "base": "

Contains information about a Remote Authentication Dial In User Service (RADIUS) server.

", + "refs": { + "DirectoryDescription$RadiusSettings": "

A RadiusSettings object that contains information about the RADIUS server configured for this directory.

", + "EnableRadiusRequest$RadiusSettings": "

A RadiusSettings object that contains information about the RADIUS server.

", + "UpdateRadiusRequest$RadiusSettings": "

A RadiusSettings object that contains information about the RADIUS server.

" + } + }, + "RadiusSharedSecret": { + "base": null, + "refs": { + "RadiusSettings$SharedSecret": "

The shared secret code that was specified when your RADIUS endpoints were created.

" + } + }, + "RadiusStatus": { + "base": null, + "refs": { + "DirectoryDescription$RadiusStatus": "

The status of the RADIUS MFA server connection.

" + } + }, + "RadiusTimeout": { + "base": null, + "refs": { + "RadiusSettings$RadiusTimeout": "

The amount of time, in seconds, to wait for the RADIUS server to respond.
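
Taken together, these members make up the RadiusSettings structure passed to EnableRadius or UpdateRadius. A hedged sketch with the AWS SDK for PHP; the directory identifier, server address, shared secret, and the PAP protocol value are placeholders.

<?php
require 'vendor/autoload.php';

use Aws\DirectoryService\DirectoryServiceClient;

$client = new DirectoryServiceClient(['region' => 'us-east-1', 'version' => 'latest']);

// Point the directory at an on-premises RADIUS MFA endpoint.
$client->enableRadius([
    'DirectoryId'    => 'd-1234567890',              // placeholder directory identifier
    'RadiusSettings' => [
        'RadiusServers'          => ['10.0.0.10'],   // RADIUS endpoints or load balancer IPs
        'RadiusPort'             => 1812,            // inbound traffic must be allowed on this port
        'RadiusTimeout'          => 5,               // seconds to wait for a response
        'RadiusRetries'          => 3,               // maximum communication attempts
        'SharedSecret'           => 'shared-secret', // placeholder shared secret
        'AuthenticationProtocol' => 'PAP',           // assumed protocol value
    ],
]);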

" + } + }, + "RequestId": { + "base": "

The AWS request identifier.

", + "refs": { + "AuthenticationFailedException$RequestId": "

The identifier of the request that caused the exception.

", + "ClientException$RequestId": null, + "DirectoryLimitExceededException$RequestId": null, + "DirectoryUnavailableException$RequestId": null, + "EntityAlreadyExistsException$RequestId": null, + "EntityDoesNotExistException$RequestId": null, + "InsufficientPermissionsException$RequestId": null, + "InvalidNextTokenException$RequestId": null, + "InvalidParameterException$RequestId": null, + "ServiceException$RequestId": null, + "SnapshotLimitExceededException$RequestId": null, + "UnsupportedOperationException$RequestId": null + } + }, + "RestoreFromSnapshotRequest": { + "base": "

An object representing the inputs for the RestoreFromSnapshot operation.

", + "refs": { + } + }, + "RestoreFromSnapshotResult": { + "base": "

Contains the results of the RestoreFromSnapshot operation.

", + "refs": { + } + }, + "SID": { + "base": null, + "refs": { + "Computer$ComputerId": "

The identifier of the computer.

" + } + }, + "SecurityGroupId": { + "base": null, + "refs": { + "DirectoryConnectSettingsDescription$SecurityGroupId": "

The security group identifier for the AD Connector directory.

", + "DirectoryVpcSettingsDescription$SecurityGroupId": "

The security group identifier for the directory.

" + } + }, + "Server": { + "base": null, + "refs": { + "Servers$member": null + } + }, + "Servers": { + "base": null, + "refs": { + "RadiusSettings$RadiusServers": "

An array of strings that contains the IP addresses of the RADIUS server endpoints, or the IP addresses of your RADIUS server load balancer.

" + } + }, + "ServiceException": { + "base": "

An exception has occurred in AWS Directory Service.

", + "refs": { + } + }, + "Snapshot": { + "base": "

Describes a directory snapshot.

", + "refs": { + "Snapshots$member": null + } + }, + "SnapshotId": { + "base": null, + "refs": { + "CreateSnapshotResult$SnapshotId": "

The identifier of the snapshot that was created.

", + "DeleteSnapshotRequest$SnapshotId": "

The identifier of the directory snapshot to be deleted.

", + "DeleteSnapshotResult$SnapshotId": "

The identifier of the directory snapshot that was deleted.

", + "RestoreFromSnapshotRequest$SnapshotId": "

The identifier of the snapshot to restore from.

", + "Snapshot$SnapshotId": "

The snapshot identifier.

", + "SnapshotIds$member": null + } + }, + "SnapshotIds": { + "base": "

A list of directory snapshot identifiers.

", + "refs": { + "DescribeSnapshotsRequest$SnapshotIds": "

A list of identifiers of the snapshots to obtain the information for. If this member is null or empty, all snapshots are returned using the Limit and NextToken members.

" + } + }, + "SnapshotLimitExceededException": { + "base": "

The maximum number of manual snapshots for the directory has been reached. You can use the GetSnapshotLimits operation to determine the snapshot limits for a directory.

", + "refs": { + } + }, + "SnapshotLimits": { + "base": "

Contains manual snapshot limit information for a directory.

", + "refs": { + "GetSnapshotLimitsResult$SnapshotLimits": "

A SnapshotLimits object that contains the manual snapshot limits for the specified directory.
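
Because CreateSnapshot fails with SnapshotLimitExceededException once the manual limit is reached, a caller can consult GetSnapshotLimits first. A minimal sketch; the directory identifier and snapshot name are placeholders.

<?php
require 'vendor/autoload.php';

use Aws\DirectoryService\DirectoryServiceClient;

$client      = new DirectoryServiceClient(['region' => 'us-east-1', 'version' => 'latest']);
$directoryId = 'd-1234567890';                        // placeholder directory identifier

$limits = $client->getSnapshotLimits(['DirectoryId' => $directoryId]);

if (!$limits['SnapshotLimits']['ManualSnapshotsLimitReached']) {
    $snapshot = $client->createSnapshot([
        'DirectoryId' => $directoryId,
        'Name'        => 'nightly-backup',            // descriptive snapshot name (placeholder)
    ]);
    echo $snapshot['SnapshotId'], PHP_EOL;
} else {
    echo "Manual snapshot limit reached; delete an older snapshot first.", PHP_EOL;
}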

" + } + }, + "SnapshotName": { + "base": null, + "refs": { + "CreateSnapshotRequest$Name": "

The descriptive name to apply to the snapshot.

", + "Snapshot$Name": "

The descriptive name of the snapshot.

" + } + }, + "SnapshotStatus": { + "base": null, + "refs": { + "Snapshot$Status": "

The snapshot status.

" + } + }, + "SnapshotType": { + "base": null, + "refs": { + "Snapshot$Type": "

The snapshot type.

" + } + }, + "Snapshots": { + "base": "

A list of descriptions of directory snapshots.

", + "refs": { + "DescribeSnapshotsResult$Snapshots": "

The list of Snapshot objects that were retrieved.

It is possible that this list contains fewer than the number of items specified in the Limit member of the request. This occurs if there are fewer than the requested number of items left to retrieve, or if the limitations of the operation have been exceeded.

" + } + }, + "SsoEnabled": { + "base": null, + "refs": { + "DirectoryDescription$SsoEnabled": "

Indicates if single sign-on is enabled for the directory. For more information, see EnableSso and DisableSso.

" + } + }, + "StageReason": { + "base": null, + "refs": { + "DirectoryDescription$StageReason": "

Additional information about the directory stage.

" + } + }, + "StartTime": { + "base": null, + "refs": { + "Snapshot$StartTime": "

The date and time that the snapshot was taken.

" + } + }, + "SubnetId": { + "base": null, + "refs": { + "SubnetIds$member": null + } + }, + "SubnetIds": { + "base": null, + "refs": { + "DirectoryConnectSettings$SubnetIds": "

A list of subnet identifiers in the VPC that the AD Connector is created in.

", + "DirectoryConnectSettingsDescription$SubnetIds": "

A list of subnet identifiers in the VPC that the AD Connector is in.

", + "DirectoryVpcSettings$SubnetIds": "

The identifiers of the subnets for the directory servers. The two subnets must be in different Availability Zones. AWS Directory Service creates a directory server and a DNS server in each of these subnets.

", + "DirectoryVpcSettingsDescription$SubnetIds": "

The identifiers of the subnets for the directory servers.

" + } + }, + "UnsupportedOperationException": { + "base": "

The operation is not supported.

", + "refs": { + } + }, + "UpdateRadiusRequest": { + "base": "

Contains the inputs for the UpdateRadius operation.

", + "refs": { + } + }, + "UpdateRadiusResult": { + "base": "

Contains the results of the UpdateRadius operation.

", + "refs": { + } + }, + "UseSameUsername": { + "base": null, + "refs": { + "RadiusSettings$UseSameUsername": "

Not currently used.

" + } + }, + "UserName": { + "base": null, + "refs": { + "DirectoryConnectSettings$CustomerUserName": "

The username of an account in the on-premises directory that is used to connect to the directory. This account must have the following privileges:

", + "DirectoryConnectSettingsDescription$CustomerUserName": "

The username of the service account in the on-premises directory.

", + "DisableSsoRequest$UserName": "

The username of an alternate account to use to disable single sign-on. This is only used for AD Connector directories. This account must have privileges to remove a service principal name.

If the AD Connector service account does not have privileges to remove a service principal name, you can specify an alternate account with the UserName and Password parameters. These credentials are only used to disable single sign-on and are not stored by the service. The AD Connector service account is not changed.

", + "EnableSsoRequest$UserName": "

The username of an alternate account to use to enable single sign-on. This is only used for AD Connector directories. This account must have privileges to add a service principal name.

If the AD Connector service account does not have privileges to add a service principal name, you can specify an alternate account with the UserName and Password parameters. These credentials are only used to enable single sign-on and are not stored by the service. The AD Connector service account is not changed.
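
In code, the alternate credentials are simply passed alongside the directory identifier; the values below are placeholders, and the credentials are used only for the call itself.

<?php
require 'vendor/autoload.php';

use Aws\DirectoryService\DirectoryServiceClient;

$client = new DirectoryServiceClient(['region' => 'us-east-1', 'version' => 'latest']);

// Enable single sign-on using an alternate account that is allowed to
// add a service principal name; the AD Connector service account is unchanged.
$client->enableSso([
    'DirectoryId' => 'd-1234567890',      // placeholder directory identifier
    'UserName'    => 'sso-admin',         // alternate account (placeholder)
    'Password'    => 'AltAcc0untPass!',   // placeholder password
]);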

" + } + }, + "VpcId": { + "base": null, + "refs": { + "DirectoryConnectSettings$VpcId": "

The identifier of the VPC that the AD Connector is created in.

", + "DirectoryConnectSettingsDescription$VpcId": "

The identifier of the VPC that the AD Connector is in.

", + "DirectoryVpcSettings$VpcId": "

The identifier of the VPC to create the Simple AD directory in.

", + "DirectoryVpcSettingsDescription$VpcId": "

The identifier of the VPC that the directory is in.

" + } + } + } +} diff --git a/src/data/dynamodb/2012-08-10/api-2.json b/src/data/dynamodb/2012-08-10/api-2.json index 41967ffac7..b7a1fbcbb2 100644 --- a/src/data/dynamodb/2012-08-10/api-2.json +++ b/src/data/dynamodb/2012-08-10/api-2.json @@ -1,4 +1,5 @@ { + "version":"2.0", "metadata":{ "apiVersion":"2012-08-10", "endpointPrefix":"dynamodb", @@ -819,6 +820,7 @@ "key":{"shape":"AttributeName"}, "value":{"shape":"Condition"} }, + "KeyExpression":{"type":"string"}, "KeyList":{ "type":"list", "member":{"shape":"Key"}, @@ -1042,10 +1044,7 @@ }, "QueryInput":{ "type":"structure", - "required":[ - "TableName", - "KeyConditions" - ], + "required":["TableName"], "members":{ "TableName":{"shape":"TableName"}, "IndexName":{"shape":"IndexName"}, @@ -1061,6 +1060,7 @@ "ReturnConsumedCapacity":{"shape":"ReturnConsumedCapacity"}, "ProjectionExpression":{"shape":"ProjectionExpression"}, "FilterExpression":{"shape":"ConditionExpression"}, + "KeyConditionExpression":{"shape":"KeyExpression"}, "ExpressionAttributeNames":{"shape":"ExpressionAttributeNameMap"}, "ExpressionAttributeValues":{"shape":"ExpressionAttributeValueMap"} } diff --git a/src/data/dynamodb/2012-08-10/docs-2.json b/src/data/dynamodb/2012-08-10/docs-2.json index d97b35d81d..33d21cf683 100644 --- a/src/data/dynamodb/2012-08-10/docs-2.json +++ b/src/data/dynamodb/2012-08-10/docs-2.json @@ -1,7 +1,8 @@ { + "version": "2.0", "operations": { - "BatchGetItem": "

The BatchGetItem operation returns the attributes of one or more items from one or more tables. You identify requested items by primary key.

A single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. BatchGetItem will return a partial result if the response size limit is exceeded, the table's provisioned throughput is exceeded, or an internal processing failure occurs. If a partial result is returned, the operation returns a value for UnprocessedKeys. You can use this value to retry the operation starting with the next item to get.

For example, if you ask to retrieve 100 items, but each individual item is 300 KB in size, the system returns 52 items (so as not to exceed the 16 MB limit). It also returns an appropriate UnprocessedKeys value so you can get the next page of results. If desired, your application can include its own logic to assemble the pages of results into one data set.

If none of the items can be processed due to insufficient provisioned throughput on all of the tables in the request, then BatchGetItem will return a ProvisionedThroughputExceededException. If at least one of the items is successfully processed, then BatchGetItem completes successfully, while returning the keys of the unread items in UnprocessedKeys.

If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed.

For more information, go to Batch Operations and Error Handling in the Amazon DynamoDB Developer Guide.

By default, BatchGetItem performs eventually consistent reads on every table in the request. If you want strongly consistent reads instead, you can set ConsistentRead to true for any or all tables.

In order to minimize response latency, BatchGetItem retrieves items in parallel.

When designing your application, keep in mind that DynamoDB does not return attributes in any particular order. To help parse the response by item, include the primary key values for the items in your request in the AttributesToGet parameter.

If a requested item does not exist, it is not returned in the result. Requests for nonexistent items consume the minimum read capacity units according to the type of read. For more information, see Capacity Units Calculations in the Amazon DynamoDB Developer Guide.

", - "BatchWriteItem": "

The BatchWriteItem operation puts or deletes multiple items in one or more tables. A single call to BatchWriteItem can write up to 16 MB of data, which can comprise as many as 25 put or delete requests. Individual items to be written can be as large as 400 KB.

BatchWriteItem cannot update items. To update items, use the UpdateItem API.

The individual PutItem and DeleteItem operations specified in BatchWriteItem are atomic; however BatchWriteItem as a whole is not. If any requested operations fail because the table's provisioned throughput is exceeded or an internal processing failure occurs, the failed operations are returned in the UnprocessedItems response parameter. You can investigate and optionally resend the requests. Typically, you would call BatchWriteItem in a loop. Each iteration would check for unprocessed items and submit a new BatchWriteItem request with those unprocessed items until all items have been processed.

Note that if none of the items can be processed due to insufficient provisioned throughput on all of the tables in the request, then BatchWriteItem will return a ProvisionedThroughputExceededException.

If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed.

For more information, go to Batch Operations and Error Handling in the Amazon DynamoDB Developer Guide.

With BatchWriteItem, you can efficiently write or delete large amounts of data, such as from Amazon Elastic MapReduce (EMR), or copy data from another database into DynamoDB. In order to improve performance with these large-scale operations, BatchWriteItem does not behave in the same way as individual PutItem and DeleteItem calls would. For example, you cannot specify conditions on individual put and delete requests, and BatchWriteItem does not return deleted items in the response.

If you use a programming language that supports concurrency, such as Java, you can use threads to write items in parallel. Your application must include the necessary logic to manage the threads. With languages that don't support threading, such as PHP, you must update provides an alternative where the API performs the specified put and delete operations in parallel, giving you the power of the thread pool approach without having to introduce complexity into your application.

Parallel processing reduces latency, but each specified put and delete request consumes the same number of write capacity units whether it is processed in parallel or not. Delete operations on nonexistent items consume one write capacity unit.

If one or more of the following is true, DynamoDB rejects the entire batch write operation:

", + "BatchGetItem": "

The BatchGetItem operation returns the attributes of one or more items from one or more tables. You identify requested items by primary key.

A single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. BatchGetItem will return a partial result if the response size limit is exceeded, the table's provisioned throughput is exceeded, or an internal processing failure occurs. If a partial result is returned, the operation returns a value for UnprocessedKeys. You can use this value to retry the operation starting with the next item to get.

For example, if you ask to retrieve 100 items, but each individual item is 300 KB in size, the system returns 52 items (so as not to exceed the 16 MB limit). It also returns an appropriate UnprocessedKeys value so you can get the next page of results. If desired, your application can include its own logic to assemble the pages of results into one data set.

If none of the items can be processed due to insufficient provisioned throughput on all of the tables in the request, then BatchGetItem will return a ProvisionedThroughputExceededException. If at least one of the items is successfully processed, then BatchGetItem completes successfully, while returning the keys of the unread items in UnprocessedKeys.

If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed.

For more information, see Batch Operations and Error Handling in the Amazon DynamoDB Developer Guide.

By default, BatchGetItem performs eventually consistent reads on every table in the request. If you want strongly consistent reads instead, you can set ConsistentRead to true for any or all tables.

In order to minimize response latency, BatchGetItem retrieves items in parallel.

When designing your application, keep in mind that DynamoDB does not return attributes in any particular order. To help parse the response by item, include the primary key values for the items in your request in the AttributesToGet parameter.

If a requested item does not exist, it is not returned in the result. Requests for nonexistent items consume the minimum read capacity units according to the type of read. For more information, see Capacity Units Calculations in the Amazon DynamoDB Developer Guide.
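
The retry guidance above boils down to re-submitting UnprocessedKeys with an increasing delay. A rough sketch with the AWS SDK for PHP; the Music table and its key attributes are hypothetical.

<?php
require 'vendor/autoload.php';

use Aws\DynamoDb\DynamoDbClient;

$client = new DynamoDbClient(['region' => 'us-east-1', 'version' => '2012-08-10']);

$requestItems = [
    'Music' => [                                   // hypothetical table
        'Keys' => [
            ['Artist' => ['S' => 'No One You Know'], 'SongTitle' => ['S' => 'Call Me Today']],
            ['Artist' => ['S' => 'Acme Band'],       'SongTitle' => ['S' => 'Happy Day']],
        ],
        'ConsistentRead' => true,                  // strongly consistent reads for this table
    ],
];

$attempt = 0;
do {
    $result = $client->batchGetItem(['RequestItems' => $requestItems]);

    foreach ($result['Responses'] as $table => $items) {
        echo $table, ': ', count($items), " item(s) returned", PHP_EOL;
    }

    // Retry whatever DynamoDB could not process, backing off exponentially.
    $requestItems = $result['UnprocessedKeys'];
    if ($requestItems) {
        usleep((int) (pow(2, $attempt++) * 50000));   // 50 ms, 100 ms, 200 ms, ...
    }
} while ($requestItems);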

", + "BatchWriteItem": "

The BatchWriteItem operation puts or deletes multiple items in one or more tables. A single call to BatchWriteItem can write up to 16 MB of data, which can comprise as many as 25 put or delete requests. Individual items to be written can be as large as 400 KB.

BatchWriteItem cannot update items. To update items, use the UpdateItem API.

The individual PutItem and DeleteItem operations specified in BatchWriteItem are atomic; however BatchWriteItem as a whole is not. If any requested operations fail because the table's provisioned throughput is exceeded or an internal processing failure occurs, the failed operations are returned in the UnprocessedItems response parameter. You can investigate and optionally resend the requests. Typically, you would call BatchWriteItem in a loop. Each iteration would check for unprocessed items and submit a new BatchWriteItem request with those unprocessed items until all items have been processed.

Note that if none of the items can be processed due to insufficient provisioned throughput on all of the tables in the request, then BatchWriteItem will return a ProvisionedThroughputExceededException.

If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed.

For more information, see Batch Operations and Error Handling in the Amazon DynamoDB Developer Guide.

With BatchWriteItem, you can efficiently write or delete large amounts of data, such as from Amazon Elastic MapReduce (EMR), or copy data from another database into DynamoDB. In order to improve performance with these large-scale operations, BatchWriteItem does not behave in the same way as individual PutItem and DeleteItem calls would. For example, you cannot specify conditions on individual put and delete requests, and BatchWriteItem does not return deleted items in the response.

If you use a programming language that supports concurrency, such as Java, you can use threads to write items in parallel. Your application must include the necessary logic to manage the threads. With languages that don't support threading, such as PHP, you must update or delete the specified items one at a time. In both situations, BatchWriteItem provides an alternative where the API performs the specified put and delete operations in parallel, giving you the power of the thread pool approach without having to introduce complexity into your application.

Parallel processing reduces latency, but each specified put and delete request consumes the same number of write capacity units whether it is processed in parallel or not. Delete operations on nonexistent items consume one write capacity unit.
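
For reference, a single BatchWriteItem request mixes PutRequest and DeleteRequest entries per table. A minimal sketch with hypothetical table and attribute names; any UnprocessedItems in the response should be retried with exponential backoff as described above.

<?php
require 'vendor/autoload.php';

use Aws\DynamoDb\DynamoDbClient;

$client = new DynamoDbClient(['region' => 'us-east-1', 'version' => '2012-08-10']);

$result = $client->batchWriteItem([
    'RequestItems' => [
        'Music' => [                                  // hypothetical table
            ['PutRequest' => ['Item' => [
                'Artist'    => ['S' => 'Acme Band'],
                'SongTitle' => ['S' => 'Happy Day'],
            ]]],
            ['DeleteRequest' => ['Key' => [
                'Artist'    => ['S' => 'No One You Know'],
                'SongTitle' => ['S' => 'Call Me Today'],
            ]]],
        ],
    ],
]);

// Anything DynamoDB could not write comes back here and should be retried.
print_r($result['UnprocessedItems']);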

If one or more of the following is true, DynamoDB rejects the entire batch write operation:

", "CreateTable": "

The CreateTable operation adds a new table to your account. In an AWS account, table names must be unique within each region. That is, you can have two tables with the same name if you create the tables in different regions.

CreateTable is an asynchronous operation. Upon receiving a CreateTable request, DynamoDB immediately returns a response with a TableStatus of CREATING. After the table is created, DynamoDB sets the TableStatus to ACTIVE. You can perform read and write operations only on an ACTIVE table.

You can optionally define secondary indexes on the new table, as part of the CreateTable operation. If you want to create multiple tables with secondary indexes on them, you must create the tables sequentially. Only one table with secondary indexes can be in the CREATING state at any given time.

You can use the DescribeTable API to check the table status.

", "DeleteItem": "

Deletes a single item in a table by primary key. You can perform a conditional delete operation that deletes the item if it exists, or if it has an expected attribute value.

In addition to deleting an item, you can also return the item's attribute values in the same operation, using the ReturnValues parameter.

Unless you specify conditions, the DeleteItem is an idempotent operation; running it multiple times on the same item or attribute does not result in an error response.

Conditional deletes are useful for deleting items only if specific conditions are met. If those conditions are met, DynamoDB performs the delete. Otherwise, the item is not deleted.

", "DeleteTable": "

The DeleteTable operation deletes a table and all of its items. After a DeleteTable request, the specified table is in the DELETING state until DynamoDB completes the deletion. If the table is in the ACTIVE state, you can delete it. If a table is in the CREATING or UPDATING state, then DynamoDB returns a ResourceInUseException. If the specified table does not exist, DynamoDB returns a ResourceNotFoundException. If the table is already in the DELETING state, no error is returned.

DynamoDB might continue to accept data read and write operations, such as GetItem and PutItem, on a table in the DELETING state until the table deletion is complete.

When you delete a table, any indexes on that table are also deleted.

Use the DescribeTable API to check the status of the table.

", @@ -9,12 +10,12 @@ "GetItem": "

The GetItem operation returns a set of attributes for the item with the given primary key. If there is no matching item, GetItem does not return any data.

GetItem provides an eventually consistent read by default. If your application requires a strongly consistent read, set ConsistentRead to true. Although a strongly consistent read might take more time than an eventually consistent read, it always returns the last updated value.
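
A short sketch of a strongly consistent single-item read; the table and key attribute names are hypothetical.

<?php
require 'vendor/autoload.php';

use Aws\DynamoDb\DynamoDbClient;

$client = new DynamoDbClient(['region' => 'us-east-1', 'version' => '2012-08-10']);

$result = $client->getItem([
    'TableName'      => 'Music',                  // hypothetical table
    'Key'            => [
        'Artist'    => ['S' => 'Acme Band'],
        'SongTitle' => ['S' => 'Happy Day'],
    ],
    'ConsistentRead' => true,                     // return the last updated value
]);

print_r($result['Item']);                         // empty if no matching item exists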

", "ListTables": "

Returns an array of table names associated with the current account and endpoint. The output from ListTables is paginated, with each page returning a maximum of 100 table names.

", "PutItem": "

Creates a new item, or replaces an old item with a new item. If an item that has the same primary key as the new item already exists in the specified table, the new item completely replaces the existing item. You can perform a conditional put operation (add a new item if one with the specified primary key doesn't exist), or replace an existing item if it has certain attribute values.

In addition to putting an item, you can also return the item's attribute values in the same operation, using the ReturnValues parameter.

When you add an item, the primary key attribute(s) are the only required attributes. Attribute values cannot be null. String and Binary type attributes must have lengths greater than zero. Set type attributes cannot be empty. Requests with empty values will be rejected with a ValidationException exception.

You can request that PutItem return either a copy of the original item (before the update) or a copy of the updated item (after the update). For more information, see the ReturnValues description below.

To prevent a new item from replacing an existing item, use a conditional put operation with ComparisonOperator set to NULL for the primary key attribute, or attributes.

For more information about using this API, see Working with Items in the Amazon DynamoDB Developer Guide.
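
A conditional put that refuses to overwrite an existing item can also be written with ConditionExpression and attribute_not_exists, the expression-style counterpart of the legacy approach above; the table and attribute names are hypothetical.

<?php
require 'vendor/autoload.php';

use Aws\DynamoDb\DynamoDbClient;
use Aws\DynamoDb\Exception\DynamoDbException;

$client = new DynamoDbClient(['region' => 'us-east-1', 'version' => '2012-08-10']);

try {
    $client->putItem([
        'TableName' => 'Music',                   // hypothetical table
        'Item'      => [
            'Artist'    => ['S' => 'Acme Band'],
            'SongTitle' => ['S' => 'Happy Day'],
            'Year'      => ['N' => '2015'],
        ],
        // Only succeed if no item with this primary key already exists.
        'ConditionExpression' => 'attribute_not_exists(Artist) AND attribute_not_exists(SongTitle)',
    ]);
} catch (DynamoDbException $e) {
    if ($e->getAwsErrorCode() === 'ConditionalCheckFailedException') {
        echo "Item already exists; not overwritten.", PHP_EOL;
    } else {
        throw $e;
    }
}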

", - "Query": "

A Query operation directly accesses items from a table using the table primary key, or from an index using the index key. You must provide a specific hash key value. You can narrow the scope of the query by using comparison operators on the range key value, or on the index key. You can use the ScanIndexForward parameter to get results in forward or reverse order, by range key or by index key.

Queries that do not return results consume the minimum number of read capacity units for that type of read operation.

If the total number of items meeting the query criteria exceeds the result set size limit of 1 MB, the query stops and results are returned to the user with LastEvaluatedKey to continue the query in a subsequent operation. Unlike a Scan operation, a Query operation never returns both an empty result set and a LastEvaluatedKey. The LastEvaluatedKey is only provided if the results exceed 1 MB, or if you have used Limit.

You can query a table, a local secondary index, or a global secondary index. For a query on a table or on a local secondary index, you can set ConsistentRead to true and obtain a strongly consistent result. Global secondary indexes support eventually consistent reads only, so do not specify ConsistentRead when querying a global secondary index.

", + "Query": "

A Query operation uses the primary key of a table or a secondary index to directly access items from that table or index.

Use the KeyConditionExpression parameter to provide a specific hash key value. The Query operation will return all of the items from the table or index with that hash key value. You can optionally narrow the scope of the Query by specifying a range key value and a comparison operator in the KeyConditionExpression. You can use the ScanIndexForward parameter to get results in forward or reverse order, by range key or by index key.

Queries that do not return results consume the minimum number of read capacity units for that type of read operation.

If the total number of items meeting the query criteria exceeds the result set size limit of 1 MB, the query stops and results are returned to the user with LastEvaluatedKey to continue the query in a subsequent operation. Unlike a Scan operation, a Query operation never returns both an empty result set and a LastEvaluatedKey. The LastEvaluatedKey is only provided if the results exceed 1 MB, or if you have used Limit.

You can query a table, a local secondary index, or a global secondary index. For a query on a table or on a local secondary index, you can set ConsistentRead to true and obtain a strongly consistent result. Global secondary indexes support eventually consistent reads only, so do not specify ConsistentRead when querying a global secondary index.
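
A sketch of the new KeyConditionExpression usage with the AWS SDK for PHP, including LastEvaluatedKey pagination; the table, key attributes, and values are hypothetical.

<?php
require 'vendor/autoload.php';

use Aws\DynamoDb\DynamoDbClient;

$client = new DynamoDbClient(['region' => 'us-east-1', 'version' => '2012-08-10']);

$params = [
    'TableName'                 => 'Music',      // hypothetical table
    'KeyConditionExpression'    => 'Artist = :artist AND begins_with(SongTitle, :prefix)',
    'ExpressionAttributeValues' => [
        ':artist' => ['S' => 'Acme Band'],
        ':prefix' => ['S' => 'Happy'],
    ],
    'ScanIndexForward'          => false,        // reverse order by range key
];

do {
    $page = $client->query($params);
    foreach ($page['Items'] as $item) {
        echo $item['SongTitle']['S'], PHP_EOL;
    }
    if (isset($page['LastEvaluatedKey'])) {
        // Continue from where the previous page stopped (1 MB limit or Limit reached).
        $params['ExclusiveStartKey'] = $page['LastEvaluatedKey'];
    }
} while (isset($page['LastEvaluatedKey']));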

", "Scan": "

The Scan operation returns one or more items and item attributes by accessing every item in a table or a secondary index. To have DynamoDB return fewer items, you can provide a ScanFilter operation.

If the total number of scanned items exceeds the maximum data set size limit of 1 MB, the scan stops and results are returned to the user as a LastEvaluatedKey value to continue the scan in a subsequent operation. The results also include the number of items exceeding the limit. A scan can result in no table data meeting the filter criteria.

The result set is eventually consistent.

By default, Scan operations proceed sequentially; however, for faster performance on a large table or secondary index, applications can request a parallel Scan operation by providing the Segment and TotalSegments parameters. For more information, see Parallel Scan in the Amazon DynamoDB Developer Guide.
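
A short sketch of a filtered scan; the FilterExpression is applied after items are read, so it does not reduce consumed capacity. The table and attribute names are hypothetical, and the comment notes how Segment and TotalSegments would request a parallel scan.

<?php
require 'vendor/autoload.php';

use Aws\DynamoDb\DynamoDbClient;

$client = new DynamoDbClient(['region' => 'us-east-1', 'version' => '2012-08-10']);

$result = $client->scan([
    'TableName'                 => 'Music',                       // hypothetical table
    'FilterExpression'          => '#y > :year',
    'ExpressionAttributeNames'  => ['#y' => 'Year'],               // alias for a reserved word
    'ExpressionAttributeValues' => [':year' => ['N' => '2010']],
    // Each worker in a parallel scan would also pass 'Segment' => n, 'TotalSegments' => N.
]);

echo $result['Count'], " matching item(s), ", $result['ScannedCount'], " scanned", PHP_EOL;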

", "UpdateItem": "

Edits an existing item's attributes, or adds a new item to the table if it does not already exist. You can put, delete, or add attribute values. You can also perform a conditional update on an existing item (insert a new attribute name-value pair if it doesn't exist, or replace an existing name-value pair if it has certain expected attribute values). If conditions are specified and the item does not exist, then the operation fails and a new item is not created.

You can also return the item's attribute values in the same UpdateItem operation using the ReturnValues parameter.
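
A conditional update written with UpdateExpression and ConditionExpression, the expression-style counterparts of the legacy AttributeUpdates and Expected parameters; the table, attributes, and values are hypothetical.

<?php
require 'vendor/autoload.php';

use Aws\DynamoDb\DynamoDbClient;

$client = new DynamoDbClient(['region' => 'us-east-1', 'version' => '2012-08-10']);

$result = $client->updateItem([
    'TableName'                 => 'Music',                      // hypothetical table
    'Key'                       => [
        'Artist'    => ['S' => 'Acme Band'],
        'SongTitle' => ['S' => 'Happy Day'],
    ],
    'UpdateExpression'          => 'SET Plays = if_not_exists(Plays, :zero) + :inc',
    'ConditionExpression'       => 'attribute_exists(Artist)',   // only update an existing item
    'ExpressionAttributeValues' => [
        ':inc'  => ['N' => '1'],
        ':zero' => ['N' => '0'],
    ],
    'ReturnValues'              => 'ALL_NEW',                    // return the updated attributes
]);

print_r($result['Attributes']);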

", "UpdateTable": "

Updates the provisioned throughput for the given table, or manages the global secondary indexes on the table.

You can increase or decrease the table's provisioned throughput values within the maximums and minimums listed in the Limits section in the Amazon DynamoDB Developer Guide.

In addition, you can use UpdateTable to add, modify or delete global secondary indexes on the table. For more information, see Managing Global Secondary Indexes in the Amazon DynamoDB Developer Guide.

The table must be in the ACTIVE state for UpdateTable to succeed. UpdateTable is an asynchronous operation; while executing the operation, the table is in the UPDATING state. While the table is in the UPDATING state, the table still has the provisioned throughput from before the call. The table's new provisioned throughput settings go into effect when the table returns to the ACTIVE state; at that point, the UpdateTable operation is complete.

" }, - "service": "Amazon DynamoDB

Overview

This is the Amazon DynamoDB API Reference. This guide provides descriptions and samples of the low-level DynamoDB API. For information about DynamoDB application development, go to the Amazon DynamoDB Developer Guide.

Instead of making the requests to the low-level DynamoDB API directly from your application, we recommend that you use the AWS Software Development Kits (SDKs). The easy-to-use libraries in the AWS SDKs make it unnecessary to call the low-level DynamoDB API directly from your application. The libraries take care of request authentication, serialization, and connection management. For more information, go to Using the AWS SDKs with DynamoDB in the Amazon DynamoDB Developer Guide.

If you decide to code against the low-level DynamoDB API directly, you will need to write the necessary code to authenticate your requests. For more information on signing your requests, go to Using the DynamoDB API in the Amazon DynamoDB Developer Guide.

The following are short descriptions of each low-level API action, organized by function.

Managing Tables

For conceptual information about managing tables, go to Working with Tables in the Amazon DynamoDB Developer Guide.

Reading Data

For conceptual information about reading data, go to Working with Items and Query and Scan Operations in the Amazon DynamoDB Developer Guide.

Modifying Data

For conceptual information about modifying data, go to Working with Items and Query and Scan Operations in the Amazon DynamoDB Developer Guide.

", + "service": "Amazon DynamoDB

Overview

This is the Amazon DynamoDB API Reference. This guide provides descriptions and samples of the low-level DynamoDB API. For information about DynamoDB application development, see the Amazon DynamoDB Developer Guide.

Instead of making the requests to the low-level DynamoDB API directly from your application, we recommend that you use the AWS Software Development Kits (SDKs). The easy-to-use libraries in the AWS SDKs make it unnecessary to call the low-level DynamoDB API directly from your application. The libraries take care of request authentication, serialization, and connection management. For more information, see Using the AWS SDKs with DynamoDB in the Amazon DynamoDB Developer Guide.

If you decide to code against the low-level DynamoDB API directly, you will need to write the necessary code to authenticate your requests. For more information on signing your requests, see Using the DynamoDB API in the Amazon DynamoDB Developer Guide.

The following are short descriptions of each low-level API action, organized by function.

Managing Tables

For conceptual information about managing tables, see Working with Tables in the Amazon DynamoDB Developer Guide.

Reading Data

For conceptual information about reading data, see Working with Items and Query and Scan Operations in the Amazon DynamoDB Developer Guide.

Modifying Data

For conceptual information about modifying data, see Working with Items and Query and Scan Operations in the Amazon DynamoDB Developer Guide.

", "shapes": { "AttributeAction": { "base": null, @@ -65,16 +66,16 @@ "AttributeNameList": { "base": null, "refs": { - "GetItemInput$AttributesToGet": "

There is a newer parameter available. Use ProjectionExpression instead. Note that if you use AttributesToGet and ProjectionExpression at the same time, DynamoDB will return a ValidationException exception.

This parameter allows you to retrieve attributes of type List or Map; however, it cannot retrieve individual elements within a List or a Map.

The names of one or more attributes to retrieve. If no attribute names are provided, then all attributes will be returned. If any of the requested attributes are not found, they will not appear in the result.

Note that AttributesToGet has no effect on provisioned throughput consumption. DynamoDB determines capacity units consumed based on item size, not on the amount of data that is returned to an application.

", + "GetItemInput$AttributesToGet": "

This is a legacy parameter, for backward compatibility. New applications should use ProjectionExpression instead. Do not combine legacy parameters and expression parameters in a single API call; otherwise, DynamoDB will return a ValidationException exception.

This parameter allows you to retrieve attributes of type List or Map; however, it cannot retrieve individual elements within a List or a Map.

The names of one or more attributes to retrieve. If no attribute names are provided, then all attributes will be returned. If any of the requested attributes are not found, they will not appear in the result.

Note that AttributesToGet has no effect on provisioned throughput consumption. DynamoDB determines capacity units consumed based on item size, not on the amount of data that is returned to an application.

", "KeysAndAttributes$AttributesToGet": "

One or more attributes to retrieve from the table or index. If no attribute names are specified then all attributes will be returned. If any of the specified attributes are not found, they will not appear in the result.

", - "QueryInput$AttributesToGet": "

There is a newer parameter available. Use ProjectionExpression instead. Note that if you use AttributesToGet and ProjectionExpression at the same time, DynamoDB will return a ValidationException exception.

This parameter allows you to retrieve attributes of type List or Map; however, it cannot retrieve individual elements within a List or a Map.

The names of one or more attributes to retrieve. If no attribute names are provided, then all attributes will be returned. If any of the requested attributes are not found, they will not appear in the result.

Note that AttributesToGet has no effect on provisioned throughput consumption. DynamoDB determines capacity units consumed based on item size, not on the amount of data that is returned to an application.

You cannot use both AttributesToGet and Select together in a Query request, unless the value for Select is SPECIFIC_ATTRIBUTES. (This usage is equivalent to specifying AttributesToGet without any value for Select.)

If you query a local secondary index and request only attributes that are projected into that index, the operation will read only the index and not the table. If any of the requested attributes are not projected into the local secondary index, DynamoDB will fetch each of these attributes from the parent table. This extra fetching incurs additional throughput cost and latency.

If you query a global secondary index, you can only request attributes that are projected into the index. Global secondary index queries cannot fetch attributes from the parent table.

", - "ScanInput$AttributesToGet": "

There is a newer parameter available. Use ProjectionExpression instead. Note that if you use AttributesToGet and ProjectionExpression at the same time, DynamoDB will return a ValidationException exception.

This parameter allows you to retrieve attributes of type List or Map; however, it cannot retrieve individual elements within a List or a Map.

The names of one or more attributes to retrieve. If no attribute names are provided, then all attributes will be returned. If any of the requested attributes are not found, they will not appear in the result.

Note that AttributesToGet has no effect on provisioned throughput consumption. DynamoDB determines capacity units consumed based on item size, not on the amount of data that is returned to an application.

" + "QueryInput$AttributesToGet": "

This is a legacy parameter, for backward compatibility. New applications should use ProjectionExpression instead. Do not combine legacy parameters and expression parameters in a single API call; otherwise, DynamoDB will return a ValidationException exception.

This parameter allows you to retrieve attributes of type List or Map; however, it cannot retrieve individual elements within a List or a Map.

The names of one or more attributes to retrieve. If no attribute names are provided, then all attributes will be returned. If any of the requested attributes are not found, they will not appear in the result.

Note that AttributesToGet has no effect on provisioned throughput consumption. DynamoDB determines capacity units consumed based on item size, not on the amount of data that is returned to an application.

You cannot use both AttributesToGet and Select together in a Query request, unless the value for Select is SPECIFIC_ATTRIBUTES. (This usage is equivalent to specifying AttributesToGet without any value for Select.)

If you query a local secondary index and request only attributes that are projected into that index, the operation will read only the index and not the table. If any of the requested attributes are not projected into the local secondary index, DynamoDB will fetch each of these attributes from the parent table. This extra fetching incurs additional throughput cost and latency.

If you query a global secondary index, you can only request attributes that are projected into the index. Global secondary index queries cannot fetch attributes from the parent table.

", + "ScanInput$AttributesToGet": "

This is a legacy parameter, for backward compatibility. New applications should use ProjectionExpression instead. Do not combine legacy parameters and expression parameters in a single API call; otherwise, DynamoDB will return a ValidationException exception.

This parameter allows you to retrieve attributes of type List or Map; however, it cannot retrieve individual elements within a List or a Map.

The names of one or more attributes to retrieve. If no attribute names are provided, then all attributes will be returned. If any of the requested attributes are not found, they will not appear in the result.

Note that AttributesToGet has no effect on provisioned throughput consumption. DynamoDB determines capacity units consumed based on item size, not on the amount of data that is returned to an application.

" } }, "AttributeUpdates": { "base": null, "refs": { - "UpdateItemInput$AttributeUpdates": "

There is a newer parameter available. Use UpdateExpression instead. Note that if you use AttributeUpdates and UpdateExpression at the same time, DynamoDB will return a ValidationException exception.

This parameter can be used for modifying top-level attributes; however, it does not support individual list or map elements.

The names of attributes to be modified, the action to perform on each, and the new value for each. If you are updating an attribute that is an index key attribute for any indexes on that table, the attribute type must match the index key type defined in the AttributesDefinition of the table description. You can use UpdateItem to update any nonkey attributes.

Attribute values cannot be null. String and Binary type attributes must have lengths greater than zero. Set type attributes must not be empty. Requests with empty values will be rejected with a ValidationException exception.

Each AttributeUpdates element consists of an attribute name to modify, along with the following:

If you provide any attributes that are part of an index key, then the data types for those attributes must match those of the schema in the table's attribute definition.

" + "UpdateItemInput$AttributeUpdates": "

This is a legacy parameter, for backward compatibility. New applications should use UpdateExpression instead. Do not combine legacy parameters and expression parameters in a single API call; otherwise, DynamoDB will return a ValidationException exception.

This parameter can be used for modifying top-level attributes; however, it does not support individual list or map elements.

The names of attributes to be modified, the action to perform on each, and the new value for each. If you are updating an attribute that is an index key attribute for any indexes on that table, the attribute type must match the index key type defined in the AttributesDefinition of the table description. You can use UpdateItem to update any nonkey attributes.

Attribute values cannot be null. String and Binary type attributes must have lengths greater than zero. Set type attributes must not be empty. Requests with empty values will be rejected with a ValidationException exception.

Each AttributeUpdates element consists of an attribute name to modify, along with the following:

If you provide any attributes that are part of an index key, then the data types for those attributes must match those of the schema in the table's attribute definition.

" } }, "AttributeValue": { @@ -124,7 +125,7 @@ "BatchGetRequestMap": { "base": null, "refs": { - "BatchGetItemInput$RequestItems": "

A map of one or more table names and, for each table, the corresponding primary keys for the items to retrieve. Each table name can be invoked only once.

Each element in the map consists of the following:

", + "BatchGetItemInput$RequestItems": "

A map of one or more table names and, for each table, a map that describes one or more items to retrieve from that table. Each table name can be used only once per BatchGetItem request.

Each element in the map of items to retrieve consists of the following:

", "BatchGetItemOutput$UnprocessedKeys": "

A map of tables and their respective keys that were not processed with the current response. The UnprocessedKeys value is in the same form as RequestItems, so the value can be provided directly to a subsequent BatchGetItem operation. For more information, see RequestItems in the Request Parameters section.

Each element consists of:

If there are no unprocessed keys remaining, the response contains an empty UnprocessedKeys map.

" } }, @@ -201,11 +202,11 @@ "ConditionExpression": { "base": null, "refs": { - "DeleteItemInput$ConditionExpression": "

A condition that must be satisfied in order for a conditional DeleteItem to succeed.

An expression can contain any of the following:

For more information on condition expressions, go to Specifying Conditions in the Amazon DynamoDB Developer Guide.

", - "PutItemInput$ConditionExpression": "

A condition that must be satisfied in order for a conditional PutItem operation to succeed.

An expression can contain any of the following:

For more information on condition expressions, go to Specifying Conditions in the Amazon DynamoDB Developer Guide.

", - "QueryInput$FilterExpression": "

A string that contains conditions that DynamoDB applies after the Query operation, but before the data is returned to you. Items that do not satisfy the FilterExpression criteria are not returned.

A FilterExpression is applied after the items have already been read; the process of filtering does not consume any additional read capacity units.

For more information, go to Filter Expressions in the Amazon DynamoDB Developer Guide.

", - "ScanInput$FilterExpression": "

A string that contains conditions that DynamoDB applies after the Scan operation, but before the data is returned to you. Items that do not satisfy the FilterExpression criteria are not returned.

A FilterExpression is applied after the items have already been read; the process of filtering does not consume any additional read capacity units.

For more information, go to Filter Expressions in the Amazon DynamoDB Developer Guide.

", - "UpdateItemInput$ConditionExpression": "

A condition that must be satisfied in order for a conditional update to succeed.

An expression can contain any of the following:

For more information on condition expressions, go to Specifying Conditions in the Amazon DynamoDB Developer Guide.

" + "DeleteItemInput$ConditionExpression": "

A condition that must be satisfied in order for a conditional DeleteItem to succeed.

An expression can contain any of the following:

For more information on condition expressions, see Specifying Conditions in the Amazon DynamoDB Developer Guide.

ConditionExpression replaces the legacy ConditionalOperator and Expected parameters.

", + "PutItemInput$ConditionExpression": "

A condition that must be satisfied in order for a conditional PutItem operation to succeed.

An expression can contain any of the following:

For more information on condition expressions, see Specifying Conditions in the Amazon DynamoDB Developer Guide.

ConditionExpression replaces the legacy ConditionalOperator and Expected parameters.

", + "QueryInput$FilterExpression": "

A string that contains conditions that DynamoDB applies after the Query operation, but before the data is returned to you. Items that do not satisfy the FilterExpression criteria are not returned.

A FilterExpression is applied after the items have already been read; the process of filtering does not consume any additional read capacity units.

For more information, see Filter Expressions in the Amazon DynamoDB Developer Guide.

FilterExpression replaces the legacy QueryFilter and ConditionalOperator parameters.

", + "ScanInput$FilterExpression": "

A string that contains conditions that DynamoDB applies after the Scan operation, but before the data is returned to you. Items that do not satisfy the FilterExpression criteria are not returned.

A FilterExpression is applied after the items have already been read; the process of filtering does not consume any additional read capacity units.

For more information, see Filter Expressions in the Amazon DynamoDB Developer Guide.

FilterExpression replaces the legacy ScanFilter and ConditionalOperator parameters.

", + "UpdateItemInput$ConditionExpression": "

A condition that must be satisfied in order for a conditional update to succeed.

An expression can contain any of the following:

For more information on condition expressions, see Specifying Conditions in the Amazon DynamoDB Developer Guide.

ConditionExpression replaces the legacy ConditionalOperator and Expected parameters.

" } }, "ConditionalCheckFailedException": { @@ -216,11 +217,11 @@ "ConditionalOperator": { "base": null, "refs": { - "DeleteItemInput$ConditionalOperator": "

There is a newer parameter available. Use ConditionExpression instead. Note that if you use ConditionalOperator and ConditionExpression at the same time, DynamoDB will return a ValidationException exception.

A logical operator to apply to the conditions in the Expected map:

If you omit ConditionalOperator, then AND is the default.

The operation will succeed only if the entire map evaluates to true.

This parameter does not support attributes of type List or Map.

", - "PutItemInput$ConditionalOperator": "

There is a newer parameter available. Use ConditionExpression instead. Note that if you use ConditionalOperator and ConditionExpression at the same time, DynamoDB will return a ValidationException exception.

A logical operator to apply to the conditions in the Expected map:

If you omit ConditionalOperator, then AND is the default.

The operation will succeed only if the entire map evaluates to true.

This parameter does not support attributes of type List or Map.

", - "QueryInput$ConditionalOperator": "

A logical operator to apply to the conditions in a QueryFilter map:

If you omit ConditionalOperator, then AND is the default.

The operation will succeed only if the entire map evaluates to true.

This parameter does not support attributes of type List or Map.

", - "ScanInput$ConditionalOperator": "

There is a newer parameter available. Use ConditionExpression instead. Note that if you use ConditionalOperator and ConditionExpression at the same time, DynamoDB will return a ValidationException exception.

A logical operator to apply to the conditions in a ScanFilter map:

If you omit ConditionalOperator, then AND is the default.

The operation will succeed only if the entire map evaluates to true.

This parameter does not support attributes of type List or Map.

", - "UpdateItemInput$ConditionalOperator": "

There is a newer parameter available. Use ConditionExpression instead. Note that if you use ConditionalOperator and ConditionExpression at the same time, DynamoDB will return a ValidationException exception.

A logical operator to apply to the conditions in the Expected map:

If you omit ConditionalOperator, then AND is the default.

The operation will succeed only if the entire map evaluates to true.

This parameter does not support attributes of type List or Map.

" + "DeleteItemInput$ConditionalOperator": "

This is a legacy parameter, for backward compatibility. New applications should use ConditionExpression instead. Do not combine legacy parameters and expression parameters in a single API call; otherwise, DynamoDB will return a ValidationException exception.

A logical operator to apply to the conditions in the Expected map:

If you omit ConditionalOperator, then AND is the default.

The operation will succeed only if the entire map evaluates to true.

This parameter does not support attributes of type List or Map.

", + "PutItemInput$ConditionalOperator": "

This is a legacy parameter, for backward compatibility. New applications should use ConditionExpression instead. Do not combine legacy parameters and expression parameters in a single API call; otherwise, DynamoDB will return a ValidationException exception.

A logical operator to apply to the conditions in the Expected map:

If you omit ConditionalOperator, then AND is the default.

The operation will succeed only if the entire map evaluates to true.

This parameter does not support attributes of type List or Map.

", + "QueryInput$ConditionalOperator": "

This is a legacy parameter, for backward compatibility. New applications should use FilterExpression instead. Do not combine legacy parameters and expression parameters in a single API call; otherwise, DynamoDB will return a ValidationException exception.

A logical operator to apply to the conditions in a QueryFilter map:

If you omit ConditionalOperator, then AND is the default.

The operation will succeed only if the entire map evaluates to true.

This parameter does not support attributes of type List or Map.

", + "ScanInput$ConditionalOperator": "

This is a legacy parameter, for backward compatibility. New applications should use FilterExpression instead. Do not combine legacy parameters and expression parameters in a single API call; otherwise, DynamoDB will return a ValidationException exception.

A logical operator to apply to the conditions in a ScanFilter map:

If you omit ConditionalOperator, then AND is the default.

The operation will succeed only if the entire map evaluates to true.

This parameter does not support attributes of type List or Map.

", + "UpdateItemInput$ConditionalOperator": "

This is a legacy parameter, for backward compatibility. New applications should use ConditionExpression instead. Do not combine legacy parameters and expression parameters in a single API call; otherwise, DynamoDB will return a ValidationException exception.

A logical operator to apply to the conditions in the Expected map:

If you omit ConditionalOperator, then AND is the default.

The operation will succeed only if the entire map evaluates to true.

This parameter does not support attributes of type List or Map.
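
As a hedged sketch, this shows how the OR behavior that ConditionalOperator provides (here for a scan filter) can be written with the newer FilterExpression parameter instead; the table and attribute names are hypothetical.

<?php
// Hypothetical sketch: an OR of two conditions written with FilterExpression
// rather than ScanFilter plus ConditionalOperator => 'OR'.
require 'vendor/autoload.php';

use Aws\DynamoDb\DynamoDbClient;

$client = new DynamoDbClient(['region' => 'us-west-2', 'version' => 'latest']);

$result = $client->scan([
    'TableName'        => 'ProductCatalog',            // hypothetical table
    'FilterExpression' => 'Price < :p OR ProductCategory = :c',
    'ExpressionAttributeValues' => [
        ':p' => ['N' => '100'],
        ':c' => ['S' => 'Bicycle'],
    ],
]);

echo count($result['Items']), " matching items\n";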

" } }, "ConsistentRead": { @@ -338,13 +339,13 @@ "ExpectedAttributeMap": { "base": null, "refs": { - "DeleteItemInput$Expected": "

There is a newer parameter available. Use ConditionExpression instead. Note that if you use Expected and ConditionExpression at the same time, DynamoDB will return a ValidationException exception.

A map of attribute/condition pairs. Expected provides a conditional block for the DeleteItem operation.

Each element of Expected consists of an attribute name, a comparison operator, and one or more values. DynamoDB compares the attribute with the value(s) you supplied, using the comparison operator. For each Expected element, the result of the evaluation is either true or false.

If you specify more than one element in the Expected map, then by default all of the conditions must evaluate to true. In other words, the conditions are ANDed together. (You can use the ConditionalOperator parameter to OR the conditions instead. If you do this, then at least one of the conditions must evaluate to true, rather than all of them.)

If the Expected map evaluates to true, then the conditional operation succeeds; otherwise, it fails.

Expected contains the following:

For usage examples of AttributeValueList and ComparisonOperator, see Legacy Conditional Parameters in the Amazon DynamoDB Developer Guide.

For backward compatibility with previous DynamoDB releases, the following parameters can be used instead of AttributeValueList and ComparisonOperator:

The Value and Exists parameters are incompatible with AttributeValueList and ComparisonOperator. Note that if you use both sets of parameters at once, DynamoDB will return a ValidationException exception.

This parameter does not support attributes of type List or Map.

", - "PutItemInput$Expected": "

There is a newer parameter available. Use ConditionExpression instead. Note that if you use Expected and ConditionExpression at the same time, DynamoDB will return a ValidationException exception.

A map of attribute/condition pairs. Expected provides a conditional block for the PutItem operation.

This parameter does not support attributes of type List or Map.

Each element of Expected consists of an attribute name, a comparison operator, and one or more values. DynamoDB compares the attribute with the value(s) you supplied, using the comparison operator. For each Expected element, the result of the evaluation is either true or false.

If you specify more than one element in the Expected map, then by default all of the conditions must evaluate to true. In other words, the conditions are ANDed together. (You can use the ConditionalOperator parameter to OR the conditions instead. If you do this, then at least one of the conditions must evaluate to true, rather than all of them.)

If the Expected map evaluates to true, then the conditional operation succeeds; otherwise, it fails.

Expected contains the following:

For usage examples of AttributeValueList and ComparisonOperator, see Legacy Conditional Parameters in the Amazon DynamoDB Developer Guide.

For backward compatibility with previous DynamoDB releases, the following parameters can be used instead of AttributeValueList and ComparisonOperator:

The Value and Exists parameters are incompatible with AttributeValueList and ComparisonOperator. Note that if you use both sets of parameters at once, DynamoDB will return a ValidationException exception.

", - "UpdateItemInput$Expected": "

There is a newer parameter available. Use ConditionExpression instead. Note that if you use Expected and ConditionExpression at the same time, DynamoDB will return a ValidationException exception.

A map of attribute/condition pairs. Expected provides a conditional block for the UpdateItem operation.

Each element of Expected consists of an attribute name, a comparison operator, and one or more values. DynamoDB compares the attribute with the value(s) you supplied, using the comparison operator. For each Expected element, the result of the evaluation is either true or false.

If you specify more than one element in the Expected map, then by default all of the conditions must evaluate to true. In other words, the conditions are ANDed together. (You can use the ConditionalOperator parameter to OR the conditions instead. If you do this, then at least one of the conditions must evaluate to true, rather than all of them.)

If the Expected map evaluates to true, then the conditional operation succeeds; otherwise, it fails.

Expected contains the following:

For usage examples of AttributeValueList and ComparisonOperator, see Legacy Conditional Parameters in the Amazon DynamoDB Developer Guide.

For backward compatibility with previous DynamoDB releases, the following parameters can be used instead of AttributeValueList and ComparisonOperator:

The Value and Exists parameters are incompatible with AttributeValueList and ComparisonOperator. Note that if you use both sets of parameters at once, DynamoDB will return a ValidationException exception.

This parameter does not support attributes of type List or Map.

" + "DeleteItemInput$Expected": "

This is a legacy parameter, for backward compatibility. New applications should use ConditionExpression instead. Do not combine legacy parameters and expression parameters in a single API call; otherwise, DynamoDB will return a ValidationException exception.

A map of attribute/condition pairs. Expected provides a conditional block for the DeleteItem operation.

Each element of Expected consists of an attribute name, a comparison operator, and one or more values. DynamoDB compares the attribute with the value(s) you supplied, using the comparison operator. For each Expected element, the result of the evaluation is either true or false.

If you specify more than one element in the Expected map, then by default all of the conditions must evaluate to true. In other words, the conditions are ANDed together. (You can use the ConditionalOperator parameter to OR the conditions instead. If you do this, then at least one of the conditions must evaluate to true, rather than all of them.)

If the Expected map evaluates to true, then the conditional operation succeeds; otherwise, it fails.

Expected contains the following:

For usage examples of AttributeValueList and ComparisonOperator, see Legacy Conditional Parameters in the Amazon DynamoDB Developer Guide.

For backward compatibility with previous DynamoDB releases, the following parameters can be used instead of AttributeValueList and ComparisonOperator:

The Value and Exists parameters are incompatible with AttributeValueList and ComparisonOperator. Note that if you use both sets of parameters at once, DynamoDB will return a ValidationException exception.

This parameter does not support attributes of type List or Map.

", + "PutItemInput$Expected": "

This is a legacy parameter, for backward compatibility. New applications should use ConditionExpression instead. Do not combine legacy parameters and expression parameters in a single API call; otherwise, DynamoDB will return a ValidationException exception.

A map of attribute/condition pairs. Expected provides a conditional block for the PutItem operation.

This parameter does not support attributes of type List or Map.

Each element of Expected consists of an attribute name, a comparison operator, and one or more values. DynamoDB compares the attribute with the value(s) you supplied, using the comparison operator. For each Expected element, the result of the evaluation is either true or false.

If you specify more than one element in the Expected map, then by default all of the conditions must evaluate to true. In other words, the conditions are ANDed together. (You can use the ConditionalOperator parameter to OR the conditions instead. If you do this, then at least one of the conditions must evaluate to true, rather than all of them.)

If the Expected map evaluates to true, then the conditional operation succeeds; otherwise, it fails.

Expected contains the following:

For usage examples of AttributeValueList and ComparisonOperator, see Legacy Conditional Parameters in the Amazon DynamoDB Developer Guide.

For backward compatibility with previous DynamoDB releases, the following parameters can be used instead of AttributeValueList and ComparisonOperator:

The Value and Exists parameters are incompatible with AttributeValueList and ComparisonOperator. Note that if you use both sets of parameters at once, DynamoDB will return a ValidationException exception.

", + "UpdateItemInput$Expected": "

This is a legacy parameter, for backward compatibility. New applications should use ConditionExpression instead. Do not combine legacy parameters and expression parameters in a single API call; otherwise, DynamoDB will return a ValidationException exception.

A map of attribute/condition pairs. Expected provides a conditional block for the UpdateItem operation.

Each element of Expected consists of an attribute name, a comparison operator, and one or more values. DynamoDB compares the attribute with the value(s) you supplied, using the comparison operator. For each Expected element, the result of the evaluation is either true or false.

If you specify more than one element in the Expected map, then by default all of the conditions must evaluate to true. In other words, the conditions are ANDed together. (You can use the ConditionalOperator parameter to OR the conditions instead. If you do this, then at least one of the conditions must evaluate to true, rather than all of them.)

If the Expected map evaluates to true, then the conditional operation succeeds; otherwise, it fails.

Expected contains the following:

For usage examples of AttributeValueList and ComparisonOperator, see Legacy Conditional Parameters in the Amazon DynamoDB Developer Guide.

For backward compatibility with previous DynamoDB releases, the following parameters can be used instead of AttributeValueList and ComparisonOperator:

The Value and Exists parameters are incompatible with AttributeValueList and ComparisonOperator. Note that if you use both sets of parameters at once, DynamoDB will return a ValidationException exception.

This parameter does not support attributes of type List or Map.
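
A minimal sketch of the ConditionExpression form of a single-condition Expected block on DeleteItem; the table, key, and attribute names are hypothetical.

<?php
// Hypothetical sketch: the Expected-style conditional delete, rewritten with
// ConditionExpression and a value placeholder.
require 'vendor/autoload.php';

use Aws\DynamoDb\DynamoDbClient;

$client = new DynamoDbClient(['region' => 'us-west-2', 'version' => 'latest']);

$client->deleteItem([
    'TableName' => 'ProductCatalog',                    // hypothetical table
    'Key'       => ['Id' => ['N' => '201']],
    // Delete only if the current price is exactly 100.
    'ConditionExpression'       => 'Price = :expected',
    'ExpressionAttributeValues' => [':expected' => ['N' => '100']],
]);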

" } }, "ExpectedAttributeValue": { - "base": "

Represents a condition to be compared with an attribute value. This condition can be used with DeleteItem, PutItem or UpdateItem operations; if the comparison evaluates to true, the operation succeeds; if not, the operation fails. You can use ExpectedAttributeValue in one of two different ways:

Value and Exists are incompatible with AttributeValueList and ComparisonOperator. Note that if you use both sets of parameters at once, DynamoDB will return a ValidationException exception.

", + "base": "

Represents a condition to be compared with an attribute value. This condition can be used with DeleteItem, PutItem or UpdateItem operations; if the comparison evaluates to true, the operation succeeds; if not, the operation fails. You can use ExpectedAttributeValue in one of two different ways:

Value and Exists are incompatible with AttributeValueList and ComparisonOperator. Note that if you use both sets of parameters at once, DynamoDB will return a ValidationException exception.

", "refs": { "ExpectedAttributeMap$value": null } @@ -352,13 +353,13 @@ "ExpressionAttributeNameMap": { "base": null, "refs": { - "DeleteItemInput$ExpressionAttributeNames": "

One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:

Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:

The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, go to Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:

You could then use this substitution in an expression, as in this example:

Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.

For more information on expression attribute names, go to Accessing Item Attributes in the Amazon DynamoDB Developer Guide.

", - "GetItemInput$ExpressionAttributeNames": "

One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:

Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:

The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, go to Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:

You could then use this substitution in an expression, as in this example:

Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.

For more information on expression attribute names, go to Accessing Item Attributes in the Amazon DynamoDB Developer Guide.

", - "KeysAndAttributes$ExpressionAttributeNames": "

One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:

Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:

The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, go to Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:

You could then use this substitution in an expression, as in this example:

Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.

For more information on expression attribute names, go to Accessing Item Attributes in the Amazon DynamoDB Developer Guide.

", - "PutItemInput$ExpressionAttributeNames": "

One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:

Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:

The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, go to Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:

You could then use this substitution in an expression, as in this example:

Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.

For more information on expression attribute names, go to Accessing Item Attributes in the Amazon DynamoDB Developer Guide.

", - "QueryInput$ExpressionAttributeNames": "

One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:

Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:

The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, go to Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:

You could then use this substitution in an expression, as in this example:

Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.

For more information on expression attribute names, go to Accessing Item Attributes in the Amazon DynamoDB Developer Guide.

", - "ScanInput$ExpressionAttributeNames": "

One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:

Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:

The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, go to Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:

You could then use this substitution in an expression, as in this example:

Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.

For more information on expression attribute names, go to Accessing Item Attributes in the Amazon DynamoDB Developer Guide.

", - "UpdateItemInput$ExpressionAttributeNames": "

One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:

Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:

The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, go to Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:

You could then use this substitution in an expression, as in this example:

Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.

For more information on expression attribute names, go to Accessing Item Attributes in the Amazon DynamoDB Developer Guide.

" + "DeleteItemInput$ExpressionAttributeNames": "

One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:

Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:

The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:

You could then use this substitution in an expression, as in this example:

Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.

For more information on expression attribute names, see Using Placeholders for Attribute Names and Values in the Amazon DynamoDB Developer Guide.

", + "GetItemInput$ExpressionAttributeNames": "

One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:

Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:

The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:

You could then use this substitution in an expression, as in this example:

Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.

For more information on expression attribute names, see Using Placeholders for Attribute Names and Values in the Amazon DynamoDB Developer Guide.

", + "KeysAndAttributes$ExpressionAttributeNames": "

One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:

Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:

The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:

You could then use this substitution in an expression, as in this example:

Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.

For more information on expression attribute names, see Using Placeholders for Attribute Names and Values in the Amazon DynamoDB Developer Guide.

", + "PutItemInput$ExpressionAttributeNames": "

One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:

Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:

The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:

You could then use this substitution in an expression, as in this example:

Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.

For more information on expression attribute names, see Using Placeholders for Attribute Names and Values in the Amazon DynamoDB Developer Guide.

", + "QueryInput$ExpressionAttributeNames": "

One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:

Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:

The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:

You could then use this substitution in an expression, as in this example:

Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.

For more information on expression attribute names, see Using Placeholders for Attribute Names and Values in the Amazon DynamoDB Developer Guide.

", + "ScanInput$ExpressionAttributeNames": "

One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:

Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:

The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:

You could then use this substitution in an expression, as in this example:

Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.

For more information on expression attribute names, see Using Placeholders for Attribute Names and Values in the Amazon DynamoDB Developer Guide.

", + "UpdateItemInput$ExpressionAttributeNames": "

One or more substitution tokens for attribute names in an expression. The following are some use cases for using ExpressionAttributeNames:

Use the # character in an expression to dereference an attribute name. For example, consider the following attribute name:

The name of this attribute conflicts with a reserved word, so it cannot be used directly in an expression. (For the complete list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide). To work around this, you could specify the following for ExpressionAttributeNames:

You could then use this substitution in an expression, as in this example:

Tokens that begin with the : character are expression attribute values, which are placeholders for the actual value at runtime.

For more information on expression attribute names, see Using Placeholders for Attribute Names and Values in the Amazon DynamoDB Developer Guide.
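
For illustration, a minimal sketch of the # substitution described above, assuming a hypothetical ProductCatalog table whose items carry a Size attribute (Size is a DynamoDB reserved word).

<?php
// Hypothetical sketch: "Size" is a reserved word, so it is referenced through
// the #sz placeholder defined in ExpressionAttributeNames.
require 'vendor/autoload.php';

use Aws\DynamoDb\DynamoDbClient;

$client = new DynamoDbClient(['region' => 'us-west-2', 'version' => 'latest']);

$result = $client->getItem([
    'TableName'                => 'ProductCatalog',     // hypothetical table
    'Key'                      => ['Id' => ['N' => '201']],
    'ProjectionExpression'     => 'Title, #sz',
    'ExpressionAttributeNames' => ['#sz' => 'Size'],
]);

print_r($result['Item']);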

" } }, "ExpressionAttributeNameVariable": { @@ -370,11 +371,11 @@ "ExpressionAttributeValueMap": { "base": null, "refs": { - "DeleteItemInput$ExpressionAttributeValues": "

One or more values that can be substituted in an expression.

Use the : (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the ProductStatus attribute was one of the following:

Available | Backordered | Discontinued

You would first need to specify ExpressionAttributeValues as follows:

{ \":avail\":{\"S\":\"Available\"}, \":back\":{\"S\":\"Backordered\"}, \":disc\":{\"S\":\"Discontinued\"} }

You could then use these values in an expression, such as this:

ProductStatus IN (:avail, :back, :disc)

For more information on expression attribute values, go to Specifying Conditions in the Amazon DynamoDB Developer Guide.

", - "PutItemInput$ExpressionAttributeValues": "

One or more values that can be substituted in an expression.

Use the : (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the ProductStatus attribute was one of the following:

Available | Backordered | Discontinued

You would first need to specify ExpressionAttributeValues as follows:

{ \":avail\":{\"S\":\"Available\"}, \":back\":{\"S\":\"Backordered\"}, \":disc\":{\"S\":\"Discontinued\"} }

You could then use these values in an expression, such as this:

ProductStatus IN (:avail, :back, :disc)

For more information on expression attribute values, go to Specifying Conditions in the Amazon DynamoDB Developer Guide.

", - "QueryInput$ExpressionAttributeValues": "

One or more values that can be substituted in an expression.

Use the : (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the ProductStatus attribute was one of the following:

Available | Backordered | Discontinued

You would first need to specify ExpressionAttributeValues as follows:

{ \":avail\":{\"S\":\"Available\"}, \":back\":{\"S\":\"Backordered\"}, \":disc\":{\"S\":\"Discontinued\"} }

You could then use these values in an expression, such as this:

ProductStatus IN (:avail, :back, :disc)

For more information on expression attribute values, go to Specifying Conditions in the Amazon DynamoDB Developer Guide.

", - "ScanInput$ExpressionAttributeValues": "

One or more values that can be substituted in an expression.

Use the : (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the ProductStatus attribute was one of the following:

Available | Backordered | Discontinued

You would first need to specify ExpressionAttributeValues as follows:

{ \":avail\":{\"S\":\"Available\"}, \":back\":{\"S\":\"Backordered\"}, \":disc\":{\"S\":\"Discontinued\"} }

You could then use these values in an expression, such as this:

ProductStatus IN (:avail, :back, :disc)

For more information on expression attribute values, go to Specifying Conditions in the Amazon DynamoDB Developer Guide.

", - "UpdateItemInput$ExpressionAttributeValues": "

One or more values that can be substituted in an expression.

Use the : (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the ProductStatus attribute was one of the following:

Available | Backordered | Discontinued

You would first need to specify ExpressionAttributeValues as follows:

{ \":avail\":{\"S\":\"Available\"}, \":back\":{\"S\":\"Backordered\"}, \":disc\":{\"S\":\"Discontinued\"} }

You could then use these values in an expression, such as this:

ProductStatus IN (:avail, :back, :disc)

For more information on expression attribute values, go to Specifying Conditions in the Amazon DynamoDB Developer Guide.

" + "DeleteItemInput$ExpressionAttributeValues": "

One or more values that can be substituted in an expression.

Use the : (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the ProductStatus attribute was one of the following:

Available | Backordered | Discontinued

You would first need to specify ExpressionAttributeValues as follows:

{ \":avail\":{\"S\":\"Available\"}, \":back\":{\"S\":\"Backordered\"}, \":disc\":{\"S\":\"Discontinued\"} }

You could then use these values in an expression, such as this:

ProductStatus IN (:avail, :back, :disc)

For more information on expression attribute values, see Using Placeholders for Attribute Names and Values in the Amazon DynamoDB Developer Guide.

", + "PutItemInput$ExpressionAttributeValues": "

One or more values that can be substituted in an expression.

Use the : (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the ProductStatus attribute was one of the following:

Available | Backordered | Discontinued

You would first need to specify ExpressionAttributeValues as follows:

{ \":avail\":{\"S\":\"Available\"}, \":back\":{\"S\":\"Backordered\"}, \":disc\":{\"S\":\"Discontinued\"} }

You could then use these values in an expression, such as this:

ProductStatus IN (:avail, :back, :disc)

For more information on expression attribute values, see Using Placeholders for Attribute Names and Values in the Amazon DynamoDB Developer Guide.

", + "QueryInput$ExpressionAttributeValues": "

One or more values that can be substituted in an expression.

Use the : (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the ProductStatus attribute was one of the following:

Available | Backordered | Discontinued

You would first need to specify ExpressionAttributeValues as follows:

{ \":avail\":{\"S\":\"Available\"}, \":back\":{\"S\":\"Backordered\"}, \":disc\":{\"S\":\"Discontinued\"} }

You could then use these values in an expression, such as this:

ProductStatus IN (:avail, :back, :disc)

For more information on expression attribute values, see Using Placeholders for Attribute Names and Values in the Amazon DynamoDB Developer Guide.

", + "ScanInput$ExpressionAttributeValues": "

One or more values that can be substituted in an expression.

Use the : (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the ProductStatus attribute was one of the following:

Available | Backordered | Discontinued

You would first need to specify ExpressionAttributeValues as follows:

{ \":avail\":{\"S\":\"Available\"}, \":back\":{\"S\":\"Backordered\"}, \":disc\":{\"S\":\"Discontinued\"} }

You could then use these values in an expression, such as this:

ProductStatus IN (:avail, :back, :disc)

For more information on expression attribute values, see Using Placeholders for Attribute Names and Values in the Amazon DynamoDB Developer Guide.

", + "UpdateItemInput$ExpressionAttributeValues": "

One or more values that can be substituted in an expression.

Use the : (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the ProductStatus attribute was one of the following:

Available | Backordered | Discontinued

You would first need to specify ExpressionAttributeValues as follows:

{ \":avail\":{\"S\":\"Available\"}, \":back\":{\"S\":\"Backordered\"}, \":disc\":{\"S\":\"Discontinued\"} }

You could then use these values in an expression, such as this:

ProductStatus IN (:avail, :back, :disc)

For more information on expression attribute values, see Using Placeholders for Attribute Names and Values in the Amazon DynamoDB Developer Guide.
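
The ProductStatus example above, expressed as a Scan call with the AWS SDK for PHP; the table name and client configuration are hypothetical.

<?php
// Sketch: the documented ProductStatus values supplied through
// ExpressionAttributeValues and used in an IN filter.
require 'vendor/autoload.php';

use Aws\DynamoDb\DynamoDbClient;

$client = new DynamoDbClient(['region' => 'us-west-2', 'version' => 'latest']);

$result = $client->scan([
    'TableName'        => 'ProductCatalog',             // hypothetical table
    'FilterExpression' => 'ProductStatus IN (:avail, :back, :disc)',
    'ExpressionAttributeValues' => [
        ':avail' => ['S' => 'Available'],
        ':back'  => ['S' => 'Backordered'],
        ':disc'  => ['S' => 'Discontinued'],
    ],
]);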

" } }, "ExpressionAttributeValueVariable": { @@ -386,8 +387,8 @@ "FilterConditionMap": { "base": null, "refs": { - "QueryInput$QueryFilter": "

There is a newer parameter available. Use FilterExpression instead. Note that if you use QueryFilter and FilterExpression at the same time, DynamoDB will return a ValidationException exception.

A condition that evaluates the query results after the items are read and returns only the desired values.

This parameter does not support attributes of type List or Map.

A QueryFilter is applied after the items have already been read; the process of filtering does not consume any additional read capacity units.

If you provide more than one condition in the QueryFilter map, then by default all of the conditions must evaluate to true. In other words, the conditions are ANDed together. (You can use the ConditionalOperator parameter to OR the conditions instead. If you do this, then at least one of the conditions must evaluate to true, rather than all of them.)

Note that QueryFilter does not allow key attributes. You cannot define a filter condition on a hash key or range key.

Each QueryFilter element consists of an attribute name to compare, along with the following:

", - "ScanInput$ScanFilter": "

There is a newer parameter available. Use FilterExpression instead. Note that if you use ScanFilter and FilterExpression at the same time, DynamoDB will return a ValidationException exception.

A condition that evaluates the scan results and returns only the desired values.

This parameter does not support attributes of type List or Map.

If you specify more than one condition in the ScanFilter map, then by default all of the conditions must evaluate to true. In other words, the conditions are ANDed together. (You can use the ConditionalOperator parameter to OR the conditions instead. If you do this, then at least one of the conditions must evaluate to true, rather than all of them.)

Each ScanFilter element consists of an attribute name to compare, along with the following:

" + "QueryInput$QueryFilter": "

This is a legacy parameter, for backward compatibility. New applications should use FilterExpression instead. Do not combine legacy parameters and expression parameters in a single API call; otherwise, DynamoDB will return a ValidationException exception.

A condition that evaluates the query results after the items are read and returns only the desired values.

This parameter does not support attributes of type List or Map.

A QueryFilter is applied after the items have already been read; the process of filtering does not consume any additional read capacity units.

If you provide more than one condition in the QueryFilter map, then by default all of the conditions must evaluate to true. In other words, the conditions are ANDed together. (You can use the ConditionalOperator parameter to OR the conditions instead. If you do this, then at least one of the conditions must evaluate to true, rather than all of them.)

Note that QueryFilter does not allow key attributes. You cannot define a filter condition on a hash key or range key.

Each QueryFilter element consists of an attribute name to compare, along with the following:

", + "ScanInput$ScanFilter": "

This is a legacy parameter, for backward compatibility. New applications should use FilterExpression instead. Do not combine legacy parameters and expression parameters in a single API call; otherwise, DynamoDB will return a ValidationException exception.

A condition that evaluates the scan results and returns only the desired values.

This parameter does not support attributes of type List or Map.

If you specify more than one condition in the ScanFilter map, then by default all of the conditions must evaluate to true. In other words, the conditions are ANDed together. (You can use the ConditionalOperator parameter to OR the conditions instead. If you do this, then at least one of the conditions must evaluate to true, rather than all of them.)

Each ScanFilter element consists of an attribute name to compare, along with the following:
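
As a hedged sketch, a Query that filters on a non-key attribute with FilterExpression, the recommended replacement for QueryFilter/ScanFilter; the table and attribute names are hypothetical.

<?php
// Hypothetical sketch: the key condition selects the items; the FilterExpression
// is applied after the read and removes non-matching items from the results.
require 'vendor/autoload.php';

use Aws\DynamoDb\DynamoDbClient;

$client = new DynamoDbClient(['region' => 'us-west-2', 'version' => 'latest']);

$result = $client->query([
    'TableName'                 => 'Thread',            // hypothetical table
    'KeyConditionExpression'    => 'ForumName = :forum',
    'FilterExpression'          => 'ReplyCount >= :min',
    'ExpressionAttributeValues' => [
        ':forum' => ['S' => 'Amazon DynamoDB'],
        ':min'   => ['N' => '5'],
    ],
]);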

" } }, "GetItemInput": { @@ -445,7 +446,7 @@ "GlobalSecondaryIndexDescription$IndexName": "

The name of the global secondary index.

", "LocalSecondaryIndex$IndexName": "

The name of the local secondary index. The name must be unique among all other indexes on this table.

", "LocalSecondaryIndexDescription$IndexName": "

Represents the name of the local secondary index.

", - "QueryInput$IndexName": "

The name of an index to query. This index can be any local secondary index or global secondary index on the table.

", + "QueryInput$IndexName": "

The name of an index to query. This index can be any local secondary index or global secondary index on the table. Note that if you use the IndexName parameter, you must also provide TableName.

", "ScanInput$IndexName": "

The name of a secondary index to scan. This index can be any local secondary index or global secondary index. Note that if you use the IndexName parameter, you must also provide TableName.

", "SecondaryIndexesCapacityMap$key": null, "UpdateGlobalSecondaryIndexAction$IndexName": "

The name of the global secondary index to be updated.

" @@ -540,7 +541,13 @@ "KeyConditions": { "base": null, "refs": { - "QueryInput$KeyConditions": "

The selection criteria for the query. For a query on a table, you can have conditions only on the table primary key attributes. You must provide the hash key attribute name and value as an EQ condition. You can optionally provide a second condition, referring to the range key attribute.

If you do not provide a range key condition, all of the items that match the hash key will be retrieved. If a FilterExpression or QueryFilter is present, it will be applied after the items are retrieved.

For a query on an index, you can have conditions only on the index key attributes. You must provide the index hash attribute name and value as an EQ condition. You can optionally provide a second condition, referring to the index key range attribute.

Each KeyConditions element consists of an attribute name to compare, along with the following:

For usage examples of AttributeValueList and ComparisonOperator, see Legacy Conditional Parameters in the Amazon DynamoDB Developer Guide.

" + "QueryInput$KeyConditions": "

This is a legacy parameter, for backward compatibility. New applications should use KeyConditionExpression instead. Do not combine legacy parameters and expression parameters in a single API call; otherwise, DynamoDB will return a ValidationException exception.

The selection criteria for the query. For a query on a table, you can have conditions only on the table primary key attributes. You must provide the hash key attribute name and value as an EQ condition. You can optionally provide a second condition, referring to the range key attribute.

If you don't provide a range key condition, all of the items that match the hash key will be retrieved. If a FilterExpression or QueryFilter is present, it will be applied after the items are retrieved.

For a query on an index, you can have conditions only on the index key attributes. You must provide the index hash attribute name and value as an EQ condition. You can optionally provide a second condition, referring to the index key range attribute.

Each KeyConditions element consists of an attribute name to compare, along with the following:

For usage examples of AttributeValueList and ComparisonOperator, see Legacy Conditional Parameters in the Amazon DynamoDB Developer Guide.

" + } + }, + "KeyExpression": { + "base": null, + "refs": { + "QueryInput$KeyConditionExpression": "

The condition that specifies the key value(s) for items to be retrieved by the Query action.

The condition must perform an equality test on a single hash key value. The condition can also test for one or more range key values. A Query can use KeyConditionExpression to retrieve a single item with a given hash and range key value, or several items that have the same hash key value but different range key values.

The hash key equality test is required, and must be specified in the following format:

hashAttributeName = :hashval

If you also want to provide a range key condition, it must be combined using AND with the hash key condition. Following is an example, using the = comparison operator for the range key:

hashAttributeName = :hashval AND rangeAttributeName = :rangeval

Valid comparisons for the range key condition are as follows:

Use the ExpressionAttributeValues parameter to replace tokens such as :hashval and :rangeval with actual values at runtime.

You can optionally use the ExpressionAttributeNames parameter to replace the names of the hash and range attributes with placeholder tokens. This might be necessary if an attribute name conflicts with a DynamoDB reserved word. For example, the following KeyConditionExpression causes an error because Size is a reserved word:

To work around this, define a placeholder (such as #myval) to represent the attribute name Size. KeyConditionExpression would then be as follows:

For a list of reserved words, see Reserved Words in the Amazon DynamoDB Developer Guide.

For more information on ExpressionAttributeNames and ExpressionAttributeValues, see Using Placeholders for Attribute Names and Values in the Amazon DynamoDB Developer Guide.

KeyConditionExpression replaces the legacy KeyConditions parameter.

" } }, "KeyList": { @@ -715,10 +722,10 @@ "ProjectionExpression": { "base": null, "refs": { - "GetItemInput$ProjectionExpression": "

A string that identifies one or more attributes to retrieve from the table. These attributes can include scalars, sets, or elements of a JSON document. The attributes in the expression must be separated by commas.

If no attribute names are specified, then all attributes will be returned. If any of the requested attributes are not found, they will not appear in the result.

For more information, go to Accessing Item Attributes in the Amazon DynamoDB Developer Guide.

", - "KeysAndAttributes$ProjectionExpression": "

A string that identifies one or more attributes to retrieve from the table. These attributes can include scalars, sets, or elements of a JSON document. The attributes in the ProjectionExpression must be separated by commas.

If no attribute names are specified, then all attributes will be returned. If any of the requested attributes are not found, they will not appear in the result.

For more information, go to Accessing Item Attributes in the Amazon DynamoDB Developer Guide.

", - "QueryInput$ProjectionExpression": "

A string that identifies one or more attributes to retrieve from the table. These attributes can include scalars, sets, or elements of a JSON document. The attributes in the expression must be separated by commas.

If no attribute names are specified, then all attributes will be returned. If any of the requested attributes are not found, they will not appear in the result.

For more information, go to Accessing Item Attributes in the Amazon DynamoDB Developer Guide.

", - "ScanInput$ProjectionExpression": "

A string that identifies one or more attributes to retrieve from the specified table or index. These attributes can include scalars, sets, or elements of a JSON document. The attributes in the expression must be separated by commas.

If no attribute names are specified, then all attributes will be returned. If any of the requested attributes are not found, they will not appear in the result.

For more information, go to Accessing Item Attributes in the Amazon DynamoDB Developer Guide.

" + "GetItemInput$ProjectionExpression": "

A string that identifies one or more attributes to retrieve from the table. These attributes can include scalars, sets, or elements of a JSON document. The attributes in the expression must be separated by commas.

If no attribute names are specified, then all attributes will be returned. If any of the requested attributes are not found, they will not appear in the result.

For more information, see Accessing Item Attributes in the Amazon DynamoDB Developer Guide.

ProjectionExpression replaces the legacy AttributesToGet parameter.

", + "KeysAndAttributes$ProjectionExpression": "

A string that identifies one or more attributes to retrieve from the table. These attributes can include scalars, sets, or elements of a JSON document. The attributes in the ProjectionExpression must be separated by commas.

If no attribute names are specified, then all attributes will be returned. If any of the requested attributes are not found, they will not appear in the result.

For more information, see Accessing Item Attributes in the Amazon DynamoDB Developer Guide.

ProjectionExpression replaces the legacy AttributesToGet parameter.

", + "QueryInput$ProjectionExpression": "

A string that identifies one or more attributes to retrieve from the table. These attributes can include scalars, sets, or elements of a JSON document. The attributes in the expression must be separated by commas.

If no attribute names are specified, then all attributes will be returned. If any of the requested attributes are not found, they will not appear in the result.

For more information, see Accessing Item Attributes in the Amazon DynamoDB Developer Guide.

ProjectionExpression replaces the legacy AttributesToGet parameter.

", + "ScanInput$ProjectionExpression": "

A string that identifies one or more attributes to retrieve from the specified table or index. These attributes can include scalars, sets, or elements of a JSON document. The attributes in the expression must be separated by commas.

If no attribute names are specified, then all attributes will be returned. If any of the requested attributes are not found, they will not appear in the result.

For more information, see Accessing Item Attributes in the Amazon DynamoDB Developer Guide.

ProjectionExpression replaces the legacy AttributesToGet parameter.
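
A hedged sketch that uses ProjectionExpression inside a BatchGetItem request (the KeysAndAttributes shape above); the table, keys, and attribute list are hypothetical.

<?php
// Hypothetical sketch: retrieve only Id, Title, and Price for two keys.
require 'vendor/autoload.php';

use Aws\DynamoDb\DynamoDbClient;

$client = new DynamoDbClient(['region' => 'us-west-2', 'version' => 'latest']);

$result = $client->batchGetItem([
    'RequestItems' => [
        'ProductCatalog' => [                            // hypothetical table
            'Keys' => [
                ['Id' => ['N' => '201']],
                ['Id' => ['N' => '202']],
            ],
            'ProjectionExpression' => 'Id, Title, Price',
        ],
    ],
]);

print_r($result['Responses']['ProductCatalog']);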

" } }, "ProjectionType": { @@ -860,7 +867,7 @@ "Select": { "base": null, "refs": { - "QueryInput$Select": "

The attributes to be returned in the result. You can retrieve all item attributes, specific item attributes, the count of matching items, or in the case of an index, some or all of the attributes projected into the index.

If neither Select nor AttributesToGet are specified, DynamoDB defaults to ALL_ATTRIBUTES when accessing a table, and ALL_PROJECTED_ATTRIBUTES when accessing an index. You cannot use both Select and AttributesToGet together in a single request, unless the value for Select is SPECIFIC_ATTRIBUTES. (This usage is equivalent to specifying AttributesToGet without any value for Select.)

", + "QueryInput$Select": "

The attributes to be returned in the result. You can retrieve all item attributes, specific item attributes, the count of matching items, or in the case of an index, some or all of the attributes projected into the index.

If neither Select nor AttributesToGet is specified, DynamoDB defaults to ALL_ATTRIBUTES when accessing a table, and to ALL_PROJECTED_ATTRIBUTES when accessing an index. You cannot use both Select and AttributesToGet together in a single request unless the value for Select is SPECIFIC_ATTRIBUTES. (This usage is equivalent to specifying AttributesToGet without any value for Select.)

If you use the ProjectionExpression parameter, then the value for Select can only be SPECIFIC_ATTRIBUTES. Any other value for Select will return an error.

", "ScanInput$Select": "

The attributes to be returned in the result. You can retrieve all item attributes, specific item attributes, or the count of matching items.

If neither Select nor AttributesToGet are specified, DynamoDB defaults to ALL_ATTRIBUTES. You cannot use both AttributesToGet and Select together in a single request, unless the value for Select is SPECIFIC_ATTRIBUTES. (This usage is equivalent to specifying AttributesToGet without any value for Select.)

" } }, @@ -925,7 +932,7 @@ "UpdateExpression": { "base": null, "refs": { - "UpdateItemInput$UpdateExpression": "

An expression that defines one or more attributes to be updated, the action to be performed on them, and new value(s) for them.

The following action values are available for UpdateExpression.

You can have many actions in a single expression, such as the following: SET a=:value1, b=:value2 DELETE :value3, :value4, :value5

For more information on update expressions, go to Modifying Items and Attributes in the Amazon DynamoDB Developer Guide.

" + "UpdateItemInput$UpdateExpression": "

An expression that defines one or more attributes to be updated, the action to be performed on them, and new value(s) for them.

The following action values are available for UpdateExpression.

You can have many actions in a single expression, such as the following: SET a=:value1, b=:value2 DELETE :value3, :value4, :value5

For more information on update expressions, see Modifying Items and Attributes in the Amazon DynamoDB Developer Guide.

UpdateExpression replaces the legacy AttributeUpdates parameter.
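
For illustration, a minimal sketch of an UpdateExpression that performs a SET and a REMOVE in a single call; the table, key, and attribute names are hypothetical.

<?php
// Hypothetical sketch: set two attributes and remove a third in one update,
// returning the item as it appears after the update.
require 'vendor/autoload.php';

use Aws\DynamoDb\DynamoDbClient;

$client = new DynamoDbClient(['region' => 'us-west-2', 'version' => 'latest']);

$result = $client->updateItem([
    'TableName'        => 'ProductCatalog',              // hypothetical table
    'Key'              => ['Id' => ['N' => '201']],
    'UpdateExpression' => 'SET Price = :p, ProductCategory = :c REMOVE InStock',
    'ExpressionAttributeValues' => [
        ':p' => ['N' => '120'],
        ':c' => ['S' => 'Bicycle'],
    ],
    'ReturnValues'     => 'ALL_NEW',
]);

print_r($result['Attributes']);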

" } }, "UpdateGlobalSecondaryIndexAction": { diff --git a/src/data/ec2/2015-03-01/api-2.json b/src/data/ec2/2015-04-15/api-2.json similarity index 91% rename from src/data/ec2/2015-03-01/api-2.json rename to src/data/ec2/2015-04-15/api-2.json index 4a22352ba3..f1786e6d9d 100644 --- a/src/data/ec2/2015-03-01/api-2.json +++ b/src/data/ec2/2015-04-15/api-2.json @@ -1,12 +1,12 @@ { "version":"2.0", "metadata":{ - "apiVersion":"2015-03-01", + "apiVersion":"2015-04-15", "endpointPrefix":"ec2", "serviceAbbreviation":"Amazon EC2", "serviceFullName":"Amazon Elastic Compute Cloud", "signatureVersion":"v4", - "xmlNamespace":"http://ec2.amazonaws.com/doc/2015-03-01", + "xmlNamespace":"http://ec2.amazonaws.com/doc/2015-04-15", "protocol":"ec2" }, "operations":{ @@ -177,6 +177,15 @@ "input":{"shape":"CancelReservedInstancesListingRequest"}, "output":{"shape":"CancelReservedInstancesListingResult"} }, + "CancelSpotFleetRequests":{ + "name":"CancelSpotFleetRequests", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CancelSpotFleetRequestsRequest"}, + "output":{"shape":"CancelSpotFleetRequestsResponse"} + }, "CancelSpotInstanceRequests":{ "name":"CancelSpotInstanceRequests", "http":{ @@ -319,7 +328,8 @@ "method":"POST", "requestUri":"/" }, - "input":{"shape":"CreateRouteRequest"} + "input":{"shape":"CreateRouteRequest"}, + "output":{"shape":"CreateRouteResult"} }, "CreateRouteTable":{ "name":"CreateRouteTable", @@ -398,6 +408,15 @@ "input":{"shape":"CreateVpcRequest"}, "output":{"shape":"CreateVpcResult"} }, + "CreateVpcEndpoint":{ + "name":"CreateVpcEndpoint", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateVpcEndpointRequest"}, + "output":{"shape":"CreateVpcEndpointResult"} + }, "CreateVpcPeeringConnection":{ "name":"CreateVpcPeeringConnection", "http":{ @@ -569,6 +588,15 @@ }, "input":{"shape":"DeleteVpcRequest"} }, + "DeleteVpcEndpoints":{ + "name":"DeleteVpcEndpoints", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteVpcEndpointsRequest"}, + "output":{"shape":"DeleteVpcEndpointsResult"} + }, "DeleteVpcPeeringConnection":{ "name":"DeleteVpcPeeringConnection", "http":{ @@ -775,6 +803,15 @@ "input":{"shape":"DescribeKeyPairsRequest"}, "output":{"shape":"DescribeKeyPairsResult"} }, + "DescribeMovingAddresses":{ + "name":"DescribeMovingAddresses", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeMovingAddressesRequest"}, + "output":{"shape":"DescribeMovingAddressesResult"} + }, "DescribeNetworkAcls":{ "name":"DescribeNetworkAcls", "http":{ @@ -811,6 +848,15 @@ "input":{"shape":"DescribePlacementGroupsRequest"}, "output":{"shape":"DescribePlacementGroupsResult"} }, + "DescribePrefixLists":{ + "name":"DescribePrefixLists", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribePrefixListsRequest"}, + "output":{"shape":"DescribePrefixListsResult"} + }, "DescribeRegions":{ "name":"DescribeRegions", "http":{ @@ -901,6 +947,33 @@ "input":{"shape":"DescribeSpotDatafeedSubscriptionRequest"}, "output":{"shape":"DescribeSpotDatafeedSubscriptionResult"} }, + "DescribeSpotFleetInstances":{ + "name":"DescribeSpotFleetInstances", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeSpotFleetInstancesRequest"}, + "output":{"shape":"DescribeSpotFleetInstancesResponse"} + }, + "DescribeSpotFleetRequestHistory":{ + "name":"DescribeSpotFleetRequestHistory", + "http":{ + "method":"POST", + "requestUri":"/" + }, + 
"input":{"shape":"DescribeSpotFleetRequestHistoryRequest"}, + "output":{"shape":"DescribeSpotFleetRequestHistoryResponse"} + }, + "DescribeSpotFleetRequests":{ + "name":"DescribeSpotFleetRequests", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeSpotFleetRequestsRequest"}, + "output":{"shape":"DescribeSpotFleetRequestsResponse"} + }, "DescribeSpotInstanceRequests":{ "name":"DescribeSpotInstanceRequests", "http":{ @@ -982,6 +1055,24 @@ "input":{"shape":"DescribeVpcClassicLinkRequest"}, "output":{"shape":"DescribeVpcClassicLinkResult"} }, + "DescribeVpcEndpointServices":{ + "name":"DescribeVpcEndpointServices", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeVpcEndpointServicesRequest"}, + "output":{"shape":"DescribeVpcEndpointServicesResult"} + }, + "DescribeVpcEndpoints":{ + "name":"DescribeVpcEndpoints", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeVpcEndpointsRequest"}, + "output":{"shape":"DescribeVpcEndpointsResult"} + }, "DescribeVpcPeeringConnections":{ "name":"DescribeVpcPeeringConnections", "http":{ @@ -1249,6 +1340,15 @@ }, "input":{"shape":"ModifyVpcAttributeRequest"} }, + "ModifyVpcEndpoint":{ + "name":"ModifyVpcEndpoint", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ModifyVpcEndpointRequest"}, + "output":{"shape":"ModifyVpcEndpointResult"} + }, "MonitorInstances":{ "name":"MonitorInstances", "http":{ @@ -1258,6 +1358,15 @@ "input":{"shape":"MonitorInstancesRequest"}, "output":{"shape":"MonitorInstancesResult"} }, + "MoveAddressToVpc":{ + "name":"MoveAddressToVpc", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"MoveAddressToVpcRequest"}, + "output":{"shape":"MoveAddressToVpcResult"} + }, "PurchaseReservedInstancesOffering":{ "name":"PurchaseReservedInstancesOffering", "http":{ @@ -1343,6 +1452,15 @@ }, "input":{"shape":"ReportInstanceStatusRequest"} }, + "RequestSpotFleet":{ + "name":"RequestSpotFleet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"RequestSpotFleetRequest"}, + "output":{"shape":"RequestSpotFleetResponse"} + }, "RequestSpotInstances":{ "name":"RequestSpotInstances", "http":{ @@ -1384,6 +1502,15 @@ }, "input":{"shape":"ResetSnapshotAttributeRequest"} }, + "RestoreAddressToClassic":{ + "name":"RestoreAddressToClassic", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"RestoreAddressToClassicRequest"}, + "output":{"shape":"RestoreAddressToClassicResult"} + }, "RevokeSecurityGroupEgress":{ "name":"RevokeSecurityGroupEgress", "http":{ @@ -1530,6 +1657,30 @@ "locationName":"item" } }, + "ActiveInstance":{ + "type":"structure", + "members":{ + "InstanceType":{ + "shape":"String", + "locationName":"instanceType" + }, + "InstanceId":{ + "shape":"String", + "locationName":"instanceId" + }, + "SpotInstanceRequestId":{ + "shape":"String", + "locationName":"spotInstanceRequestId" + } + } + }, + "ActiveInstanceSet":{ + "type":"list", + "member":{ + "shape":"ActiveInstance", + "locationName":"item" + } + }, "Address":{ "type":"structure", "members":{ @@ -1982,6 +2133,17 @@ "type":"string", "enum":["available"] }, + "BatchState":{ + "type":"string", + "enum":[ + "submitted", + "active", + "cancelled", + "failed", + "cancelled_running", + "cancelled_terminating" + ] + }, "BlockDeviceMapping":{ "type":"structure", "members":{ @@ -2118,6 +2280,15 @@ "failed" ] }, + "CancelBatchErrorCode":{ + "type":"string", + "enum":[ + "fleetRequestIdDoesNotExist", + 
"fleetRequestIdMalformed", + "fleetRequestNotInCancellableState", + "unexpectedError" + ] + }, "CancelBundleTaskRequest":{ "type":"structure", "required":["BundleId"], @@ -2210,6 +2381,110 @@ } } }, + "CancelSpotFleetRequestsError":{ + "type":"structure", + "required":[ + "Code", + "Message" + ], + "members":{ + "Code":{ + "shape":"CancelBatchErrorCode", + "locationName":"code" + }, + "Message":{ + "shape":"String", + "locationName":"message" + } + } + }, + "CancelSpotFleetRequestsErrorItem":{ + "type":"structure", + "required":[ + "SpotFleetRequestId", + "Error" + ], + "members":{ + "SpotFleetRequestId":{ + "shape":"String", + "locationName":"spotFleetRequestId" + }, + "Error":{ + "shape":"CancelSpotFleetRequestsError", + "locationName":"error" + } + } + }, + "CancelSpotFleetRequestsErrorSet":{ + "type":"list", + "member":{ + "shape":"CancelSpotFleetRequestsErrorItem", + "locationName":"item" + } + }, + "CancelSpotFleetRequestsRequest":{ + "type":"structure", + "required":[ + "SpotFleetRequestIds", + "TerminateInstances" + ], + "members":{ + "DryRun":{ + "shape":"Boolean", + "locationName":"dryRun" + }, + "SpotFleetRequestIds":{ + "shape":"ValueStringList", + "locationName":"spotFleetRequestId" + }, + "TerminateInstances":{ + "shape":"Boolean", + "locationName":"terminateInstances" + } + } + }, + "CancelSpotFleetRequestsResponse":{ + "type":"structure", + "members":{ + "UnsuccessfulFleetRequests":{ + "shape":"CancelSpotFleetRequestsErrorSet", + "locationName":"unsuccessfulFleetRequestSet" + }, + "SuccessfulFleetRequests":{ + "shape":"CancelSpotFleetRequestsSuccessSet", + "locationName":"successfulFleetRequestSet" + } + } + }, + "CancelSpotFleetRequestsSuccessItem":{ + "type":"structure", + "required":[ + "SpotFleetRequestId", + "CurrentSpotFleetRequestState", + "PreviousSpotFleetRequestState" + ], + "members":{ + "SpotFleetRequestId":{ + "shape":"String", + "locationName":"spotFleetRequestId" + }, + "CurrentSpotFleetRequestState":{ + "shape":"BatchState", + "locationName":"currentSpotFleetRequestState" + }, + "PreviousSpotFleetRequestState":{ + "shape":"BatchState", + "locationName":"previousSpotFleetRequestState" + } + } + }, + "CancelSpotFleetRequestsSuccessSet":{ + "type":"list", + "member":{ + "shape":"CancelSpotFleetRequestsSuccessItem", + "locationName":"item" + } + }, "CancelSpotInstanceRequestState":{ "type":"string", "enum":[ @@ -2802,6 +3077,23 @@ "VpcPeeringConnectionId":{ "shape":"String", "locationName":"vpcPeeringConnectionId" + }, + "ClientToken":{ + "shape":"String", + "locationName":"clientToken" + } + } + }, + "CreateRouteResult":{ + "type":"structure", + "members":{ + "Return":{ + "shape":"Boolean", + "locationName":"return" + }, + "ClientToken":{ + "shape":"String", + "locationName":"clientToken" } } }, @@ -2988,6 +3280,37 @@ "KmsKeyId":{"shape":"String"} } }, + "CreateVpcEndpointRequest":{ + "type":"structure", + "required":[ + "VpcId", + "ServiceName" + ], + "members":{ + "DryRun":{"shape":"Boolean"}, + "VpcId":{"shape":"String"}, + "ServiceName":{"shape":"String"}, + "PolicyDocument":{"shape":"String"}, + "RouteTableIds":{ + "shape":"ValueStringList", + "locationName":"RouteTableId" + }, + "ClientToken":{"shape":"String"} + } + }, + "CreateVpcEndpointResult":{ + "type":"structure", + "members":{ + "VpcEndpoint":{ + "shape":"VpcEndpoint", + "locationName":"vpcEndpoint" + }, + "ClientToken":{ + "shape":"String", + "locationName":"clientToken" + } + } + }, "CreateVpcPeeringConnectionRequest":{ "type":"structure", "members":{ @@ -3380,6 +3703,26 @@ 
"VolumeId":{"shape":"String"} } }, + "DeleteVpcEndpointsRequest":{ + "type":"structure", + "required":["VpcEndpointIds"], + "members":{ + "DryRun":{"shape":"Boolean"}, + "VpcEndpointIds":{ + "shape":"ValueStringList", + "locationName":"VpcEndpointId" + } + } + }, + "DeleteVpcEndpointsResult":{ + "type":"structure", + "members":{ + "Unsuccessful":{ + "shape":"UnsuccessfulItemSet", + "locationName":"unsuccessful" + } + } + }, "DeleteVpcPeeringConnectionRequest":{ "type":"structure", "required":["VpcPeeringConnectionId"], @@ -3951,6 +4294,44 @@ } } }, + "DescribeMovingAddressesRequest":{ + "type":"structure", + "members":{ + "DryRun":{ + "shape":"Boolean", + "locationName":"dryRun" + }, + "PublicIps":{ + "shape":"ValueStringList", + "locationName":"publicIp" + }, + "NextToken":{ + "shape":"String", + "locationName":"nextToken" + }, + "Filters":{ + "shape":"FilterList", + "locationName":"filter" + }, + "MaxResults":{ + "shape":"Integer", + "locationName":"maxResults" + } + } + }, + "DescribeMovingAddressesResult":{ + "type":"structure", + "members":{ + "MovingAddressStatuses":{ + "shape":"MovingAddressStatusSet", + "locationName":"movingAddressStatusSet" + }, + "NextToken":{ + "shape":"String", + "locationName":"nextToken" + } + } + }, "DescribeNetworkAclsRequest":{ "type":"structure", "members":{ @@ -4072,6 +4453,35 @@ } } }, + "DescribePrefixListsRequest":{ + "type":"structure", + "members":{ + "DryRun":{"shape":"Boolean"}, + "PrefixListIds":{ + "shape":"ValueStringList", + "locationName":"PrefixListId" + }, + "Filters":{ + "shape":"FilterList", + "locationName":"Filter" + }, + "MaxResults":{"shape":"Integer"}, + "NextToken":{"shape":"String"} + } + }, + "DescribePrefixListsResult":{ + "type":"structure", + "members":{ + "PrefixLists":{ + "shape":"PrefixListSet", + "locationName":"prefixListSet" + }, + "NextToken":{ + "shape":"String", + "locationName":"nextToken" + } + } + }, "DescribeRegionsRequest":{ "type":"structure", "members":{ @@ -4383,6 +4793,148 @@ } } }, + "DescribeSpotFleetInstancesRequest":{ + "type":"structure", + "required":["SpotFleetRequestId"], + "members":{ + "DryRun":{ + "shape":"Boolean", + "locationName":"dryRun" + }, + "SpotFleetRequestId":{ + "shape":"String", + "locationName":"spotFleetRequestId" + }, + "NextToken":{ + "shape":"String", + "locationName":"nextToken" + }, + "MaxResults":{ + "shape":"Integer", + "locationName":"maxResults" + } + } + }, + "DescribeSpotFleetInstancesResponse":{ + "type":"structure", + "required":[ + "SpotFleetRequestId", + "ActiveInstances" + ], + "members":{ + "SpotFleetRequestId":{ + "shape":"String", + "locationName":"spotFleetRequestId" + }, + "ActiveInstances":{ + "shape":"ActiveInstanceSet", + "locationName":"activeInstanceSet" + }, + "NextToken":{ + "shape":"String", + "locationName":"nextToken" + } + } + }, + "DescribeSpotFleetRequestHistoryRequest":{ + "type":"structure", + "required":[ + "SpotFleetRequestId", + "StartTime" + ], + "members":{ + "DryRun":{ + "shape":"Boolean", + "locationName":"dryRun" + }, + "SpotFleetRequestId":{ + "shape":"String", + "locationName":"spotFleetRequestId" + }, + "EventType":{ + "shape":"EventType", + "locationName":"eventType" + }, + "StartTime":{ + "shape":"DateTime", + "locationName":"startTime" + }, + "NextToken":{ + "shape":"String", + "locationName":"nextToken" + }, + "MaxResults":{ + "shape":"Integer", + "locationName":"maxResults" + } + } + }, + "DescribeSpotFleetRequestHistoryResponse":{ + "type":"structure", + "required":[ + "SpotFleetRequestId", + "StartTime", + "LastEvaluatedTime", + 
"HistoryRecords" + ], + "members":{ + "SpotFleetRequestId":{ + "shape":"String", + "locationName":"spotFleetRequestId" + }, + "StartTime":{ + "shape":"DateTime", + "locationName":"startTime" + }, + "LastEvaluatedTime":{ + "shape":"DateTime", + "locationName":"lastEvaluatedTime" + }, + "HistoryRecords":{ + "shape":"HistoryRecords", + "locationName":"historyRecordSet" + }, + "NextToken":{ + "shape":"String", + "locationName":"nextToken" + } + } + }, + "DescribeSpotFleetRequestsRequest":{ + "type":"structure", + "members":{ + "DryRun":{ + "shape":"Boolean", + "locationName":"dryRun" + }, + "SpotFleetRequestIds":{ + "shape":"ValueStringList", + "locationName":"spotFleetRequestId" + }, + "NextToken":{ + "shape":"String", + "locationName":"nextToken" + }, + "MaxResults":{ + "shape":"Integer", + "locationName":"maxResults" + } + } + }, + "DescribeSpotFleetRequestsResponse":{ + "type":"structure", + "required":["SpotFleetRequestConfigs"], + "members":{ + "SpotFleetRequestConfigs":{ + "shape":"SpotFleetRequestConfigSet", + "locationName":"spotFleetRequestConfigSet" + }, + "NextToken":{ + "shape":"String", + "locationName":"nextToken" + } + } + }, "DescribeSpotInstanceRequestsRequest":{ "type":"structure", "members":{ @@ -4677,6 +5229,56 @@ } } }, + "DescribeVpcEndpointServicesRequest":{ + "type":"structure", + "members":{ + "DryRun":{"shape":"Boolean"}, + "MaxResults":{"shape":"Integer"}, + "NextToken":{"shape":"String"} + } + }, + "DescribeVpcEndpointServicesResult":{ + "type":"structure", + "members":{ + "ServiceNames":{ + "shape":"ValueStringList", + "locationName":"serviceNameSet" + }, + "NextToken":{ + "shape":"String", + "locationName":"nextToken" + } + } + }, + "DescribeVpcEndpointsRequest":{ + "type":"structure", + "members":{ + "DryRun":{"shape":"Boolean"}, + "VpcEndpointIds":{ + "shape":"ValueStringList", + "locationName":"VpcEndpointId" + }, + "Filters":{ + "shape":"FilterList", + "locationName":"Filter" + }, + "MaxResults":{"shape":"Integer"}, + "NextToken":{"shape":"String"} + } + }, + "DescribeVpcEndpointsResult":{ + "type":"structure", + "members":{ + "VpcEndpoints":{ + "shape":"VpcEndpointSet", + "locationName":"vpcEndpointSet" + }, + "NextToken":{ + "shape":"String", + "locationName":"nextToken" + } + } + }, "DescribeVpcPeeringConnectionsRequest":{ "type":"structure", "members":{ @@ -5207,6 +5809,31 @@ "instance-stop" ] }, + "EventInformation":{ + "type":"structure", + "members":{ + "InstanceId":{ + "shape":"String", + "locationName":"instanceId" + }, + "EventSubType":{ + "shape":"String", + "locationName":"eventSubType" + }, + "EventDescription":{ + "shape":"String", + "locationName":"eventDescription" + } + } + }, + "EventType":{ + "type":"string", + "enum":[ + "instanceChange", + "fleetRequestChange", + "error" + ] + }, "ExecutableByStringList":{ "type":"list", "member":{ @@ -5414,20 +6041,49 @@ } } }, - "GroupIdentifierList":{ + "GroupIdentifierList":{ + "type":"list", + "member":{ + "shape":"GroupIdentifier", + "locationName":"item" + } + }, + "GroupNameStringList":{ + "type":"list", + "member":{ + "shape":"String", + "locationName":"GroupName" + } + }, + "HistoryRecord":{ + "type":"structure", + "required":[ + "Timestamp", + "EventType", + "EventInformation" + ], + "members":{ + "Timestamp":{ + "shape":"DateTime", + "locationName":"timestamp" + }, + "EventType":{ + "shape":"EventType", + "locationName":"eventType" + }, + "EventInformation":{ + "shape":"EventInformation", + "locationName":"eventInformation" + } + } + }, + "HistoryRecords":{ "type":"list", "member":{ - 
"shape":"GroupIdentifier", + "shape":"HistoryRecord", "locationName":"item" } }, - "GroupNameStringList":{ - "type":"list", - "member":{ - "shape":"String", - "locationName":"GroupName" - } - }, "HypervisorType":{ "type":"string", "enum":[ @@ -5655,8 +6311,13 @@ "ImageState":{ "type":"string", "enum":[ + "pending", "available", - "deregistered" + "invalid", + "deregistered", + "transient", + "failed", + "error" ] }, "ImageTypeValues":{ @@ -6920,6 +7581,10 @@ "IpRanges":{ "shape":"IpRangeList", "locationName":"ipRanges" + }, + "PrefixListIds":{ + "shape":"PrefixListIdList", + "locationName":"prefixListIds" } } }, @@ -7082,6 +7747,14 @@ } } }, + "LaunchSpecsList":{ + "type":"list", + "member":{ + "shape":"LaunchSpecification", + "locationName":"item" + }, + "min":1 + }, "ListingState":{ "type":"string", "enum":[ @@ -7309,6 +7982,33 @@ "EnableDnsHostnames":{"shape":"AttributeBooleanValue"} } }, + "ModifyVpcEndpointRequest":{ + "type":"structure", + "required":["VpcEndpointId"], + "members":{ + "DryRun":{"shape":"Boolean"}, + "VpcEndpointId":{"shape":"String"}, + "ResetPolicy":{"shape":"Boolean"}, + "PolicyDocument":{"shape":"String"}, + "AddRouteTableIds":{ + "shape":"ValueStringList", + "locationName":"AddRouteTableId" + }, + "RemoveRouteTableIds":{ + "shape":"ValueStringList", + "locationName":"RemoveRouteTableId" + } + } + }, + "ModifyVpcEndpointResult":{ + "type":"structure", + "members":{ + "Return":{ + "shape":"Boolean", + "locationName":"return" + } + } + }, "MonitorInstancesRequest":{ "type":"structure", "required":["InstanceIds"], @@ -7350,6 +8050,60 @@ "pending" ] }, + "MoveAddressToVpcRequest":{ + "type":"structure", + "required":["PublicIp"], + "members":{ + "DryRun":{ + "shape":"Boolean", + "locationName":"dryRun" + }, + "PublicIp":{ + "shape":"String", + "locationName":"publicIp" + } + } + }, + "MoveAddressToVpcResult":{ + "type":"structure", + "members":{ + "AllocationId":{ + "shape":"String", + "locationName":"allocationId" + }, + "Status":{ + "shape":"Status", + "locationName":"status" + } + } + }, + "MoveStatus":{ + "type":"string", + "enum":[ + "movingToVpc", + "restoringToClassic" + ] + }, + "MovingAddressStatus":{ + "type":"structure", + "members":{ + "PublicIp":{ + "shape":"String", + "locationName":"publicIp" + }, + "MoveStatus":{ + "shape":"MoveStatus", + "locationName":"moveStatus" + } + } + }, + "MovingAddressStatusSet":{ + "type":"list", + "member":{ + "shape":"MovingAddressStatus", + "locationName":"item" + } + }, "NetworkAcl":{ "type":"structure", "members":{ @@ -7755,6 +8509,46 @@ } } }, + "PrefixList":{ + "type":"structure", + "members":{ + "PrefixListId":{ + "shape":"String", + "locationName":"prefixListId" + }, + "PrefixListName":{ + "shape":"String", + "locationName":"prefixListName" + }, + "Cidrs":{ + "shape":"ValueStringList", + "locationName":"cidrSet" + } + } + }, + "PrefixListId":{ + "type":"structure", + "members":{ + "PrefixListId":{ + "shape":"String", + "locationName":"prefixListId" + } + } + }, + "PrefixListIdList":{ + "type":"list", + "member":{ + "shape":"PrefixListId", + "locationName":"item" + } + }, + "PrefixListSet":{ + "type":"list", + "member":{ + "shape":"PrefixList", + "locationName":"item" + } + }, "PriceSchedule":{ "type":"structure", "members":{ @@ -8320,6 +9114,30 @@ "impaired" ] }, + "RequestSpotFleetRequest":{ + "type":"structure", + "required":["SpotFleetRequestConfig"], + "members":{ + "DryRun":{ + "shape":"Boolean", + "locationName":"dryRun" + }, + "SpotFleetRequestConfig":{ + "shape":"SpotFleetRequestConfigData", + 
"locationName":"spotFleetRequestConfig" + } + } + }, + "RequestSpotFleetResponse":{ + "type":"structure", + "required":["SpotFleetRequestId"], + "members":{ + "SpotFleetRequestId":{ + "shape":"String", + "locationName":"spotFleetRequestId" + } + } + }, "RequestSpotInstancesRequest":{ "type":"structure", "required":["SpotPrice"], @@ -8332,6 +9150,10 @@ "shape":"String", "locationName":"spotPrice" }, + "ClientToken":{ + "shape":"String", + "locationName":"clientToken" + }, "InstanceCount":{ "shape":"Integer", "locationName":"instanceCount" @@ -8847,6 +9669,33 @@ "type":"list", "member":{"shape":"String"} }, + "RestoreAddressToClassicRequest":{ + "type":"structure", + "required":["PublicIp"], + "members":{ + "DryRun":{ + "shape":"Boolean", + "locationName":"dryRun" + }, + "PublicIp":{ + "shape":"String", + "locationName":"publicIp" + } + } + }, + "RestoreAddressToClassicResult":{ + "type":"structure", + "members":{ + "Status":{ + "shape":"Status", + "locationName":"status" + }, + "PublicIp":{ + "shape":"String", + "locationName":"publicIp" + } + } + }, "RevokeSecurityGroupEgressRequest":{ "type":"structure", "required":["GroupId"], @@ -8914,6 +9763,10 @@ "shape":"String", "locationName":"destinationCidrBlock" }, + "DestinationPrefixListId":{ + "shape":"String", + "locationName":"destinationPrefixListId" + }, "GatewayId":{ "shape":"String", "locationName":"gatewayId" @@ -9412,6 +10265,78 @@ } } }, + "SpotFleetRequestConfig":{ + "type":"structure", + "required":[ + "SpotFleetRequestId", + "SpotFleetRequestState", + "SpotFleetRequestConfig" + ], + "members":{ + "SpotFleetRequestId":{ + "shape":"String", + "locationName":"spotFleetRequestId" + }, + "SpotFleetRequestState":{ + "shape":"BatchState", + "locationName":"spotFleetRequestState" + }, + "SpotFleetRequestConfig":{ + "shape":"SpotFleetRequestConfigData", + "locationName":"spotFleetRequestConfig" + } + } + }, + "SpotFleetRequestConfigData":{ + "type":"structure", + "required":[ + "SpotPrice", + "TargetCapacity", + "IamFleetRole", + "LaunchSpecifications" + ], + "members":{ + "ClientToken":{ + "shape":"String", + "locationName":"clientToken" + }, + "SpotPrice":{ + "shape":"String", + "locationName":"spotPrice" + }, + "TargetCapacity":{ + "shape":"Integer", + "locationName":"targetCapacity" + }, + "ValidFrom":{ + "shape":"DateTime", + "locationName":"validFrom" + }, + "ValidUntil":{ + "shape":"DateTime", + "locationName":"validUntil" + }, + "TerminateInstancesWithExpiration":{ + "shape":"Boolean", + "locationName":"terminateInstancesWithExpiration" + }, + "IamFleetRole":{ + "shape":"String", + "locationName":"iamFleetRole" + }, + "LaunchSpecifications":{ + "shape":"LaunchSpecsList", + "locationName":"launchSpecifications" + } + } + }, + "SpotFleetRequestConfigSet":{ + "type":"list", + "member":{ + "shape":"SpotFleetRequestConfig", + "locationName":"item" + } + }, "SpotInstanceRequest":{ "type":"structure", "members":{ @@ -9614,6 +10539,15 @@ } } }, + "State":{ + "type":"string", + "enum":[ + "Pending", + "Available", + "Deleting", + "Deleted" + ] + }, "StateReason":{ "type":"structure", "members":{ @@ -9627,6 +10561,14 @@ } } }, + "Status":{ + "type":"string", + "enum":[ + "MoveInProgress", + "InVpc", + "InClassic" + ] + }, "StatusName":{ "type":"string", "enum":["reachability"] @@ -9869,6 +10811,44 @@ } } }, + "UnsuccessfulItem":{ + "type":"structure", + "required":["Error"], + "members":{ + "ResourceId":{ + "shape":"String", + "locationName":"resourceId" + }, + "Error":{ + "shape":"UnsuccessfulItemError", + "locationName":"error" + } + } + 
}, + "UnsuccessfulItemError":{ + "type":"structure", + "required":[ + "Code", + "Message" + ], + "members":{ + "Code":{ + "shape":"String", + "locationName":"code" + }, + "Message":{ + "shape":"String", + "locationName":"message" + } + } + }, + "UnsuccessfulItemSet":{ + "type":"list", + "member":{ + "shape":"UnsuccessfulItem", + "locationName":"item" + } + }, "UserBucket":{ "type":"structure", "members":{ @@ -10361,6 +11341,46 @@ "locationName":"item" } }, + "VpcEndpoint":{ + "type":"structure", + "members":{ + "VpcEndpointId":{ + "shape":"String", + "locationName":"vpcEndpointId" + }, + "VpcId":{ + "shape":"String", + "locationName":"vpcId" + }, + "ServiceName":{ + "shape":"String", + "locationName":"serviceName" + }, + "State":{ + "shape":"State", + "locationName":"state" + }, + "PolicyDocument":{ + "shape":"String", + "locationName":"policyDocument" + }, + "RouteTableIds":{ + "shape":"ValueStringList", + "locationName":"routeTableIdSet" + }, + "CreationTimestamp":{ + "shape":"DateTime", + "locationName":"creationTimestamp" + } + } + }, + "VpcEndpointSet":{ + "type":"list", + "member":{ + "shape":"VpcEndpoint", + "locationName":"item" + } + }, "VpcIdStringList":{ "type":"list", "member":{ diff --git a/src/data/ec2/2015-03-01/docs-2.json b/src/data/ec2/2015-04-15/docs-2.json similarity index 72% rename from src/data/ec2/2015-03-01/docs-2.json rename to src/data/ec2/2015-04-15/docs-2.json index db08cbeffe..daa32c9a1c 100644 --- a/src/data/ec2/2015-03-01/docs-2.json +++ b/src/data/ec2/2015-04-15/docs-2.json @@ -10,7 +10,7 @@ "AttachClassicLinkVpc": "

Links an EC2-Classic instance to a ClassicLink-enabled VPC through one or more of the VPC's security groups. You cannot link an EC2-Classic instance to more than one VPC at a time. You can only link an instance that's in the running state. An instance is automatically unlinked from a VPC when it's stopped - you can link it to the VPC again when you restart it.

After you've linked an instance, you cannot change the VPC security groups that are associated with it. To change the security groups, you must first unlink the instance, and then link it again.

Linking your instance to a VPC is sometimes referred to as attaching your instance.

", "AttachInternetGateway": "

Attaches an Internet gateway to a VPC, enabling connectivity between the Internet and the VPC. For more information about your VPC and Internet gateway, see the Amazon Virtual Private Cloud User Guide.

", "AttachNetworkInterface": "

Attaches a network interface to an instance.

", - "AttachVolume": "

Attaches an Amazon EBS volume to a running or stopped instance and exposes it to the instance with the specified device name.

Encrypted Amazon EBS volumes may only be attached to instances that support Amazon EBS encryption. For more information, see Amazon EBS Encryption in the Amazon Elastic Compute Cloud User Guide.

For a list of supported device names, see Attaching an Amazon EBS Volume to an Instance. Any device names that aren't reserved for instance store volumes can be used for Amazon EBS volumes. For more information, see Amazon EC2 Instance Store in the Amazon Elastic Compute Cloud User Guide.

If a volume has an AWS Marketplace product code:

For an overview of the AWS Marketplace, see Introducing AWS Marketplace.

For more information about Amazon EBS volumes, see Attaching Amazon EBS Volumes in the Amazon Elastic Compute Cloud User Guide.

", + "AttachVolume": "

Attaches an EBS volume to a running or stopped instance and exposes it to the instance with the specified device name.

Encrypted EBS volumes may only be attached to instances that support Amazon EBS encryption. For more information, see Amazon EBS Encryption in the Amazon Elastic Compute Cloud User Guide.

For a list of supported device names, see Attaching an EBS Volume to an Instance. Any device names that aren't reserved for instance store volumes can be used for EBS volumes. For more information, see Amazon EC2 Instance Store in the Amazon Elastic Compute Cloud User Guide.

If a volume has an AWS Marketplace product code:

For an overview of the AWS Marketplace, see Introducing AWS Marketplace.

For more information about EBS volumes, see Attaching Amazon EBS Volumes in the Amazon Elastic Compute Cloud User Guide.
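
For readers wiring this up from the SDK side, here is a minimal sketch of an AttachVolume call with this package's Ec2Client; the region, volume ID, instance ID, and device name below are placeholders, and credentials are assumed to come from the SDK's default provider chain.

<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

// Placeholder region; credentials come from the SDK's default provider chain.
$ec2 = new Ec2Client(['version' => '2015-04-15', 'region' => 'us-east-1']);

// Attach an existing EBS volume to a running or stopped instance in the same AZ.
$result = $ec2->attachVolume([
    'VolumeId'   => 'vol-1234abcd',   // placeholder ID
    'InstanceId' => 'i-1234abcd',     // placeholder ID
    'Device'     => '/dev/sdf',       // a device name not reserved for instance store volumes
]);

echo $result['State'] . PHP_EOL;      // e.g. "attaching"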

", "AttachVpnGateway": "

Attaches a virtual private gateway to a VPC. For more information, see Adding a Hardware Virtual Private Gateway to Your VPC in the Amazon Virtual Private Cloud User Guide.

", "AuthorizeSecurityGroupEgress": "

Adds one or more egress rules to a security group for use with a VPC. Specifically, this action permits instances to send traffic to one or more destination CIDR IP address ranges, or to one or more destination security groups for the same VPC.

You can have up to 50 rules per security group (covering both ingress and egress rules).

A security group is for use with instances either in the EC2-Classic platform or in a specific VPC. This action doesn't apply to security groups for use in EC2-Classic. For more information, see Security Groups for Your VPC in the Amazon Virtual Private Cloud User Guide.

Each rule consists of the protocol (for example, TCP), plus either a CIDR range or a source group. For the TCP and UDP protocols, you must also specify the destination port or port range. For the ICMP protocol, you must also specify the ICMP type and code. You can use -1 for the type or code to mean all types or all codes.

Rule changes are propagated to affected instances as quickly as possible. However, a small delay might occur.

", "AuthorizeSecurityGroupIngress": "

Adds one or more ingress rules to a security group.

EC2-Classic: You can have up to 100 rules per group.

EC2-VPC: You can have up to 50 rules per group (covering both ingress and egress rules).

Rule changes are propagated to instances within the security group as quickly as possible. However, a small delay might occur.

[EC2-Classic] This action gives one or more CIDR IP address ranges permission to access a security group in your account, or gives one or more security groups (called the source groups) permission to access a security group for your account. A source group can be for your own AWS account, or another.

[EC2-VPC] This action gives one or more CIDR IP address ranges permission to access a security group in your VPC, or gives one or more other security groups (called the source groups) permission to access a security group for your VPC. The security groups must all be for the same VPC.

", @@ -20,14 +20,15 @@ "CancelExportTask": "

Cancels an active export task. The request removes all artifacts of the export, including any partially-created Amazon S3 objects. If the export task is complete or is in the process of transferring the final disk image, the command fails and returns an error.

", "CancelImportTask": "

Cancels an in-process import virtual machine or import snapshot task.

", "CancelReservedInstancesListing": "

Cancels the specified Reserved Instance listing in the Reserved Instance Marketplace.

For more information, see Reserved Instance Marketplace in the Amazon Elastic Compute Cloud User Guide.

", + "CancelSpotFleetRequests": "

Cancels the specified Spot fleet requests.
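
A minimal sketch of calling the new CancelSpotFleetRequests operation through the Ec2Client, using the SpotFleetRequestIds and TerminateInstances members defined in the request shape above; the fleet request ID and region are placeholders.

<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client(['version' => '2015-04-15', 'region' => 'us-east-1']);

// Cancel a Spot fleet request and terminate the instances it launched.
$result = $ec2->cancelSpotFleetRequests([
    'SpotFleetRequestIds' => ['sfr-1234abcd'],  // placeholder ID
    'TerminateInstances'  => true,
]);

foreach ($result['SuccessfulFleetRequests'] as $fleet) {
    printf("%s: %s -> %s\n",
        $fleet['SpotFleetRequestId'],
        $fleet['PreviousSpotFleetRequestState'],
        $fleet['CurrentSpotFleetRequestState']);
}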

", "CancelSpotInstanceRequests": "

Cancels one or more Spot Instance requests. Spot Instances are instances that Amazon EC2 starts on your behalf when the bid price that you specify exceeds the current Spot Price. Amazon EC2 periodically sets the Spot Price based on available Spot Instance capacity and current Spot Instance requests. For more information, see Spot Instance Requests in the Amazon Elastic Compute Cloud User Guide.

Canceling a Spot Instance request does not terminate running Spot Instances associated with the request.

", "ConfirmProductInstance": "

Determines whether a product code is associated with an instance. This action can only be used by the owner of the product code. It is useful when a product code owner needs to verify whether another user's instance is eligible for support.

", - "CopyImage": "

Initiates the copy of an AMI from the specified source region to the current region. You specify the destination region by using its endpoint when making the request. AMIs that use encrypted Amazon EBS snapshots cannot be copied with this method.

For more information, see Copying AMIs in the Amazon Elastic Compute Cloud User Guide.

", - "CopySnapshot": "

Copies a point-in-time snapshot of an Amazon EBS volume and stores it in Amazon S3. You can copy the snapshot within the same region or from one region to another. You can use the snapshot to create Amazon EBS volumes or Amazon Machine Images (AMIs). The snapshot is copied to the regional endpoint that you send the HTTP request to.

Copies of encrypted Amazon EBS snapshots remain encrypted. Copies of unencrypted snapshots remain unencrypted.

Copying snapshots that were encrypted with non-default AWS Key Management Service (KMS) master keys is not supported at this time.

For more information, see Copying an Amazon EBS Snapshot in the Amazon Elastic Compute Cloud User Guide.

", + "CopyImage": "

Initiates the copy of an AMI from the specified source region to the current region. You specify the destination region by using its endpoint when making the request. AMIs that use encrypted EBS snapshots cannot be copied with this method.

For more information, see Copying AMIs in the Amazon Elastic Compute Cloud User Guide.

", + "CopySnapshot": "

Copies a point-in-time snapshot of an EBS volume and stores it in Amazon S3. You can copy the snapshot within the same region or from one region to another. You can use the snapshot to create EBS volumes or Amazon Machine Images (AMIs). The snapshot is copied to the regional endpoint that you send the HTTP request to.

Copies of encrypted EBS snapshots remain encrypted. Copies of unencrypted snapshots remain unencrypted.

Copying snapshots that were encrypted with non-default AWS Key Management Service (KMS) master keys is not supported at this time.

For more information, see Copying an Amazon EBS Snapshot in the Amazon Elastic Compute Cloud User Guide.
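
A short sketch of a cross-region CopySnapshot call; the snapshot ID and both regions are placeholders, and the client is assumed to point at the destination region.

<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

// The client is bound to the destination region; SourceRegion names where the snapshot lives.
$ec2 = new Ec2Client(['version' => '2015-04-15', 'region' => 'eu-west-1']);

$result = $ec2->copySnapshot([
    'SourceRegion'     => 'us-east-1',
    'SourceSnapshotId' => 'snap-1234abcd',          // placeholder ID
    'Description'      => 'Copy of snap-1234abcd',
]);

echo $result['SnapshotId'] . PHP_EOL;               // ID of the new snapshot in the destination region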

", "CreateCustomerGateway": "

Provides information to AWS about your VPN customer gateway device. The customer gateway is the appliance at your end of the VPN connection. (The device on the AWS side of the VPN connection is the virtual private gateway.) You must provide the Internet-routable IP address of the customer gateway's external interface. The IP address must be static and can't be behind a device performing network address translation (NAT).

For devices that use Border Gateway Protocol (BGP), you can also provide the device's BGP Autonomous System Number (ASN). You can use an existing ASN assigned to your network. If you don't have an ASN already, you can use a private ASN (in the 64512 - 65534 range).

Amazon EC2 supports all 2-byte ASN numbers in the range of 1 - 65534, with the exception of 7224, which is reserved in the us-east-1 region, and 9059, which is reserved in the eu-west-1 region.

For more information about VPN customer gateways, see Adding a Hardware Virtual Private Gateway to Your VPC in the Amazon Virtual Private Cloud User Guide.

You cannot create more than one customer gateway with the same VPN type, IP address, and BGP ASN parameter values. If you run an identical request more than one time, the first request creates the customer gateway, and subsequent requests return information about the existing customer gateway. The subsequent requests do not create new customer gateway resources.

", "CreateDhcpOptions": "

Creates a set of DHCP options for your VPC. After creating the set, you must associate it with the VPC, causing all existing and new instances that you launch in the VPC to use this set of DHCP options. The following are the individual DHCP options you can specify. For more information about the options, see RFC 2132.

Your VPC automatically starts out with a set of DHCP options that includes only a DNS server that we provide (AmazonProvidedDNS). If you create a set of options, and if your VPC has an Internet gateway, make sure to set the domain-name-servers option either to AmazonProvidedDNS or to a domain name server of your choice. For more information about DHCP options, see DHCP Options Sets in the Amazon Virtual Private Cloud User Guide.

", "CreateImage": "

Creates an Amazon EBS-backed AMI from an Amazon EBS-backed instance that is either running or stopped.

If you customized your instance with instance store volumes or EBS volumes in addition to the root device volume, the new AMI contains block device mapping information for those volumes. When you launch an instance from this new AMI, the instance automatically launches with those additional volumes.

For more information, see Creating Amazon EBS-Backed Linux AMIs in the Amazon Elastic Compute Cloud User Guide.

", - "CreateInstanceExportTask": "

Exports a running or stopped instance to an Amazon S3 bucket.

For information about the supported operating systems, image formats, and known limitations for the types of instances you can export, see Exporting EC2 Instances in the Amazon Elastic Compute Cloud User Guide.

", + "CreateInstanceExportTask": "

Exports a running or stopped instance to an S3 bucket.

For information about the supported operating systems, image formats, and known limitations for the types of instances you can export, see Exporting EC2 Instances in the Amazon Elastic Compute Cloud User Guide.

", "CreateInternetGateway": "

Creates an Internet gateway for use with a VPC. After creating the Internet gateway, you attach it to a VPC using AttachInternetGateway.

For more information about your VPC and Internet gateway, see the Amazon Virtual Private Cloud User Guide.

", "CreateKeyPair": "

Creates a 2048-bit RSA key pair with the specified name. Amazon EC2 stores the public key and displays the private key for you to save to a file. The private key is returned as an unencrypted PEM encoded PKCS#8 private key. If a key with the specified name already exists, Amazon EC2 returns an error.

You can have up to five thousand key pairs per region.

The key pair returned to you is available only in the region in which you create it. To create a key pair that is available in all regions, use ImportKeyPair.

For more information about key pairs, see Key Pairs in the Amazon Elastic Compute Cloud User Guide.

", "CreateNetworkAcl": "

Creates a network ACL in a VPC. Network ACLs provide an optional layer of security (in addition to security groups) for the instances in your VPC.

For more information about network ACLs, see Network ACLs in the Amazon Virtual Private Cloud User Guide.

", @@ -38,12 +39,13 @@ "CreateRoute": "

Creates a route in a route table within a VPC.

You must specify one of the following targets: Internet gateway or virtual private gateway, NAT instance, VPC peering connection, or network interface.

When determining how to route traffic, we use the route with the most specific match. For example, let's say the traffic is destined for 192.0.2.3, and the route table includes the following two routes:

Both routes apply to the traffic destined for 192.0.2.3. However, the second route in the list covers a smaller number of IP addresses and is therefore more specific, so we use that route to determine where to target the traffic.

For more information about route tables, see Route Tables in the Amazon Virtual Private Cloud User Guide.
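
Since this revision adds a CreateRouteResult output shape with a Return flag, here is a hedged sketch of a CreateRoute call that sends Internet-bound traffic through an Internet gateway; the route table and gateway IDs are placeholders.

<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client(['version' => '2015-04-15', 'region' => 'us-east-1']);

// Route all Internet-bound traffic in this route table through an Internet gateway.
$result = $ec2->createRoute([
    'RouteTableId'         => 'rtb-1234abcd',   // placeholder ID
    'DestinationCidrBlock' => '0.0.0.0/0',
    'GatewayId'            => 'igw-1234abcd',   // placeholder ID
]);

var_dump($result['Return']);                     // true on success (new CreateRouteResult shape)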

", "CreateRouteTable": "

Creates a route table for the specified VPC. After you create a route table, you can add routes and associate the table with a subnet.

For more information about route tables, see Route Tables in the Amazon Virtual Private Cloud User Guide.

", "CreateSecurityGroup": "

Creates a security group.

A security group is for use with instances either in the EC2-Classic platform or in a specific VPC. For more information, see Amazon EC2 Security Groups in the Amazon Elastic Compute Cloud User Guide and Security Groups for Your VPC in the Amazon Virtual Private Cloud User Guide.

EC2-Classic: You can have up to 500 security groups.

EC2-VPC: You can create up to 100 security groups per VPC.

When you create a security group, you specify a friendly name of your choice. You can have a security group for use in EC2-Classic with the same name as a security group for use in a VPC. However, you can't have two security groups for use in EC2-Classic with the same name or two security groups for use in a VPC with the same name.

You have a default security group for use in EC2-Classic and a default security group for use in your VPC. If you don't specify a security group when you launch an instance, the instance is launched into the appropriate default security group. A default security group includes a default rule that grants instances unrestricted network access to each other.

You can add or remove rules from your security groups using AuthorizeSecurityGroupIngress, AuthorizeSecurityGroupEgress, RevokeSecurityGroupIngress, and RevokeSecurityGroupEgress.

", - "CreateSnapshot": "

Creates a snapshot of an Amazon EBS volume and stores it in Amazon S3. You can use snapshots for backups, to make copies of Amazon EBS volumes, and to save data before shutting down an instance.

When a snapshot is created, any AWS Marketplace product codes that are associated with the source volume are propagated to the snapshot.

You can take a snapshot of an attached volume that is in use. However, snapshots only capture data that has been written to your Amazon EBS volume at the time the snapshot command is issued; this may exclude any data that has been cached by any applications or the operating system. If you can pause any file systems on the volume long enough to take a snapshot, your snapshot should be complete. However, if you cannot pause all file writes to the volume, you should unmount the volume from within the instance, issue the snapshot command, and then remount the volume to ensure a consistent and complete snapshot. You may remount and use your volume while the snapshot status is pending.

To create a snapshot for Amazon EBS volumes that serve as root devices, you should stop the instance before taking the snapshot.

Snapshots that are taken from encrypted volumes are automatically encrypted. Volumes that are created from encrypted snapshots are also automatically encrypted. Your encrypted volumes and any associated snapshots always remain protected.

For more information, see Amazon Elastic Block Store and Amazon EBS Encryption in the Amazon Elastic Compute Cloud User Guide.

", + "CreateSnapshot": "

Creates a snapshot of an EBS volume and stores it in Amazon S3. You can use snapshots for backups, to make copies of EBS volumes, and to save data before shutting down an instance.

When a snapshot is created, any AWS Marketplace product codes that are associated with the source volume are propagated to the snapshot.

You can take a snapshot of an attached volume that is in use. However, snapshots only capture data that has been written to your EBS volume at the time the snapshot command is issued; this may exclude any data that has been cached by any applications or the operating system. If you can pause any file systems on the volume long enough to take a snapshot, your snapshot should be complete. However, if you cannot pause all file writes to the volume, you should unmount the volume from within the instance, issue the snapshot command, and then remount the volume to ensure a consistent and complete snapshot. You may remount and use your volume while the snapshot status is pending.

To create a snapshot for EBS volumes that serve as root devices, you should stop the instance before taking the snapshot.

Snapshots that are taken from encrypted volumes are automatically encrypted. Volumes that are created from encrypted snapshots are also automatically encrypted. Your encrypted volumes and any associated snapshots always remain protected.

For more information, see Amazon Elastic Block Store and Amazon EBS Encryption in the Amazon Elastic Compute Cloud User Guide.
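
A minimal CreateSnapshot sketch; the volume ID and description are placeholders, and any file-system pausing or unmounting described above is assumed to have been handled beforehand.

<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client(['version' => '2015-04-15', 'region' => 'us-east-1']);

// Snapshot an attached volume after quiescing writes for a consistent image.
$result = $ec2->createSnapshot([
    'VolumeId'    => 'vol-1234abcd',       // placeholder ID
    'Description' => 'Nightly backup',     // placeholder description
]);

printf("%s is %s\n", $result['SnapshotId'], $result['State']);  // e.g. "snap-... is pending"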

", "CreateSpotDatafeedSubscription": "

Creates a data feed for Spot Instances, enabling you to view Spot Instance usage logs. You can create one data feed per AWS account. For more information, see Spot Instance Data Feed in the Amazon Elastic Compute Cloud User Guide.

", "CreateSubnet": "

Creates a subnet in an existing VPC.

When you create each subnet, you provide the VPC ID and the CIDR block you want for the subnet. After you create a subnet, you can't change its CIDR block. The subnet's CIDR block can be the same as the VPC's CIDR block (assuming you want only a single subnet in the VPC), or a subset of the VPC's CIDR block. If you create more than one subnet in a VPC, the subnets' CIDR blocks must not overlap. The smallest subnet (and VPC) you can create uses a /28 netmask (16 IP addresses), and the largest uses a /16 netmask (65,536 IP addresses).

AWS reserves both the first four and the last IP address in each subnet's CIDR block. They're not available for use.

If you add more than one subnet to a VPC, they're set up in a star topology with a logical router in the middle.

If you launch an instance in a VPC using an Amazon EBS-backed AMI, the IP address doesn't change if you stop and restart the instance (unlike a similar instance launched outside a VPC, which gets a new IP address when restarted). It's therefore possible to have a subnet with no running instances (they're all stopped), but no remaining IP addresses available.

For more information about subnets, see Your VPC and Subnets in the Amazon Virtual Private Cloud User Guide.

", "CreateTags": "

Adds or overwrites one or more tags for the specified Amazon EC2 resource or resources. Each resource can have a maximum of 10 tags. Each tag consists of a key and optional value. Tag keys must be unique per resource.

For more information about tags, see Tagging Your Resources in the Amazon Elastic Compute Cloud User Guide.

", - "CreateVolume": "

Creates an Amazon EBS volume that can be attached to an instance in the same Availability Zone. The volume is created in the regional endpoint that you send the HTTP request to. For more information see Regions and Endpoints.

You can create a new empty volume or restore a volume from an Amazon EBS snapshot. Any AWS Marketplace product codes from the snapshot are propagated to the volume.

You can create encrypted volumes with the Encrypted parameter. Encrypted volumes may only be attached to instances that support Amazon EBS encryption. Volumes that are created from encrypted snapshots are also automatically encrypted. For more information, see Amazon EBS Encryption in the Amazon Elastic Compute Cloud User Guide.

For more information, see Creating or Restoring an Amazon EBS Volume in the Amazon Elastic Compute Cloud User Guide.

", + "CreateVolume": "

Creates an EBS volume that can be attached to an instance in the same Availability Zone. The volume is created in the regional endpoint that you send the HTTP request to. For more information see Regions and Endpoints.

You can create a new empty volume or restore a volume from an EBS snapshot. Any AWS Marketplace product codes from the snapshot are propagated to the volume.

You can create encrypted volumes with the Encrypted parameter. Encrypted volumes may only be attached to instances that support Amazon EBS encryption. Volumes that are created from encrypted snapshots are also automatically encrypted. For more information, see Amazon EBS Encryption in the Amazon Elastic Compute Cloud User Guide.

For more information, see Creating or Restoring an Amazon EBS Volume in the Amazon Elastic Compute Cloud User Guide.
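
A minimal sketch of creating an encrypted volume with the Encrypted parameter mentioned above; the Availability Zone, size, and volume type are placeholder choices.

<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client(['version' => '2015-04-15', 'region' => 'us-east-1']);

// Create an encrypted 10 GiB volume in the Availability Zone of the target instance.
$result = $ec2->createVolume([
    'AvailabilityZone' => 'us-east-1a',
    'Size'             => 10,          // GiB
    'VolumeType'       => 'gp2',
    'Encrypted'        => true,
]);

echo $result['VolumeId'] . PHP_EOL;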

", "CreateVpc": "

Creates a VPC with the specified CIDR block.

The smallest VPC you can create uses a /28 netmask (16 IP addresses), and the largest uses a /16 netmask (65,536 IP addresses). To help you decide how big to make your VPC, see Your VPC and Subnets in the Amazon Virtual Private Cloud User Guide.

By default, each instance you launch in the VPC has the default DHCP options, which includes only a default DNS server that we provide (AmazonProvidedDNS). For more information about DHCP options, see DHCP Options Sets in the Amazon Virtual Private Cloud User Guide.

", + "CreateVpcEndpoint": "

Creates a VPC endpoint for a specified AWS service. An endpoint enables you to create a private connection between your VPC and another AWS service in your account. You can specify an endpoint policy to attach to the endpoint that will control access to the service from your VPC. You can also specify the VPC route tables that use the endpoint.

Currently, only endpoints to Amazon S3 are supported.
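
A minimal sketch of the new VPC endpoint operations: it looks up the regional S3 service name with DescribeVpcEndpointServices and then creates an endpoint for it. The VPC and route table IDs are placeholders, and the service name is shown with a typical value.

<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client(['version' => '2015-04-15', 'region' => 'us-east-1']);

// Discover the supported service names for this region.
$services = $ec2->describeVpcEndpointServices([]);
print_r($services['ServiceNames']);                     // e.g. ["com.amazonaws.us-east-1.s3"]

// Create an endpoint to S3 and add routes to it in the given route table.
$result = $ec2->createVpcEndpoint([
    'VpcId'         => 'vpc-1234abcd',                  // placeholder ID
    'ServiceName'   => 'com.amazonaws.us-east-1.s3',
    'RouteTableIds' => ['rtb-1234abcd'],                // placeholder ID
]);

echo $result['VpcEndpoint']['VpcEndpointId'] . PHP_EOL;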

", "CreateVpcPeeringConnection": "

Requests a VPC peering connection between two VPCs: a requester VPC that you own and a peer VPC with which to create the connection. The peer VPC can belong to another AWS account. The requester VPC and peer VPC cannot have overlapping CIDR blocks.

The owner of the peer VPC must accept the peering request to activate the peering connection. The VPC peering connection request expires after 7 days, after which it cannot be accepted or rejected.

A CreateVpcPeeringConnection request between VPCs with overlapping CIDR blocks results in the VPC peering connection having a status of failed.

", "CreateVpnConnection": "

Creates a VPN connection between an existing virtual private gateway and a VPN customer gateway. The only supported connection type is ipsec.1.

The response includes information that you need to give to your network administrator to configure your customer gateway.

We strongly recommend that you use HTTPS when calling this operation because the response contains sensitive cryptographic information for configuring your customer gateway.

If you decide to shut down your VPN connection for any reason and later create a new VPN connection, you must reconfigure your customer gateway with the new information returned from this call.

For more information about VPN connections, see Adding a Hardware Virtual Private Gateway to Your VPC in the Amazon Virtual Private Cloud User Guide.

", "CreateVpnConnectionRoute": "

Creates a static route associated with a VPN connection between an existing virtual private gateway and a VPN customer gateway. The static route allows traffic to be routed from the virtual private gateway to the VPN customer gateway.

For more information about VPN connections, see Adding a Hardware Virtual Private Gateway to Your VPC in the Amazon Virtual Private Cloud User Guide.

", @@ -59,12 +61,13 @@ "DeleteRoute": "

Deletes the specified route from the specified route table.

", "DeleteRouteTable": "

Deletes the specified route table. You must disassociate the route table from any subnets before you can delete it. You can't delete the main route table.

", "DeleteSecurityGroup": "

Deletes a security group.

If you attempt to delete a security group that is associated with an instance, or is referenced by another security group, the operation fails with InvalidGroup.InUse in EC2-Classic or DependencyViolation in EC2-VPC.

", - "DeleteSnapshot": "

Deletes the specified snapshot.

When you make periodic snapshots of a volume, the snapshots are incremental, and only the blocks on the device that have changed since your last snapshot are saved in the new snapshot. When you delete a snapshot, only the data not needed for any other snapshot is removed. So regardless of which prior snapshots have been deleted, all active snapshots will have access to all the information needed to restore the volume.

You cannot delete a snapshot of the root device of an Amazon EBS volume used by a registered AMI. You must first de-register the AMI before you can delete the snapshot.

For more information, see Deleting an Amazon EBS Snapshot in the Amazon Elastic Compute Cloud User Guide.

", + "DeleteSnapshot": "

Deletes the specified snapshot.

When you make periodic snapshots of a volume, the snapshots are incremental, and only the blocks on the device that have changed since your last snapshot are saved in the new snapshot. When you delete a snapshot, only the data not needed for any other snapshot is removed. So regardless of which prior snapshots have been deleted, all active snapshots will have access to all the information needed to restore the volume.

You cannot delete a snapshot of the root device of an EBS volume used by a registered AMI. You must first de-register the AMI before you can delete the snapshot.

For more information, see Deleting an Amazon EBS Snapshot in the Amazon Elastic Compute Cloud User Guide.

", "DeleteSpotDatafeedSubscription": "

Deletes the data feed for Spot Instances. For more information, see Spot Instance Data Feed in the Amazon Elastic Compute Cloud User Guide.

", "DeleteSubnet": "

Deletes the specified subnet. You must terminate all running instances in the subnet before you can delete the subnet.

", "DeleteTags": "

Deletes the specified set of tags from the specified set of resources. This call is designed to follow a DescribeTags request.

For more information about tags, see Tagging Your Resources in the Amazon Elastic Compute Cloud User Guide.

", - "DeleteVolume": "

Deletes the specified Amazon EBS volume. The volume must be in the available state (not attached to an instance).

The volume may remain in the deleting state for several minutes.

For more information, see Deleting an Amazon EBS Volume in the Amazon Elastic Compute Cloud User Guide.

", + "DeleteVolume": "

Deletes the specified EBS volume. The volume must be in the available state (not attached to an instance).

The volume may remain in the deleting state for several minutes.

For more information, see Deleting an Amazon EBS Volume in the Amazon Elastic Compute Cloud User Guide.

", "DeleteVpc": "

Deletes the specified VPC. You must detach or delete all gateways and resources that are associated with the VPC before you can delete it. For example, you must terminate all instances running in the VPC, delete all security groups associated with the VPC (except the default one), delete all route tables associated with the VPC (except the default one), and so on.

", + "DeleteVpcEndpoints": "

Deletes one or more specified VPC endpoints. Deleting the endpoint also deletes the endpoint routes in the route tables that were associated with the endpoint.
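
A minimal DeleteVpcEndpoints sketch; the endpoint ID is a placeholder, and per the DeleteVpcEndpointsResult shape only failed deletions are echoed back.

<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client(['version' => '2015-04-15', 'region' => 'us-east-1']);

$result = $ec2->deleteVpcEndpoints([
    'VpcEndpointIds' => ['vpce-1234abcd'],            // placeholder ID
]);

// Only endpoints that could not be deleted are reported back.
foreach ($result['Unsuccessful'] as $item) {
    printf("%s: %s\n", $item['ResourceId'], $item['Error']['Message']);
}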

", "DeleteVpcPeeringConnection": "

Deletes a VPC peering connection. Either the owner of the requester VPC or the owner of the peer VPC can delete the VPC peering connection if it's in the active state. The owner of the requester VPC can delete a VPC peering connection in the pending-acceptance state.

", "DeleteVpnConnection": "

Deletes the specified VPN connection.

If you're deleting the VPC and its associated components, we recommend that you detach the virtual private gateway from the VPC and delete the VPC before deleting the VPN connection. If you believe that the tunnel credentials for your VPN connection have been compromised, you can delete the VPN connection and create a new one that has new keys, without needing to delete the VPC or virtual private gateway. If you create a new VPN connection, you must reconfigure the customer gateway using the new configuration information returned with the new VPN connection ID.

", "DeleteVpnConnectionRoute": "

Deletes the specified static route associated with a VPN connection between an existing virtual private gateway and a VPN customer gateway. The static route allows traffic to be routed from the virtual private gateway to the VPN customer gateway.

", @@ -82,16 +85,18 @@ "DescribeImageAttribute": "

Describes the specified attribute of the specified AMI. You can specify only one attribute at a time.

", "DescribeImages": "

Describes one or more of the images (AMIs, AKIs, and ARIs) available to you. Images available to you include public images, private images that you own, and private images owned by other AWS accounts but for which you have explicit launch permissions.

Deregistered images are included in the returned results for an unspecified interval after deregistration.

", "DescribeImportImageTasks": "

Displays details about import virtual machine or import snapshot tasks that are already created.

", - "DescribeImportSnapshotTasks": "

Displays details about an import snapshot tasks that is already created.

", + "DescribeImportSnapshotTasks": "

Describes your import snapshot tasks.

", "DescribeInstanceAttribute": "

Describes the specified attribute of the specified instance. You can specify only one attribute at a time. Valid attribute values are: instanceType | kernel | ramdisk | userData | disableApiTermination | instanceInitiatedShutdownBehavior | rootDeviceName | blockDeviceMapping | productCodes | sourceDestCheck | groupSet | ebsOptimized | sriovNetSupport

", - "DescribeInstanceStatus": "

Describes the status of one or more instances, including any scheduled events.

Instance status has two main components:

Instance status provides information about four types of scheduled events for an instance that may require your attention:

When your instance is retired, it will either be terminated (if its root device type is the instance-store) or stopped (if its root device type is an EBS volume). Instances stopped due to retirement will not be restarted, but you can do so manually. You can also avoid retirement of EBS-backed instances by manually restarting your instance when its event code is instance-retirement. This ensures that your instance is started on a different underlying host.

For more information about failed status checks, see Troubleshooting Instances with Failed Status Checks in the Amazon Elastic Compute Cloud User Guide. For more information about working with scheduled events, see Working with an Instance That Has a Scheduled Event in the Amazon Elastic Compute Cloud User Guide.

", + "DescribeInstanceStatus": "

Describes the status of one or more instances.

Instance status includes the following components:

", "DescribeInstances": "

Describes one or more of your instances.

If you specify one or more instance IDs, Amazon EC2 returns information for those instances. If you do not specify instance IDs, Amazon EC2 returns information for all relevant instances. If you specify an instance ID that is not valid, an error is returned. If you specify an instance that you do not own, it is not included in the returned results.

Recently terminated instances might appear in the returned results. This interval is usually less than one hour.

", "DescribeInternetGateways": "

Describes one or more of your Internet gateways.

", "DescribeKeyPairs": "

Describes one or more of your key pairs.

For more information about key pairs, see Key Pairs in the Amazon Elastic Compute Cloud User Guide.

", + "DescribeMovingAddresses": "

Describes your Elastic IP addresses that are being moved to the EC2-VPC platform, or that are being restored to the EC2-Classic platform. This request does not return information about any other Elastic IP addresses in your account.
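
A minimal sketch combining the new MoveAddressToVpc and DescribeMovingAddresses operations; the Elastic IP address is a documentation placeholder from the 203.0.113.0/24 range.

<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client(['version' => '2015-04-15', 'region' => 'us-east-1']);

// Start moving an EC2-Classic Elastic IP to EC2-VPC, then check its migration status.
$ec2->moveAddressToVpc(['PublicIp' => '203.0.113.25']);           // placeholder address

$result = $ec2->describeMovingAddresses([
    'PublicIps' => ['203.0.113.25'],
]);

foreach ($result['MovingAddressStatuses'] as $status) {
    printf("%s: %s\n", $status['PublicIp'], $status['MoveStatus']); // movingToVpc | restoringToClassic
}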

", "DescribeNetworkAcls": "

Describes one or more of your network ACLs.

For more information about network ACLs, see Network ACLs in the Amazon Virtual Private Cloud User Guide.

", "DescribeNetworkInterfaceAttribute": "

Describes a network interface attribute. You can specify only one attribute at a time.

", "DescribeNetworkInterfaces": "

Describes one or more of your network interfaces.

", "DescribePlacementGroups": "

Describes one or more of your placement groups. For more information about placement groups and cluster instances, see Cluster Instances in the Amazon Elastic Compute Cloud User Guide.

", + "DescribePrefixLists": "

Describes available AWS services in a prefix list format, which includes the prefix list name and prefix list ID of the service and the IP address range for the service. A prefix list ID is required for creating an outbound security group rule that allows traffic from a VPC to access an AWS service through a VPC endpoint.
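
A minimal DescribePrefixLists sketch that prints each prefix list's name, ID, and CIDRs, the pieces an outbound security group rule to a VPC endpoint needs.

<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client(['version' => '2015-04-15', 'region' => 'us-east-1']);

// List the available prefix lists for this region.
$result = $ec2->describePrefixLists([]);

foreach ($result['PrefixLists'] as $list) {
    printf("%s (%s): %s\n",
        $list['PrefixListName'],
        $list['PrefixListId'],
        implode(', ', $list['Cidrs']));
}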

", "DescribeRegions": "

Describes one or more regions that are currently available to you.

For a list of the regions supported by Amazon EC2, see Regions and Endpoints.

", "DescribeReservedInstances": "

Describes one or more of the Reserved Instances that you purchased.

For more information about Reserved Instances, see Reserved Instances in the Amazon Elastic Compute Cloud User Guide.

", "DescribeReservedInstancesListings": "

Describes your account's Reserved Instance listings in the Reserved Instance Marketplace.

The Reserved Instance Marketplace matches sellers who want to resell Reserved Instance capacity that they no longer need with buyers who want to purchase additional capacity. Reserved Instances bought and sold through the Reserved Instance Marketplace work like any other Reserved Instances.

As a seller, you choose to list some or all of your Reserved Instances, and you specify the upfront price to receive for them. Your Reserved Instances are then listed in the Reserved Instance Marketplace and are available for purchase.

As a buyer, you specify the configuration of the Reserved Instance to purchase, and the Marketplace matches what you're searching for with what's available. The Marketplace first sells the lowest priced Reserved Instances to you, and continues to sell available Reserved Instance listings to you until your demand is met. You are charged based on the total price of all of the listings that you purchase.

For more information, see Reserved Instance Marketplace in the Amazon Elastic Compute Cloud User Guide.

", @@ -99,18 +104,23 @@ "DescribeReservedInstancesOfferings": "

Describes Reserved Instance offerings that are available for purchase. With Reserved Instances, you purchase the right to launch instances for a period of time. During that time period, you do not receive insufficient capacity errors, and you pay a lower usage rate than the rate charged for On-Demand instances for the actual time used.

For more information, see Reserved Instance Marketplace in the Amazon Elastic Compute Cloud User Guide.

", "DescribeRouteTables": "

Describes one or more of your route tables.

For more information about route tables, see Route Tables in the Amazon Virtual Private Cloud User Guide.

", "DescribeSecurityGroups": "

Describes one or more of your security groups.

A security group is for use with instances either in the EC2-Classic platform or in a specific VPC. For more information, see Amazon EC2 Security Groups in the Amazon Elastic Compute Cloud User Guide and Security Groups for Your VPC in the Amazon Virtual Private Cloud User Guide.

", - "DescribeSnapshotAttribute": "

Describes the specified attribute of the specified snapshot. You can specify only one attribute at a time.

For more information about Amazon EBS snapshots, see Amazon EBS Snapshots in the Amazon Elastic Compute Cloud User Guide.

", - "DescribeSnapshots": "

Describes one or more of the Amazon EBS snapshots available to you. Available snapshots include public snapshots available for any AWS account to launch, private snapshots that you own, and private snapshots owned by another AWS account but for which you've been given explicit create volume permissions.

The create volume permissions fall into the following categories:

The list of snapshots returned can be modified by specifying snapshot IDs, snapshot owners, or AWS accounts with create volume permissions. If no options are specified, Amazon EC2 returns all snapshots for which you have create volume permissions.

If you specify one or more snapshot IDs, only snapshots that have the specified IDs are returned. If you specify an invalid snapshot ID, an error is returned. If you specify a snapshot ID for which you do not have access, it is not included in the returned results.

If you specify one or more snapshot owners, only snapshots from the specified owners and for which you have access are returned. The results can include the AWS account IDs of the specified owners, amazon for snapshots owned by Amazon, or self for snapshots that you own.

If you specify a list of restorable users, only snapshots with create snapshot permissions for those users are returned. You can specify AWS account IDs (if you own the snapshots), self for snapshots for which you own or have explicit permissions, or all for public snapshots.

If you are describing a long list of snapshots, you can paginate the output to make the list more manageable. The MaxResults parameter sets the maximum number of results returned in a single page. If the list of results exceeds your MaxResults value, then that number of results is returned along with a NextToken value that can be passed to a subsequent DescribeSnapshots request to retrieve the remaining results.

For more information about Amazon EBS snapshots, see Amazon EBS Snapshots in the Amazon Elastic Compute Cloud User Guide.

", + "DescribeSnapshotAttribute": "

Describes the specified attribute of the specified snapshot. You can specify only one attribute at a time.

For more information about EBS snapshots, see Amazon EBS Snapshots in the Amazon Elastic Compute Cloud User Guide.

", + "DescribeSnapshots": "

Describes one or more of the EBS snapshots available to you. Available snapshots include public snapshots available for any AWS account to launch, private snapshots that you own, and private snapshots owned by another AWS account but for which you've been given explicit create volume permissions.

The create volume permissions fall into the following categories:

The list of snapshots returned can be modified by specifying snapshot IDs, snapshot owners, or AWS accounts with create volume permissions. If no options are specified, Amazon EC2 returns all snapshots for which you have create volume permissions.

If you specify one or more snapshot IDs, only snapshots that have the specified IDs are returned. If you specify an invalid snapshot ID, an error is returned. If you specify a snapshot ID for which you do not have access, it is not included in the returned results.

If you specify one or more snapshot owners, only snapshots from the specified owners and for which you have access are returned. The results can include the AWS account IDs of the specified owners, amazon for snapshots owned by Amazon, or self for snapshots that you own.

If you specify a list of restorable users, only snapshots with create snapshot permissions for those users are returned. You can specify AWS account IDs (if you own the snapshots), self for snapshots for which you own or have explicit permissions, or all for public snapshots.

If you are describing a long list of snapshots, you can paginate the output to make the list more manageable. The MaxResults parameter sets the maximum number of results returned in a single page. If the list of results exceeds your MaxResults value, then that number of results is returned along with a NextToken value that can be passed to a subsequent DescribeSnapshots request to retrieve the remaining results.

For more information about EBS snapshots, see Amazon EBS Snapshots in the Amazon Elastic Compute Cloud User Guide.
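A minimal sketch of the MaxResults/NextToken pagination described above, using the SDK's generated Ec2Client; the region and page size are illustrative assumptions.

```php
<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client(['region' => 'us-east-1', 'version' => 'latest']);

// Page through the snapshots you own, 100 at a time.
$params = ['OwnerIds' => ['self'], 'MaxResults' => 100];
do {
    $page = $ec2->describeSnapshots($params);
    foreach ($page['Snapshots'] as $snapshot) {
        echo $snapshot['SnapshotId'] . PHP_EOL;
    }
    // A NextToken in the response means more results are available.
    $params['NextToken'] = $page->get('NextToken');
} while (!empty($params['NextToken']));
```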

", "DescribeSpotDatafeedSubscription": "

Describes the data feed for Spot Instances. For more information, see Spot Instance Data Feed in the Amazon Elastic Compute Cloud User Guide.

", + "DescribeSpotFleetInstances": "

Describes the running instances for the specified Spot fleet.

", + "DescribeSpotFleetRequestHistory": "

Describes the events for the specified Spot fleet request during the specified time.

Spot fleet events are delayed by up to 30 seconds before they can be described. This ensures that you can query by the last evaluated time and not miss a recorded event.
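A sketch of polling the event history and carrying the last evaluated time forward, as suggested above; the Spot fleet request ID, region, and polling window are placeholder assumptions.

```php
<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client(['region' => 'us-east-1', 'version' => 'latest']);

// Placeholder request ID; StartTime bounds the events returned.
$history = $ec2->describeSpotFleetRequestHistory([
    'SpotFleetRequestId' => 'sfr-12345678-1234-1234-1234-123456789012',
    'StartTime'          => new DateTime('-1 hour'),
]);

foreach ($history['HistoryRecords'] as $record) {
    echo $record['EventType'] . PHP_EOL;
}

// Start the next poll at LastEvaluatedTime so no recorded event is missed.
$nextStart = $history['LastEvaluatedTime'];
```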

", + "DescribeSpotFleetRequests": "

Describes your Spot fleet requests.

", "DescribeSpotInstanceRequests": "

Describes the Spot Instance requests that belong to your account. Spot Instances are instances that Amazon EC2 launches when the bid price that you specify exceeds the current Spot Price. Amazon EC2 periodically sets the Spot Price based on available Spot Instance capacity and current Spot Instance requests. For more information, see Spot Instance Requests in the Amazon Elastic Compute Cloud User Guide.

You can use DescribeSpotInstanceRequests to find a running Spot Instance by examining the response. If the status of the Spot Instance is fulfilled, the instance ID appears in the response and contains the identifier of the instance. Alternatively, you can use DescribeInstances with a filter to look for instances where the instance lifecycle is spot.

", "DescribeSpotPriceHistory": "

Describes the Spot Price history. The prices returned are listed in chronological order, from the oldest to the most recent, for up to the past 90 days. For more information, see Spot Instance Pricing History in the Amazon Elastic Compute Cloud User Guide.

When you specify a start and end time, this operation returns the prices of the instance types within the time range that you specified and the time when the price changed. The price is valid within the time period that you specified; the response merely indicates the last time that the price changed.
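A sketch of a bounded price-history query through the SDK's generated Ec2Client; the region, time window, instance type, and product description are illustrative assumptions.

```php
<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client(['region' => 'us-east-1', 'version' => 'latest']);

// Price history for one instance type over the last 24 hours.
$history = $ec2->describeSpotPriceHistory([
    'StartTime'           => new DateTime('-24 hours'),
    'EndTime'             => new DateTime('now'),
    'InstanceTypes'       => ['m3.medium'],
    'ProductDescriptions' => ['Linux/UNIX'],
]);

foreach ($history['SpotPriceHistory'] as $point) {
    echo $point['AvailabilityZone'] . ' ' . $point['SpotPrice'] . PHP_EOL;
}
```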

", "DescribeSubnets": "

Describes one or more of your subnets.

For more information about subnets, see Your VPC and Subnets in the Amazon Virtual Private Cloud User Guide.

", "DescribeTags": "

Describes one or more of the tags for your EC2 resources.

For more information about tags, see Tagging Your Resources in the Amazon Elastic Compute Cloud User Guide.

", - "DescribeVolumeAttribute": "

Describes the specified attribute of the specified volume. You can specify only one attribute at a time.

For more information about Amazon EBS volumes, see Amazon EBS Volumes in the Amazon Elastic Compute Cloud User Guide.

", + "DescribeVolumeAttribute": "

Describes the specified attribute of the specified volume. You can specify only one attribute at a time.

For more information about EBS volumes, see Amazon EBS Volumes in the Amazon Elastic Compute Cloud User Guide.

", "DescribeVolumeStatus": "

Describes the status of the specified volumes. Volume status provides the result of the checks performed on your volumes to determine events that can impair the performance of your volumes. The performance of a volume can be affected if an issue occurs on the volume's underlying host. If the volume's underlying host experiences a power outage or system issue, after the system is restored, there could be data inconsistencies on the volume. Volume events notify you if this occurs. Volume actions notify you if any action needs to be taken in response to the event.

The DescribeVolumeStatus operation provides the following information about the specified volumes:

Status: Reflects the current status of the volume. The possible values are ok, impaired, warning, or insufficient-data. If all checks pass, the overall status of the volume is ok. If a check fails, the overall status is impaired. If the status is insufficient-data, the checks may still be in progress on the volume; we recommend that you retry the request. For more information about volume status, see Monitoring the Status of Your Volumes.

Events: Reflect the cause of a volume status and may require you to take action. For example, if your volume returns an impaired status, then the volume event might be potential-data-inconsistency. This means that your volume has been affected by an issue with the underlying host, has all I/O operations disabled, and may have inconsistent data.

Actions: Reflect the actions you may have to take in response to an event. For example, if the status of the volume is impaired and the volume event shows potential-data-inconsistency, then the action shows enable-volume-io. This means that you may want to enable the I/O operations for the volume by calling the EnableVolumeIO action and then check the volume for data consistency.

Volume status is based on the volume status checks, and does not reflect the volume state. Therefore, volume status does not indicate volumes in the error state (for example, when a volume is incapable of accepting I/O).
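A sketch of reading the status, events, and actions described above through the SDK's generated Ec2Client; the region and volume ID are placeholders.

```php
<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client(['region' => 'us-east-1', 'version' => 'latest']);

// 'vol-12345678' is a placeholder volume ID.
$status = $ec2->describeVolumeStatus(['VolumeIds' => ['vol-12345678']]);

foreach ($status['VolumeStatuses'] as $item) {
    echo $item['VolumeId'] . ': ' . $item['VolumeStatus']['Status'] . PHP_EOL;
    foreach ($item['Events'] as $event) {
        // e.g. potential-data-inconsistency, as described above
        echo '  event: ' . $event['EventType'] . PHP_EOL;
    }
    foreach ($item['Actions'] as $action) {
        // e.g. enable-volume-io, meaning you may need to call EnableVolumeIO
        echo '  action: ' . $action['Code'] . PHP_EOL;
    }
}
```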

", - "DescribeVolumes": "

Describes the specified Amazon EBS volumes.

If you are describing a long list of volumes, you can paginate the output to make the list more manageable. The MaxResults parameter sets the maximum number of results returned in a single page. If the list of results exceeds your MaxResults value, then that number of results is returned along with a NextToken value that can be passed to a subsequent DescribeVolumes request to retrieve the remaining results.

For more information about Amazon EBS volumes, see Amazon EBS Volumes in the Amazon Elastic Compute Cloud User Guide.

", + "DescribeVolumes": "

Describes the specified EBS volumes.

If you are describing a long list of volumes, you can paginate the output to make the list more manageable. The MaxResults parameter sets the maximum number of results returned in a single page. If the list of results exceeds your MaxResults value, then that number of results is returned along with a NextToken value that can be passed to a subsequent DescribeVolumes request to retrieve the remaining results.

For more information about EBS volumes, see Amazon EBS Volumes in the Amazon Elastic Compute Cloud User Guide.

", "DescribeVpcAttribute": "

Describes the specified attribute of the specified VPC. You can specify only one attribute at a time.

", "DescribeVpcClassicLink": "

Describes the ClassicLink status of one or more VPCs.

", + "DescribeVpcEndpointServices": "

Describes all supported AWS services that can be specified when creating a VPC endpoint.

", + "DescribeVpcEndpoints": "

Describes one or more of your VPC endpoints.

", "DescribeVpcPeeringConnections": "

Describes one or more of your VPC peering connections.

", "DescribeVpcs": "

Describes one or more of your VPCs.

", "DescribeVpnConnections": "

Describes one or more of your VPN connections.

For more information about VPN connections, see Adding a Hardware Virtual Private Gateway to Your VPC in the Amazon Virtual Private Cloud User Guide.

", @@ -118,7 +128,7 @@ "DetachClassicLinkVpc": "

Unlinks (detaches) a linked EC2-Classic instance from a VPC. After the instance has been unlinked, the VPC security groups are no longer associated with it. An instance is automatically unlinked from a VPC when it's stopped.

", "DetachInternetGateway": "

Detaches an Internet gateway from a VPC, disabling connectivity between the Internet and the VPC. The VPC must not contain any running instances with Elastic IP addresses.

", "DetachNetworkInterface": "

Detaches a network interface from an instance.

", - "DetachVolume": "

Detaches an Amazon EBS volume from an instance. Make sure to unmount any file systems on the device within your operating system before detaching the volume. Failure to do so results in the volume being stuck in a busy state while detaching.

If an Amazon EBS volume is the root device of an instance, it can't be detached while the instance is running. To detach the root volume, stop the instance first.

When a volume with an AWS Marketplace product code is detached from an instance, the product code is no longer associated with the instance.

For more information, see Detaching an Amazon EBS Volume in the Amazon Elastic Compute Cloud User Guide.

", + "DetachVolume": "

Detaches an EBS volume from an instance. Make sure to unmount any file systems on the device within your operating system before detaching the volume. Failure to do so results in the volume being stuck in a busy state while detaching.

If an Amazon EBS volume is the root device of an instance, it can't be detached while the instance is running. To detach the root volume, stop the instance first.

When a volume with an AWS Marketplace product code is detached from an instance, the product code is no longer associated with the instance.

For more information, see Detaching an Amazon EBS Volume in the Amazon Elastic Compute Cloud User Guide.
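A minimal sketch of detaching a non-root data volume with the SDK's generated Ec2Client; the volume ID, instance ID, and device name are placeholders, and the file system on the device is assumed to be unmounted already.

```php
<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client(['region' => 'us-east-1', 'version' => 'latest']);

// Placeholder IDs; unmount the device inside the OS before doing this.
$attachment = $ec2->detachVolume([
    'VolumeId'   => 'vol-12345678',
    'InstanceId' => 'i-12345678',
    'Device'     => '/dev/sdf',
]);

echo $attachment['State'] . PHP_EOL; // e.g. "detaching"
```

If you need to block until detachment completes, the SDK also ships waiters such as VolumeAvailable that can poll the volume for you.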

", "DetachVpnGateway": "

Detaches a virtual private gateway from a VPC. You do this if you're planning to turn off the VPC and not use it anymore. You can confirm a virtual private gateway has been completely detached from a VPC by describing the virtual private gateway (any attachments to the virtual private gateway are also described).

You must wait for the attachment's state to switch to detached before you can delete the VPC or attach a different VPC to the virtual private gateway.

", "DisableVgwRoutePropagation": "

Disables a virtual private gateway (VGW) from propagating routes to a specified route table of a VPC.

", "DisableVpcClassicLink": "

Disables ClassicLink for a VPC. You cannot disable ClassicLink for a VPC that has EC2-Classic instances linked to it.

", @@ -129,10 +139,10 @@ "EnableVpcClassicLink": "

Enables a VPC for ClassicLink. You can then link EC2-Classic instances to your ClassicLink-enabled VPC to allow communication over private IP addresses. You cannot enable your VPC for ClassicLink if any of your VPC's route tables have existing routes for address ranges within the 10.0.0.0/8 IP address range, excluding local routes for VPCs in the 10.0.0.0/16 and 10.1.0.0/16 IP address ranges. For more information, see ClassicLink in the Amazon Elastic Compute Cloud User Guide.

", "GetConsoleOutput": "

Gets the console output for the specified instance.

Instances do not have a physical monitor through which you can view their console output. They also lack physical controls that allow you to power up, reboot, or shut them down. To allow these actions, we provide them through the Amazon EC2 API and command line interface.

Instance console output is buffered and posted shortly after instance boot, reboot, and termination. Amazon EC2 preserves the most recent 64 KB of output, which is available for at least one hour after the most recent post.

For Linux instances, the instance console output displays the exact console output that would normally be displayed on a physical monitor attached to a computer. This output is buffered because the instance produces it and then posts it to a store where the instance's owner can retrieve it.

For Windows instances, the instance console output includes output from the EC2Config service.
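Because the console output comes back base64-encoded, a caller decodes it after the request; this sketch uses the SDK's generated Ec2Client with a placeholder instance ID.

```php
<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client(['region' => 'us-east-1', 'version' => 'latest']);

$result = $ec2->getConsoleOutput(['InstanceId' => 'i-12345678']); // placeholder ID

// The console output is returned base64-encoded.
echo base64_decode($result['Output']);
```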

", "GetPasswordData": "

Retrieves the encrypted administrator password for an instance running Windows.

The Windows password is generated at boot if the EC2Config service plugin, Ec2SetPassword, is enabled. This usually only happens the first time an AMI is launched, and then Ec2SetPassword is automatically disabled. The password is not generated for rebundled AMIs unless Ec2SetPassword is enabled before bundling.

The password is encrypted using the key pair that you specified when you launched the instance. You must provide the corresponding key pair file.

Password generation and encryption takes a few moments. We recommend that you wait up to 15 minutes after launching an instance before trying to retrieve the generated password.
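A sketch of retrieving and decrypting the password locally with the key pair's private key; the instance ID and key file path are placeholders, and the decryption step assumes the standard RSA/PKCS#1 encryption used for Windows passwords.

```php
<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client(['region' => 'us-east-1', 'version' => 'latest']);

$result = $ec2->getPasswordData(['InstanceId' => 'i-12345678']); // placeholder ID

// PasswordData is base64-encoded and encrypted with the key pair's public key;
// decrypt it locally with the matching private key (path is a placeholder).
$privateKey = openssl_pkey_get_private(file_get_contents('/path/to/my-key-pair.pem'));
openssl_private_decrypt(
    base64_decode($result['PasswordData']),
    $password,
    $privateKey,
    OPENSSL_PKCS1_PADDING
);
echo $password . PHP_EOL;
```

If PasswordData comes back empty, the password has not been generated yet; wait and retry as noted above.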

", - "ImportImage": "

Import single or multi-volume disk images or Amazon EBS snapshots into an Amazon Machine Image (AMI).

", - "ImportInstance": "

Creates an import instance task using metadata from the specified disk image. ImportInstance only supports single-volume VMs. To import multi-volume VMs, use ImportImage. After importing the image, you then upload it using the ec2-import-volume command in the EC2 command line tools. For more information, see Using the Command Line Tools to Import Your Virtual Machine to Amazon EC2 in the Amazon Elastic Compute Cloud User Guide.

", + "ImportImage": "

Imports single or multi-volume disk images or EBS snapshots into an Amazon Machine Image (AMI).

", + "ImportInstance": "

Creates an import instance task using metadata from the specified disk image. ImportInstance only supports single-volume VMs. To import multi-volume VMs, use ImportImage. After importing the image, you then upload it using the ec2-import-volume command in the EC2 command line tools. For more information, see Using the Command Line Tools to Import Your Virtual Machine to Amazon EC2 in the Amazon Elastic Compute Cloud User Guide.

", "ImportKeyPair": "

Imports the public key from an RSA key pair that you created with a third-party tool. Compare this with CreateKeyPair, in which AWS creates the key pair and gives the keys to you (AWS keeps a copy of the public key). With ImportKeyPair, you create the key pair and give AWS just the public key. The private key is never transferred between you and AWS.

For more information about key pairs, see Key Pairs in the Amazon Elastic Compute Cloud User Guide.

", - "ImportSnapshot": "

Import a disk into an Amazon Elastic Block Store (Amazon EBS) snapshot.

", + "ImportSnapshot": "

Imports a disk into an EBS snapshot.

", "ImportVolume": "

Creates an import volume task using metadata from the specified disk image. After importing the image, you then upload it using the ec2-import-volume command in the Amazon EC2 command-line interface (CLI) tools. For more information, see Using the Command Line Tools to Import Your Virtual Machine to Amazon EC2 in the Amazon Elastic Compute Cloud User Guide.

", "ModifyImageAttribute": "

Modifies the specified attribute of the specified AMI. You can specify only one attribute at a time.

AWS Marketplace product codes cannot be modified. Images with an AWS Marketplace product code cannot be made public.

", "ModifyInstanceAttribute": "

Modifies the specified attribute of the specified instance. You can specify only one attribute at a time.

To modify some attributes, the instance must be stopped. For more information, see Modifying Attributes of a Stopped Instance in the Amazon Elastic Compute Cloud User Guide.

", @@ -142,7 +152,9 @@ "ModifySubnetAttribute": "

Modifies a subnet attribute.

", "ModifyVolumeAttribute": "

Modifies a volume attribute.

By default, all I/O operations for the volume are suspended when the data on the volume is determined to be potentially inconsistent, to prevent undetectable, latent data corruption. The I/O access to the volume can be resumed by first enabling I/O access and then checking the data consistency on your volume.

You can change the default behavior to resume I/O operations. We recommend that you change this only for boot volumes or for volumes that are stateless or disposable.

", "ModifyVpcAttribute": "

Modifies the specified attribute of the specified VPC.

", + "ModifyVpcEndpoint": "

Modifies attributes of a specified VPC endpoint. You can modify the policy associated with the endpoint, and you can add and remove route tables associated with the endpoint.
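A sketch of updating an endpoint's policy and route table associations in one call; the endpoint ID, route table IDs, and the permissive policy document are placeholder assumptions.

```php
<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client(['region' => 'us-east-1', 'version' => 'latest']);

// Placeholder endpoint and route table IDs.
$ec2->modifyVpcEndpoint([
    'VpcEndpointId'       => 'vpce-12345678',
    'AddRouteTableIds'    => ['rtb-11111111'],
    'RemoveRouteTableIds' => ['rtb-22222222'],
    'PolicyDocument'      => json_encode([
        'Statement' => [[
            'Effect'    => 'Allow',
            'Principal' => '*',
            'Action'    => '*',
            'Resource'  => '*',
        ]],
    ]),
]);
```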

", "MonitorInstances": "

Enables monitoring for a running instance. For more information about monitoring instances, see Monitoring Your Instances and Volumes in the Amazon Elastic Compute Cloud User Guide.

", + "MoveAddressToVpc": "

Moves an Elastic IP address from the EC2-Classic platform to the EC2-VPC platform. The Elastic IP address must be allocated to your account, and it must not be associated with an instance. After the Elastic IP address is moved, it is no longer available for use in the EC2-Classic platform, unless you move it back using the RestoreAddressToClassic request. You cannot move an Elastic IP address that's allocated for use in the EC2-VPC platform to the EC2-Classic platform.
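A minimal sketch of moving an address and later restoring it; the Elastic IP address shown is an example value from the documentation range, not a real allocation.

```php
<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client(['region' => 'us-east-1', 'version' => 'latest']);

// The address must be allocated to your account and not associated with an instance.
$moved = $ec2->moveAddressToVpc(['PublicIp' => '198.51.100.7']); // example address
echo $moved['Status'] . PHP_EOL; // e.g. "MoveInProgress"

// To return the address to EC2-Classic later:
// $ec2->restoreAddressToClassic(['PublicIp' => '198.51.100.7']);
```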

", "PurchaseReservedInstancesOffering": "

Purchases a Reserved Instance for use with your account. With Amazon EC2 Reserved Instances, you obtain a capacity reservation for a certain instance configuration over a specified period of time. You pay a lower usage rate than with On-Demand instances for the time that you actually use the capacity reservation.

Use DescribeReservedInstancesOfferings to get a list of Reserved Instance offerings that match your specifications. After you've purchased a Reserved Instance, you can check for your new Reserved Instance with DescribeReservedInstances.

For more information, see Reserved Instances and Reserved Instance Marketplace in the Amazon Elastic Compute Cloud User Guide.

", "RebootInstances": "

Requests a reboot of one or more instances. This operation is asynchronous; it only queues a request to reboot the specified instances. The operation succeeds if the instances are valid and belong to you. Requests to reboot terminated instances are ignored.

If a Linux/Unix instance does not cleanly shut down within four minutes, Amazon EC2 performs a hard reboot.

For more information about troubleshooting, see Getting Console Output and Rebooting Instances in the Amazon Elastic Compute Cloud User Guide.

", "RegisterImage": "

Registers an AMI. When you're creating an AMI, this is the final step you must complete before you can launch an instance from the AMI. For more information about creating AMIs, see Creating Your Own AMIs in the Amazon Elastic Compute Cloud User Guide.

For Amazon EBS-backed instances, CreateImage creates and registers the AMI in a single request, so you don't have to register the AMI yourself.

You can also use RegisterImage to create an Amazon EBS-backed AMI from a snapshot of a root device volume. For more information, see Launching an Instance from a Snapshot in the Amazon Elastic Compute Cloud User Guide.

If needed, you can deregister an AMI at any time. Any modifications you make to an AMI backed by an instance store volume invalidate its registration. If you make changes to an image, deregister the previous image and register the new image.

You can't register an image where a secondary (non-root) snapshot has AWS Marketplace product codes.

", @@ -153,17 +165,19 @@ "ReplaceRoute": "

Replaces an existing route within a route table in a VPC. You must provide only one of the following: Internet gateway or virtual private gateway, NAT instance, VPC peering connection, or network interface.

For more information about route tables, see Route Tables in the Amazon Virtual Private Cloud User Guide.

", "ReplaceRouteTableAssociation": "

Changes the route table associated with a given subnet in a VPC. After the operation completes, the subnet uses the routes in the new route table it's associated with. For more information about route tables, see Route Tables in the Amazon Virtual Private Cloud User Guide.

You can also use ReplaceRouteTableAssociation to change which table is the main route table in the VPC. You just specify the main route table's association ID and the route table to be the new main route table.

", "ReportInstanceStatus": "

Submits feedback about the status of an instance. The instance must be in the running state. If your experience with the instance differs from the instance status returned by DescribeInstanceStatus, use ReportInstanceStatus to report your experience with the instance. Amazon EC2 collects this information to improve the accuracy of status checks.

Use of this action does not change the value returned by DescribeInstanceStatus.

", + "RequestSpotFleet": "

Creates a Spot fleet request.

For more information, see Spot Fleets in the Amazon Elastic Compute Cloud User Guide.
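A sketch of a small Spot fleet request through the SDK's generated Ec2Client; the bid price, capacity, IAM fleet role ARN, AMI, subnet, and instance type are all placeholder assumptions.

```php
<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client(['region' => 'us-east-1', 'version' => 'latest']);

// All IDs and the IAM fleet role ARN below are placeholders.
$fleet = $ec2->requestSpotFleet([
    'SpotFleetRequestConfig' => [
        'SpotPrice'      => '0.05',
        'TargetCapacity' => 4,
        'IamFleetRole'   => 'arn:aws:iam::123456789012:role/my-spot-fleet-role',
        'LaunchSpecifications' => [
            [
                'ImageId'      => 'ami-12345678',
                'InstanceType' => 'm3.medium',
                'SubnetId'     => 'subnet-12345678',
            ],
        ],
    ],
]);

echo $fleet['SpotFleetRequestId'] . PHP_EOL;
```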

", "RequestSpotInstances": "

Creates a Spot Instance request. Spot Instances are instances that Amazon EC2 launches when the bid price that you specify exceeds the current Spot Price. Amazon EC2 periodically sets the Spot Price based on available Spot Instance capacity and current Spot Instance requests. For more information, see Spot Instance Requests in the Amazon Elastic Compute Cloud User Guide.

", "ResetImageAttribute": "

Resets an attribute of an AMI to its default value.

The productCodes attribute can't be reset.

", "ResetInstanceAttribute": "

Resets an attribute of an instance to its default value. To reset the kernel or ramdisk, the instance must be in a stopped state. To reset the SourceDestCheck, the instance can be either running or stopped.

The SourceDestCheck attribute controls whether source/destination checking is enabled. The default value is true, which means checking is enabled. This value must be false for a NAT instance to perform NAT. For more information, see NAT Instances in the Amazon Virtual Private Cloud User Guide.

", "ResetNetworkInterfaceAttribute": "

Resets a network interface attribute. You can specify only one attribute at a time.

", "ResetSnapshotAttribute": "

Resets permission settings for the specified snapshot.

For more information on modifying snapshot permissions, see Sharing Snapshots in the Amazon Elastic Compute Cloud User Guide.

", + "RestoreAddressToClassic": "

Restores an Elastic IP address that was previously moved to the EC2-VPC platform back to the EC2-Classic platform. You cannot move an Elastic IP address that was originally allocated for use in EC2-VPC. The Elastic IP address must not be associated with an instance or network interface.

", "RevokeSecurityGroupEgress": "

Removes one or more egress rules from a security group for EC2-VPC. The values that you specify in the revoke request (for example, ports) must match the existing rule's values for the rule to be revoked.

Each rule consists of the protocol and the CIDR range or source security group. For the TCP and UDP protocols, you must also specify the destination port or range of ports. For the ICMP protocol, you must also specify the ICMP type and code.

Rule changes are propagated to instances within the security group as quickly as possible. However, a small delay might occur.

", "RevokeSecurityGroupIngress": "

Removes one or more ingress rules from a security group. The values that you specify in the revoke request (for example, ports) must match the existing rule's values for the rule to be removed.

Each rule consists of the protocol and the CIDR range or source security group. For the TCP and UDP protocols, you must also specify the destination port or range of ports. For the ICMP protocol, you must also specify the ICMP type and code.

Rule changes are propagated to instances within the security group as quickly as possible. However, a small delay might occur.

", "RunInstances": "

Launches the specified number of instances using an AMI for which you have permissions.

When you launch an instance, it enters the pending state. After the instance is ready for you, it enters the running state. To check the state of your instance, call DescribeInstances.

If you don't specify a security group when launching an instance, Amazon EC2 uses the default security group. For more information, see Security Groups in the Amazon Elastic Compute Cloud User Guide.

Linux instances have access to the public key of the key pair at boot. You can use this key to provide secure access to the instance. Amazon EC2 public images use this feature to provide secure access without passwords. For more information, see Key Pairs in the Amazon Elastic Compute Cloud User Guide.

You can provide optional user data when launching an instance. For more information, see Instance Metadata in the Amazon Elastic Compute Cloud User Guide.

If any of the AMIs have a product code attached for which you have not subscribed, RunInstances fails.

T2 instance types can only be launched into a VPC. If you do not have a default VPC, or if you do not specify a subnet ID in the request, RunInstances fails.

For more information about troubleshooting, see What To Do If An Instance Immediately Terminates, and Troubleshooting Connecting to Your Instance in the Amazon Elastic Compute Cloud User Guide.
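A minimal launch sketch covering the points above (security group, key pair, and a subnet for a T2 type); the AMI, key pair, subnet, and security group IDs are placeholders.

```php
<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client(['region' => 'us-east-1', 'version' => 'latest']);

// AMI, key pair, subnet, and security group IDs are placeholders.
$reservation = $ec2->runInstances([
    'ImageId'          => 'ami-12345678',
    'MinCount'         => 1,
    'MaxCount'         => 1,
    'InstanceType'     => 't2.micro',   // T2 types must be launched into a VPC
    'KeyName'          => 'my-key-pair',
    'SubnetId'         => 'subnet-12345678',
    'SecurityGroupIds' => ['sg-12345678'],
]);

foreach ($reservation['Instances'] as $instance) {
    echo $instance['InstanceId'] . ' is ' . $instance['State']['Name'] . PHP_EOL;
}
```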

", "StartInstances": "

Starts an Amazon EBS-backed AMI that you've previously stopped.

Instances that use Amazon EBS volumes as their root devices can be quickly stopped and started. When an instance is stopped, the compute resources are released and you are not billed for hourly instance usage. However, your root partition Amazon EBS volume remains, continues to persist your data, and you are charged for Amazon EBS volume usage. You can restart your instance at any time. Each time you transition an instance from stopped to started, Amazon EC2 charges a full instance hour, even if transitions happen multiple times within a single hour.

Before stopping an instance, make sure it is in a state from which it can be restarted. Stopping an instance does not preserve data stored in RAM.

Performing this operation on an instance that uses an instance store as its root device returns an error.

For more information, see Stopping Instances in the Amazon Elastic Compute Cloud User Guide.

", "StopInstances": "

Stops an Amazon EBS-backed instance. Each time you transition an instance from stopped to started, Amazon EC2 charges a full instance hour, even if transitions happen multiple times within a single hour.

You can't start or stop Spot Instances.

Instances that use Amazon EBS volumes as their root devices can be quickly stopped and started. When an instance is stopped, the compute resources are released and you are not billed for hourly instance usage. However, your root partition Amazon EBS volume remains, continues to persist your data, and you are charged for Amazon EBS volume usage. You can restart your instance at any time.

Before stopping an instance, make sure it is in a state from which it can be restarted. Stopping an instance does not preserve data stored in RAM.

Performing this operation on an instance that uses an instance store as its root device returns an error.

You can stop, start, and terminate EBS-backed instances. You can only terminate instance store-backed instances. What happens to an instance differs if you stop it or terminate it. For example, when you stop an instance, the root device and any other devices attached to the instance persist. When you terminate an instance, the root device and any other devices attached during the instance launch are automatically deleted. For more information about the differences between stopping and terminating instances, see Instance Lifecycle in the Amazon Elastic Compute Cloud User Guide.

For more information about troubleshooting, see Troubleshooting Stopping Your Instance in the Amazon Elastic Compute Cloud User Guide.
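A sketch of the stop/start cycle for an EBS-backed instance described above, using the SDK's waiters to poll for the state transitions; the instance ID and region are placeholders.

```php
<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client(['region' => 'us-east-1', 'version' => 'latest']);

$ids = ['i-12345678']; // placeholder ID of an EBS-backed instance

// Stop, wait for the stopped state, then start again.
$ec2->stopInstances(['InstanceIds' => $ids]);
$ec2->waitUntil('InstanceStopped', ['InstanceIds' => $ids]);

$ec2->startInstances(['InstanceIds' => $ids]);
$ec2->waitUntil('InstanceRunning', ['InstanceIds' => $ids]);
```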

", - "TerminateInstances": "

Shuts down one or more instances. This operation is idempotent; if you terminate an instance more than once, each call succeeds.

Terminated instances remain visible after termination (for approximately one hour).

By default, Amazon EC2 deletes all Amazon EBS volumes that were attached when the instance launched. Volumes attached after instance launch continue running.

You can stop, start, and terminate EBS-backed instances. You can only terminate instance store-backed instances. What happens to an instance differs if you stop it or terminate it. For example, when you stop an instance, the root device and any other devices attached to the instance persist. When you terminate an instance, the root device and any other devices attached during the instance launch are automatically deleted. For more information about the differences between stopping and terminating instances, see Instance Lifecycle in the Amazon Elastic Compute Cloud User Guide.

For more information about troubleshooting, see Troubleshooting Terminating Your Instance in the Amazon Elastic Compute Cloud User Guide.

", + "TerminateInstances": "

Shuts down one or more instances. This operation is idempotent; if you terminate an instance more than once, each call succeeds.

Terminated instances remain visible after termination (for approximately one hour).

By default, Amazon EC2 deletes all EBS volumes that were attached when the instance launched. Volumes attached after instance launch continue running.

You can stop, start, and terminate EBS-backed instances. You can only terminate instance store-backed instances. What happens to an instance differs if you stop it or terminate it. For example, when you stop an instance, the root device and any other devices attached to the instance persist. When you terminate an instance, the root device and any other devices attached during the instance launch are automatically deleted. For more information about the differences between stopping and terminating instances, see Instance Lifecycle in the Amazon Elastic Compute Cloud User Guide.

For more information about troubleshooting, see Troubleshooting Terminating Your Instance in the Amazon Elastic Compute Cloud User Guide.

", "UnassignPrivateIpAddresses": "

Unassigns one or more secondary private IP addresses from a network interface.

", "UnmonitorInstances": "

Disables monitoring for a running instance. For more information about monitoring instances, see Monitoring Your Instances and Volumes in the Amazon Elastic Compute Cloud User Guide.

" }, @@ -215,6 +229,18 @@ "AccountAttribute$AttributeValues": "

One or more values for the account attribute.

" } }, + "ActiveInstance": { + "base": "

Describes a running instance in a Spot fleet.

", + "refs": { + "ActiveInstanceSet$member": null + } + }, + "ActiveInstanceSet": { + "base": null, + "refs": { + "DescribeSpotFleetInstancesResponse$ActiveInstances": "

The running instances. Note that this list is refreshed periodically and might be out of date.

" + } + }, "Address": { "base": "

Describes an Elastic IP address.

", "refs": { @@ -417,6 +443,14 @@ "AvailabilityZone$State": "

The state of the Availability Zone (available | impaired | unavailable).

" } }, + "BatchState": { + "base": null, + "refs": { + "CancelSpotFleetRequestsSuccessItem$CurrentSpotFleetRequestState": "

The current state of the Spot fleet request.

", + "CancelSpotFleetRequestsSuccessItem$PreviousSpotFleetRequestState": "

The previous state of the Spot fleet request.

", + "SpotFleetRequestConfig$SpotFleetRequestState": "

The state of the Spot fleet request.

" + } + }, "BlockDeviceMapping": { "base": "

Describes a block device mapping.

", "refs": { @@ -444,149 +478,161 @@ "Boolean": { "base": null, "refs": { - "AcceptVpcPeeringConnectionRequest$DryRun": null, - "AllocateAddressRequest$DryRun": null, + "AcceptVpcPeeringConnectionRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.
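A sketch of how a caller typically exercises DryRun: the dry run always surfaces as an exception, and the error code distinguishes the two outcomes described above. The instance ID and client configuration are placeholders.

```php
<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;
use Aws\Exception\AwsException;

$ec2 = new Ec2Client(['region' => 'us-east-1', 'version' => 'latest']);

try {
    // No instance is actually started; the service only checks permissions.
    $ec2->startInstances(['InstanceIds' => ['i-12345678'], 'DryRun' => true]);
} catch (AwsException $e) {
    if ($e->getAwsErrorCode() === 'DryRunOperation') {
        echo "Permission granted (dry run succeeded)\n";
    } elseif ($e->getAwsErrorCode() === 'UnauthorizedOperation') {
        echo "Permission denied\n";
    } else {
        throw $e;
    }
}
```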

", + "AllocateAddressRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", "AssignPrivateIpAddressesRequest$AllowReassignment": "

Indicates whether to allow an IP address that is already assigned to another network interface or instance to be reassigned to the specified network interface.

", - "AssociateAddressRequest$DryRun": null, + "AssociateAddressRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", "AssociateAddressRequest$AllowReassociation": "

[EC2-VPC] Allows an Elastic IP address that is already associated with an instance or network interface to be re-associated with the specified instance or network interface. Otherwise, the operation fails.

Default: false

", - "AssociateDhcpOptionsRequest$DryRun": null, - "AssociateRouteTableRequest$DryRun": null, - "AttachClassicLinkVpcRequest$DryRun": null, + "AssociateDhcpOptionsRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "AssociateRouteTableRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "AttachClassicLinkVpcRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", "AttachClassicLinkVpcResult$Return": "

Returns true if the request succeeds; otherwise, it returns an error.

", - "AttachInternetGatewayRequest$DryRun": null, - "AttachNetworkInterfaceRequest$DryRun": null, - "AttachVolumeRequest$DryRun": null, - "AttachVpnGatewayRequest$DryRun": null, + "AttachInternetGatewayRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "AttachNetworkInterfaceRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "AttachVolumeRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "AttachVpnGatewayRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", "AttributeBooleanValue$Value": "

Valid values are true or false.

", - "AuthorizeSecurityGroupEgressRequest$DryRun": null, - "AuthorizeSecurityGroupIngressRequest$DryRun": null, - "BundleInstanceRequest$DryRun": null, - "CancelBundleTaskRequest$DryRun": null, - "CancelConversionRequest$DryRun": null, - "CancelImportTaskRequest$DryRun": null, - "CancelSpotInstanceRequestsRequest$DryRun": null, - "ConfirmProductInstanceRequest$DryRun": null, - "CopyImageRequest$DryRun": null, - "CopySnapshotRequest$DryRun": null, - "CreateCustomerGatewayRequest$DryRun": null, - "CreateDhcpOptionsRequest$DryRun": null, - "CreateImageRequest$DryRun": null, + "AuthorizeSecurityGroupEgressRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "AuthorizeSecurityGroupIngressRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "BundleInstanceRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "CancelBundleTaskRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "CancelConversionRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "CancelImportTaskRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "CancelSpotFleetRequestsRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "CancelSpotFleetRequestsRequest$TerminateInstances": "

Indicates whether to terminate instances for a Spot fleet request if it is canceled successfully.

", + "CancelSpotInstanceRequestsRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "ConfirmProductInstanceRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "CopyImageRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "CopySnapshotRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "CreateCustomerGatewayRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "CreateDhcpOptionsRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "CreateImageRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", "CreateImageRequest$NoReboot": "

By default, this parameter is set to false, which means Amazon EC2 attempts to shut down the instance cleanly before image creation and then reboots the instance. When the parameter is set to true, Amazon EC2 doesn't shut down the instance before creating the image. When this option is used, file system integrity on the created image can't be guaranteed.

", - "CreateInternetGatewayRequest$DryRun": null, - "CreateKeyPairRequest$DryRun": null, - "CreateNetworkAclEntryRequest$DryRun": null, + "CreateInternetGatewayRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "CreateKeyPairRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "CreateNetworkAclEntryRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", "CreateNetworkAclEntryRequest$Egress": "

Indicates whether this is an egress rule (rule is applied to traffic leaving the subnet).

", - "CreateNetworkAclRequest$DryRun": null, - "CreateNetworkInterfaceRequest$DryRun": null, - "CreatePlacementGroupRequest$DryRun": null, - "CreateRouteRequest$DryRun": null, - "CreateRouteTableRequest$DryRun": null, - "CreateSecurityGroupRequest$DryRun": null, - "CreateSnapshotRequest$DryRun": null, - "CreateSpotDatafeedSubscriptionRequest$DryRun": null, - "CreateSubnetRequest$DryRun": null, - "CreateTagsRequest$DryRun": null, - "CreateVolumeRequest$DryRun": null, + "CreateNetworkAclRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "CreateNetworkInterfaceRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "CreatePlacementGroupRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "CreateRouteRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "CreateRouteResult$Return": "

Returns true if the request succeeds; otherwise, it returns an error.

", + "CreateRouteTableRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "CreateSecurityGroupRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "CreateSnapshotRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "CreateSpotDatafeedSubscriptionRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "CreateSubnetRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "CreateTagsRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "CreateVolumeRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", "CreateVolumeRequest$Encrypted": "

Specifies whether the volume should be encrypted. Encrypted Amazon EBS volumes may only be attached to instances that support Amazon EBS encryption. Volumes that are created from encrypted snapshots are automatically encrypted. There is no way to create an encrypted volume from an unencrypted snapshot or vice versa. If your AMI uses encrypted volumes, you can only launch it on supported instance types. For more information, see Amazon EBS Encryption in the Amazon Elastic Compute Cloud User Guide.

", - "CreateVpcPeeringConnectionRequest$DryRun": null, - "CreateVpcRequest$DryRun": null, - "CreateVpnConnectionRequest$DryRun": null, - "CreateVpnGatewayRequest$DryRun": null, - "DeleteCustomerGatewayRequest$DryRun": null, - "DeleteDhcpOptionsRequest$DryRun": null, - "DeleteInternetGatewayRequest$DryRun": null, - "DeleteKeyPairRequest$DryRun": null, - "DeleteNetworkAclEntryRequest$DryRun": null, + "CreateVpcEndpointRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "CreateVpcPeeringConnectionRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "CreateVpcRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "CreateVpnConnectionRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "CreateVpnGatewayRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DeleteCustomerGatewayRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DeleteDhcpOptionsRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DeleteInternetGatewayRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DeleteKeyPairRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DeleteNetworkAclEntryRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", "DeleteNetworkAclEntryRequest$Egress": "

Indicates whether the rule is an egress rule.

", - "DeleteNetworkAclRequest$DryRun": null, - "DeleteNetworkInterfaceRequest$DryRun": null, - "DeletePlacementGroupRequest$DryRun": null, - "DeleteRouteRequest$DryRun": null, - "DeleteRouteTableRequest$DryRun": null, - "DeleteSecurityGroupRequest$DryRun": null, - "DeleteSnapshotRequest$DryRun": null, - "DeleteSpotDatafeedSubscriptionRequest$DryRun": null, - "DeleteSubnetRequest$DryRun": null, - "DeleteTagsRequest$DryRun": null, - "DeleteVolumeRequest$DryRun": null, - "DeleteVpcPeeringConnectionRequest$DryRun": null, + "DeleteNetworkAclRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DeleteNetworkInterfaceRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DeletePlacementGroupRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DeleteRouteRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DeleteRouteTableRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DeleteSecurityGroupRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DeleteSnapshotRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DeleteSpotDatafeedSubscriptionRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DeleteSubnetRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DeleteTagsRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DeleteVolumeRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DeleteVpcEndpointsRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DeleteVpcPeeringConnectionRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", "DeleteVpcPeeringConnectionResult$Return": "

Returns true if the request succeeds; otherwise, it returns an error.

", - "DeleteVpcRequest$DryRun": null, - "DeleteVpnConnectionRequest$DryRun": null, - "DeleteVpnGatewayRequest$DryRun": null, - "DeregisterImageRequest$DryRun": null, - "DescribeAccountAttributesRequest$DryRun": null, - "DescribeAddressesRequest$DryRun": null, - "DescribeAvailabilityZonesRequest$DryRun": null, - "DescribeBundleTasksRequest$DryRun": null, - "DescribeClassicLinkInstancesRequest$DryRun": null, - "DescribeConversionTasksRequest$DryRun": null, - "DescribeCustomerGatewaysRequest$DryRun": null, - "DescribeDhcpOptionsRequest$DryRun": null, - "DescribeImageAttributeRequest$DryRun": null, - "DescribeImagesRequest$DryRun": null, - "DescribeImportImageTasksRequest$DryRun": null, - "DescribeImportSnapshotTasksRequest$DryRun": null, - "DescribeInstanceAttributeRequest$DryRun": null, - "DescribeInstanceStatusRequest$DryRun": null, + "DeleteVpcRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DeleteVpnConnectionRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DeleteVpnGatewayRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DeregisterImageRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeAccountAttributesRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeAddressesRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeAvailabilityZonesRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeBundleTasksRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeClassicLinkInstancesRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeConversionTasksRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeCustomerGatewaysRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeDhcpOptionsRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeImageAttributeRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeImagesRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeImportImageTasksRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeImportSnapshotTasksRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeInstanceAttributeRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeInstanceStatusRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", "DescribeInstanceStatusRequest$IncludeAllInstances": "

When true, includes the health status for all instances. When false, includes the health status for running instances only.

Default: false
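A hedged sketch of how this flag might be used, assuming the $ec2 client constructed in the DryRun sketch above:

// Include stopped and terminated instances as well as running ones.
$result = $ec2->describeInstanceStatus(['IncludeAllInstances' => true]);
foreach ($result['InstanceStatuses'] ?: [] as $status) {
    echo $status['InstanceId'], ' => ', $status['InstanceState']['Name'], "\n";
}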

", - "DescribeInstancesRequest$DryRun": null, - "DescribeInternetGatewaysRequest$DryRun": null, - "DescribeKeyPairsRequest$DryRun": null, - "DescribeNetworkAclsRequest$DryRun": null, - "DescribeNetworkInterfaceAttributeRequest$DryRun": null, - "DescribeNetworkInterfacesRequest$DryRun": null, - "DescribePlacementGroupsRequest$DryRun": null, - "DescribeRegionsRequest$DryRun": null, - "DescribeReservedInstancesOfferingsRequest$DryRun": null, + "DescribeInstancesRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeInternetGatewaysRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeKeyPairsRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeMovingAddressesRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeNetworkAclsRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeNetworkInterfaceAttributeRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeNetworkInterfacesRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribePlacementGroupsRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribePrefixListsRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeRegionsRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeReservedInstancesOfferingsRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", "DescribeReservedInstancesOfferingsRequest$IncludeMarketplace": "

Include Marketplace offerings in the response.

", - "DescribeReservedInstancesRequest$DryRun": null, - "DescribeRouteTablesRequest$DryRun": null, - "DescribeSecurityGroupsRequest$DryRun": null, - "DescribeSnapshotAttributeRequest$DryRun": null, - "DescribeSnapshotsRequest$DryRun": null, - "DescribeSpotDatafeedSubscriptionRequest$DryRun": null, - "DescribeSpotInstanceRequestsRequest$DryRun": null, - "DescribeSpotPriceHistoryRequest$DryRun": null, - "DescribeSubnetsRequest$DryRun": null, - "DescribeTagsRequest$DryRun": null, - "DescribeVolumeAttributeRequest$DryRun": null, - "DescribeVolumeStatusRequest$DryRun": null, - "DescribeVolumesRequest$DryRun": null, - "DescribeVpcAttributeRequest$DryRun": null, - "DescribeVpcClassicLinkRequest$DryRun": null, - "DescribeVpcPeeringConnectionsRequest$DryRun": null, - "DescribeVpcsRequest$DryRun": null, - "DescribeVpnConnectionsRequest$DryRun": null, - "DescribeVpnGatewaysRequest$DryRun": null, - "DetachClassicLinkVpcRequest$DryRun": null, + "DescribeReservedInstancesRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeRouteTablesRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeSecurityGroupsRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeSnapshotAttributeRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeSnapshotsRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeSpotDatafeedSubscriptionRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeSpotFleetInstancesRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeSpotFleetRequestHistoryRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeSpotFleetRequestsRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeSpotInstanceRequestsRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeSpotPriceHistoryRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeSubnetsRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeTagsRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeVolumeAttributeRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeVolumeStatusRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeVolumesRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeVpcAttributeRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeVpcClassicLinkRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeVpcEndpointServicesRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeVpcEndpointsRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeVpcPeeringConnectionsRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeVpcsRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeVpnConnectionsRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DescribeVpnGatewaysRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DetachClassicLinkVpcRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", "DetachClassicLinkVpcResult$Return": "

Returns true if the request succeeds; otherwise, it returns an error.

", - "DetachInternetGatewayRequest$DryRun": null, - "DetachNetworkInterfaceRequest$DryRun": null, + "DetachInternetGatewayRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DetachNetworkInterfaceRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", "DetachNetworkInterfaceRequest$Force": "

Specifies whether to force a detachment.

", - "DetachVolumeRequest$DryRun": null, + "DetachVolumeRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", "DetachVolumeRequest$Force": "

Forces detachment if the previous detachment attempt did not occur cleanly (for example, logging into an instance, unmounting the volume, and detaching normally). This option can lead to data loss or a corrupted file system. Use this option only as a last resort to detach a volume from a failed instance. The instance won't have an opportunity to flush file system caches or file system metadata. If you use this option, you must perform file system check and repair procedures.
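A minimal sketch of a forced detachment with the AWS SDK for PHP v3, reusing the $ec2 client from the DryRun sketch; both IDs are placeholders and, per the note above, a file system check should follow:

$ec2->detachVolume([
    'VolumeId'   => 'vol-1a2b3c4d', // hypothetical
    'InstanceId' => 'i-1a2b3c4d',   // hypothetical
    'Force'      => true,           // last resort: skips a clean unmount
]);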

", - "DetachVpnGatewayRequest$DryRun": null, - "DisableVpcClassicLinkRequest$DryRun": null, + "DetachVpnGatewayRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DisableVpcClassicLinkRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", "DisableVpcClassicLinkResult$Return": "

Returns true if the request succeeds; otherwise, it returns an error.

", - "DisassociateAddressRequest$DryRun": null, - "DisassociateRouteTableRequest$DryRun": null, - "EbsBlockDevice$DeleteOnTermination": "

Indicates whether the Amazon EBS volume is deleted on instance termination.

", - "EbsBlockDevice$Encrypted": "

Indicates whether the Amazon EBS volume is encrypted. Encrypted Amazon EBS volumes may only be attached to instances that support Amazon EBS encryption.

", + "DisassociateAddressRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "DisassociateRouteTableRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "EbsBlockDevice$DeleteOnTermination": "

Indicates whether the EBS volume is deleted on instance termination.

", + "EbsBlockDevice$Encrypted": "

Indicates whether the EBS volume is encrypted. Encrypted Amazon EBS volumes can be attached only to instances that support Amazon EBS encryption.

", "EbsInstanceBlockDevice$DeleteOnTermination": "

Indicates whether the volume is deleted on instance termination.

", "EbsInstanceBlockDeviceSpecification$DeleteOnTermination": "

Indicates whether the volume is deleted on instance termination.

", - "EnableVolumeIORequest$DryRun": null, - "EnableVpcClassicLinkRequest$DryRun": null, + "EnableVolumeIORequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "EnableVpcClassicLinkRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", "EnableVpcClassicLinkResult$Return": "

Returns true if the request succeeds; otherwise, it returns an error.

", - "GetConsoleOutputRequest$DryRun": null, - "GetPasswordDataRequest$DryRun": null, + "GetConsoleOutputRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "GetPasswordDataRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", "Image$Public": "

Indicates whether the image has public launch permissions. The value is true if this image has public launch permissions or false if it has only implicit and explicit launch permissions.

", - "ImportImageRequest$DryRun": null, - "ImportInstanceLaunchSpecification$Monitoring": null, - "ImportInstanceRequest$DryRun": null, - "ImportKeyPairRequest$DryRun": null, - "ImportSnapshotRequest$DryRun": null, - "ImportVolumeRequest$DryRun": null, + "ImportImageRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "ImportInstanceLaunchSpecification$Monitoring": "

Indicates whether monitoring is enabled.

", + "ImportInstanceRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "ImportKeyPairRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "ImportSnapshotRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "ImportVolumeRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", "Instance$SourceDestCheck": "

Specifies whether to enable an instance launched in a VPC to perform NAT. This controls whether source/destination checking is enabled on the instance. A value of true means checking is enabled, and false means checking is disabled. The value must be false for the instance to perform NAT. For more information, see NAT Instances in the Amazon Virtual Private Cloud User Guide.
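A sketch of disabling the check on a NAT instance via ModifyInstanceAttribute, assuming the $ec2 client from the DryRun sketch and a placeholder instance ID:

$ec2->modifyInstanceAttribute([
    'InstanceId'      => 'i-1a2b3c4d',       // hypothetical NAT instance
    'SourceDestCheck' => ['Value' => false], // must be false for the instance to perform NAT
]);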

", "Instance$EbsOptimized": "

Indicates whether the instance is optimized for EBS I/O. This optimization provides dedicated throughput to Amazon EBS and an optimized configuration stack to provide optimal I/O performance. This optimization isn't available with all instance types. Additional usage charges apply when using an EBS-optimized instance.

", "InstanceNetworkInterface$SourceDestCheck": "

Indicates whether to validate network traffic to or from this network interface.

", @@ -595,12 +641,16 @@ "InstanceNetworkInterfaceSpecification$AssociatePublicIpAddress": "

Indicates whether to assign a public IP address to an instance you launch in a VPC. The public IP address can only be assigned to a network interface for eth0, and can only be assigned to a new network interface, not an existing one. You cannot specify more than one network interface in the request. If launching into a default subnet, the default value is true.

", "InstancePrivateIpAddress$Primary": "

Indicates whether this IP address is the primary private IP address of the network interface.

", "LaunchSpecification$EbsOptimized": "

Indicates whether the instance is optimized for EBS I/O. This optimization provides dedicated throughput to Amazon EBS and an optimized configuration stack to provide optimal EBS I/O performance. This optimization isn't available with all instance types. Additional usage charges apply when using an EBS-optimized instance.

Default: false

", - "ModifyImageAttributeRequest$DryRun": null, - "ModifyInstanceAttributeRequest$DryRun": null, - "ModifyNetworkInterfaceAttributeRequest$DryRun": null, - "ModifySnapshotAttributeRequest$DryRun": null, - "ModifyVolumeAttributeRequest$DryRun": null, - "MonitorInstancesRequest$DryRun": null, + "ModifyImageAttributeRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "ModifyInstanceAttributeRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "ModifyNetworkInterfaceAttributeRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "ModifySnapshotAttributeRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "ModifyVolumeAttributeRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "ModifyVpcEndpointRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "ModifyVpcEndpointRequest$ResetPolicy": "

Specify true to reset the policy document to the default policy. The default policy allows access to the service.
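A sketch of resetting an endpoint policy, assuming the $ec2 client from the DryRun sketch and a placeholder endpoint ID:

$ec2->modifyVpcEndpoint([
    'VpcEndpointId' => 'vpce-1a2b3c4d', // hypothetical
    'ResetPolicy'   => true,            // restores the default policy, which allows access to the service
]);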

", + "ModifyVpcEndpointResult$Return": "

Returns true if the request succeeds; otherwise, it returns an error.

", + "MonitorInstancesRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "MoveAddressToVpcRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", "NetworkAcl$IsDefault": "

Indicates whether this is the default network ACL for the VPC.

", "NetworkAclEntry$Egress": "

Indicates whether the rule is an egress rule (applied to traffic leaving the subnet).

", "NetworkInterface$RequesterManaged": "

Indicates whether the network interface is being managed by AWS.

", @@ -610,41 +660,44 @@ "NetworkInterfacePrivateIpAddress$Primary": "

Indicates whether this IP address is the primary private IP address of the network interface.

", "PriceSchedule$Active": "

The current price schedule, as determined by the term remaining for the Reserved Instance in the listing.

A specific price schedule is always in effect, but only one price schedule can be active at any time. Take, for example, a Reserved Instance listing that has five months remaining in its term. When you specify price schedules for five months and two months, this means that schedule 1, covering the first three months of the remaining term, will be active during months 5, 4, and 3. Then schedule 2, covering the last two months of the term, will be active for months 2 and 1.

", "PrivateIpAddressSpecification$Primary": "

Indicates whether the private IP address is the primary private IP address. Only one IP address can be designated as primary.

", - "PurchaseReservedInstancesOfferingRequest$DryRun": null, - "RebootInstancesRequest$DryRun": null, - "RegisterImageRequest$DryRun": null, - "RejectVpcPeeringConnectionRequest$DryRun": null, + "PurchaseReservedInstancesOfferingRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "RebootInstancesRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "RegisterImageRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "RejectVpcPeeringConnectionRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", "RejectVpcPeeringConnectionResult$Return": "

Returns true if the request succeeds; otherwise, it returns an error.

", - "ReleaseAddressRequest$DryRun": null, - "ReplaceNetworkAclAssociationRequest$DryRun": null, - "ReplaceNetworkAclEntryRequest$DryRun": null, + "ReleaseAddressRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "ReplaceNetworkAclAssociationRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "ReplaceNetworkAclEntryRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", "ReplaceNetworkAclEntryRequest$Egress": "

Indicates whether to replace the egress rule.

Default: If no value is specified, we replace the ingress rule.

", - "ReplaceRouteRequest$DryRun": null, - "ReplaceRouteTableAssociationRequest$DryRun": null, - "ReportInstanceStatusRequest$DryRun": null, - "RequestSpotInstancesRequest$DryRun": null, + "ReplaceRouteRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "ReplaceRouteTableAssociationRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "ReportInstanceStatusRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "RequestSpotFleetRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "RequestSpotInstancesRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", "ReservedInstancesOffering$Marketplace": "

Indicates whether the offering is available through the Reserved Instance Marketplace (resale) or AWS. If it's a Reserved Instance Marketplace offering, this is true.

", - "ResetImageAttributeRequest$DryRun": null, - "ResetInstanceAttributeRequest$DryRun": null, - "ResetNetworkInterfaceAttributeRequest$DryRun": null, - "ResetSnapshotAttributeRequest$DryRun": null, - "RevokeSecurityGroupEgressRequest$DryRun": null, - "RevokeSecurityGroupIngressRequest$DryRun": null, + "ResetImageAttributeRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "ResetInstanceAttributeRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "ResetNetworkInterfaceAttributeRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "ResetSnapshotAttributeRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "RestoreAddressToClassicRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "RevokeSecurityGroupEgressRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "RevokeSecurityGroupIngressRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", "RouteTableAssociation$Main": "

Indicates whether this is the main route table.

", "RunInstancesMonitoringEnabled$Enabled": "

Indicates whether monitoring is enabled for the instance.

", - "RunInstancesRequest$DryRun": null, + "RunInstancesRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", "RunInstancesRequest$DisableApiTermination": "

If you set this parameter to true, you can't terminate the instance using the Amazon EC2 console, CLI, or API; otherwise, you can. If you set this parameter to true and then later want to be able to terminate the instance, you must first change the value of the disableApiTermination attribute to false using ModifyInstanceAttribute. Alternatively, if you set InstanceInitiatedShutdownBehavior to terminate, you can terminate the instance by running the shutdown command from the instance.

Default: false
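A sketch of launching with termination protection and lifting it later, assuming the $ec2 client from the DryRun sketch; the AMI and instance IDs are placeholders:

$ec2->runInstances([
    'ImageId'               => 'ami-12345678', // hypothetical
    'MinCount'              => 1,
    'MaxCount'              => 1,
    'DisableApiTermination' => true,
]);

// Later, before the instance can be terminated through the API:
$ec2->modifyInstanceAttribute([
    'InstanceId'            => 'i-1a2b3c4d',   // hypothetical
    'DisableApiTermination' => ['Value' => false],
]);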

", - "RunInstancesRequest$EbsOptimized": "

Indicates whether the instance is optimized for EBS I/O. This optimization provides dedicated throughput to Amazon EBS and an optimized configuration stack to provide optimal Amazon EBS I/O performance. This optimization isn't available with all instance types. Additional usage charges apply when using an EBS-optimized instance.

Default: false

", + "RunInstancesRequest$EbsOptimized": "

Indicates whether the instance is optimized for EBS I/O. This optimization provides dedicated throughput to Amazon EBS and an optimized configuration stack to provide optimal EBS I/O performance. This optimization isn't available with all instance types. Additional usage charges apply when using an EBS-optimized instance.

Default: false

", "Snapshot$Encrypted": "

Indicates whether the snapshot is encrypted.

", - "StartInstancesRequest$DryRun": null, - "StopInstancesRequest$DryRun": null, + "SpotFleetRequestConfigData$TerminateInstancesWithExpiration": "

Indicates whether running instances should be terminated when the Spot fleet request expires.

", + "StartInstancesRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "StopInstancesRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", "StopInstancesRequest$Force": "

Forces the instances to stop. The instances do not have an opportunity to flush file system caches or file system metadata. If you use this option, you must perform file system check and repair procedures. This option is not recommended for Windows instances.

Default: false
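A sketch of a forced stop, reusing the $ec2 client from the DryRun sketch; the instance ID is a placeholder and, as noted above, a file system check should follow:

$ec2->stopInstances([
    'InstanceIds' => ['i-1a2b3c4d'], // hypothetical
    'Force'       => true,           // skips a clean shutdown
]);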

", "Subnet$DefaultForAz": "

Indicates whether this is the default subnet for the Availability Zone.

", "Subnet$MapPublicIpOnLaunch": "

Indicates whether instances launched in this subnet receive a public IP address.

", - "TerminateInstancesRequest$DryRun": null, - "UnmonitorInstancesRequest$DryRun": null, + "TerminateInstancesRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "UnmonitorInstancesRequest$DryRun": "

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", "Volume$Encrypted": "

Indicates whether the volume will be encrypted.

", - "VolumeAttachment$DeleteOnTermination": "

Indicates whether the Amazon EBS volume is deleted on instance termination.

", + "VolumeAttachment$DeleteOnTermination": "

Indicates whether the EBS volume is deleted on instance termination.

", "Vpc$IsDefault": "

Indicates whether the VPC is the default VPC.

", "VpcClassicLink$ClassicLinkEnabled": "

Indicates whether the VPC is enabled for ClassicLink.

", "VpnConnectionOptions$StaticRoutesOnly": "

Indicates whether the VPN connection uses static routes only. Static routes must be used for devices that don't support BGP.

", @@ -673,7 +726,7 @@ "refs": { "BundleInstanceResult$BundleTask": "

Information about the bundle task.

", "BundleTaskList$member": null, - "CancelBundleTaskResult$BundleTask": "

The bundle task.

" + "CancelBundleTaskResult$BundleTask": "

Information about the bundle task.

" } }, "BundleTaskError": { @@ -694,6 +747,12 @@ "BundleTask$State": "

The state of the task.

" } }, + "CancelBatchErrorCode": { + "base": null, + "refs": { + "CancelSpotFleetRequestsError$Code": "

The error code.

" + } + }, "CancelBundleTaskRequest": { "base": null, "refs": { @@ -734,6 +793,46 @@ "refs": { } }, + "CancelSpotFleetRequestsError": { + "base": "

Describes a Spot fleet error.

", + "refs": { + "CancelSpotFleetRequestsErrorItem$Error": "

The error.

" + } + }, + "CancelSpotFleetRequestsErrorItem": { + "base": "

Describes a Spot fleet request that was not successfully canceled.

", + "refs": { + "CancelSpotFleetRequestsErrorSet$member": null + } + }, + "CancelSpotFleetRequestsErrorSet": { + "base": null, + "refs": { + "CancelSpotFleetRequestsResponse$UnsuccessfulFleetRequests": "

Information about the Spot fleet requests that were not successfully canceled.

" + } + }, + "CancelSpotFleetRequestsRequest": { + "base": "

Contains the parameters for CancelSpotFleetRequests.

", + "refs": { + } + }, + "CancelSpotFleetRequestsResponse": { + "base": "

Contains the output of CancelSpotFleetRequests.

", + "refs": { + } + }, + "CancelSpotFleetRequestsSuccessItem": { + "base": "

Describes a Spot fleet request that was successfully canceled.

", + "refs": { + "CancelSpotFleetRequestsSuccessSet$member": null + } + }, + "CancelSpotFleetRequestsSuccessSet": { + "base": null, + "refs": { + "CancelSpotFleetRequestsResponse$SuccessfulFleetRequests": "

Information about the Spot fleet requests that were successfully canceled.
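A sketch of reading both result sets, assuming the $ec2 client from the DryRun sketch; the request ID is a placeholder, the Error and Code fields follow the shapes above, and the per-item SpotFleetRequestId field is an assumption:

$result = $ec2->cancelSpotFleetRequests([
    'SpotFleetRequestIds' => ['sfr-12345678-example'], // hypothetical
    'TerminateInstances'  => true,
]);
foreach ($result['SuccessfulFleetRequests'] ?: [] as $ok) {
    echo 'canceled: ', $ok['SpotFleetRequestId'], "\n";
}
foreach ($result['UnsuccessfulFleetRequests'] ?: [] as $failed) {
    echo 'not canceled: ', $failed['SpotFleetRequestId'],
         ' (', $failed['Error']['Code'], ")\n";
}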

" + } + }, "CancelSpotInstanceRequestState": { "base": null, "refs": { @@ -741,12 +840,12 @@ } }, "CancelSpotInstanceRequestsRequest": { - "base": null, + "base": "

Contains the parameters for CancelSpotInstanceRequests.

", "refs": { } }, "CancelSpotInstanceRequestsResult": { - "base": null, + "base": "

Contains the output of CancelSpotInstanceRequests.

", "refs": { } }, @@ -775,10 +874,10 @@ } }, "ClientData": { - "base": "

Client-specific data.

", + "base": "

Describes the client-specific data.

", "refs": { - "ImportImageRequest$ClientData": "

Client-specific data.

", - "ImportSnapshotRequest$ClientData": null + "ImportImageRequest$ClientData": "

The client-specific data.

", + "ImportSnapshotRequest$ClientData": "

The client-specific data.

" } }, "ConfirmProductInstanceRequest": { @@ -795,7 +894,7 @@ "base": null, "refs": { "ExportToS3Task$ContainerFormat": "

The container format used to combine disk images with metadata (such as OVF). If absent, only the disk image is exported.

", - "ExportToS3TaskSpecification$ContainerFormat": null + "ExportToS3TaskSpecification$ContainerFormat": "

The container format used to combine disk images with metadata (such as OVF). If absent, only the disk image is exported.

" } }, "ConversionIdStringList": { @@ -808,8 +907,8 @@ "base": "

Describes a conversion task.

", "refs": { "DescribeConversionTaskList$member": null, - "ImportInstanceResult$ConversionTask": null, - "ImportVolumeResult$ConversionTask": null + "ImportInstanceResult$ConversionTask": "

Information about the conversion task.

", + "ImportVolumeResult$ConversionTask": "

Information about the conversion task.

" } }, "ConversionTaskState": { @@ -938,6 +1037,11 @@ "refs": { } }, + "CreateRouteResult": { + "base": null, + "refs": { + } + }, "CreateRouteTableRequest": { "base": null, "refs": { @@ -964,12 +1068,12 @@ } }, "CreateSpotDatafeedSubscriptionRequest": { - "base": null, + "base": "

Contains the parameters for CreateSpotDatafeedSubscription.

", "refs": { } }, "CreateSpotDatafeedSubscriptionResult": { - "base": null, + "base": "

Contains the output of CreateSpotDatafeedSubscription.

", "refs": { } }, @@ -989,7 +1093,7 @@ } }, "CreateVolumePermission": { - "base": null, + "base": "

Describes the user or group to be added or removed from the permissions for a volume.

", "refs": { "CreateVolumePermissionList$member": null } @@ -1003,7 +1107,7 @@ } }, "CreateVolumePermissionModifications": { - "base": null, + "base": "

Describes modifications to the permissions for a volume.

", "refs": { "ModifySnapshotAttributeRequest$CreateVolumePermission": "

A JSON representation of the snapshot attribute modification.
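A sketch of such a modification, granting another account permission to create volumes from a snapshot; it assumes the $ec2 client from the DryRun sketch, and the snapshot ID and account ID are placeholders:

$ec2->modifySnapshotAttribute([
    'SnapshotId'             => 'snap-1a2b3c4d', // hypothetical
    'CreateVolumePermission' => [
        'Add' => [
            ['UserId' => '123456789012'],        // hypothetical account ID
        ],
    ],
]);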

" } @@ -1013,6 +1117,16 @@ "refs": { } }, + "CreateVpcEndpointRequest": { + "base": null, + "refs": { + } + }, + "CreateVpcEndpointResult": { + "base": null, + "refs": { + } + }, "CreateVpcPeeringConnectionRequest": { "base": null, "refs": { @@ -1100,11 +1214,15 @@ "BundleTask$UpdateTime": "

The time of the most recent update for the task.

", "ClientData$UploadStart": "

The time that the disk upload starts.

", "ClientData$UploadEnd": "

The time that the disk upload ends.

", - "DescribeSpotPriceHistoryRequest$StartTime": "

The date and time, up to the past 90 days, from which to start retrieving the price history data.

", - "DescribeSpotPriceHistoryRequest$EndTime": "

The date and time, up to the current date, from which to stop retrieving the price history data.

", + "DescribeSpotFleetRequestHistoryRequest$StartTime": "

The starting date and time for the events, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ).

", + "DescribeSpotFleetRequestHistoryResponse$StartTime": "

The starting date and time for the events, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ).

", + "DescribeSpotFleetRequestHistoryResponse$LastEvaluatedTime": "

The last date and time for the events, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). All records up to this time were retrieved.

If nextToken indicates that there are more results, this value is not present.
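A sketch of paging through the history until nextToken is exhausted, assuming the $ec2 client from the DryRun sketch and a placeholder Spot fleet request ID:

$next = null;
do {
    $params = [
        'SpotFleetRequestId' => 'sfr-12345678-example', // hypothetical
        'StartTime'          => '2015-05-18T00:00:00Z', // UTC, YYYY-MM-DDTHH:MM:SSZ
    ];
    if ($next !== null) {
        $params['NextToken'] = $next;
    }
    $page = $ec2->describeSpotFleetRequestHistory($params);
    foreach ($page['HistoryRecords'] ?: [] as $record) {
        echo $record['Timestamp'], "\n"; // see HistoryRecord$Timestamp above
    }
    $next = isset($page['NextToken']) ? $page['NextToken'] : null;
} while ($next !== null);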

", + "DescribeSpotPriceHistoryRequest$StartTime": "

The date and time, up to the past 90 days, from which to start retrieving the price history data, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ).

", + "DescribeSpotPriceHistoryRequest$EndTime": "

The date and time, up to the current date, from which to stop retrieving the price history data, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ).
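A sketch that passes both timestamps in the UTC format noted above, assuming the $ec2 client from the DryRun sketch; the instance type filter is an optional assumption:

$history = $ec2->describeSpotPriceHistory([
    'StartTime'     => '2015-05-01T00:00:00Z', // within the past 90 days
    'EndTime'       => '2015-05-02T00:00:00Z',
    'InstanceTypes' => ['m3.medium'],          // hypothetical filter
]);
foreach ($history['SpotPriceHistory'] ?: [] as $point) {
    echo $point['Timestamp'], ' ', $point['SpotPrice'], "\n";
}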

", "EbsInstanceBlockDevice$AttachTime": "

The time stamp when the attachment initiated.

", "GetConsoleOutputResult$Timestamp": "

The time the output was last updated.

", "GetPasswordDataResult$Timestamp": "

The time the data was last updated.

", + "HistoryRecord$Timestamp": "

The date and time of the event, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ).

", "Instance$LaunchTime": "

The time the instance was launched.

", "InstanceNetworkInterfaceAttachment$AttachTime": "

The time stamp when the attachment initiated.

", "InstanceStatusDetails$ImpairedSince": "

The time when a status check failed. For an instance that was launched and impaired, this is the time when the instance was launched.

", @@ -1123,16 +1241,19 @@ "ReservedInstancesModification$UpdateDate": "

The time when the modification request was last updated.

", "ReservedInstancesModification$EffectiveDate": "

The time for the modification to become effective.

", "Snapshot$StartTime": "

The time stamp when the snapshot was initiated.

", - "SpotInstanceRequest$ValidFrom": "

The start date of the request. If this is a one-time request, the request becomes active at this date and time and remains active until all instances launch, the request expires, or the request is canceled. If the request is persistent, the request becomes active at this date and time and remains active until it expires or is canceled.

", - "SpotInstanceRequest$ValidUntil": "

The end date of the request. If this is a one-time request, the request remains active until all instances launch, the request is canceled, or this date is reached. If the request is persistent, it remains active until it is canceled or this date is reached.

", - "SpotInstanceRequest$CreateTime": "

The time stamp when the Spot Instance request was created.

", - "SpotInstanceStatus$UpdateTime": "

The time of the most recent status update.

", - "SpotPrice$Timestamp": "

The date and time the request was created.

", + "SpotFleetRequestConfigData$ValidFrom": "

The start date and time of the request, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). The default is to start fulfilling the request immediately.

", + "SpotFleetRequestConfigData$ValidUntil": "

The end date and time of the request, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). At this point, no new Spot Instance requests are placed or enabled to fulfill the request.

", + "SpotInstanceRequest$ValidFrom": "

The start date of the request, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). If this is a one-time request, the request becomes active at this date and time and remains active until all instances launch, the request expires, or the request is canceled. If the request is persistent, the request becomes active at this date and time and remains active until it expires or is canceled.

", + "SpotInstanceRequest$ValidUntil": "

The end date of the request, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). If this is a one-time request, the request remains active until all instances launch, the request is canceled, or this date is reached. If the request is persistent, it remains active until it is canceled or this date is reached.

", + "SpotInstanceRequest$CreateTime": "

The date and time when the Spot Instance request was created, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ).

", + "SpotInstanceStatus$UpdateTime": "

The date and time of the most recent status update, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ).

", + "SpotPrice$Timestamp": "

The date and time the request was created, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ).

", "VgwTelemetry$LastStatusChange": "

The date and time of the last change in status.

", "Volume$CreateTime": "

The time stamp when volume creation was initiated.

", "VolumeAttachment$AttachTime": "

The time stamp when the attachment initiated.

", "VolumeStatusEvent$NotBefore": "

The earliest start time of the event.

", "VolumeStatusEvent$NotAfter": "

The latest end time of the event.

", + "VpcEndpoint$CreationTimestamp": "

The date and time the VPC endpoint was created.

", "VpcPeeringConnection$ExpirationTime": "

The time that an unaccepted VPC peering connection will expire.

" } }, @@ -1197,7 +1318,7 @@ } }, "DeleteSpotDatafeedSubscriptionRequest": { - "base": null, + "base": "

Contains the parameters for DeleteSpotDatafeedSubscription.

", "refs": { } }, @@ -1216,6 +1337,16 @@ "refs": { } }, + "DeleteVpcEndpointsRequest": { + "base": null, + "refs": { + } + }, + "DeleteVpcEndpointsResult": { + "base": null, + "refs": { + } + }, "DeleteVpcPeeringConnectionRequest": { "base": null, "refs": { @@ -1304,7 +1435,7 @@ "DescribeConversionTaskList": { "base": null, "refs": { - "DescribeConversionTasksResult$ConversionTasks": null + "DescribeConversionTasksResult$ConversionTasks": "

Information about the conversion tasks.

" } }, "DescribeConversionTasksRequest": { @@ -1427,6 +1558,16 @@ "refs": { } }, + "DescribeMovingAddressesRequest": { + "base": null, + "refs": { + } + }, + "DescribeMovingAddressesResult": { + "base": null, + "refs": { + } + }, "DescribeNetworkAclsRequest": { "base": null, "refs": { @@ -1467,6 +1608,16 @@ "refs": { } }, + "DescribePrefixListsRequest": { + "base": null, + "refs": { + } + }, + "DescribePrefixListsResult": { + "base": null, + "refs": { + } + }, "DescribeRegionsRequest": { "base": null, "refs": { @@ -1558,32 +1709,62 @@ } }, "DescribeSpotDatafeedSubscriptionRequest": { - "base": null, + "base": "

Contains the parameters for DescribeSpotDatafeedSubscription.

", "refs": { } }, "DescribeSpotDatafeedSubscriptionResult": { - "base": null, + "base": "

Contains the output of DescribeSpotDatafeedSubscription.

", + "refs": { + } + }, + "DescribeSpotFleetInstancesRequest": { + "base": "

Contains the parameters for DescribeSpotFleetInstances.

", + "refs": { + } + }, + "DescribeSpotFleetInstancesResponse": { + "base": "

Contains the output of DescribeSpotFleetInstances.

", + "refs": { + } + }, + "DescribeSpotFleetRequestHistoryRequest": { + "base": "

Contains the parameters for DescribeSpotFleetRequestHistory.

", + "refs": { + } + }, + "DescribeSpotFleetRequestHistoryResponse": { + "base": "

Contains the output of DescribeSpotFleetRequestHistory.

", + "refs": { + } + }, + "DescribeSpotFleetRequestsRequest": { + "base": "

Contains the parameters for DescribeSpotFleetRequests.

", + "refs": { + } + }, + "DescribeSpotFleetRequestsResponse": { + "base": "

Contains the output of DescribeSpotFleetRequests.

", "refs": { } }, "DescribeSpotInstanceRequestsRequest": { - "base": null, + "base": "

Contains the parameters for DescribeSpotInstanceRequests.

", "refs": { } }, "DescribeSpotInstanceRequestsResult": { - "base": null, + "base": "

Contains the output of DescribeSpotInstanceRequests.

", "refs": { } }, "DescribeSpotPriceHistoryRequest": { - "base": null, + "base": "

Contains the parameters for DescribeSpotPriceHistory.

", "refs": { } }, "DescribeSpotPriceHistoryResult": { - "base": null, + "base": "

Contains the output of DescribeSpotPriceHistory.

", "refs": { } }, @@ -1657,6 +1838,26 @@ "refs": { } }, + "DescribeVpcEndpointServicesRequest": { + "base": null, + "refs": { + } + }, + "DescribeVpcEndpointServicesResult": { + "base": null, + "refs": { + } + }, + "DescribeVpcEndpointsRequest": { + "base": null, + "refs": { + } + }, + "DescribeVpcEndpointsResult": { + "base": null, + "refs": { + } + }, "DescribeVpcPeeringConnectionsRequest": { "base": null, "refs": { @@ -1730,8 +1931,8 @@ "DeviceType": { "base": null, "refs": { - "Image$RootDeviceType": "

The type of root device used by the AMI. The AMI can use an Amazon EBS volume or an instance store volume.

", - "Instance$RootDeviceType": "

The root device type used by the AMI. The AMI can use an Amazon EBS volume or an instance store volume.

" + "Image$RootDeviceType": "

The type of root device used by the AMI. The AMI can use an EBS volume or an instance store volume.

", + "Instance$RootDeviceType": "

The root device type used by the AMI. The AMI can use an EBS volume or an instance store volume.

" } }, "DhcpConfiguration": { @@ -1797,17 +1998,17 @@ } }, "DiskImageDescription": { - "base": null, + "base": "

Describes a disk image.

", "refs": { "ImportInstanceVolumeDetailItem$Image": "

The image.

", "ImportVolumeTaskDetails$Image": "

The image.

" } }, "DiskImageDetail": { - "base": null, + "base": "

Describes a disk image.

", "refs": { - "DiskImage$Image": null, - "ImportVolumeRequest$Image": null + "DiskImage$Image": "

Information about the disk image.

", + "ImportVolumeRequest$Image": "

The disk image.

" } }, "DiskImageFormat": { @@ -1816,17 +2017,17 @@ "DiskImageDescription$Format": "

The disk image format.

", "DiskImageDetail$Format": "

The disk image format.

", "ExportToS3Task$DiskImageFormat": "

The format for the exported image.

", - "ExportToS3TaskSpecification$DiskImageFormat": null + "ExportToS3TaskSpecification$DiskImageFormat": "

The format for the exported image.

" } }, "DiskImageList": { "base": null, "refs": { - "ImportInstanceRequest$DiskImages": null + "ImportInstanceRequest$DiskImages": "

The disk image.

" } }, "DiskImageVolumeDescription": { - "base": null, + "base": "

Describes a disk image volume.

", "refs": { "ImportInstanceVolumeDetailItem$Volume": "

The volume.

", "ImportVolumeTaskDetails$Volume": "

The volume.

" @@ -1843,32 +2044,32 @@ "Double": { "base": null, "refs": { - "ClientData$UploadSize": "

The size of the uploaded disk image.

", + "ClientData$UploadSize": "

The size of the uploaded disk image, in GiB.

", "PriceSchedule$Price": "

The fixed price for the term.

", "PriceScheduleSpecification$Price": "

The fixed price for the term.

", "PricingDetail$Price": "

The price per instance.

", "RecurringCharge$Amount": "

The amount of the recurring charge.

", "ReservedInstanceLimitPrice$Amount": "

Used for Reserved Instance Marketplace offerings. Specifies the limit price on the total order (instanceCount * price).

", - "SnapshotDetail$DiskImageSize": "

The size of the disk in the snapshot.

", - "SnapshotTaskDetail$DiskImageSize": "

The size of the disk in the snapshot.

" + "SnapshotDetail$DiskImageSize": "

The size of the disk in the snapshot, in GiB.

", + "SnapshotTaskDetail$DiskImageSize": "

The size of the disk in the snapshot, in GiB.

" } }, "EbsBlockDevice": { - "base": "

Describes an Amazon EBS block device.

", + "base": "

Describes a block device for an EBS volume.

", "refs": { - "BlockDeviceMapping$Ebs": "

Parameters used to automatically set up Amazon EBS volumes when the instance is launched.

" + "BlockDeviceMapping$Ebs": "

Parameters used to automatically set up EBS volumes when the instance is launched.

" } }, "EbsInstanceBlockDevice": { - "base": "

Describes a parameter used to set up an Amazon EBS volume in a block device mapping.

", + "base": "

Describes a parameter used to set up an EBS volume in a block device mapping.

", "refs": { - "InstanceBlockDeviceMapping$Ebs": "

Parameters used to automatically set up Amazon EBS volumes when the instance is launched.

" + "InstanceBlockDeviceMapping$Ebs": "

Parameters used to automatically set up EBS volumes when the instance is launched.

" } }, "EbsInstanceBlockDeviceSpecification": { "base": null, "refs": { - "InstanceBlockDeviceMappingSpecification$Ebs": "

Parameters used to automatically set up Amazon EBS volumes when the instance is launched.

" + "InstanceBlockDeviceMappingSpecification$Ebs": "

Parameters used to automatically set up EBS volumes when the instance is launched.

" } }, "EnableVgwRoutePropagationRequest": { @@ -1894,7 +2095,20 @@ "EventCode": { "base": null, "refs": { - "InstanceStatusEvent$Code": "

The associated code of the event.

" + "InstanceStatusEvent$Code": "

The event code.

" + } + }, + "EventInformation": { + "base": "

Describes a Spot fleet event.

", + "refs": { + "HistoryRecord$EventInformation": "

Information about the event.

" + } + }, + "EventType": { + "base": null, + "refs": { + "DescribeSpotFleetRequestHistoryRequest$EventType": "

The type of events to describe. By default, all events are described.

", + "HistoryRecord$EventType": "

The event type.

" } }, "ExecutableByStringList": { @@ -1911,9 +2125,9 @@ } }, "ExportTask": { - "base": "

Describes an export task.

", + "base": "

Describes an instance export task.

", "refs": { - "CreateInstanceExportTaskResult$ExportTask": null, + "CreateInstanceExportTaskResult$ExportTask": "

Information about the instance export task.

", "ExportTaskList$member": null } }, @@ -1926,25 +2140,25 @@ "ExportTaskList": { "base": null, "refs": { - "DescribeExportTasksResult$ExportTasks": null + "DescribeExportTasksResult$ExportTasks": "

Information about the export tasks.

" } }, "ExportTaskState": { "base": null, "refs": { - "ExportTask$State": "

The state of the conversion task.

" + "ExportTask$State": "

The state of the export task.

" } }, "ExportToS3Task": { - "base": null, + "base": "

Describes the format and location for an instance export task.

", "refs": { - "ExportTask$ExportToS3Task": null + "ExportTask$ExportToS3Task": "

Information about the export task.

" } }, "ExportToS3TaskSpecification": { - "base": null, + "base": "

Describes an instance export task.

", "refs": { - "CreateInstanceExportTaskRequest$ExportToS3Task": null + "CreateInstanceExportTaskRequest$ExportToS3Task": "

The format and location for an instance export task.

" } }, "Filter": { @@ -1960,34 +2174,37 @@ "DescribeAvailabilityZonesRequest$Filters": "

One or more filters.

", "DescribeBundleTasksRequest$Filters": "

One or more filters.

", "DescribeClassicLinkInstancesRequest$Filters": "

One or more filters.

", - "DescribeConversionTasksRequest$Filters": null, + "DescribeConversionTasksRequest$Filters": "

One or more filters.

", "DescribeCustomerGatewaysRequest$Filters": "

One or more filters.

", "DescribeDhcpOptionsRequest$Filters": "

One or more filters.

", - "DescribeImagesRequest$Filters": "

One or more filters.

", - "DescribeImportImageTasksRequest$Filters": "

Filters to be applied on a describe request.

", - "DescribeImportSnapshotTasksRequest$Filters": "

The filters to be applied on a describe request.

", - "DescribeInstanceStatusRequest$Filters": "

One or more filters.

", - "DescribeInstancesRequest$Filters": "

One or more filters.

", + "DescribeImagesRequest$Filters": "

One or more filters.

", + "DescribeImportImageTasksRequest$Filters": "

One or more filters.

", + "DescribeImportSnapshotTasksRequest$Filters": "

One or more filters.

", + "DescribeInstanceStatusRequest$Filters": "

One or more filters.

", + "DescribeInstancesRequest$Filters": "

One or more filters.

", "DescribeInternetGatewaysRequest$Filters": "

One or more filters.

", "DescribeKeyPairsRequest$Filters": "

One or more filters.

", + "DescribeMovingAddressesRequest$Filters": "

One or more filters.

", "DescribeNetworkAclsRequest$Filters": "

One or more filters.

", "DescribeNetworkInterfacesRequest$Filters": "

One or more filters.

", "DescribePlacementGroupsRequest$Filters": "

One or more filters.

", + "DescribePrefixListsRequest$Filters": "

One or more filters.

", "DescribeRegionsRequest$Filters": "

One or more filters.

", "DescribeReservedInstancesListingsRequest$Filters": "

One or more filters.

", "DescribeReservedInstancesModificationsRequest$Filters": "

One or more filters.

", "DescribeReservedInstancesOfferingsRequest$Filters": "

One or more filters.

", - "DescribeReservedInstancesRequest$Filters": "

One or more filters.

", - "DescribeRouteTablesRequest$Filters": "

One or more filters.

", - "DescribeSecurityGroupsRequest$Filters": "

One or more filters.

", + "DescribeReservedInstancesRequest$Filters": "

One or more filters.

", + "DescribeRouteTablesRequest$Filters": "

One or more filters.

", + "DescribeSecurityGroupsRequest$Filters": "

One or more filters.

", "DescribeSnapshotsRequest$Filters": "

One or more filters.

", "DescribeSpotInstanceRequestsRequest$Filters": "

One or more filters.

", - "DescribeSpotPriceHistoryRequest$Filters": "

One or more filters.

", + "DescribeSpotPriceHistoryRequest$Filters": "

One or more filters.

", "DescribeSubnetsRequest$Filters": "

One or more filters.

", "DescribeTagsRequest$Filters": "

One or more filters.

", "DescribeVolumeStatusRequest$Filters": "

One or more filters.

", "DescribeVolumesRequest$Filters": "

One or more filters.

", "DescribeVpcClassicLinkRequest$Filters": "

One or more filters.

", + "DescribeVpcEndpointsRequest$Filters": "

One or more filters.

", "DescribeVpcPeeringConnectionsRequest$Filters": "

One or more filters.

", "DescribeVpcsRequest$Filters": "

One or more filters.

", "DescribeVpnConnectionsRequest$Filters": "

One or more filters.

", @@ -2066,6 +2283,18 @@ "ModifySnapshotAttributeRequest$GroupNames": "

The group to modify for the snapshot.

" } }, + "HistoryRecord": { + "base": "

Describes an event in the history of the Spot fleet request.

", + "refs": { + "HistoryRecords$member": null + } + }, + "HistoryRecords": { + "base": null, + "refs": { + "DescribeSpotFleetRequestHistoryResponse$HistoryRecords": "

Information about the events in the history of the Spot fleet request.

" + } + }, "HypervisorType": { "base": null, "refs": { @@ -2113,7 +2342,7 @@ } }, "ImageDiskContainer": { - "base": "

The disk container object for an ImportImage task.

", + "base": "

Describes the disk container object for an import image task.

", "refs": { "ImageDiskContainerList$member": null } @@ -2121,7 +2350,7 @@ "ImageDiskContainerList": { "base": null, "refs": { - "ImportImageRequest$DiskContainers": null + "ImportImageRequest$DiskContainers": "

Information about the disk containers.

" } }, "ImageIdStringList": { @@ -2159,7 +2388,7 @@ } }, "ImportImageTask": { - "base": null, + "base": "

Describes an import image task.

", "refs": { "ImportImageTaskList$member": null } @@ -2167,13 +2396,13 @@ "ImportImageTaskList": { "base": null, "refs": { - "DescribeImportImageTasksResult$ImportImageTasks": "

A list of zero or more ImportImage tasks that are currently active or completed/cancelled in the previous 7 days.

" + "DescribeImportImageTasksResult$ImportImageTasks": "

A list of zero or more import image tasks that are currently active or were completed or canceled in the previous 7 days.

" } }, "ImportInstanceLaunchSpecification": { - "base": null, + "base": "

Describes the launch specification for VM import.

", "refs": { - "ImportInstanceRequest$LaunchSpecification": null + "ImportInstanceRequest$LaunchSpecification": "

The launch specification.

" } }, "ImportInstanceRequest": { @@ -2187,7 +2416,7 @@ } }, "ImportInstanceTaskDetails": { - "base": null, + "base": "

Describes an import instance task.

", "refs": { "ConversionTask$ImportInstance": "

If the task is for importing an instance, this contains information about the import instance task.

" } @@ -2201,7 +2430,7 @@ "ImportInstanceVolumeDetailSet": { "base": null, "refs": { - "ImportInstanceTaskDetails$Volumes": null + "ImportInstanceTaskDetails$Volumes": "

One or more volumes.

" } }, "ImportKeyPairRequest": { @@ -2225,7 +2454,7 @@ } }, "ImportSnapshotTask": { - "base": null, + "base": "

Describes an import snapshot task.

", "refs": { "ImportSnapshotTaskList$member": null } @@ -2233,14 +2462,14 @@ "ImportSnapshotTaskList": { "base": null, "refs": { - "DescribeImportSnapshotTasksResult$ImportSnapshotTasks": "

A list of zero or more ImportSnapshot tasks that are currently active or completed/cancelled in the previous 7 days.

" + "DescribeImportSnapshotTasksResult$ImportSnapshotTasks": "

A list of zero or more import snapshot tasks that are currently active or were completed or canceled in the previous 7 days.

" } }, "ImportTaskIdList": { "base": null, "refs": { - "DescribeImportImageTasksRequest$ImportTaskIds": "

A list of ImportImage task IDs to describe.

", - "DescribeImportSnapshotTasksRequest$ImportTaskIds": "

A list of IDs of the ImportSnapshot tasks to describe.

" + "DescribeImportImageTasksRequest$ImportTaskIds": "

A list of import image task IDs.

", + "DescribeImportSnapshotTasksRequest$ImportTaskIds": "

A list of import snapshot task IDs.

" } }, "ImportVolumeRequest": { @@ -2316,9 +2545,9 @@ } }, "InstanceExportDetails": { - "base": "

Describes an instance export task.

", + "base": "

Describes an instance to export.

", "refs": { - "ExportTask$InstanceExportDetails": "

The instance being exported.

" + "ExportTask$InstanceExportDetails": "

Information about the instance to export.

" } }, "InstanceIdStringList": { @@ -2460,7 +2689,7 @@ } }, "InstanceStatusEvent": { - "base": "

Describes an instance event.

", + "base": "

Describes a scheduled event for an instance.

", "refs": { "InstanceStatusEventList$member": null } @@ -2468,7 +2697,7 @@ "InstanceStatusEventList": { "base": null, "refs": { - "InstanceStatus$Events": "

Extra information regarding events associated with the instance.

" + "InstanceStatus$Events": "

Any scheduled events associated with the instance.

" } }, "InstanceStatusList": { @@ -2488,7 +2717,7 @@ "base": null, "refs": { "DescribeReservedInstancesOfferingsRequest$InstanceType": "

The instance type on which the Reserved Instance can be used. For more information, see Instance Types in the Amazon Elastic Compute Cloud User Guide.

", - "ImportInstanceLaunchSpecification$InstanceType": "

The instance type. This is not supported for VMs imported into a VPC, which are assigned the default security group. After a VM is imported into a VPC, you can specify another security group using the AWS Management Console. For more information, see Instance Types in the Amazon Elastic Compute Cloud User Guide. For more information about the Linux instance types you can import, see Before You Get Started in the Amazon Elastic Compute Cloud User Guide.

", + "ImportInstanceLaunchSpecification$InstanceType": "

The instance type. For more information about the instance types that you can import, see Before You Get Started in the Amazon Elastic Compute Cloud User Guide.

", "Instance$InstanceType": "

The instance type.

", "InstanceTypeList$member": null, "LaunchSpecification$InstanceType": "

The instance type.

", @@ -2523,23 +2752,30 @@ "CreateVolumeRequest$Iops": "

Only valid for Provisioned IOPS (SSD) volumes. The number of I/O operations per second (IOPS) to provision for the volume, with a maximum ratio of 30 IOPS/GiB.

Constraint: Range is 100 to 20000 for Provisioned IOPS (SSD) volumes

", "DeleteNetworkAclEntryRequest$RuleNumber": "

The rule number of the entry to delete.

", "DescribeClassicLinkInstancesRequest$MaxResults": "

The maximum number of results to return for the request in a single page. The remaining results of the initial request can be seen by sending another request with the returned NextToken value. This value can be between 5 and 1000; if MaxResults is given a value larger than 1000, only 1000 results are returned. You cannot specify this parameter and the instance IDs parameter in the same request.

Constraint: If the value is greater than 1000, we return only 1000 items.

", - "DescribeImportImageTasksRequest$MaxResults": "

The maximum number of results in a page.

", - "DescribeImportSnapshotTasksRequest$MaxResults": "

The maximum number of results in a page.

", + "DescribeImportImageTasksRequest$MaxResults": "

The maximum number of results to return in a single request.

", + "DescribeImportSnapshotTasksRequest$MaxResults": "

The maximum number of results to return in a single request.

", "DescribeInstanceStatusRequest$MaxResults": "

The maximum number of results to return for the request in a single page. The remaining results of the initial request can be seen by sending another request with the returned NextToken value. This value can be between 5 and 1000; if MaxResults is given a value larger than 1000, only 1000 results are returned. You cannot specify this parameter and the instance IDs parameter in the same request.

", "DescribeInstancesRequest$MaxResults": "

The maximum number of results to return for the request in a single page. The remaining results of the initial request can be seen by sending another request with the returned NextToken value. This value can be between 5 and 1000; if MaxResults is given a value larger than 1000, only 1000 results are returned. You cannot specify this parameter and the instance IDs parameter in the same request.

", + "DescribeMovingAddressesRequest$MaxResults": "

The maximum number of results to return for the request in a single page. The remaining results of the initial request can be seen by sending another request with the returned NextToken value. This value can be between 5 and 1000; if MaxResults is given a value outside of this range, an error is returned.

Default: If no value is provided, the default is 1000.

", + "DescribePrefixListsRequest$MaxResults": "

The maximum number of items to return for this request. The request returns a token that you can specify in a subsequent call to get the next set of results.

Constraint: If the value specified is greater than 1000, we return only 1000 items.

", "DescribeReservedInstancesOfferingsRequest$MaxResults": "

The maximum number of results to return for the request in a single page. The remaining results of the initial request can be seen by sending another request with the returned NextToken value. The maximum is 100.

Default: 100

", "DescribeReservedInstancesOfferingsRequest$MaxInstanceCount": "

The maximum number of instances to filter when searching for offerings.

Default: 20

", "DescribeSnapshotsRequest$MaxResults": "

The maximum number of snapshot results returned by DescribeSnapshots in paginated output. When this parameter is used, DescribeSnapshots only returns MaxResults results in a single page along with a NextToken response element. The remaining results of the initial request can be seen by sending another DescribeSnapshots request with the returned NextToken value. This value can be between 5 and 1000; if MaxResults is given a value larger than 1000, only 1000 results are returned. If this parameter is not used, then DescribeSnapshots returns all results. You cannot specify this parameter and the snapshot IDs parameter in the same request.

", - "DescribeSpotPriceHistoryRequest$MaxResults": "

The maximum number of results to return for the request in a single page. The remaining results of the initial request can be seen by sending another request with the returned NextToken value. This value can be between 5 and 1000; if MaxResults is given a value larger than 1000, only 1000 results are returned.

", + "DescribeSpotFleetInstancesRequest$MaxResults": "

The maximum number of results to return in a single call. Specify a value between 1 and 1000. The default value is 1000. To retrieve the remaining results, make another call with the returned NextToken value.

", + "DescribeSpotFleetRequestHistoryRequest$MaxResults": "

The maximum number of results to return in a single call. Specify a value between 1 and 1000. The default value is 1000. To retrieve the remaining results, make another call with the returned NextToken value.

", + "DescribeSpotFleetRequestsRequest$MaxResults": "

The maximum number of results to return in a single call. Specify a value between 1 and 1000. The default value is 1000. To retrieve the remaining results, make another call with the returned NextToken value.

", + "DescribeSpotPriceHistoryRequest$MaxResults": "

The maximum number of results to return in a single call. Specify a value between 1 and 1000. The default value is 1000. To retrieve the remaining results, make another call with the returned NextToken value.

", "DescribeTagsRequest$MaxResults": "

The maximum number of results to return for the request in a single page. The remaining results of the initial request can be seen by sending another request with the returned NextToken value. This value can be between 5 and 1000; if MaxResults is given a value larger than 1000, only 1000 results are returned.

", "DescribeVolumeStatusRequest$MaxResults": "

The maximum number of volume results returned by DescribeVolumeStatus in paginated output. When this parameter is used, the request only returns MaxResults results in a single page along with a NextToken response element. The remaining results of the initial request can be seen by sending another request with the returned NextToken value. This value can be between 5 and 1000; if MaxResults is given a value larger than 1000, only 1000 results are returned. If this parameter is not used, then DescribeVolumeStatus returns all results. You cannot specify this parameter and the volume IDs parameter in the same request.

", "DescribeVolumesRequest$MaxResults": "

The maximum number of volume results returned by DescribeVolumes in paginated output. When this parameter is used, DescribeVolumes only returns MaxResults results in a single page along with a NextToken response element. The remaining results of the initial request can be seen by sending another DescribeVolumes request with the returned NextToken value. This value can be between 5 and 1000; if MaxResults is given a value larger than 1000, only 1000 results are returned. If this parameter is not used, then DescribeVolumes returns all results. You cannot specify this parameter and the volume IDs parameter in the same request.

", + "DescribeVpcEndpointServicesRequest$MaxResults": "

The maximum number of items to return for this request. The request returns a token that you can specify in a subsequent call to get the next set of results.

Constraint: If the value is greater than 1000, we return only 1000 items.

", + "DescribeVpcEndpointsRequest$MaxResults": "

The maximum number of items to return for this request. The request returns a token that you can specify in a subsequent call to get the next set of results.

Constraint: If the value is greater than 1000, we return only 1000 items.

", "EbsBlockDevice$VolumeSize": "

The size of the volume, in GiB.

Constraints: 1-1024 for standard volumes, 1-16384 for gp2 volumes, and 4-16384 for io1 volumes. If you specify a snapshot, the volume size must be equal to or larger than the snapshot size.

Default: If you're creating the volume from a snapshot and don't specify a volume size, the default is the snapshot size.

", "EbsBlockDevice$Iops": "

The number of I/O operations per second (IOPS) that the volume supports. For Provisioned IOPS (SSD) volumes, this represents the number of IOPS that are provisioned for the volume. For General Purpose (SSD) volumes, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting. For more information on General Purpose (SSD) baseline performance, I/O credits, and bursting, see Amazon EBS Volume Types in the Amazon Elastic Compute Cloud User Guide.

Constraint: Range is 100 to 20000 for Provisioned IOPS (SSD) volumes and 3 to 10000 for General Purpose (SSD) volumes.

Condition: This parameter is required for requests to create io1 volumes; it is not used in requests to create standard or gp2 volumes.

", "IcmpTypeCode$Type": "

The ICMP type. A value of -1 means all types.

", "IcmpTypeCode$Code": "

The ICMP code. A value of -1 means all codes for the specified ICMP type.

", "Instance$AmiLaunchIndex": "

The AMI launch index, which can be used to find this instance in the launch group.

", - "InstanceCount$InstanceCount": "

he number of listed Reserved Instances in the state specified by the state.

", + "InstanceCount$InstanceCount": "

The number of listed Reserved Instances in the state specified by the state.

", "InstanceNetworkInterfaceAttachment$DeviceIndex": "

The index of the device on the instance for the network interface attachment.

", "InstanceNetworkInterfaceSpecification$DeviceIndex": "

The index of the device on the instance for the network interface attachment. If you are specifying a network interface in a RunInstances request, you must provide the device index.

", "InstanceNetworkInterfaceSpecification$SecondaryPrivateIpAddressCount": "

The number of secondary private IP addresses. You can't specify this option and specify more than one private IP address using the private IP addresses option.

", @@ -2563,6 +2799,7 @@ "RunInstancesRequest$MinCount": "

The minimum number of instances to launch. If you specify a minimum that is more instances than Amazon EC2 can launch in the target Availability Zone, Amazon EC2 launches no instances.

Constraints: Between 1 and the maximum number you're allowed for the specified instance type. For more information about the default limits, and how to request an increase, see How many instances can I run in Amazon EC2 in the Amazon EC2 General FAQ.

", "RunInstancesRequest$MaxCount": "

The maximum number of instances to launch. If you specify more instances than Amazon EC2 can launch in the target Availability Zone, Amazon EC2 launches the largest possible number of instances above MinCount.

Constraints: Between 1 and the maximum number you're allowed for the specified instance type. For more information about the default limits, and how to request an increase, see How many instances can I run in Amazon EC2 in the Amazon EC2 General FAQ.

", "Snapshot$VolumeSize": "

The size of the volume, in GiB.

", + "SpotFleetRequestConfigData$TargetCapacity": "

The maximum number of Spot Instances to launch.

", "Subnet$AvailableIpAddressCount": "

The number of unused IP addresses in the subnet. Note that the IP addresses for any stopped instances are considered unavailable.

", "VgwTelemetry$AcceptedRouteCount": "

The number of accepted routes.

", "Volume$Size": "

The size of the volume, in GiB.

", @@ -2669,9 +2906,16 @@ "LaunchSpecification": { "base": "

Describes the launch specification for an instance.

", "refs": { + "LaunchSpecsList$member": null, "SpotInstanceRequest$LaunchSpecification": "

Additional information for launching instances.

" } }, + "LaunchSpecsList": { + "base": null, + "refs": { + "SpotFleetRequestConfigData$LaunchSpecifications": "

Information about the launch specifications for the instances.

" + } + }, "ListingState": { "base": null, "refs": { @@ -2689,9 +2933,9 @@ "refs": { "DescribeReservedInstancesOfferingsRequest$MinDuration": "

The minimum duration (in seconds) to filter when searching for offerings.

Default: 2592000 (1 month)

", "DescribeReservedInstancesOfferingsRequest$MaxDuration": "

The maximum duration (in seconds) to filter when searching for offerings.

Default: 94608000 (3 years)

", - "DiskImageDescription$Size": "

The size of the disk image.

", - "DiskImageDetail$Bytes": null, - "DiskImageVolumeDescription$Size": "

The size of the volume.

", + "DiskImageDescription$Size": "

The size of the disk image, in GiB.

", + "DiskImageDetail$Bytes": "

The size of the disk image, in GiB.

", + "DiskImageVolumeDescription$Size": "

The size of the volume, in GiB.

", "ImportInstanceVolumeDetailItem$BytesConverted": "

The number of bytes converted so far.

", "ImportVolumeTaskDetails$BytesConverted": "

The number of bytes converted so far.

", "PriceSchedule$Term": "

The number of months remaining in the reservation. For example, 2 is the second to the last month before the capacity reservation expires.

", @@ -2746,6 +2990,16 @@ "refs": { } }, + "ModifyVpcEndpointRequest": { + "base": null, + "refs": { + } + }, + "ModifyVpcEndpointResult": { + "base": null, + "refs": { + } + }, "MonitorInstancesRequest": { "base": null, "refs": { @@ -2769,6 +3023,34 @@ "Monitoring$State": "

Indicates whether monitoring is enabled for the instance.

" } }, + "MoveAddressToVpcRequest": { + "base": null, + "refs": { + } + }, + "MoveAddressToVpcResult": { + "base": null, + "refs": { + } + }, + "MoveStatus": { + "base": null, + "refs": { + "MovingAddressStatus$MoveStatus": "

The status of the Elastic IP address that's being moved to the EC2-VPC platform, or restored to the EC2-Classic platform.

" + } + }, + "MovingAddressStatus": { + "base": "

Describes the status of a moving Elastic IP address.

", + "refs": { + "MovingAddressStatusSet$member": null + } + }, + "MovingAddressStatusSet": { + "base": null, + "refs": { + "DescribeMovingAddressesResult$MovingAddressStatuses": "

The status for each Elastic IP address.

" + } + }, "NetworkAcl": { "base": "

Describes a network ACL.

", "refs": { @@ -2896,7 +3178,7 @@ "Placement": { "base": "

Describes the placement for the instance.

", "refs": { - "ImportInstanceLaunchSpecification$Placement": null, + "ImportInstanceLaunchSpecification$Placement": "

The placement information for the instance.

", "Instance$Placement": "

The location where the instance launched.

", "RunInstancesRequest$Placement": "

The placement for the instance.

" } @@ -2949,6 +3231,30 @@ "ReplaceNetworkAclEntryRequest$PortRange": "

TCP or UDP protocols: The range of ports the rule applies to. Required if specifying 6 (TCP) or 17 (UDP) for the protocol.

" } }, + "PrefixList": { + "base": "

Describes prefixes for AWS services.

", + "refs": { + "PrefixListSet$member": null + } + }, + "PrefixListId": { + "base": "

The ID of the prefix.

", + "refs": { + "PrefixListIdList$member": null + } + }, + "PrefixListIdList": { + "base": null, + "refs": { + "IpPermission$PrefixListIds": "

(Valid for AuthorizeSecurityGroupEgress, RevokeSecurityGroupEgress and DescribeSecurityGroups only) One or more prefix list IDs for an AWS service. In an AuthorizeSecurityGroupEgress request, this is the AWS service that you want to access through a VPC endpoint from instances associated with the security group.

" + } + }, + "PrefixListSet": { + "base": null, + "refs": { + "DescribePrefixListsResult$PrefixLists": "

All available prefix lists.

" + } + }, "PriceSchedule": { "base": "

Describes the price for a Reserved Instance.

", "refs": { @@ -3198,13 +3504,23 @@ "ReportInstanceStatusRequest$Status": "

The status of all instances listed.

" } }, + "RequestSpotFleetRequest": { + "base": "

Contains the parameters for RequestSpotFleet.

", + "refs": { + } + }, + "RequestSpotFleetResponse": { + "base": "

Contains the output of RequestSpotFleet.

", + "refs": { + } + }, "RequestSpotInstancesRequest": { - "base": null, + "base": "

Contains the parameters for RequestSpotInstances.

", "refs": { } }, "RequestSpotInstancesResult": { - "base": null, + "base": "

Contains the output of RequestSpotInstances.

", "refs": { } }, @@ -3383,6 +3699,16 @@ "DescribeSnapshotsRequest$RestorableByUserIds": "

One or more AWS account IDs that can create volumes from the snapshot.

" } }, + "RestoreAddressToClassicRequest": { + "base": null, + "refs": { + } + }, + "RestoreAddressToClassicResult": { + "base": null, + "refs": { + } + }, "RevokeSecurityGroupEgressRequest": { "base": null, "refs": { @@ -3520,7 +3846,7 @@ } }, "SnapshotDetail": { - "base": "

The details of the snapshot created from the imported disk.

", + "base": "

Describes the snapshot created from the imported disk.

", "refs": { "SnapshotDetailList$member": null } @@ -3528,14 +3854,14 @@ "SnapshotDetailList": { "base": null, "refs": { - "ImportImageResult$SnapshotDetails": null, - "ImportImageTask$SnapshotDetails": null + "ImportImageResult$SnapshotDetails": "

Information about the snapshots.

", + "ImportImageTask$SnapshotDetails": "

Information about the snapshots.

" } }, "SnapshotDiskContainer": { - "base": "

The disk container object for the ImportSnapshot request.

", + "base": "

The disk container object for the import snapshot request.

", "refs": { - "ImportSnapshotRequest$DiskContainer": null + "ImportSnapshotRequest$DiskContainer": "

Information about the disk container.

" } }, "SnapshotIdStringList": { @@ -3547,7 +3873,7 @@ "SnapshotList": { "base": null, "refs": { - "DescribeSnapshotsResult$Snapshots": null + "DescribeSnapshotsResult$Snapshots": "

Information about the snapshots.

" } }, "SnapshotState": { @@ -3559,8 +3885,8 @@ "SnapshotTaskDetail": { "base": "

Details about the import snapshot task.

", "refs": { - "ImportSnapshotResult$SnapshotTaskDetail": null, - "ImportSnapshotTask$SnapshotTaskDetail": null + "ImportSnapshotResult$SnapshotTaskDetail": "

Information about the import snapshot task.

", + "ImportSnapshotTask$SnapshotTaskDetail": "

Describes an import snapshot task.

" } }, "SpotDatafeedSubscription": { @@ -3570,6 +3896,25 @@ "DescribeSpotDatafeedSubscriptionResult$SpotDatafeedSubscription": "

The Spot Instance data feed subscription.

" } }, + "SpotFleetRequestConfig": { + "base": "

Describes a Spot fleet request.

", + "refs": { + "SpotFleetRequestConfigSet$member": null + } + }, + "SpotFleetRequestConfigData": { + "base": "

Describes the configuration of a Spot fleet request.

", + "refs": { + "RequestSpotFleetRequest$SpotFleetRequestConfig": "

The configuration for the Spot fleet request.

", + "SpotFleetRequestConfig$SpotFleetRequestConfig": "

Information about the configuration of the Spot fleet request.

" + } + }, + "SpotFleetRequestConfigSet": { + "base": null, + "refs": { + "DescribeSpotFleetRequestsResponse$SpotFleetRequestConfigs": "

Information about the configuration of your Spot fleet.

" + } + }, "SpotInstanceRequest": { "base": "

Describe a Spot Instance request.

", "refs": { @@ -3645,6 +3990,12 @@ "refs": { } }, + "State": { + "base": null, + "refs": { + "VpcEndpoint$State": "

The state of the VPC endpoint.

" + } + }, "StateReason": { "base": "

Describes a state change.

", "refs": { @@ -3652,6 +4003,13 @@ "Instance$StateReason": "

The reason for the most recent state transition.

" } }, + "Status": { + "base": null, + "refs": { + "MoveAddressToVpcResult$Status": "

The status of the move of the IP address.

", + "RestoreAddressToClassicResult$Status": "

The move status for the IP address.

" + } + }, "StatusName": { "base": null, "refs": { @@ -3687,7 +4045,10 @@ "AcceptVpcPeeringConnectionRequest$VpcPeeringConnectionId": "

The ID of the VPC peering connection.

", "AccountAttribute$AttributeName": "

The name of the account attribute.

", "AccountAttributeValue$AttributeValue": "

The value of the attribute.

", - "Address$InstanceId": "

The ID of the instance the address is associated with (if any).

", + "ActiveInstance$InstanceType": "

The instance type.

", + "ActiveInstance$InstanceId": "

The ID of the instance.

", + "ActiveInstance$SpotInstanceRequestId": "

The ID of the Spot Instance request.

", + "Address$InstanceId": "

The ID of the instance that the address is associated with (if any).

", "Address$PublicIp": "

The Elastic IP address.

", "Address$AllocationId": "

The ID representing the allocation of the address for use with EC2-VPC.

", "Address$AssociationId": "

The ID representing the association of the address with an instance in a VPC.

", @@ -3716,7 +4077,7 @@ "AttachNetworkInterfaceRequest$NetworkInterfaceId": "

The ID of the network interface.

", "AttachNetworkInterfaceRequest$InstanceId": "

The ID of the instance.

", "AttachNetworkInterfaceResult$AttachmentId": "

The ID of the network interface attachment.

", - "AttachVolumeRequest$VolumeId": "

The ID of the Amazon EBS volume. The volume and instance must be within the same Availability Zone.

", + "AttachVolumeRequest$VolumeId": "

The ID of the EBS volume. The volume and instance must be within the same Availability Zone.

", "AttachVolumeRequest$InstanceId": "

The ID of the instance.

", "AttachVolumeRequest$Device": "

The device name to expose to the instance (for example, /dev/sdh or xvdh).

", "AttachVpnGatewayRequest$VpnGatewayId": "

The ID of the virtual private gateway.

", @@ -3748,18 +4109,21 @@ "BundleTaskError$Message": "

The error message.

", "CancelBundleTaskRequest$BundleId": "

The ID of the bundle task.

", "CancelConversionRequest$ConversionTaskId": "

The ID of the conversion task.

", - "CancelConversionRequest$ReasonMessage": null, + "CancelConversionRequest$ReasonMessage": "

The reason for canceling the conversion task.

", "CancelExportTaskRequest$ExportTaskId": "

The ID of the export task. This is the ID returned by CreateInstanceExportTask.

", - "CancelImportTaskRequest$ImportTaskId": "

The ID of the ImportImage or ImportSnapshot task to be cancelled.

", + "CancelImportTaskRequest$ImportTaskId": "

The ID of the import image or import snapshot task to be canceled.

", "CancelImportTaskRequest$CancelReason": "

The reason for canceling the task.

", - "CancelImportTaskResult$ImportTaskId": "

The task ID of the ImportImage or ImportSnapshot task being canceled.

", - "CancelImportTaskResult$State": "

The current state of the ImportImage or ImportSnapshot task being canceled.

", - "CancelImportTaskResult$PreviousState": "

The current state of the ImportImage or ImportSnapshot task being canceled.

", + "CancelImportTaskResult$ImportTaskId": "

The ID of the task being canceled.

", + "CancelImportTaskResult$State": "

The current state of the task being canceled.

", + "CancelImportTaskResult$PreviousState": "

The current state of the task being canceled.

", "CancelReservedInstancesListingRequest$ReservedInstancesListingId": "

The ID of the Reserved Instance listing.

", + "CancelSpotFleetRequestsError$Message": "

The description for the error code.

", + "CancelSpotFleetRequestsErrorItem$SpotFleetRequestId": "

The ID of the Spot fleet request.

", + "CancelSpotFleetRequestsSuccessItem$SpotFleetRequestId": "

The ID of the Spot fleet request.

", "CancelledSpotInstanceRequest$SpotInstanceRequestId": "

The ID of the Spot Instance request.

", "ClassicLinkInstance$InstanceId": "

The ID of the instance.

", "ClassicLinkInstance$VpcId": "

The ID of the VPC.

", - "ClientData$Comment": "

User-defined comment about the upload.

", + "ClientData$Comment": "

A user-defined comment about the disk upload.

", "ConfirmProductInstanceRequest$ProductCode": "

The product code. This must be a product code that you own.

", "ConfirmProductInstanceRequest$InstanceId": "

The ID of the instance.

", "ConfirmProductInstanceResult$OwnerId": "

The AWS account ID of the instance owner. This is only present if the product code is attached to the instance.

", @@ -3774,10 +4138,10 @@ "CopyImageRequest$ClientToken": "

Unique, case-sensitive identifier you provide to ensure idempotency of the request. For more information, see How to Ensure Idempotency in the Amazon Elastic Compute Cloud User Guide.

", "CopyImageResult$ImageId": "

The ID of the new AMI.

", "CopySnapshotRequest$SourceRegion": "

The ID of the region that contains the snapshot to be copied.

", - "CopySnapshotRequest$SourceSnapshotId": "

The ID of the Amazon EBS snapshot to copy.

", - "CopySnapshotRequest$Description": "

A description for the new Amazon EBS snapshot.

", + "CopySnapshotRequest$SourceSnapshotId": "

The ID of the EBS snapshot to copy.

", + "CopySnapshotRequest$Description": "

A description for the EBS snapshot.

", "CopySnapshotRequest$DestinationRegion": "

The destination region to use in the PresignedUrl parameter of a snapshot copy operation. This parameter is only valid for specifying the destination region in a PresignedUrl parameter, where it is required.

CopySnapshot sends the snapshot copy to the regional endpoint that you send the HTTP request to, such as ec2.us-east-1.amazonaws.com (in the AWS CLI, this is specified with the --region parameter or the default region in your AWS configuration file).

", - "CopySnapshotRequest$PresignedUrl": "

The pre-signed URL that facilitates copying an encrypted snapshot. This parameter is only required when copying an encrypted snapshot with the Amazon EC2 Query API; it is available as an optional parameter in all other cases. The PresignedUrl should use the snapshot source endpoint, the CopySnapshot action, and include the SourceRegion, SourceSnapshotId, and DestinationRegion parameters. The PresignedUrl must be signed using AWS Signature Version 4. Because Amazon EBS snapshots are stored in Amazon S3, the signing algorithm for this parameter uses the same logic that is described in Authenticating Requests by Using Query Parameters (AWS Signature Version 4) in the Amazon Simple Storage Service API Reference. An invalid or improperly signed PresignedUrl will cause the copy operation to fail asynchronously, and the snapshot will move to an error state.

", + "CopySnapshotRequest$PresignedUrl": "

The pre-signed URL that facilitates copying an encrypted snapshot. This parameter is only required when copying an encrypted snapshot with the Amazon EC2 Query API; it is available as an optional parameter in all other cases. The PresignedUrl should use the snapshot source endpoint, the CopySnapshot action, and include the SourceRegion, SourceSnapshotId, and DestinationRegion parameters. The PresignedUrl must be signed using AWS Signature Version 4. Because EBS snapshots are stored in Amazon S3, the signing algorithm for this parameter uses the same logic that is described in Authenticating Requests by Using Query Parameters (AWS Signature Version 4) in the Amazon Simple Storage Service API Reference. An invalid or improperly signed PresignedUrl will cause the copy operation to fail asynchronously, and the snapshot will move to an error state.

", "CopySnapshotResult$SnapshotId": "

The ID of the new snapshot.

", "CreateCustomerGatewayRequest$PublicIp": "

The Internet-routable IP address for the customer gateway's outside interface. The address must be static.

", "CreateImageRequest$InstanceId": "

The ID of the instance.

", @@ -3803,12 +4167,14 @@ "CreateRouteRequest$InstanceId": "

The ID of a NAT instance in your VPC. The operation fails if you specify an instance ID unless exactly one network interface is attached.

", "CreateRouteRequest$NetworkInterfaceId": "

The ID of a network interface.

", "CreateRouteRequest$VpcPeeringConnectionId": "

The ID of a VPC peering connection.

", + "CreateRouteRequest$ClientToken": "

Unique, case-sensitive identifier you provide to ensure the idempotency of the request. For more information, see How to Ensure Idempotency.

", + "CreateRouteResult$ClientToken": "

Unique, case-sensitive identifier you provide to ensure the idempotency of the request.

", "CreateRouteTableRequest$VpcId": "

The ID of the VPC.

", "CreateSecurityGroupRequest$GroupName": "

The name of the security group.

Constraints: Up to 255 characters in length

Constraints for EC2-Classic: ASCII characters

Constraints for EC2-VPC: a-z, A-Z, 0-9, spaces, and ._-:/()#,@[]+=&;{}!$*

", "CreateSecurityGroupRequest$Description": "

A description for the security group. This is informational only.

Constraints: Up to 255 characters in length

Constraints for EC2-Classic: ASCII characters

Constraints for EC2-VPC: a-z, A-Z, 0-9, spaces, and ._-:/()#,@[]+=&;{}!$*

", "CreateSecurityGroupRequest$VpcId": "

[EC2-VPC] The ID of the VPC. Required for EC2-VPC.

", "CreateSecurityGroupResult$GroupId": "

The ID of the security group.

", - "CreateSnapshotRequest$VolumeId": "

The ID of the Amazon EBS volume.

", + "CreateSnapshotRequest$VolumeId": "

The ID of the EBS volume.

", "CreateSnapshotRequest$Description": "

A description for the snapshot.

", "CreateSpotDatafeedSubscriptionRequest$Bucket": "

The Amazon S3 bucket in which to store the Spot Instance data feed.

", "CreateSpotDatafeedSubscriptionRequest$Prefix": "

A prefix for the data feed file names.

", @@ -3819,6 +4185,11 @@ "CreateVolumeRequest$SnapshotId": "

The snapshot from which to create the volume.

", "CreateVolumeRequest$AvailabilityZone": "

The Availability Zone in which to create the volume. Use DescribeAvailabilityZones to list the Availability Zones that are currently available to you.

", "CreateVolumeRequest$KmsKeyId": "

The full ARN of the AWS Key Management Service (KMS) master key to use when creating the encrypted volume. This parameter is only required if you want to use a non-default master key; if this parameter is not specified, the default master key is used. The ARN contains the arn:aws:kms namespace, followed by the region of the master key, the AWS account ID of the master key owner, the key namespace, and then the master key ID. For example, arn:aws:kms:us-east-1:012345678910:key/abcd1234-a123-456a-a12b-a123b4cd56ef.

", + "CreateVpcEndpointRequest$VpcId": "

The ID of the VPC in which the endpoint will be used.

", + "CreateVpcEndpointRequest$ServiceName": "The AWS service name, in the form com.amazonaws.<region>.<service>. To get a list of available services, use the DescribeVpcEndpointServices request.", + "CreateVpcEndpointRequest$PolicyDocument": "

A policy to attach to the endpoint that controls access to the service. The policy must be in valid JSON format. If this parameter is not specified, we attach a default policy that allows full access to the service.

", + "CreateVpcEndpointRequest$ClientToken": "

Unique, case-sensitive identifier you provide to ensure the idempotency of the request. For more information, see How to Ensure Idempotency.

", + "CreateVpcEndpointResult$ClientToken": "

Unique, case-sensitive identifier you provide to ensure the idempotency of the request.

", "CreateVpcPeeringConnectionRequest$VpcId": "

The ID of the requester VPC.

", "CreateVpcPeeringConnectionRequest$PeerVpcId": "

The ID of the VPC with which you are creating the VPC peering connection.

", "CreateVpcPeeringConnectionRequest$PeerOwnerId": "

The AWS account ID of the owner of the peer VPC.

Default: Your AWS account ID

", @@ -3848,7 +4219,7 @@ "DeleteRouteTableRequest$RouteTableId": "

The ID of the route table.

", "DeleteSecurityGroupRequest$GroupName": "

[EC2-Classic, default VPC] The name of the security group. You can specify either the security group name or the security group ID.

", "DeleteSecurityGroupRequest$GroupId": "

The ID of the security group. Required for a nondefault VPC.

", - "DeleteSnapshotRequest$SnapshotId": "

The ID of the Amazon EBS snapshot.

", + "DeleteSnapshotRequest$SnapshotId": "

The ID of the EBS snapshot.

", "DeleteSubnetRequest$SubnetId": "

The ID of the subnet.

", "DeleteVolumeRequest$VolumeId": "

The ID of the volume.

", "DeleteVpcPeeringConnectionRequest$VpcPeeringConnectionId": "

The ID of the VPC peering connection.

", @@ -3861,17 +4232,21 @@ "DescribeClassicLinkInstancesRequest$NextToken": "

The token to retrieve the next page of results.

", "DescribeClassicLinkInstancesResult$NextToken": "

The token to use to retrieve the next page of results. This value is null when there are no more results to return.

", "DescribeImageAttributeRequest$ImageId": "

The ID of the AMI.

", - "DescribeImportImageTasksRequest$NextToken": "

The token to get the next page of paginated describe requests.

", - "DescribeImportImageTasksResult$NextToken": "

The token to get the next page of paginated describe requests.

", - "DescribeImportSnapshotTasksRequest$NextToken": "

The token to get to the next page of paginated describe requests.

", - "DescribeImportSnapshotTasksResult$NextToken": "

The token to get to the next page of paginated describe requests.

", + "DescribeImportImageTasksRequest$NextToken": "

A token that indicates the next page of results.

", + "DescribeImportImageTasksResult$NextToken": "

The token to use to get the next page of results. This value is null when there are no more results to return.

", + "DescribeImportSnapshotTasksRequest$NextToken": "

A token that indicates the next page of results.

", + "DescribeImportSnapshotTasksResult$NextToken": "

The token to use to get the next page of results. This value is null when there are no more results to return.

", "DescribeInstanceAttributeRequest$InstanceId": "

The ID of the instance.

", "DescribeInstanceStatusRequest$NextToken": "

The token to retrieve the next page of results.

", "DescribeInstanceStatusResult$NextToken": "

The token to use to retrieve the next page of results. This value is null when there are no more results to return.

", "DescribeInstancesRequest$NextToken": "

The token to request the next page of results.

", "DescribeInstancesResult$NextToken": "

The token to use to retrieve the next page of results. This value is null when there are no more results to return.

", + "DescribeMovingAddressesRequest$NextToken": "

The token to use to retrieve the next page of results.

", + "DescribeMovingAddressesResult$NextToken": "

The token to use to retrieve the next page of results. This value is null when there are no more results to return.

", "DescribeNetworkInterfaceAttributeRequest$NetworkInterfaceId": "

The ID of the network interface.

", "DescribeNetworkInterfaceAttributeResult$NetworkInterfaceId": "

The ID of the network interface.

", + "DescribePrefixListsRequest$NextToken": "

The token for the next set of items to return. (You received this token from a prior call.)

", + "DescribePrefixListsResult$NextToken": "

The token to use when requesting the next set of items. If there are no additional items to return, the string is empty.

", "DescribeReservedInstancesListingsRequest$ReservedInstancesId": "

One or more Reserved Instance IDs.

", "DescribeReservedInstancesListingsRequest$ReservedInstancesListingId": "

One or more Reserved Instance Listing IDs.

", "DescribeReservedInstancesModificationsRequest$NextToken": "

The token to retrieve the next page of results.

", @@ -3879,13 +4254,23 @@ "DescribeReservedInstancesOfferingsRequest$AvailabilityZone": "

The Availability Zone in which the Reserved Instance can be used.

", "DescribeReservedInstancesOfferingsRequest$NextToken": "

The token to retrieve the next page of results.

", "DescribeReservedInstancesOfferingsResult$NextToken": "

The token to use to retrieve the next page of results. This value is null when there are no more results to return.

", - "DescribeSnapshotAttributeRequest$SnapshotId": "

The ID of the Amazon EBS snapshot.

", - "DescribeSnapshotAttributeResult$SnapshotId": "

The ID of the Amazon EBS snapshot.

", + "DescribeSnapshotAttributeRequest$SnapshotId": "

The ID of the EBS snapshot.

", + "DescribeSnapshotAttributeResult$SnapshotId": "

The ID of the EBS snapshot.

", "DescribeSnapshotsRequest$NextToken": "

The NextToken value returned from a previous paginated DescribeSnapshots request where MaxResults was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the NextToken value. This value is null when there are no more results to return.

", "DescribeSnapshotsResult$NextToken": "

The NextToken value to include in a future DescribeSnapshots request. When the results of a DescribeSnapshots request exceed MaxResults, this value can be used to retrieve the next page of results. This value is null when there are no more results to return.

", + "DescribeSpotFleetInstancesRequest$SpotFleetRequestId": "

The ID of the Spot fleet request.

", + "DescribeSpotFleetInstancesRequest$NextToken": "

The token for the next set of results.

", + "DescribeSpotFleetInstancesResponse$SpotFleetRequestId": "

The ID of the Spot fleet request.

", + "DescribeSpotFleetInstancesResponse$NextToken": "

The token required to retrieve the next set of results. This value is null when there are no more results to return.

", + "DescribeSpotFleetRequestHistoryRequest$SpotFleetRequestId": "

The ID of the Spot fleet request.

", + "DescribeSpotFleetRequestHistoryRequest$NextToken": "

The token for the next set of results.

", + "DescribeSpotFleetRequestHistoryResponse$SpotFleetRequestId": "

The ID of the Spot fleet request.

", + "DescribeSpotFleetRequestHistoryResponse$NextToken": "

The token required to retrieve the next set of results. This value is null when there are no more results to return.

", + "DescribeSpotFleetRequestsRequest$NextToken": "

The token for the next set of results.

", + "DescribeSpotFleetRequestsResponse$NextToken": "

The token required to retrieve the next set of results. This value is null when there are no more results to return.

", "DescribeSpotPriceHistoryRequest$AvailabilityZone": "

Filters the results by the specified Availability Zone.

", - "DescribeSpotPriceHistoryRequest$NextToken": "

The token to retrieve the next page of results.

", - "DescribeSpotPriceHistoryResult$NextToken": "

The token to use to retrieve the next page of results. This value is null when there are no more results to return.

", + "DescribeSpotPriceHistoryRequest$NextToken": "

The token for the next set of results.

", + "DescribeSpotPriceHistoryResult$NextToken": "

The token required to retrieve the next set of results. This value is null when there are no more results to return.

", "DescribeTagsRequest$NextToken": "

The token to retrieve the next page of results.

", "DescribeTagsResult$NextToken": "

The token to use to retrieve the next page of results. This value is null when there are no more results to return.

", "DescribeVolumeAttributeRequest$VolumeId": "

The ID of the volume.

", @@ -3896,6 +4281,10 @@ "DescribeVolumesResult$NextToken": "

The NextToken value to include in a future DescribeVolumes request. When the results of a DescribeVolumes request exceed MaxResults, this value can be used to retrieve the next page of results. This value is null when there are no more results to return.

", "DescribeVpcAttributeRequest$VpcId": "

The ID of the VPC.

", "DescribeVpcAttributeResult$VpcId": "

The ID of the VPC.

", + "DescribeVpcEndpointServicesRequest$NextToken": "

The token for the next set of items to return. (You received this token from a prior call.)

", + "DescribeVpcEndpointServicesResult$NextToken": "

The token to use when requesting the next set of items. If there are no additional items to return, the string is empty.

", + "DescribeVpcEndpointsRequest$NextToken": "

The token for the next set of items to return. (You received this token from a prior call.)

", + "DescribeVpcEndpointsResult$NextToken": "

The token to use when requesting the next set of items. If there are no additional items to return, the string is empty.

", "DetachClassicLinkVpcRequest$InstanceId": "

The ID of the instance to unlink from the VPC.

", "DetachClassicLinkVpcRequest$VpcId": "

The ID of the VPC to which the instance is linked.

", "DetachInternetGatewayRequest$InternetGatewayId": "

The ID of the Internet gateway.

", @@ -3915,27 +4304,30 @@ "DisassociateAddressRequest$PublicIp": "

[EC2-Classic] The Elastic IP address. Required for EC2-Classic.

", "DisassociateAddressRequest$AssociationId": "

[EC2-VPC] The association ID. Required for EC2-VPC.

", "DisassociateRouteTableRequest$AssociationId": "

The association ID representing the current association between the route table and subnet.

", - "DiskImage$Description": null, + "DiskImage$Description": "

A description of the disk image.

", "DiskImageDescription$ImportManifestUrl": "

A presigned URL for the import manifest stored in Amazon S3. For information about creating a presigned URL for an Amazon S3 object, read the \"Query String Request Authentication Alternative\" section of the Authenticating REST Requests topic in the Amazon Simple Storage Service Developer Guide.

", "DiskImageDescription$Checksum": "

The checksum computed for the disk image.

", "DiskImageDetail$ImportManifestUrl": "

A presigned URL for the import manifest stored in Amazon S3 and presented here as an Amazon S3 presigned URL. For information about creating a presigned URL for an Amazon S3 object, read the \"Query String Request Authentication Alternative\" section of the Authenticating REST Requests topic in the Amazon Simple Storage Service Developer Guide.

", "DiskImageVolumeDescription$Id": "

The volume identifier.

", "EbsBlockDevice$SnapshotId": "

The ID of the snapshot.

", - "EbsInstanceBlockDevice$VolumeId": "

The ID of the Amazon EBS volume.

", - "EbsInstanceBlockDeviceSpecification$VolumeId": "

The ID of the Amazon EBS volume.

", + "EbsInstanceBlockDevice$VolumeId": "

The ID of the EBS volume.

", + "EbsInstanceBlockDeviceSpecification$VolumeId": "

The ID of the EBS volume.

", "EnableVgwRoutePropagationRequest$RouteTableId": "

The ID of the route table.

", "EnableVgwRoutePropagationRequest$GatewayId": "

The ID of the virtual private gateway.

", "EnableVolumeIORequest$VolumeId": "

The ID of the volume.

", "EnableVpcClassicLinkRequest$VpcId": "

The ID of the VPC.

", + "EventInformation$InstanceId": "

The ID of the instance. This information is available only for instanceChange events.

", + "EventInformation$EventSubType": "

The event.

Event subtypes fall into three groups: error events, fleetRequestChange events, and instanceChange events.

", + "EventInformation$EventDescription": "

The description of the event.

", "ExecutableByStringList$member": null, "ExportTask$ExportTaskId": "

The ID of the export task.

", "ExportTask$Description": "

A description of the resource being exported.

", "ExportTask$StatusMessage": "

The status message related to the export task.

", "ExportTaskIdStringList$member": null, - "ExportToS3Task$S3Bucket": "

The Amazon S3 bucket for the destination image. The destination bucket must exist and grant WRITE and READ_ACP permissions to the AWS account vm-import-export@amazon.com.

", - "ExportToS3Task$S3Key": null, - "ExportToS3TaskSpecification$S3Bucket": null, - "ExportToS3TaskSpecification$S3Prefix": "

The image is written to a single object in the Amazon S3 bucket at the S3 key s3prefix + exportTaskId + '.' + diskImageFormat.

", + "ExportToS3Task$S3Bucket": "

The S3 bucket for the destination image. The destination bucket must exist and grant WRITE and READ_ACP permissions to the AWS account vm-import-export@amazon.com.

", + "ExportToS3Task$S3Key": "

The encryption key for your S3 bucket.

", + "ExportToS3TaskSpecification$S3Bucket": "

The S3 bucket for the destination image. The destination bucket must exist and grant WRITE and READ_ACP permissions to the AWS account vm-import-export@amazon.com.

", + "ExportToS3TaskSpecification$S3Prefix": "

The image is written to a single object in the S3 bucket at the S3 key s3prefix + exportTaskId + '.' + diskImageFormat.

", "Filter$Name": "

The name of the filter. Filter names are case-sensitive.

", "GetConsoleOutputRequest$InstanceId": "

The ID of the instance.

", "GetConsoleOutputResult$InstanceId": "

The ID of the instance.

", @@ -3963,62 +4355,62 @@ "Image$Description": "

The description of the AMI that was provided during image creation.

", "Image$RootDeviceName": "

The device name of the root device (for example, /dev/sda1 or /dev/xvda).

", "ImageAttribute$ImageId": "

The ID of the AMI.

", - "ImageDiskContainer$Description": "

The description of the disk image (optional).

", - "ImageDiskContainer$Format": "

The format of the disk image being imported (optional).

", + "ImageDiskContainer$Description": "

The description of the disk image.

", + "ImageDiskContainer$Format": "

The format of the disk image being imported.

Valid values: RAW | VHD | VMDK | OVA

", "ImageDiskContainer$Url": "

The URL to the Amazon S3-based disk image being imported. The URL can either be an https URL (https://..) or an Amazon S3 URL (s3://..).

", - "ImageDiskContainer$DeviceName": "

The Amazon EBS block device mapping for the disk (optional).

", - "ImageDiskContainer$SnapshotId": "

The Amazon EBS snapshot ID to be used for importing the snapshot.

", + "ImageDiskContainer$DeviceName": "

The block device mapping for the disk.

", + "ImageDiskContainer$SnapshotId": "

The ID of the EBS snapshot to be used for importing the snapshot.

", "ImageIdStringList$member": null, - "ImportImageRequest$Description": "

A description string for the import image task (optional).

", - "ImportImageRequest$LicenseType": "

The license type to be used for the Amazon Machine Image (AMI) after importing (optional).

Note: You may only use BYOL if you have existing licenses with rights to use these licenses in a third party cloud like AWS. For more information, see VM Import/Export Prerequisites in the Amazon Elastic Compute Cloud User Guide.

Valid Values: AWS | BYOL

", - "ImportImageRequest$Hypervisor": "

The target hypervisor platform to use (optional).

", - "ImportImageRequest$Architecture": "

The architecture of the virtual machine being imported (optional).

", - "ImportImageRequest$Platform": "

The operating system of the virtual machine being imported (optional).

", - "ImportImageRequest$ClientToken": "

The token to enable idempotency for VM import requests (optional).

", - "ImportImageRequest$RoleName": "

The name of the role to use when not using the default role name 'vmimport' (optional).

", - "ImportImageResult$ImportTaskId": "

The task id of the ImportImage task.

", - "ImportImageResult$Architecture": "

Architecture of the virtual machine being imported.

", - "ImportImageResult$LicenseType": "

License type of the virtual machine being imported.

", - "ImportImageResult$Platform": "

Operating system of the VM being imported.

", - "ImportImageResult$Hypervisor": "

Target hypervisor of the import task.

", + "ImportImageRequest$Description": "

A description string for the import image task.

", + "ImportImageRequest$LicenseType": "

The license type to be used for the Amazon Machine Image (AMI) after importing.

Note: You may only use BYOL if you have existing licenses with rights to use these licenses in a third party cloud like AWS. For more information, see VM Import/Export Prerequisites in the Amazon Elastic Compute Cloud User Guide.

Valid values: AWS | BYOL

", + "ImportImageRequest$Hypervisor": "

The target hypervisor platform.

Valid values: xen

", + "ImportImageRequest$Architecture": "

The architecture of the virtual machine.

Valid values: i386 | x86_64

", + "ImportImageRequest$Platform": "

The operating system of the virtual machine.

Valid values: Windows | Linux

", + "ImportImageRequest$ClientToken": "

The token to enable idempotency for VM import requests.

", + "ImportImageRequest$RoleName": "

The name of the role to use when not using the default role, 'vmimport'.
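A hedged sketch of how these ImportImage request parameters fit together; the bucket name, key, description, and AMI values below are placeholders and not part of this change:

```php
<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client(['region' => 'us-west-2', 'version' => '2015-04-15']);

// Start an import image task from a VMDK previously uploaded to S3.
$task = $ec2->importImage([
    'Description'    => 'Imported web server',         // placeholder
    'Platform'       => 'Linux',
    'LicenseType'    => 'AWS',
    'DiskContainers' => [[
        'Description' => 'Boot disk',
        'Format'      => 'VMDK',
        'UserBucket'  => [
            'S3Bucket' => 'my-import-bucket',           // placeholder
            'S3Key'    => 'disks/web-server.vmdk',      // placeholder
        ],
    ]],
]);

echo $task['ImportTaskId'], ': ', $task['Status'], PHP_EOL;
```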

", + "ImportImageResult$ImportTaskId": "

The task ID of the import image task.

", + "ImportImageResult$Architecture": "

The architecture of the virtual machine.

", + "ImportImageResult$LicenseType": "

The license type of the virtual machine.

", + "ImportImageResult$Platform": "

The operating system of the virtual machine.

", + "ImportImageResult$Hypervisor": "

The target hypervisor of the import task.

", "ImportImageResult$Description": "

A description of the import task.

", - "ImportImageResult$ImageId": "

The Amazon Machine Image (AMI) ID created by the import task.

", - "ImportImageResult$Progress": "

The task's progress.

", + "ImportImageResult$ImageId": "

The ID of the Amazon Machine Image (AMI) created by the import task.

", + "ImportImageResult$Progress": "

The progress of the task.

", "ImportImageResult$StatusMessage": "

A detailed status message of the import task.

", "ImportImageResult$Status": "

A brief status of the task.

", - "ImportImageTask$ImportTaskId": "

The ID of the import task.

", - "ImportImageTask$Architecture": "

Architecture of the VM being imported.

", - "ImportImageTask$LicenseType": "

License type of the VM being imported.

", + "ImportImageTask$ImportTaskId": "

The ID of the import image task.

", + "ImportImageTask$Architecture": "

The architecture of the virtual machine.

Valid values: i386 | x86_64

", + "ImportImageTask$LicenseType": "

The license type of the virtual machine.

", "ImportImageTask$Platform": "

The description string for the import image task.

", - "ImportImageTask$Hypervisor": "

Target hypervisor for the import task.

", - "ImportImageTask$Description": "

Description of the import task.

", - "ImportImageTask$ImageId": "

The Amazon Machine Image (AMI) ID of the imported virtual machine.

", - "ImportImageTask$Progress": "

The percentage of progress of the ImportImage task.

", - "ImportImageTask$StatusMessage": "

A descriptive status message for the ImportImage task.

", - "ImportImageTask$Status": "

A brief status for the ImportImage task.

", - "ImportInstanceLaunchSpecification$AdditionalInfo": null, - "ImportInstanceLaunchSpecification$SubnetId": "

[EC2-VPC] The ID of the subnet to launch the instance into.

", - "ImportInstanceLaunchSpecification$PrivateIpAddress": "

[EC2-VPC] Optionally, you can use this parameter to assign the instance a specific available IP address from the IP address range of the subnet.

", + "ImportImageTask$Hypervisor": "

The target hypervisor for the import task.

Valid values: xen

", + "ImportImageTask$Description": "

A description of the import task.

", + "ImportImageTask$ImageId": "

The ID of the Amazon Machine Image (AMI) of the imported virtual machine.

", + "ImportImageTask$Progress": "

The percentage of progress of the import image task.

", + "ImportImageTask$StatusMessage": "

A descriptive status message for the import image task.

", + "ImportImageTask$Status": "

A brief status for the import image task.

", + "ImportInstanceLaunchSpecification$AdditionalInfo": "

Reserved.

", + "ImportInstanceLaunchSpecification$SubnetId": "

[EC2-VPC] The ID of the subnet in which to launch the instance.

", + "ImportInstanceLaunchSpecification$PrivateIpAddress": "

[EC2-VPC] An available IP address from the IP address range of the subnet.

", "ImportInstanceRequest$Description": "

A description for the instance being imported.

", - "ImportInstanceTaskDetails$InstanceId": null, - "ImportInstanceTaskDetails$Description": null, + "ImportInstanceTaskDetails$InstanceId": "

The ID of the instance.

", + "ImportInstanceTaskDetails$Description": "

A description of the task.

", "ImportInstanceVolumeDetailItem$AvailabilityZone": "

The Availability Zone where the resulting instance will reside.

", "ImportInstanceVolumeDetailItem$Status": "

The status of the import of this particular disk image.

", "ImportInstanceVolumeDetailItem$StatusMessage": "

The status information or errors related to the disk image.

", - "ImportInstanceVolumeDetailItem$Description": null, + "ImportInstanceVolumeDetailItem$Description": "

A description of the task.

", "ImportKeyPairRequest$KeyName": "

A unique name for the key pair.

", "ImportKeyPairResult$KeyName": "

The key pair name you provided.

", "ImportKeyPairResult$KeyFingerprint": "

The MD5 public key fingerprint as specified in section 4 of RFC 4716.

", - "ImportSnapshotRequest$Description": "

The description string for the ImportSnapshot task.

", - "ImportSnapshotRequest$ClientToken": "

Token to enable idempotency for VM import requests (optional).

", - "ImportSnapshotRequest$RoleName": "

The name of the role to use when not using the default role name 'vmimport' (optional).

", - "ImportSnapshotResult$ImportTaskId": "

Task ID of the ImportSnapshot task.

", - "ImportSnapshotResult$Description": "

Description of the import snapshot task.

", - "ImportSnapshotTask$ImportTaskId": "

The task ID of the ImportSnapshot task.

", - "ImportSnapshotTask$Description": "

Description for the import snapshot task.

", + "ImportSnapshotRequest$Description": "

The description string for the import snapshot task.

", + "ImportSnapshotRequest$ClientToken": "

Token to enable idempotency for VM import requests.

", + "ImportSnapshotRequest$RoleName": "

The name of the role to use when not using the default role, 'vmimport'.

", + "ImportSnapshotResult$ImportTaskId": "

The ID of the import snapshot task.

", + "ImportSnapshotResult$Description": "

A description of the import snapshot task.

", + "ImportSnapshotTask$ImportTaskId": "

The ID of the import snapshot task.

", + "ImportSnapshotTask$Description": "

A description of the import snapshot task.

", "ImportTaskIdList$member": null, - "ImportVolumeRequest$AvailabilityZone": "

The Availability Zone for the resulting Amazon EBS volume.

", - "ImportVolumeRequest$Description": "

An optional description for the volume being imported.

", + "ImportVolumeRequest$AvailabilityZone": "

The Availability Zone for the resulting EBS volume.

", + "ImportVolumeRequest$Description": "

A description of the volume.

", "ImportVolumeTaskDetails$AvailabilityZone": "

The Availability Zone where the resulting volume will reside.

", "ImportVolumeTaskDetails$Description": "

The description you provided when starting the import volume task.

", "Instance$InstanceId": "

The ID of the instance.

", @@ -4066,7 +4458,7 @@ "InstanceStateChange$InstanceId": "

The ID of the instance.

", "InstanceStatus$InstanceId": "

The ID of the instance.

", "InstanceStatus$AvailabilityZone": "

The Availability Zone of the instance.

", - "InstanceStatusEvent$Description": "

A description of the event.

", + "InstanceStatusEvent$Description": "

A description of the event.

After a scheduled event is completed, it can still be described for up to a week. If the event has been completed, this description starts with the following text: [Completed].

", "InternetGateway$InternetGatewayId": "

The ID of the Internet gateway.

", "InternetGatewayAttachment$VpcId": "

The ID of the VPC.

", "IpPermission$IpProtocol": "

The protocol.

When you call DescribeSecurityGroups, the protocol value returned is the number. Exception: For TCP, UDP, and ICMP, the value returned is the name (for example, tcp, udp, or icmp). For a list of protocol numbers, see Protocol Numbers. (VPC only) When you call AuthorizeSecurityGroupIngress, you can use -1 to specify all.

", @@ -4099,6 +4491,11 @@ "ModifySubnetAttributeRequest$SubnetId": "

The ID of the subnet.

", "ModifyVolumeAttributeRequest$VolumeId": "

The ID of the volume.

", "ModifyVpcAttributeRequest$VpcId": "

The ID of the VPC.

", + "ModifyVpcEndpointRequest$VpcEndpointId": "

The ID of the endpoint.

", + "ModifyVpcEndpointRequest$PolicyDocument": "

A policy document to attach to the endpoint. The policy must be in valid JSON format.
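A sketch of the new VPC endpoint calls working together, assuming an S3 gateway endpoint; the VPC ID, route table IDs, bucket ARN, and policy are placeholders for illustration:

```php
<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client(['region' => 'us-west-2', 'version' => '2015-04-15']);

// The supported service names come back from DescribeVpcEndpointServices.
$services = $ec2->describeVpcEndpointServices();
print_r($services['ServiceNames']);

$created = $ec2->createVpcEndpoint([
    'VpcId'         => 'vpc-11aa22bb',                   // placeholder
    'ServiceName'   => 'com.amazonaws.us-west-2.s3',
    'RouteTableIds' => ['rtb-0123abcd'],                 // placeholder
]);

// Later, tighten the endpoint policy and attach another route table.
$ec2->modifyVpcEndpoint([
    'VpcEndpointId'    => $created['VpcEndpoint']['VpcEndpointId'],
    'AddRouteTableIds' => ['rtb-4567efab'],              // placeholder
    'PolicyDocument'   => json_encode([
        'Statement' => [[
            'Effect'    => 'Allow',
            'Principal' => '*',
            'Action'    => 's3:GetObject',
            'Resource'  => 'arn:aws:s3:::my-bucket/*',   // placeholder
        ]],
    ]),
]);
```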

", + "MoveAddressToVpcRequest$PublicIp": "

The Elastic IP address.

", + "MoveAddressToVpcResult$AllocationId": "

The allocation ID for the Elastic IP address.

", + "MovingAddressStatus$PublicIp": "

The Elastic IP address.
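The new address-migration calls are symmetric; a short sketch, with the Elastic IP address below being a placeholder:

```php
<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client(['region' => 'us-west-2', 'version' => '2015-04-15']);

// Move an EC2-Classic Elastic IP into the VPC platform...
$moved = $ec2->moveAddressToVpc(['PublicIp' => '203.0.113.25']);   // placeholder address
echo 'Allocation ID: ', $moved['AllocationId'], PHP_EOL;

// ...and restore it to EC2-Classic if needed.
$restored = $ec2->restoreAddressToClassic(['PublicIp' => '203.0.113.25']);
echo 'Restored: ', $restored['PublicIp'], PHP_EOL;
```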

", "NetworkAcl$NetworkAclId": "

The ID of the network ACL.

", "NetworkAcl$VpcId": "

The ID of the VPC for the network ACL.

", "NetworkAclAssociation$NetworkAclAssociationId": "

The ID of the association between a network ACL and a subnet.

", @@ -4133,6 +4530,9 @@ "Placement$GroupName": "

The name of the placement group the instance is in (for cluster compute instances).

", "PlacementGroup$GroupName": "

The name of the placement group.

", "PlacementGroupStringList$member": null, + "PrefixList$PrefixListId": "

The ID of the prefix.

", + "PrefixList$PrefixListName": "

The name of the prefix.

", + "PrefixListId$PrefixListId": "

The ID of the prefix.

", "PrivateIpAddressSpecification$PrivateIpAddress": "

The private IP addresses.

", "PrivateIpAddressStringList$member": null, "ProductCode$ProductCodeId": "

The product code.

", @@ -4173,7 +4573,9 @@ "ReplaceRouteTableAssociationRequest$RouteTableId": "

The ID of the new route table to associate with the subnet.

", "ReplaceRouteTableAssociationResult$NewAssociationId": "

The ID of the new association.

", "ReportInstanceStatusRequest$Description": "

Descriptive text about the health state of your instance.

", + "RequestSpotFleetResponse$SpotFleetRequestId": "

The ID of the Spot fleet request.

", "RequestSpotInstancesRequest$SpotPrice": "

The maximum hourly price (bid) for any Spot Instance launched to fulfill the request.

", + "RequestSpotInstancesRequest$ClientToken": "

Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. For more information, see How to Ensure Idempotency in the Amazon Elastic Compute Cloud User Guide.
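A sketch of passing the new ClientToken so a retried Spot request is idempotent; the AMI, price, and token values are placeholders:

```php
<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client(['region' => 'us-west-2', 'version' => '2015-04-15']);

// Reusing the same ClientToken on a retry does not create a second request.
$result = $ec2->requestSpotInstances([
    'SpotPrice'     => '0.05',
    'InstanceCount' => 1,
    'ClientToken'   => 'my-spot-request-2015-04-15-001',   // placeholder
    'LaunchSpecification' => [
        'ImageId'      => 'ami-12345678',                  // placeholder
        'InstanceType' => 'm3.medium',
    ],
]);

foreach ($result['SpotInstanceRequests'] as $request) {
    echo $request['SpotInstanceRequestId'], PHP_EOL;
}
```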

", "RequestSpotInstancesRequest$LaunchGroup": "

The instance launch group. Launch groups are Spot Instances that launch together and terminate together.

Default: Instances are launched and terminated individually

", "RequestSpotInstancesRequest$AvailabilityZoneGroup": "

The user-specified name for a logical grouping of bids.

When you specify an Availability Zone group in a Spot Instance request, all Spot Instances in the request are launched in the same Availability Zone. Instance proximity is maintained with this parameter, but the choice of Availability Zone is not. The group applies only to bids for Spot Instances of the same instance type. Any additional Spot Instance requests that are specified with the same Availability Zone group name are launched in that same Availability Zone, as long as at least one instance from the group is still active.

If there is no active instance running in the Availability Zone group that you specify for a new Spot Instance request (all instances are terminated, the bid is expired, or the bid falls below current market), then Amazon EC2 launches the instance in any Availability Zone where the constraint can be met. Consequently, the subsequent set of Spot Instances could be placed in a different zone from the original request, even if you specified the same Availability Zone group.

Default: Instances are launched in any available Availability Zone.

", "Reservation$ReservationId": "

The ID of the reservation.

", @@ -4205,6 +4607,8 @@ "ResetSnapshotAttributeRequest$SnapshotId": "

The ID of the snapshot.

", "ResourceIdList$member": null, "RestorableByStringList$member": null, + "RestoreAddressToClassicRequest$PublicIp": "

The Elastic IP address.

", + "RestoreAddressToClassicResult$PublicIp": "

The Elastic IP address.

", "RevokeSecurityGroupEgressRequest$GroupId": "

The ID of the security group.

", "RevokeSecurityGroupEgressRequest$SourceSecurityGroupName": "

[EC2-Classic, default VPC] The name of the destination security group. You can't specify a destination security group and a CIDR IP address range.

", "RevokeSecurityGroupEgressRequest$SourceSecurityGroupOwnerId": "

The ID of the destination security group. You can't specify a destination security group and a CIDR IP address range.

", @@ -4217,6 +4621,7 @@ "RevokeSecurityGroupIngressRequest$IpProtocol": "

The IP protocol name (tcp, udp, icmp) or number (see Protocol Numbers). Use -1 to specify all.

", "RevokeSecurityGroupIngressRequest$CidrIp": "

The CIDR IP address range. You can't specify this parameter when specifying a source security group.

", "Route$DestinationCidrBlock": "

The CIDR block used for the destination match.

", + "Route$DestinationPrefixListId": "

The prefix of the AWS service.

", "Route$GatewayId": "

The ID of a gateway attached to your VPC.

", "Route$InstanceId": "

The ID of a NAT instance in your VPC.

", "Route$InstanceOwnerId": "

The AWS account ID of the owner of the instance.

", @@ -4228,7 +4633,7 @@ "RouteTableAssociation$RouteTableId": "

The ID of the route table.

", "RouteTableAssociation$SubnetId": "

The ID of the subnet.

", "RunInstancesRequest$ImageId": "

The ID of the AMI, which you can get by calling DescribeImages.

", - "RunInstancesRequest$KeyName": "

The name of the key pair. You can create a key pair using CreateKeyPair or ImportKeyPair.

If you launch an instance without specifying a key pair, you can't connect to the instance.

", + "RunInstancesRequest$KeyName": "

The name of the key pair. You can create a key pair using CreateKeyPair or ImportKeyPair.

If you do not specify a key pair, you can't connect to the instance unless you choose an AMI that is configured to allow users another way to log in.

", "RunInstancesRequest$UserData": "

The Base64-encoded MIME user data for the instances.
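Since the API expects UserData already Base64-encoded, a brief sketch encoding a shell script explicitly; the AMI ID and key name are placeholders:

```php
<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client(['region' => 'us-west-2', 'version' => '2015-04-15']);

// A small shell script delivered as user data; Base64-encoded before sending.
$script = "#!/bin/bash\nyum update -y\n";

$reservation = $ec2->runInstances([
    'ImageId'      => 'ami-12345678',          // placeholder
    'InstanceType' => 't2.micro',
    'MinCount'     => 1,
    'MaxCount'     => 1,
    'KeyName'      => 'my-key-pair',           // placeholder
    'UserData'     => base64_encode($script),
]);

echo $reservation['Instances'][0]['InstanceId'], PHP_EOL;
```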

", "RunInstancesRequest$KernelId": "

The ID of the kernel.

We recommend that you use PV-GRUB instead of kernels and RAM disks. For more information, see PV-GRUB in the Amazon Elastic Compute Cloud User Guide.

", "RunInstancesRequest$RamdiskId": "

The ID of the RAM disk.

We recommend that you use PV-GRUB instead of kernels and RAM disks. For more information, see PV-GRUB in the Amazon Elastic Compute Cloud User Guide.

", @@ -4250,32 +4655,36 @@ "Snapshot$SnapshotId": "

The ID of the snapshot.

", "Snapshot$VolumeId": "

The ID of the volume.

", "Snapshot$Progress": "

The progress of the snapshot, as a percentage.

", - "Snapshot$OwnerId": "

The AWS account ID of the Amazon EBS snapshot owner.

", + "Snapshot$OwnerId": "

The AWS account ID of the EBS snapshot owner.

", "Snapshot$Description": "

The description for the snapshot.

", "Snapshot$OwnerAlias": "

The AWS account alias (for example, amazon, self) or AWS account ID that owns the snapshot.

", "Snapshot$KmsKeyId": "

The full ARN of the AWS Key Management Service (KMS) master key that was used to protect the volume encryption key for the parent volume.

", - "SnapshotDetail$Description": "

Description for the snapshot.

", + "SnapshotDetail$Description": "

A description for the snapshot.

", "SnapshotDetail$Format": "

The format of the disk image from which the snapshot is created.

", "SnapshotDetail$Url": "

The URL used to access the disk image.

", - "SnapshotDetail$DeviceName": "

The Amazon EBS block device mapping for the snapshot.

", + "SnapshotDetail$DeviceName": "

The block device mapping for the snapshot.

", "SnapshotDetail$SnapshotId": "

The snapshot ID of the disk being imported.

", "SnapshotDetail$Progress": "

The percentage of progress for the task.

", "SnapshotDetail$StatusMessage": "

A detailed status message for the snapshot creation.

", "SnapshotDetail$Status": "

A brief status of the snapshot creation.

", "SnapshotDiskContainer$Description": "

The description of the disk image being imported.

", - "SnapshotDiskContainer$Format": "

The format of the disk image being imported.

", + "SnapshotDiskContainer$Format": "

The format of the disk image being imported.

Valid values: RAW | VHD | VMDK | OVA

", "SnapshotDiskContainer$Url": "

The URL to the Amazon S3-based disk image being imported. It can either be an https URL (https://..) or an Amazon S3 URL (s3://..).

", "SnapshotIdStringList$member": null, "SnapshotTaskDetail$Description": "

The description of the snapshot.

", "SnapshotTaskDetail$Format": "

The format of the disk image from which the snapshot is created.

", "SnapshotTaskDetail$Url": "

The URL of the disk image from which the snapshot is created.

", "SnapshotTaskDetail$SnapshotId": "

The snapshot ID of the disk being imported.

", - "SnapshotTaskDetail$Progress": "

The percentage of completion for the ImportSnapshot task.

", - "SnapshotTaskDetail$StatusMessage": "

A detailed status message for the ImportSnapshot task.

", - "SnapshotTaskDetail$Status": "

A brief status for the ImportSnapshot task.

", + "SnapshotTaskDetail$Progress": "

The percentage of completion for the import snapshot task.

", + "SnapshotTaskDetail$StatusMessage": "

A detailed status message for the import snapshot task.

", + "SnapshotTaskDetail$Status": "

A brief status for the import snapshot task.
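The Progress, StatusMessage, and Status fields above are what a caller polls while an import snapshot task runs; a hedged polling sketch, where the task ID and the terminal status values are assumptions for illustration:

```php
<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client(['region' => 'us-west-2', 'version' => '2015-04-15']);

$taskId = 'import-snap-abcd1234';   // placeholder, e.g. returned by importSnapshot()

do {
    $tasks  = $ec2->describeImportSnapshotTasks(['ImportTaskIds' => [$taskId]]);
    $detail = $tasks['ImportSnapshotTasks'][0]['SnapshotTaskDetail'];
    $status = isset($detail['Status']) ? $detail['Status'] : 'unknown';

    echo $status, isset($detail['Progress']) ? " ({$detail['Progress']}%)" : '', PHP_EOL;

    // 'completed' and 'deleted' are assumed terminal states for this sketch.
    if ($status === 'completed' || $status === 'deleted') {
        break;
    }
    sleep(30);
} while (true);
```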

", "SpotDatafeedSubscription$OwnerId": "

The AWS account ID of the account.

", "SpotDatafeedSubscription$Bucket": "

The Amazon S3 bucket where the Spot Instance data feed is located.

", "SpotDatafeedSubscription$Prefix": "

The prefix that is prepended to data feed files.

", + "SpotFleetRequestConfig$SpotFleetRequestId": "

The ID of the Spot fleet request.

", + "SpotFleetRequestConfigData$ClientToken": "

A unique, case-sensitive identifier you provide to ensure the idempotency of your Spot fleet request. This helps avoid duplicate requests. For more information, see Ensuring Idempotency.

", + "SpotFleetRequestConfigData$SpotPrice": "

The maximum hourly price (bid) for any Spot Instance launched to fulfill the request.

", + "SpotFleetRequestConfigData$IamFleetRole": "

Grants the Spot fleet service permission to terminate instances on your behalf when you cancel a Spot fleet request using CancelSpotFleetRequests or when the Spot fleet request expires, if you set terminateInstancesWithExpiration.
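A sketch tying the SpotFleetRequestConfigData fields above together, followed by the matching cancel call; the role ARN, AMI, capacity, and token values are placeholders:

```php
<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client(['region' => 'us-west-2', 'version' => '2015-04-15']);

$response = $ec2->requestSpotFleet([
    'SpotFleetRequestConfig' => [
        'ClientToken'    => 'my-fleet-2015-04-15-001',                            // placeholder
        'SpotPrice'      => '0.07',
        'TargetCapacity' => 4,
        'IamFleetRole'   => 'arn:aws:iam::123456789012:role/my-spot-fleet-role',  // placeholder
        'TerminateInstancesWithExpiration' => true,
        'LaunchSpecifications' => [[
            'ImageId'      => 'ami-12345678',                                     // placeholder
            'InstanceType' => 'm3.medium',
        ]],
    ],
]);

$fleetId = $response['SpotFleetRequestId'];
echo $fleetId, PHP_EOL;

// Cancelling later terminates the fleet's instances when TerminateInstances is true.
$ec2->cancelSpotFleetRequests([
    'SpotFleetRequestIds' => [$fleetId],
    'TerminateInstances'  => true,
]);
```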

", "SpotInstanceRequest$SpotInstanceRequestId": "

The ID of the Spot Instance request.

", "SpotInstanceRequest$SpotPrice": "

The maximum hourly price (bid) for any Spot Instance launched to fulfill the request.

", "SpotInstanceRequest$LaunchGroup": "

The instance launch group. Launch groups are Spot Instances that launch together and terminate together.

", @@ -4305,11 +4714,14 @@ "TagDescription$Key": "

The tag key.

", "TagDescription$Value": "

The tag value.

", "UnassignPrivateIpAddressesRequest$NetworkInterfaceId": "

The ID of the network interface.

", - "UserBucket$S3Bucket": "

The Amazon S3 bucket name where the disk image is located.

", - "UserBucket$S3Key": "

The Amazon S3 Key for the disk image.

", - "UserBucketDetails$S3Bucket": "

The Amazon S3 bucket from which the disk image was created.

", - "UserBucketDetails$S3Key": "

The Amazon S3 key from which the disk image was created.

", - "UserData$Data": null, + "UnsuccessfulItem$ResourceId": "

The ID of the resource.

", + "UnsuccessfulItemError$Code": "

The error code.

", + "UnsuccessfulItemError$Message": "

The error message accompanying the error code.

", + "UserBucket$S3Bucket": "

The name of the S3 bucket where the disk image is located.

", + "UserBucket$S3Key": "

The key for the disk image.

", + "UserBucketDetails$S3Bucket": "

The S3 bucket from which the disk image was created.

", + "UserBucketDetails$S3Key": "

The key from which the disk image was created.

", + "UserData$Data": "

The Base64-encoded MIME user data for the instance.

", "UserGroupStringList$member": null, "UserIdGroupPair$UserId": "

The ID of an AWS account. EC2-Classic only.

", "UserIdGroupPair$GroupName": "

The name of the security group. In a request, use this parameter for a security group in EC2-Classic or a default VPC only. For a security group in a nondefault VPC, use GroupId.

", @@ -4342,6 +4754,10 @@ "VpcAttachment$VpcId": "

The ID of the VPC.

", "VpcClassicLink$VpcId": "

The ID of the VPC.

", "VpcClassicLinkIdList$member": null, + "VpcEndpoint$VpcEndpointId": "

The ID of the VPC endpoint.

", + "VpcEndpoint$VpcId": "

The ID of the VPC to which the endpoint is associated.

", + "VpcEndpoint$ServiceName": "

The name of the AWS service to which the endpoint is associated.

", + "VpcEndpoint$PolicyDocument": "

The policy document associated with the endpoint.

", "VpcIdStringList$member": null, "VpcPeeringConnection$VpcPeeringConnectionId": "

The ID of the VPC peering connection.

", "VpcPeeringConnectionStateReason$Code": "

The status of the VPC peering connection.

", @@ -4422,7 +4838,7 @@ "base": null, "refs": { "ClassicLinkInstance$Tags": "

Any tags assigned to the instance.

", - "ConversionTask$Tags": null, + "ConversionTask$Tags": "

Any tags assigned to the task.

", "CreateTagsRequest$Tags": "

One or more tags. The value parameter is required, but if you don't want the tag to have a value, specify the parameter with no value, and we set the value to an empty string.

", "CustomerGateway$Tags": "

Any tags assigned to the customer gateway.

", "DeleteTagsRequest$Tags": "

One or more tags to delete. If you omit the value parameter, we delete the tag regardless of its value. If you specify this parameter with an empty string as the value, we delete the key only if its value is an empty string.

", @@ -4489,24 +4905,42 @@ "refs": { } }, + "UnsuccessfulItem": { + "base": "

Information about items that were not successfully processed in a batch call.

", + "refs": { + "UnsuccessfulItemSet$member": null + } + }, + "UnsuccessfulItemError": { + "base": "

Information about the error that occurred. For more information about errors, see Error Codes.

", + "refs": { + "UnsuccessfulItem$Error": "

Information about the error.

" + } + }, + "UnsuccessfulItemSet": { + "base": null, + "refs": { + "DeleteVpcEndpointsResult$Unsuccessful": "

Information about the endpoints that were not successfully deleted.

" + } + }, "UserBucket": { - "base": "

User's Amazon S3 bucket details used to access the image.

", + "base": "

Describes the S3 bucket for the disk image.

", "refs": { - "ImageDiskContainer$UserBucket": null, + "ImageDiskContainer$UserBucket": "

The S3 bucket for the disk image.

", "SnapshotDiskContainer$UserBucket": null } }, "UserBucketDetails": { - "base": "

User's Amazon S3 bucket details used to access the image.

", + "base": "

Describes the S3 bucket for the disk image.

", "refs": { "SnapshotDetail$UserBucket": null, - "SnapshotTaskDetail$UserBucket": null + "SnapshotTaskDetail$UserBucket": "

The S3 bucket for the disk image.

" } }, "UserData": { - "base": null, + "base": "

Describes the user data to be made available to an instance.

", "refs": { - "ImportInstanceLaunchSpecification$UserData": "

User data to be made available to the instance.

" + "ImportInstanceLaunchSpecification$UserData": "

The Base64-encoded MIME user data to be made available to the instance.

" } }, "UserGroupStringList": { @@ -4537,11 +4971,23 @@ "ValueStringList": { "base": null, "refs": { + "CancelSpotFleetRequestsRequest$SpotFleetRequestIds": "

The IDs of the Spot fleet requests.

", + "CreateVpcEndpointRequest$RouteTableIds": "

One or more route table IDs.

", + "DeleteVpcEndpointsRequest$VpcEndpointIds": "

One or more endpoint IDs.

", "DescribeInternetGatewaysRequest$InternetGatewayIds": "

One or more Internet gateway IDs.

Default: Describes all your Internet gateways.

", + "DescribeMovingAddressesRequest$PublicIps": "

One or more Elastic IP addresses.

", "DescribeNetworkAclsRequest$NetworkAclIds": "

One or more network ACL IDs.

Default: Describes all your network ACLs.

", + "DescribePrefixListsRequest$PrefixListIds": "

One or more prefix list IDs.

", "DescribeRouteTablesRequest$RouteTableIds": "

One or more route table IDs.

Default: Describes all your route tables.

", + "DescribeSpotFleetRequestsRequest$SpotFleetRequestIds": "

The IDs of the Spot fleet requests.

", + "DescribeVpcEndpointServicesResult$ServiceNames": "

A list of supported AWS services.

", + "DescribeVpcEndpointsRequest$VpcEndpointIds": "

One or more endpoint IDs.

", "DescribeVpcPeeringConnectionsRequest$VpcPeeringConnectionIds": "

One or more VPC peering connection IDs.

Default: Describes all your VPC peering connections.

", "Filter$Values": "

One or more filter values. Filter values are case-sensitive.

", + "ModifyVpcEndpointRequest$AddRouteTableIds": "

One or more route table IDs to associate with the endpoint.

", + "ModifyVpcEndpointRequest$RemoveRouteTableIds": "

One or more route table IDs to disassociate from the endpoint.

", + "PrefixList$Cidrs": "

The IP address range of the AWS service.

", + "VpcEndpoint$RouteTableIds": "

One or more route tables associated with the endpoint.

", "NewDhcpConfiguration$Values": null, "RequestSpotLaunchSpecification$SecurityGroups": null, "RequestSpotLaunchSpecification$SecurityGroupIds": null @@ -4581,7 +5027,7 @@ "VolumeAttachmentList": { "base": null, "refs": { - "Volume$Attachments": null + "Volume$Attachments": "

Information about the volume attachments.

" } }, "VolumeAttachmentState": { @@ -4597,10 +5043,10 @@ } }, "VolumeDetail": { - "base": "

Describes an Amazon EBS volume.

", + "base": "

Describes an EBS volume.

", "refs": { - "DiskImage$Volume": null, - "ImportVolumeRequest$Volume": null + "DiskImage$Volume": "

Information about the volume.

", + "ImportVolumeRequest$Volume": "

The volume size.

" } }, "VolumeIdStringList": { @@ -4613,7 +5059,7 @@ "VolumeList": { "base": null, "refs": { - "DescribeVolumesResult$Volumes": null + "DescribeVolumesResult$Volumes": "

Information about the volumes.

" } }, "VolumeState": { @@ -4740,6 +5186,19 @@ "DescribeVpcClassicLinkResult$Vpcs": "

The ClassicLink status of one or more VPCs.

" } }, + "VpcEndpoint": { + "base": "

Describes a VPC endpoint.

", + "refs": { + "CreateVpcEndpointResult$VpcEndpoint": "

Information about the endpoint.

", + "VpcEndpointSet$member": null + } + }, + "VpcEndpointSet": { + "base": null, + "refs": { + "DescribeVpcEndpointsResult$VpcEndpoints": "

Information about the endpoints.

" + } + }, "VpcIdStringList": { "base": null, "refs": { diff --git a/src/data/ec2/2015-03-01/paginators-1.json b/src/data/ec2/2015-04-15/paginators-1.json similarity index 100% rename from src/data/ec2/2015-03-01/paginators-1.json rename to src/data/ec2/2015-04-15/paginators-1.json diff --git a/src/data/ec2/2015-03-01/waiters-2.json b/src/data/ec2/2015-04-15/waiters-2.json similarity index 91% rename from src/data/ec2/2015-03-01/waiters-2.json rename to src/data/ec2/2015-04-15/waiters-2.json index 8647c7d146..0599f2422b 100644 --- a/src/data/ec2/2015-03-01/waiters-2.json +++ b/src/data/ec2/2015-04-15/waiters-2.json @@ -252,6 +252,42 @@ } ] }, + "KeyPairExists": { + "operation": "DescribeKeyPairs", + "delay": 5, + "maxAttempts": 6, + "acceptors": [ + { + "expected": true, + "matcher": "pathAll", + "state": "success", + "argument": "length(KeyPairs[].KeyName) > `0`" + }, + { + "expected": "InvalidKeyPairNotFound", + "matcher": "error", + "state": "retry" + } + ] + }, + "NetworkInterfaceAvailable": { + "operation": "DescribeNetworkInterfaces", + "delay": 20, + "maxAttempts": 10, + "acceptors": [ + { + "expected": "available", + "matcher": "pathAll", + "state": "success", + "argument": "NetworkInterfaces[].Status" + }, + { + "expected": "InvalidNetworkInterfaceIDNotFound", + "matcher": "error", + "state": "failure" + } + ] + }, "PasswordDataAvailable": { "operation": "GetPasswordData", "maxAttempts": 40, @@ -370,6 +406,11 @@ "matcher": "pathAll", "state": "success", "argument": "Volumes[].State" + }, + { + "matcher": "error", + "expected": "InvalidVolumeNotFound", + "state": "success" } ] }, diff --git a/src/data/elasticbeanstalk/2010-12-01/api-2.json b/src/data/elasticbeanstalk/2010-12-01/api-2.json index fa7f5f0fd2..5b2a7998c1 100644 --- a/src/data/elasticbeanstalk/2010-12-01/api-2.json +++ b/src/data/elasticbeanstalk/2010-12-01/api-2.json @@ -727,6 +727,7 @@ "ConfigurationOptionSetting":{ "type":"structure", "members":{ + "ResourceName":{"shape":"ResourceName"}, "Namespace":{"shape":"OptionNamespace"}, "OptionName":{"shape":"ConfigurationOptionName"}, "Value":{"shape":"ConfigurationOptionValue"} @@ -1240,6 +1241,7 @@ "OptionSpecification":{ "type":"structure", "members":{ + "ResourceName":{"shape":"ResourceName"}, "Namespace":{"shape":"OptionNamespace"}, "OptionName":{"shape":"ConfigurationOptionName"} } @@ -1279,6 +1281,11 @@ }, "RequestId":{"type":"string"}, "ResourceId":{"type":"string"}, + "ResourceName":{ + "type":"string", + "min":1, + "max":256 + }, "RestartAppServerMessage":{ "type":"structure", "members":{ diff --git a/src/data/elasticbeanstalk/2010-12-01/docs-2.json b/src/data/elasticbeanstalk/2010-12-01/docs-2.json index fa7710a9bb..3f6dfb2db1 100644 --- a/src/data/elasticbeanstalk/2010-12-01/docs-2.json +++ b/src/data/elasticbeanstalk/2010-12-01/docs-2.json @@ -32,7 +32,7 @@ "UpdateEnvironment": "

Updates the environment description, deploys a new application version, updates the configuration settings to an entirely new configuration template, or updates select configuration option values in the running environment.

Attempting to update both the release and configuration is not allowed and AWS Elastic Beanstalk returns an InvalidParameterCombination error.

When updating the configuration settings to a new template or individual settings, a draft configuration is created and DescribeConfigurationSettings for this environment returns two setting descriptions with different DeploymentStatus values.

", "ValidateConfigurationSettings": "

Takes a set of configuration settings and either a configuration template or environment, and determines whether those values are valid.

This action returns a list of messages indicating any errors or warnings associated with the selection of option values.

" }, - "service": "AWS Elastic Beanstalk

This is the AWS Elastic Beanstalk API Reference. This guide provides detailed information about AWS Elastic Beanstalk actions, data types, parameters, and errors.

AWS Elastic Beanstalk is a tool that makes it easy for you to create, deploy, and manage scalable, fault-tolerant applications running on Amazon Web Services cloud resources.

For more information about this product, go to the AWS Elastic Beanstalk details page. The location of the latest AWS Elastic Beanstalk WSDL is http://elasticbeanstalk.s3.amazonaws.com/doc/2010-12-01/AWSElasticBeanstalk.wsdl.

Endpoints

For a list of region-specific endpoints that AWS Elastic Beanstalk supports, go to Regions and Endpoints in the Amazon Web Services Glossary.

", + "service": "AWS Elastic Beanstalk

This is the AWS Elastic Beanstalk API Reference. This guide provides detailed information about AWS Elastic Beanstalk actions, data types, parameters, and errors.

AWS Elastic Beanstalk is a tool that makes it easy for you to create, deploy, and manage scalable, fault-tolerant applications running on Amazon Web Services cloud resources.

For more information about this product, go to the AWS Elastic Beanstalk details page. The location of the latest AWS Elastic Beanstalk WSDL is http://elasticbeanstalk.s3.amazonaws.com/doc/2010-12-01/AWSElasticBeanstalk.wsdl. To install the Software Development Kits (SDKs), Integrated Development Environment (IDE) Toolkits, and command line tools that enable you to access the API, go to Tools for Amazon Web Services.

Endpoints

For a list of region-specific endpoints that AWS Elastic Beanstalk supports, go to Regions and Endpoints in the Amazon Web Services Glossary.

", "shapes": { "AbortEnvironmentUpdateMessage": { "base": "

", @@ -42,7 +42,7 @@ "AbortableOperationInProgress": { "base": null, "refs": { - "EnvironmentDescription$AbortableOperationInProgress": "

Lists in-progress environment updates and application version deployments that you can cancel.

" + "EnvironmentDescription$AbortableOperationInProgress": "

Indicates if there is an in-progress environment configuration update or application version deployment that you can cancel.

true: There is an update in progress.

false: There are no updates currently in progress.

" } }, "ApplicationDescription": { @@ -570,7 +570,7 @@ "refs": { "CreateEnvironmentMessage$Tier": "

This specifies the tier to use for creating this environment.

", "EnvironmentDescription$Tier": "

Describes the current tier of this environment.

", - "UpdateEnvironmentMessage$Tier": "

This specifies the tier to use to update the environment.

Condition: You can only update the tier version for an environment. If you change the name of the type, AWS Elastic Beanstalk returns InvalidParameterValue error.

" + "UpdateEnvironmentMessage$Tier": "

This specifies the tier to use to update the environment.

Condition: At this time, if you change the tier version, name, or type, AWS Elastic Beanstalk returns an InvalidParameterValue error.

" } }, "EventDate": { @@ -813,6 +813,13 @@ "Trigger$Name": "

The name of the trigger.

" } }, + "ResourceName": { + "base": null, + "refs": { + "ConfigurationOptionSetting$ResourceName": "

A unique resource name for a time-based scaling configuration option.

", + "OptionSpecification$ResourceName": "

A unique resource name for a time-based scaling configuration option.
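ResourceName names the resource a setting applies to, such as a scheduled scaling action. A hedged sketch of an UpdateEnvironment call using it; the environment name, action name, and the scheduledaction namespace values are assumptions for illustration:

```php
<?php
require 'vendor/autoload.php';

use Aws\ElasticBeanstalk\ElasticBeanstalkClient;

$eb = new ElasticBeanstalkClient(['region' => 'us-west-2', 'version' => '2010-12-01']);

$eb->updateEnvironment([
    'EnvironmentName' => 'my-env',                        // placeholder
    'OptionSettings'  => [[
        'ResourceName' => 'ScaleUpForBusinessHours',      // the scheduled action this setting targets
        'Namespace'    => 'aws:autoscaling:scheduledaction',
        'OptionName'   => 'MinSize',
        'Value'        => '3',
    ]],
]);
```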

" + } + }, "RestartAppServerMessage": { "base": "

", "refs": { diff --git a/src/data/elasticfilesystem/2015-02-01/api-2.json b/src/data/elasticfilesystem/2015-02-01/api-2.json new file mode 100644 index 0000000000..e5728b031a --- /dev/null +++ b/src/data/elasticfilesystem/2015-02-01/api-2.json @@ -0,0 +1,917 @@ +{ + "version":"2.0", + "metadata":{ + "apiVersion":"2015-02-01", + "endpointPrefix":"elasticfilesystem", + "serviceAbbreviation":"EFS", + "serviceFullName":"Amazon Elastic File System", + "signatureVersion":"v4", + "protocol":"rest-json" + }, + "operations":{ + "CreateFileSystem":{ + "name":"CreateFileSystem", + "http":{ + "method":"POST", + "requestUri":"/2015-02-01/file-systems", + "responseCode":201 + }, + "input":{"shape":"CreateFileSystemRequest"}, + "output":{"shape":"FileSystemDescription"}, + "errors":[ + { + "shape":"BadRequest", + "error":{"httpStatusCode":400}, + "exception":true + }, + { + "shape":"InternalServerError", + "error":{"httpStatusCode":500}, + "exception":true + }, + { + "shape":"FileSystemAlreadyExists", + "error":{"httpStatusCode":409}, + "exception":true + }, + { + "shape":"FileSystemLimitExceeded", + "error":{"httpStatusCode":403}, + "exception":true + } + ] + }, + "CreateMountTarget":{ + "name":"CreateMountTarget", + "http":{ + "method":"POST", + "requestUri":"/2015-02-01/mount-targets", + "responseCode":200 + }, + "input":{"shape":"CreateMountTargetRequest"}, + "output":{"shape":"MountTargetDescription"}, + "errors":[ + { + "shape":"BadRequest", + "error":{"httpStatusCode":400}, + "exception":true + }, + { + "shape":"InternalServerError", + "error":{"httpStatusCode":500}, + "exception":true + }, + { + "shape":"FileSystemNotFound", + "error":{"httpStatusCode":404}, + "exception":true + }, + { + "shape":"IncorrectFileSystemLifeCycleState", + "error":{"httpStatusCode":409}, + "exception":true + }, + { + "shape":"MountTargetConflict", + "error":{"httpStatusCode":409}, + "exception":true + }, + { + "shape":"SubnetNotFound", + "error":{"httpStatusCode":400}, + "exception":true + }, + { + "shape":"NoFreeAddressesInSubnet", + "error":{"httpStatusCode":409}, + "exception":true + }, + { + "shape":"IpAddressInUse", + "error":{"httpStatusCode":409}, + "exception":true + }, + { + "shape":"NetworkInterfaceLimitExceeded", + "error":{"httpStatusCode":409}, + "exception":true + }, + { + "shape":"SecurityGroupLimitExceeded", + "error":{"httpStatusCode":400}, + "exception":true + }, + { + "shape":"SecurityGroupNotFound", + "error":{"httpStatusCode":400}, + "exception":true + }, + { + "shape":"UnsupportedAvailabilityZone", + "error":{"httpStatusCode":400}, + "exception":true + } + ] + }, + "CreateTags":{ + "name":"CreateTags", + "http":{ + "method":"POST", + "requestUri":"/2015-02-01/create-tags/{FileSystemId}", + "responseCode":204 + }, + "input":{"shape":"CreateTagsRequest"}, + "errors":[ + { + "shape":"BadRequest", + "error":{"httpStatusCode":400}, + "exception":true + }, + { + "shape":"InternalServerError", + "error":{"httpStatusCode":500}, + "exception":true + }, + { + "shape":"FileSystemNotFound", + "error":{"httpStatusCode":404}, + "exception":true + } + ] + }, + "DeleteFileSystem":{ + "name":"DeleteFileSystem", + "http":{ + "method":"DELETE", + "requestUri":"/2015-02-01/file-systems/{FileSystemId}", + "responseCode":204 + }, + "input":{"shape":"DeleteFileSystemRequest"}, + "errors":[ + { + "shape":"BadRequest", + "error":{"httpStatusCode":400}, + "exception":true + }, + { + "shape":"InternalServerError", + "error":{"httpStatusCode":500}, + "exception":true + }, + { + "shape":"FileSystemNotFound", + 
"error":{"httpStatusCode":404}, + "exception":true + }, + { + "shape":"FileSystemInUse", + "error":{"httpStatusCode":409}, + "exception":true + } + ] + }, + "DeleteMountTarget":{ + "name":"DeleteMountTarget", + "http":{ + "method":"DELETE", + "requestUri":"/2015-02-01/mount-targets/{MountTargetId}", + "responseCode":204 + }, + "input":{"shape":"DeleteMountTargetRequest"}, + "errors":[ + { + "shape":"BadRequest", + "error":{"httpStatusCode":400}, + "exception":true + }, + { + "shape":"InternalServerError", + "error":{"httpStatusCode":500}, + "exception":true + }, + { + "shape":"DependencyTimeout", + "error":{"httpStatusCode":504}, + "exception":true + }, + { + "shape":"MountTargetNotFound", + "error":{"httpStatusCode":404}, + "exception":true + } + ] + }, + "DeleteTags":{ + "name":"DeleteTags", + "http":{ + "method":"POST", + "requestUri":"/2015-02-01/delete-tags/{FileSystemId}", + "responseCode":204 + }, + "input":{"shape":"DeleteTagsRequest"}, + "errors":[ + { + "shape":"BadRequest", + "error":{"httpStatusCode":400}, + "exception":true + }, + { + "shape":"InternalServerError", + "error":{"httpStatusCode":500}, + "exception":true + }, + { + "shape":"FileSystemNotFound", + "error":{"httpStatusCode":404}, + "exception":true + } + ] + }, + "DescribeFileSystems":{ + "name":"DescribeFileSystems", + "http":{ + "method":"GET", + "requestUri":"/2015-02-01/file-systems", + "responseCode":200 + }, + "input":{"shape":"DescribeFileSystemsRequest"}, + "output":{"shape":"DescribeFileSystemsResponse"}, + "errors":[ + { + "shape":"BadRequest", + "error":{"httpStatusCode":400}, + "exception":true + }, + { + "shape":"InternalServerError", + "error":{"httpStatusCode":500}, + "exception":true + }, + { + "shape":"FileSystemNotFound", + "error":{"httpStatusCode":404}, + "exception":true + } + ] + }, + "DescribeMountTargetSecurityGroups":{ + "name":"DescribeMountTargetSecurityGroups", + "http":{ + "method":"GET", + "requestUri":"/2015-02-01/mount-targets/{MountTargetId}/security-groups", + "responseCode":200 + }, + "input":{"shape":"DescribeMountTargetSecurityGroupsRequest"}, + "output":{"shape":"DescribeMountTargetSecurityGroupsResponse"}, + "errors":[ + { + "shape":"BadRequest", + "error":{"httpStatusCode":400}, + "exception":true + }, + { + "shape":"InternalServerError", + "error":{"httpStatusCode":500}, + "exception":true + }, + { + "shape":"MountTargetNotFound", + "error":{"httpStatusCode":404}, + "exception":true + }, + { + "shape":"IncorrectMountTargetState", + "error":{"httpStatusCode":409}, + "exception":true + } + ] + }, + "DescribeMountTargets":{ + "name":"DescribeMountTargets", + "http":{ + "method":"GET", + "requestUri":"/2015-02-01/mount-targets", + "responseCode":200 + }, + "input":{"shape":"DescribeMountTargetsRequest"}, + "output":{"shape":"DescribeMountTargetsResponse"}, + "errors":[ + { + "shape":"BadRequest", + "error":{"httpStatusCode":400}, + "exception":true + }, + { + "shape":"InternalServerError", + "error":{"httpStatusCode":500}, + "exception":true + }, + { + "shape":"FileSystemNotFound", + "error":{"httpStatusCode":404}, + "exception":true + } + ] + }, + "DescribeTags":{ + "name":"DescribeTags", + "http":{ + "method":"GET", + "requestUri":"/2015-02-01/tags/{FileSystemId}/", + "responseCode":200 + }, + "input":{"shape":"DescribeTagsRequest"}, + "output":{"shape":"DescribeTagsResponse"}, + "errors":[ + { + "shape":"BadRequest", + "error":{"httpStatusCode":400}, + "exception":true + }, + { + "shape":"InternalServerError", + "error":{"httpStatusCode":500}, + "exception":true + }, + { + 
"shape":"FileSystemNotFound", + "error":{"httpStatusCode":404}, + "exception":true + } + ] + }, + "ModifyMountTargetSecurityGroups":{ + "name":"ModifyMountTargetSecurityGroups", + "http":{ + "method":"PUT", + "requestUri":"/2015-02-01/mount-targets/{MountTargetId}/security-groups", + "responseCode":204 + }, + "input":{"shape":"ModifyMountTargetSecurityGroupsRequest"}, + "errors":[ + { + "shape":"BadRequest", + "error":{"httpStatusCode":400}, + "exception":true + }, + { + "shape":"InternalServerError", + "error":{"httpStatusCode":500}, + "exception":true + }, + { + "shape":"MountTargetNotFound", + "error":{"httpStatusCode":404}, + "exception":true + }, + { + "shape":"IncorrectMountTargetState", + "error":{"httpStatusCode":409}, + "exception":true + }, + { + "shape":"SecurityGroupLimitExceeded", + "error":{"httpStatusCode":400}, + "exception":true + }, + { + "shape":"SecurityGroupNotFound", + "error":{"httpStatusCode":400}, + "exception":true + } + ] + } + }, + "shapes":{ + "AwsAccountId":{ + "type":"string", + "pattern":"[0-9]{12}" + }, + "BadRequest":{ + "type":"structure", + "required":["ErrorCode"], + "members":{ + "ErrorCode":{"shape":"ErrorCode"}, + "Message":{"shape":"ErrorMessage"} + }, + "error":{"httpStatusCode":400}, + "exception":true + }, + "CreateFileSystemRequest":{ + "type":"structure", + "required":["CreationToken"], + "members":{ + "CreationToken":{"shape":"CreationToken"} + } + }, + "CreateMountTargetRequest":{ + "type":"structure", + "required":[ + "FileSystemId", + "SubnetId" + ], + "members":{ + "FileSystemId":{"shape":"FileSystemId"}, + "SubnetId":{"shape":"SubnetId"}, + "IpAddress":{"shape":"IpAddress"}, + "SecurityGroups":{"shape":"SecurityGroups"} + } + }, + "CreateTagsRequest":{ + "type":"structure", + "required":[ + "FileSystemId", + "Tags" + ], + "members":{ + "FileSystemId":{ + "shape":"FileSystemId", + "location":"uri", + "locationName":"FileSystemId" + }, + "Tags":{"shape":"Tags"} + } + }, + "CreationToken":{ + "type":"string", + "min":1, + "max":64 + }, + "DeleteFileSystemRequest":{ + "type":"structure", + "required":["FileSystemId"], + "members":{ + "FileSystemId":{ + "shape":"FileSystemId", + "location":"uri", + "locationName":"FileSystemId" + } + } + }, + "DeleteMountTargetRequest":{ + "type":"structure", + "required":["MountTargetId"], + "members":{ + "MountTargetId":{ + "shape":"MountTargetId", + "location":"uri", + "locationName":"MountTargetId" + } + } + }, + "DeleteTagsRequest":{ + "type":"structure", + "required":[ + "FileSystemId", + "TagKeys" + ], + "members":{ + "FileSystemId":{ + "shape":"FileSystemId", + "location":"uri", + "locationName":"FileSystemId" + }, + "TagKeys":{"shape":"TagKeys"} + } + }, + "DependencyTimeout":{ + "type":"structure", + "required":["ErrorCode"], + "members":{ + "ErrorCode":{"shape":"ErrorCode"}, + "Message":{"shape":"ErrorMessage"} + }, + "error":{"httpStatusCode":504}, + "exception":true + }, + "DescribeFileSystemsRequest":{ + "type":"structure", + "members":{ + "MaxItems":{ + "shape":"MaxItems", + "location":"querystring", + "locationName":"MaxItems" + }, + "Marker":{ + "shape":"Marker", + "location":"querystring", + "locationName":"Marker" + }, + "CreationToken":{ + "shape":"CreationToken", + "location":"querystring", + "locationName":"CreationToken" + }, + "FileSystemId":{ + "shape":"FileSystemId", + "location":"querystring", + "locationName":"FileSystemId" + } + } + }, + "DescribeFileSystemsResponse":{ + "type":"structure", + "members":{ + "Marker":{"shape":"Marker"}, + 
"FileSystems":{"shape":"FileSystemDescriptions"}, + "NextMarker":{"shape":"Marker"} + } + }, + "DescribeMountTargetSecurityGroupsRequest":{ + "type":"structure", + "required":["MountTargetId"], + "members":{ + "MountTargetId":{ + "shape":"MountTargetId", + "location":"uri", + "locationName":"MountTargetId" + } + } + }, + "DescribeMountTargetSecurityGroupsResponse":{ + "type":"structure", + "required":["SecurityGroups"], + "members":{ + "SecurityGroups":{"shape":"SecurityGroups"} + } + }, + "DescribeMountTargetsRequest":{ + "type":"structure", + "required":["FileSystemId"], + "members":{ + "MaxItems":{ + "shape":"MaxItems", + "location":"querystring", + "locationName":"MaxItems" + }, + "Marker":{ + "shape":"Marker", + "location":"querystring", + "locationName":"Marker" + }, + "FileSystemId":{ + "shape":"FileSystemId", + "location":"querystring", + "locationName":"FileSystemId" + } + } + }, + "DescribeMountTargetsResponse":{ + "type":"structure", + "members":{ + "Marker":{"shape":"Marker"}, + "MountTargets":{"shape":"MountTargetDescriptions"}, + "NextMarker":{"shape":"Marker"} + } + }, + "DescribeTagsRequest":{ + "type":"structure", + "required":["FileSystemId"], + "members":{ + "MaxItems":{ + "shape":"MaxItems", + "location":"querystring", + "locationName":"MaxItems" + }, + "Marker":{ + "shape":"Marker", + "location":"querystring", + "locationName":"Marker" + }, + "FileSystemId":{ + "shape":"FileSystemId", + "location":"uri", + "locationName":"FileSystemId" + } + } + }, + "DescribeTagsResponse":{ + "type":"structure", + "required":["Tags"], + "members":{ + "Marker":{"shape":"Marker"}, + "Tags":{"shape":"Tags"}, + "NextMarker":{"shape":"Marker"} + } + }, + "ErrorCode":{ + "type":"string", + "min":1 + }, + "ErrorMessage":{"type":"string"}, + "FileSystemAlreadyExists":{ + "type":"structure", + "required":[ + "ErrorCode", + "FileSystemId" + ], + "members":{ + "ErrorCode":{"shape":"ErrorCode"}, + "Message":{"shape":"ErrorMessage"}, + "FileSystemId":{"shape":"FileSystemId"} + }, + "error":{"httpStatusCode":409}, + "exception":true + }, + "FileSystemDescription":{ + "type":"structure", + "required":[ + "OwnerId", + "CreationToken", + "FileSystemId", + "CreationTime", + "LifeCycleState", + "NumberOfMountTargets", + "SizeInBytes" + ], + "members":{ + "OwnerId":{"shape":"AwsAccountId"}, + "CreationToken":{"shape":"CreationToken"}, + "FileSystemId":{"shape":"FileSystemId"}, + "CreationTime":{"shape":"Timestamp"}, + "LifeCycleState":{"shape":"LifeCycleState"}, + "Name":{"shape":"TagValue"}, + "NumberOfMountTargets":{"shape":"MountTargetCount"}, + "SizeInBytes":{"shape":"FileSystemSize"} + } + }, + "FileSystemDescriptions":{ + "type":"list", + "member":{"shape":"FileSystemDescription"} + }, + "FileSystemId":{ + "type":"string", + "pattern":"fs-[0-9a-f]{8}" + }, + "FileSystemInUse":{ + "type":"structure", + "required":["ErrorCode"], + "members":{ + "ErrorCode":{"shape":"ErrorCode"}, + "Message":{"shape":"ErrorMessage"} + }, + "error":{"httpStatusCode":409}, + "exception":true + }, + "FileSystemLimitExceeded":{ + "type":"structure", + "required":["ErrorCode"], + "members":{ + "ErrorCode":{"shape":"ErrorCode"}, + "Message":{"shape":"ErrorMessage"} + }, + "error":{"httpStatusCode":403}, + "exception":true + }, + "FileSystemNotFound":{ + "type":"structure", + "required":["ErrorCode"], + "members":{ + "ErrorCode":{"shape":"ErrorCode"}, + "Message":{"shape":"ErrorMessage"} + }, + "error":{"httpStatusCode":404}, + "exception":true + }, + "FileSystemSize":{ + "type":"structure", + "required":["Value"], + 
"members":{ + "Value":{"shape":"FileSystemSizeValue"}, + "Timestamp":{"shape":"Timestamp"} + } + }, + "FileSystemSizeValue":{ + "type":"long", + "min":0 + }, + "IncorrectFileSystemLifeCycleState":{ + "type":"structure", + "required":["ErrorCode"], + "members":{ + "ErrorCode":{"shape":"ErrorCode"}, + "Message":{"shape":"ErrorMessage"} + }, + "error":{"httpStatusCode":409}, + "exception":true + }, + "IncorrectMountTargetState":{ + "type":"structure", + "required":["ErrorCode"], + "members":{ + "ErrorCode":{"shape":"ErrorCode"}, + "Message":{"shape":"ErrorMessage"} + }, + "error":{"httpStatusCode":409}, + "exception":true + }, + "InternalServerError":{ + "type":"structure", + "required":["ErrorCode"], + "members":{ + "ErrorCode":{"shape":"ErrorCode"}, + "Message":{"shape":"ErrorMessage"} + }, + "error":{"httpStatusCode":500}, + "exception":true + }, + "IpAddress":{ + "type":"string", + "pattern":"[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}" + }, + "IpAddressInUse":{ + "type":"structure", + "required":["ErrorCode"], + "members":{ + "ErrorCode":{"shape":"ErrorCode"}, + "Message":{"shape":"ErrorMessage"} + }, + "error":{"httpStatusCode":409}, + "exception":true + }, + "LifeCycleState":{ + "type":"string", + "enum":[ + "creating", + "available", + "deleting", + "deleted" + ] + }, + "Marker":{"type":"string"}, + "MaxItems":{ + "type":"integer", + "min":1 + }, + "ModifyMountTargetSecurityGroupsRequest":{ + "type":"structure", + "required":["MountTargetId"], + "members":{ + "MountTargetId":{ + "shape":"MountTargetId", + "location":"uri", + "locationName":"MountTargetId" + }, + "SecurityGroups":{"shape":"SecurityGroups"} + } + }, + "MountTargetConflict":{ + "type":"structure", + "required":["ErrorCode"], + "members":{ + "ErrorCode":{"shape":"ErrorCode"}, + "Message":{"shape":"ErrorMessage"} + }, + "error":{"httpStatusCode":409}, + "exception":true + }, + "MountTargetCount":{ + "type":"integer", + "min":0 + }, + "MountTargetDescription":{ + "type":"structure", + "required":[ + "MountTargetId", + "FileSystemId", + "SubnetId", + "LifeCycleState" + ], + "members":{ + "OwnerId":{"shape":"AwsAccountId"}, + "MountTargetId":{"shape":"MountTargetId"}, + "FileSystemId":{"shape":"FileSystemId"}, + "SubnetId":{"shape":"SubnetId"}, + "LifeCycleState":{"shape":"LifeCycleState"}, + "IpAddress":{"shape":"IpAddress"}, + "NetworkInterfaceId":{"shape":"NetworkInterfaceId"} + } + }, + "MountTargetDescriptions":{ + "type":"list", + "member":{"shape":"MountTargetDescription"} + }, + "MountTargetId":{ + "type":"string", + "pattern":"fsmt-[0-9a-f]{8}" + }, + "MountTargetNotFound":{ + "type":"structure", + "required":["ErrorCode"], + "members":{ + "ErrorCode":{"shape":"ErrorCode"}, + "Message":{"shape":"ErrorMessage"} + }, + "error":{"httpStatusCode":404}, + "exception":true + }, + "NetworkInterfaceId":{ + "type":"string", + "pattern":"eni-[0-9a-f]{8}" + }, + "NetworkInterfaceLimitExceeded":{ + "type":"structure", + "required":["ErrorCode"], + "members":{ + "ErrorCode":{"shape":"ErrorCode"}, + "Message":{"shape":"ErrorMessage"} + }, + "error":{"httpStatusCode":409}, + "exception":true + }, + "NoFreeAddressesInSubnet":{ + "type":"structure", + "required":["ErrorCode"], + "members":{ + "ErrorCode":{"shape":"ErrorCode"}, + "Message":{"shape":"ErrorMessage"} + }, + "error":{"httpStatusCode":409}, + "exception":true + }, + "SecurityGroup":{ + "type":"string", + "pattern":"sg-[0-9a-f]{8}" + }, + "SecurityGroupLimitExceeded":{ + "type":"structure", + "required":["ErrorCode"], + "members":{ + "ErrorCode":{"shape":"ErrorCode"}, 
+ "Message":{"shape":"ErrorMessage"} + }, + "error":{"httpStatusCode":400}, + "exception":true + }, + "SecurityGroupNotFound":{ + "type":"structure", + "required":["ErrorCode"], + "members":{ + "ErrorCode":{"shape":"ErrorCode"}, + "Message":{"shape":"ErrorMessage"} + }, + "error":{"httpStatusCode":400}, + "exception":true + }, + "SecurityGroups":{ + "type":"list", + "member":{"shape":"SecurityGroup"}, + "max":5 + }, + "SubnetId":{ + "type":"string", + "pattern":"subnet-[0-9a-f]{8}" + }, + "SubnetNotFound":{ + "type":"structure", + "required":["ErrorCode"], + "members":{ + "ErrorCode":{"shape":"ErrorCode"}, + "Message":{"shape":"ErrorMessage"} + }, + "error":{"httpStatusCode":400}, + "exception":true + }, + "Tag":{ + "type":"structure", + "required":[ + "Key", + "Value" + ], + "members":{ + "Key":{"shape":"TagKey"}, + "Value":{"shape":"TagValue"} + } + }, + "TagKey":{ + "type":"string", + "min":1, + "max":128, + "pattern":"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-%@]*)$" + }, + "TagKeys":{ + "type":"list", + "member":{"shape":"TagKey"} + }, + "TagValue":{ + "type":"string", + "max":256, + "pattern":"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-%@]*)$" + }, + "Tags":{ + "type":"list", + "member":{"shape":"Tag"} + }, + "Timestamp":{"type":"timestamp"}, + "UnsupportedAvailabilityZone":{ + "type":"structure", + "required":["ErrorCode"], + "members":{ + "ErrorCode":{"shape":"ErrorCode"}, + "Message":{"shape":"ErrorMessage"} + }, + "error":{"httpStatusCode":400}, + "exception":true + } + } +} diff --git a/src/data/elasticfilesystem/2015-02-01/docs-2.json b/src/data/elasticfilesystem/2015-02-01/docs-2.json new file mode 100644 index 0000000000..2028dc5520 --- /dev/null +++ b/src/data/elasticfilesystem/2015-02-01/docs-2.json @@ -0,0 +1,414 @@ +{ + "version": "2.0", + "operations": { + "CreateFileSystem": "

Creates a new, empty file system. The operation requires a creation token in the request that Amazon EFS uses to ensure idempotent creation (calling the operation with the same creation token has no effect). If a file system owned by the caller's AWS account with the specified creation token does not currently exist, this operation does the following:

Otherwise, this operation returns a FileSystemAlreadyExists error with the ID of the existing file system.

For basic use cases, you can use a randomly generated UUID for the creation token.

The idempotent operation allows you to retry a CreateFileSystem call without risk of creating an extra file system. This can happen when an initial call fails in a way that leaves it uncertain whether or not a file system was actually created. An example might be that a transport level timeout occurred or your connection was reset. As long as you use the same creation token, if the initial call had succeeded in creating a file system, the client can learn of its existence from the FileSystemAlreadyExists error.

The CreateFileSystem call returns while the file system's lifecycle state is still \"creating\". You can check the file system creation status by calling the DescribeFileSystems API, which among other things returns the file system state.

After the file system is fully created, Amazon EFS sets its lifecycle state to "available", at which point you can create one or more mount targets for the file system (CreateMountTarget) in your VPC. You mount your Amazon EFS file system on EC2 instances in your VPC via the mount target. For more information, see Amazon EFS: How it Works.

This operation requires permission for the elasticfilesystem:CreateFileSystem action.

", + "CreateMountTarget": "

Creates a mount target for a file system. You can then mount the file system on EC2 instances via the mount target.

You can create one mount target in each Availability Zone in your VPC. All EC2 instances in a VPC within a given Availability Zone share a single mount target for a given file system. If you have multiple subnets in an Availability Zone, you create a mount target in one of the subnets. EC2 instances do not need to be in the same subnet as the mount target in order to access their file system. For more information, see Amazon EFS: How it Works.

In the request, you also specify the ID of the file system for which you are creating the mount target; the file system's lifecycle state must be "available" (see DescribeFileSystems).

In the request, you also provide a subnet ID, which serves several purposes:

After creating the mount target, Amazon EFS returns a response that includes a MountTargetId and an IpAddress. You use this IP address when mounting the file system in an EC2 instance. You can also use the mount target's DNS name when mounting the file system. The EC2 instance on which you mount the file system via the mount target can resolve the mount target's DNS name to its IP address. For more information, see How it Works: Implementation Overview.

Note that you can create mount targets for a file system in only one VPC, and there can be only one mount target per Availability Zone. That is, if the file system already has one or more mount targets created for it, the request to add another mount target must meet the following requirements:

If the request satisfies the requirements, Amazon EFS does the following:

The CreateMountTarget call returns only after creating the network interface, but while the mount target state is still \"creating\". You can check the mount target creation status by calling the DescribeFileSystems API, which among other things returns the mount target state.

We recommend you create a mount target in each of the Availability Zones. There are cost considerations for using a file system in an Availability Zone through a mount target created in another Availability Zone. For more information, go to the Amazon EFS product detail page. In addition, by always using a mount target local to the instance's Availability Zone, you eliminate a partial failure scenario; if the Availability Zone in which your mount target is created goes down, then you won't be able to access your file system through that mount target.

This operation requires permission for the following action on the file system:

This operation also requires permission for the following Amazon EC2 actions:

", + "CreateTags": "

Creates or overwrites tags associated with a file system. Each tag is a key-value pair. If a tag key specified in the request already exists on the file system, this operation overwrites its value with the value provided in the request. If you add the \"Name\" tag to your file system, Amazon EFS returns it in the response to the DescribeFileSystems API.

This operation requires permission for the elasticfilesystem:CreateTags action.

", + "DeleteFileSystem": "

Deletes a file system, permanently severing access to its contents. Upon return, the file system no longer exists and you will not be able to access any contents of the deleted file system.

You cannot delete a file system that is in use. That is, if the file system has any mount targets, you must first delete them. For more information, see DescribeMountTargets and DeleteMountTarget.

The DeleteFileSystem call returns while the file system state is still "deleting". You can check the file system deletion status by calling the DescribeFileSystems API, which returns a list of file systems in your account. If you pass the file system ID or creation token of the deleted file system, DescribeFileSystems returns a 404 "FileSystemNotFound" error.

This operation requires permission for the elasticfilesystem:DeleteFileSystem action.

", + "DeleteMountTarget": "

Deletes the specified mount target.

This operation forcibly breaks any mounts of the file system via the mount target being deleted, which might disrupt instances or applications using those mounts. To avoid applications getting cut off abruptly, you might consider unmounting any mounts of the mount target, if feasible. The operation also deletes the associated network interface. Uncommitted writes may be lost, but breaking a mount target using this operation does not corrupt the file system itself. The file system you created remains. You can mount the file system on an EC2 instance in your VPC using another mount target.

This operation requires permission for the following action on the file system:

The DeleteMountTarget call returns while the mount target state is still "deleting". You can check the mount target deletion status by calling the DescribeMountTargets API, which returns a list of mount target descriptions for the given file system.

The operation also requires permission for the following Amazon EC2 action on the mount target's network interface:

", + "DeleteTags": "

Deletes the specified tags from a file system. If the DeleteTags request includes a tag key that does not exist, Amazon EFS ignores it; it is not an error. For more information about tags and related restrictions, go to Tag Restrictions in the AWS Billing and Cost Management User Guide.

This operation requires permission for the elasticfilesystem:DeleteTags action.

", + "DescribeFileSystems": "

Returns the description of a specific Amazon EFS file system if either the file system CreationToken or the FileSystemId is provided; otherwise, returns descriptions of all file systems owned by the caller's AWS account in the AWS region of the endpoint that you're calling.

When retrieving all file system descriptions, you can optionally specify the MaxItems parameter to limit the number of descriptions in a response. If more file system descriptions remain, Amazon EFS returns a NextMarker, an opaque token, in the response. In this case, you should send a subsequent request with the Marker request parameter set to the value of NextMarker.

To retrieve a list of all your file system descriptions, the expected usage of this API is an iterative process: call DescribeFileSystems without the Marker, then continue calling it with the Marker parameter set to the value of the NextMarker from the previous response, until the response has no NextMarker.

Note that the implementation may return fewer than MaxItems file system descriptions while still including a NextMarker value.

The order of file systems returned in the response of one DescribeFileSystems call, and the order of file systems returned across the responses of a multi-call iteration, is unspecified.

This operation requires permission for the elasticfilesystem:DescribeFileSystems action.

", + "DescribeMountTargetSecurityGroups": "

Returns the security groups currently in effect for a mount target. This operation requires that the network interface of the mount target has been created and the life cycle state of the mount target is not \"deleted\".

This operation requires permissions for the following actions:

", + "DescribeMountTargets": "

Returns the descriptions of the current mount targets for a file system. The order of mount targets returned in the response is unspecified.

This operation requires permission for the elasticfilesystem:DescribeMountTargets action on the file system FileSystemId.

", + "DescribeTags": "

Returns the tags associated with a file system. The order of tags returned in the response of one DescribeTags call, and the order of tags returned across the responses of a multi-call iteration (when using pagination), is unspecified.

This operation requires permission for the elasticfilesystem:DescribeTags action.

", + "ModifyMountTargetSecurityGroups": "

Modifies the set of security groups in effect for a mount target.

When you create a mount target, Amazon EFS also creates a new network interface (see CreateMountTarget). This operation replaces the security groups in effect for the network interface associated with a mount target, with the SecurityGroups provided in the request. This operation requires that the network interface of the mount target has been created and the life cycle state of the mount target is not \"deleted\".

The operation requires permissions for the following actions:

" + }, + "service": "Amazon Elastic File System", + "shapes": { + "AwsAccountId": { + "base": null, + "refs": { + "FileSystemDescription$OwnerId": "

The AWS account that created the file system. If the file system was created by an IAM user, the parent account to which the user belongs is the owner.

", + "MountTargetDescription$OwnerId": "

The AWS account ID that owns the resource.

" + } + }, + "BadRequest": { + "base": "

Returned if the request is malformed or contains an error such as an invalid parameter value or a missing required parameter.

", + "refs": { + } + }, + "CreateFileSystemRequest": { + "base": null, + "refs": { + } + }, + "CreateMountTargetRequest": { + "base": null, + "refs": { + } + }, + "CreateTagsRequest": { + "base": null, + "refs": { + } + }, + "CreationToken": { + "base": null, + "refs": { + "CreateFileSystemRequest$CreationToken": "

String of up to 64 ASCII characters. Amazon EFS uses this to ensure idempotent creation.

", + "DescribeFileSystemsRequest$CreationToken": "

Optional string. Restricts the list to the file system with this creation token (you specify a creation token at the time of creating an Amazon EFS file system).

", + "FileSystemDescription$CreationToken": "

Opaque string specified in the request.

" + } + }, + "DeleteFileSystemRequest": { + "base": null, + "refs": { + } + }, + "DeleteMountTargetRequest": { + "base": null, + "refs": { + } + }, + "DeleteTagsRequest": { + "base": null, + "refs": { + } + }, + "DependencyTimeout": { + "base": "

The service timed out trying to fulfill the request, and the client should try the call again.

", + "refs": { + } + }, + "DescribeFileSystemsRequest": { + "base": null, + "refs": { + } + }, + "DescribeFileSystemsResponse": { + "base": null, + "refs": { + } + }, + "DescribeMountTargetSecurityGroupsRequest": { + "base": null, + "refs": { + } + }, + "DescribeMountTargetSecurityGroupsResponse": { + "base": null, + "refs": { + } + }, + "DescribeMountTargetsRequest": { + "base": null, + "refs": { + } + }, + "DescribeMountTargetsResponse": { + "base": null, + "refs": { + } + }, + "DescribeTagsRequest": { + "base": null, + "refs": { + } + }, + "DescribeTagsResponse": { + "base": null, + "refs": { + } + }, + "ErrorCode": { + "base": null, + "refs": { + "BadRequest$ErrorCode": null, + "DependencyTimeout$ErrorCode": null, + "FileSystemAlreadyExists$ErrorCode": null, + "FileSystemInUse$ErrorCode": null, + "FileSystemLimitExceeded$ErrorCode": null, + "FileSystemNotFound$ErrorCode": null, + "IncorrectFileSystemLifeCycleState$ErrorCode": null, + "IncorrectMountTargetState$ErrorCode": null, + "InternalServerError$ErrorCode": null, + "IpAddressInUse$ErrorCode": null, + "MountTargetConflict$ErrorCode": null, + "MountTargetNotFound$ErrorCode": null, + "NetworkInterfaceLimitExceeded$ErrorCode": null, + "NoFreeAddressesInSubnet$ErrorCode": null, + "SecurityGroupLimitExceeded$ErrorCode": null, + "SecurityGroupNotFound$ErrorCode": null, + "SubnetNotFound$ErrorCode": null, + "UnsupportedAvailabilityZone$ErrorCode": null + } + }, + "ErrorMessage": { + "base": null, + "refs": { + "BadRequest$Message": null, + "DependencyTimeout$Message": null, + "FileSystemAlreadyExists$Message": null, + "FileSystemInUse$Message": null, + "FileSystemLimitExceeded$Message": null, + "FileSystemNotFound$Message": null, + "IncorrectFileSystemLifeCycleState$Message": null, + "IncorrectMountTargetState$Message": null, + "InternalServerError$Message": null, + "IpAddressInUse$Message": null, + "MountTargetConflict$Message": null, + "MountTargetNotFound$Message": null, + "NetworkInterfaceLimitExceeded$Message": null, + "NoFreeAddressesInSubnet$Message": null, + "SecurityGroupLimitExceeded$Message": null, + "SecurityGroupNotFound$Message": null, + "SubnetNotFound$Message": null, + "UnsupportedAvailabilityZone$Message": null + } + }, + "FileSystemAlreadyExists": { + "base": "

Returned if the file system you are trying to create already exists, with the creation token you provided.

", + "refs": { + } + }, + "FileSystemDescription": { + "base": "

This object provides a description of a file system.

", + "refs": { + "FileSystemDescriptions$member": null + } + }, + "FileSystemDescriptions": { + "base": null, + "refs": { + "DescribeFileSystemsResponse$FileSystems": "

An array of file system descriptions.

" + } + }, + "FileSystemId": { + "base": null, + "refs": { + "CreateMountTargetRequest$FileSystemId": "

The ID of the file system for which to create the mount target.

", + "CreateTagsRequest$FileSystemId": "

String. The ID of the file system whose tags you want to modify. This operation modifies only the tags and not the file system.

", + "DeleteFileSystemRequest$FileSystemId": "

The ID of the file system you want to delete.

", + "DeleteTagsRequest$FileSystemId": "

String. The ID of the file system whose tags you want to delete.

", + "DescribeFileSystemsRequest$FileSystemId": "

Optional string. File system ID whose description you want to retrieve.

", + "DescribeMountTargetsRequest$FileSystemId": "

String. The ID of the file system whose mount targets you want to list.

", + "DescribeTagsRequest$FileSystemId": "

The ID of the file system whose tag set you want to retrieve.

", + "FileSystemAlreadyExists$FileSystemId": null, + "FileSystemDescription$FileSystemId": "

The file system ID assigned by Amazon EFS.

", + "MountTargetDescription$FileSystemId": "

The ID of the file system for which the mount target is intended.

" + } + }, + "FileSystemInUse": { + "base": "

Returned if a file system has mount targets.

", + "refs": { + } + }, + "FileSystemLimitExceeded": { + "base": "

Returned if the AWS account has already created the maximum number of file systems allowed per account.

", + "refs": { + } + }, + "FileSystemNotFound": { + "base": "

Returned if the specified FileSystemId does not exist in the requester's AWS account.

", + "refs": { + } + }, + "FileSystemSize": { + "base": "

This object provides the latest known metered size, in bytes, of data stored in the file system, in its Value field, and the time at which that size was determined in its Timestamp field. Note that the value does not represent the size of a consistent snapshot of the file system, but it is eventually consistent when there are no writes to the file system. That is, the value will represent the actual size only if the file system is not modified for a period longer than a couple of hours. Otherwise, the value is not necessarily the exact size the file system was at any instant in time.

", + "refs": { + "FileSystemDescription$SizeInBytes": "

This object provides the latest known metered size of data stored in the file system, in bytes, in its Value field, and the time at which that size was determined in its Timestamp field. The Timestamp value is the integer number of seconds since 1970-01-01T00:00:00Z. Note that the value does not represent the size of a consistent snapshot of the file system, but it is eventually consistent when there are no writes to the file system. That is, the value will represent actual size only if the file system is not modified for a period longer than a couple of hours. Otherwise, the value is not the exact size the file system was at any instant in time.

" + } + }, + "FileSystemSizeValue": { + "base": null, + "refs": { + "FileSystemSize$Value": "

The latest known metered size, in bytes, of data stored in the file system.

" + } + }, + "IncorrectFileSystemLifeCycleState": { + "base": "

Returned if the file system's life cycle state is not \"created\".

", + "refs": { + } + }, + "IncorrectMountTargetState": { + "base": "

Returned if the mount target is not in the correct state for the operation.

", + "refs": { + } + }, + "InternalServerError": { + "base": "

Returned if an error occurred on the server side.

", + "refs": { + } + }, + "IpAddress": { + "base": null, + "refs": { + "CreateMountTargetRequest$IpAddress": "

A valid IPv4 address within the address range of the specified subnet.

", + "MountTargetDescription$IpAddress": "

The address at which the file system may be mounted via the mount target.

" + } + }, + "IpAddressInUse": { + "base": "

Returned if the request specified an IpAddress that is already in use in the subnet.

", + "refs": { + } + }, + "LifeCycleState": { + "base": null, + "refs": { + "FileSystemDescription$LifeCycleState": "

A predefined string value that indicates the lifecycle phase of the file system.

", + "MountTargetDescription$LifeCycleState": "

The lifecycle state the mount target is in.

" + } + }, + "Marker": { + "base": null, + "refs": { + "DescribeFileSystemsRequest$Marker": "

Optional string. Opaque pagination token returned from a previous DescribeFileSystems operation. If present, specifies to continue the list from where the previous call left off.

", + "DescribeFileSystemsResponse$Marker": "

A string, present if provided by the caller in the request.

", + "DescribeFileSystemsResponse$NextMarker": "

A string, present if there are more file systems than returned in the response. You can use the NextMarker in the subsequent request to fetch the descriptions.

", + "DescribeMountTargetsRequest$Marker": "

Optional. String. Opaque pagination token returned from a previous DescribeMountTargets operation. If present, it specifies to continue the list from where the previous call left off.

", + "DescribeMountTargetsResponse$Marker": "

If the request included the Marker, the response returns that value in this field.

", + "DescribeMountTargetsResponse$NextMarker": "

If a value is present, there are more mount targets to return. In a subsequent request, you can provide Marker in your request with this value to retrieve the next set of mount targets.

", + "DescribeTagsRequest$Marker": "

Optional. String. Opaque pagination token returned from a previous DescribeTags operation. If present, it specifies to continue the list from where the previous call left off.

", + "DescribeTagsResponse$Marker": "

If the request included a Marker, the response returns that value in this field.

", + "DescribeTagsResponse$NextMarker": "

If a value is present, there are more tags to return. In a subsequent request, you can provide the value of NextMarker as the value of the Marker parameter in your next request to retrieve the next set of tags.

" + } + }, + "MaxItems": { + "base": null, + "refs": { + "DescribeFileSystemsRequest$MaxItems": "

Optional integer. Specifies the maximum number of file systems to return in the response. This parameter value must be greater than 0. The number of items Amazon EFS returns will be the minimum of the MaxItems parameter specified in the request and the service's internal maximum number of items per page.

", + "DescribeMountTargetsRequest$MaxItems": "

Optional. Maximum number of mount targets to return in the response. It must be an integer with a value greater than zero.

", + "DescribeTagsRequest$MaxItems": "

Optional. Maximum number of file system tags to return in the response. It must be an integer with a value greater than zero.

" + } + }, + "ModifyMountTargetSecurityGroupsRequest": { + "base": null, + "refs": { + } + }, + "MountTargetConflict": { + "base": "

Returned if the mount target would violate one of the specified restrictions based on the file system's existing mount targets.

", + "refs": { + } + }, + "MountTargetCount": { + "base": null, + "refs": { + "FileSystemDescription$NumberOfMountTargets": "

The current number of mount targets (see CreateMountTarget) the file system has.

" + } + }, + "MountTargetDescription": { + "base": "

This object provides a description of a mount target.

", + "refs": { + "MountTargetDescriptions$member": null + } + }, + "MountTargetDescriptions": { + "base": null, + "refs": { + "DescribeMountTargetsResponse$MountTargets": "

Returns the file system's mount targets as an array of MountTargetDescription objects.

" + } + }, + "MountTargetId": { + "base": null, + "refs": { + "DeleteMountTargetRequest$MountTargetId": "

String. The ID of the mount target to delete.

", + "DescribeMountTargetSecurityGroupsRequest$MountTargetId": "

The ID of the mount target whose security groups you want to retrieve.

", + "ModifyMountTargetSecurityGroupsRequest$MountTargetId": "

The ID of the mount target whose security groups you want to modify.

", + "MountTargetDescription$MountTargetId": "

The system-assigned mount target ID.

" + } + }, + "MountTargetNotFound": { + "base": "

Returned if no mount target with the specified ID is found in the caller's account.

", + "refs": { + } + }, + "NetworkInterfaceId": { + "base": null, + "refs": { + "MountTargetDescription$NetworkInterfaceId": "

The ID of the network interface that Amazon EFS created when it created the mount target.

" + } + }, + "NetworkInterfaceLimitExceeded": { + "base": "

The calling account has reached the ENI limit for the specific AWS region. The client should try to delete some ENIs or get its account limit raised. For more information, go to Amazon VPC Limits in the Amazon Virtual Private Cloud User Guide (see the Network interfaces per VPC entry in the table).

", + "refs": { + } + }, + "NoFreeAddressesInSubnet": { + "base": "

Returned if IpAddress was not specified in the request and there are no free IP addresses in the subnet.

", + "refs": { + } + }, + "SecurityGroup": { + "base": null, + "refs": { + "SecurityGroups$member": null + } + }, + "SecurityGroupLimitExceeded": { + "base": "

Returned if the size of SecurityGroups specified in the request is greater than five.

", + "refs": { + } + }, + "SecurityGroupNotFound": { + "base": "

Returned if one of the specified security groups does not exist in the subnet's VPC.

", + "refs": { + } + }, + "SecurityGroups": { + "base": null, + "refs": { + "CreateMountTargetRequest$SecurityGroups": "

Up to 5 VPC security group IDs, of the form "sg-xxxxxxxx". These must be for the same VPC as the subnet specified.

", + "DescribeMountTargetSecurityGroupsResponse$SecurityGroups": "

An array of security groups.

", + "ModifyMountTargetSecurityGroupsRequest$SecurityGroups": "

An array of up to five VPC security group IDs.

" + } + }, + "SubnetId": { + "base": null, + "refs": { + "CreateMountTargetRequest$SubnetId": "

The ID of the subnet to add the mount target in.

", + "MountTargetDescription$SubnetId": "

The ID of the subnet that the mount target is in.

" + } + }, + "SubnetNotFound": { + "base": "

Returned if there is no subnet with the ID SubnetId provided in the request.

", + "refs": { + } + }, + "Tag": { + "base": "

A tag is a key-value pair. The allowed characters in keys and values are letters, whitespace, and numbers, representable in UTF-8, and the characters '+', '-', '=', '.', '_', ':', and '/'.

", + "refs": { + "Tags$member": null + } + }, + "TagKey": { + "base": null, + "refs": { + "Tag$Key": "

Tag key, a string. The key must not start with \"aws:\".

", + "TagKeys$member": null + } + }, + "TagKeys": { + "base": null, + "refs": { + "DeleteTagsRequest$TagKeys": "

A list of tag keys to delete.

" + } + }, + "TagValue": { + "base": null, + "refs": { + "FileSystemDescription$Name": "

You can add tags to a file system (see CreateTags) including a \"Name\" tag. If the file system has a \"Name\" tag, Amazon EFS returns the value in this field.

", + "Tag$Value": "

Value of the tag key.

" + } + }, + "Tags": { + "base": null, + "refs": { + "CreateTagsRequest$Tags": "

An array of Tag objects to add. Each Tag object is a key-value pair.

", + "DescribeTagsResponse$Tags": "

Returns tags associated with the file system as an array of Tag objects.

" + } + }, + "Timestamp": { + "base": null, + "refs": { + "FileSystemDescription$CreationTime": "

The time at which the file system was created, in seconds since 1970-01-01T00:00:00Z.

", + "FileSystemSize$Timestamp": "

The time at which the size of data, returned in the Value field, was determined. The value is the integer number of seconds since 1970-01-01T00:00:00Z.

" + } + }, + "UnsupportedAvailabilityZone": { + "base": null, + "refs": { + } + } + } +} diff --git a/src/data/elasticmapreduce/2009-03-31/waiters-2.json b/src/data/elasticmapreduce/2009-03-31/waiters-2.json index 9c68bdda0c..829f1b1ac8 100644 --- a/src/data/elasticmapreduce/2009-03-31/waiters-2.json +++ b/src/data/elasticmapreduce/2009-03-31/waiters-2.json @@ -37,6 +37,31 @@ "expected": "TERMINATED_WITH_ERRORS" } ] + }, + "StepComplete": { + "delay": 30, + "operation": "DescribeStep", + "maxAttempts": 60, + "acceptors": [ + { + "state": "success", + "matcher": "path", + "argument": "Step.Status.State", + "expected": "COMPLETED" + }, + { + "state": "failure", + "matcher": "path", + "argument": "Step.Status.State", + "expected": "FAILED" + }, + { + "state": "failure", + "matcher": "path", + "argument": "Step.Status.State", + "expected": "CANCELLED" + } + ] } } } diff --git a/src/data/elastictranscoder/2012-09-25/api-2.json b/src/data/elastictranscoder/2012-09-25/api-2.json index 7fc0e81a69..419e9588dc 100644 --- a/src/data/elastictranscoder/2012-09-25/api-2.json +++ b/src/data/elastictranscoder/2012-09-25/api-2.json @@ -695,6 +695,14 @@ "type":"string", "pattern":"(^auto$)|(^1:1$)|(^4:3$)|(^3:2$)|(^16:9$)" }, + "AudioBitDepth":{ + "type":"string", + "pattern":"(^16$)|(^24$)" + }, + "AudioBitOrder":{ + "type":"string", + "pattern":"(^LittleEndian$)" + }, "AudioBitRate":{ "type":"string", "pattern":"^\\d{1,3}$" @@ -705,18 +713,25 @@ }, "AudioCodec":{ "type":"string", - "pattern":"(^AAC$)|(^vorbis$)|(^mp3$)|(^mp2$)" + "pattern":"(^AAC$)|(^vorbis$)|(^mp3$)|(^mp2$)|(^pcm$)|(^flac$)" }, "AudioCodecOptions":{ "type":"structure", "members":{ - "Profile":{"shape":"AudioCodecProfile"} + "Profile":{"shape":"AudioCodecProfile"}, + "BitDepth":{"shape":"AudioBitDepth"}, + "BitOrder":{"shape":"AudioBitOrder"}, + "Signed":{"shape":"AudioSigned"} } }, "AudioCodecProfile":{ "type":"string", "pattern":"(^auto$)|(^AAC-LC$)|(^HE-AAC$)|(^HE-AACv2$)" }, + "AudioPackingMode":{ + "type":"string", + "pattern":"(^SingleTrack$)|(^OneChannelPerTrack$)|(^OneChannelPerTrackWithMosTo8Tracks$)" + }, "AudioParameters":{ "type":"structure", "members":{ @@ -724,12 +739,17 @@ "SampleRate":{"shape":"AudioSampleRate"}, "BitRate":{"shape":"AudioBitRate"}, "Channels":{"shape":"AudioChannels"}, + "AudioPackingMode":{"shape":"AudioPackingMode"}, "CodecOptions":{"shape":"AudioCodecOptions"} } }, "AudioSampleRate":{ "type":"string", - "pattern":"(^auto$)|(^22050$)|(^32000$)|(^44100$)|(^48000$)|(^96000$)" + "pattern":"(^auto$)|(^22050$)|(^32000$)|(^44100$)|(^48000$)|(^96000$)|(^192000$)" + }, + "AudioSigned":{ + "type":"string", + "pattern":"(^Signed$)" }, "Base64EncodedString":{ "type":"string", @@ -1437,7 +1457,7 @@ }, "PresetContainer":{ "type":"string", - "pattern":"(^mp4$)|(^ts$)|(^webm$)|(^mp3$)|(^ogg$)|(^fmp4$)|(^mpg$)|(^flv$)|(^gif$)" + "pattern":"(^mp4$)|(^ts$)|(^webm$)|(^mp3$)|(^flac$)|(^oga$)|(^ogg$)|(^fmp4$)|(^mpg$)|(^flv$)|(^gif$)|(^mxf$)" }, "PresetType":{ "type":"string", diff --git a/src/data/elastictranscoder/2012-09-25/docs-2.json b/src/data/elastictranscoder/2012-09-25/docs-2.json index f2f8ea1614..0989c649b3 100644 --- a/src/data/elastictranscoder/2012-09-25/docs-2.json +++ b/src/data/elastictranscoder/2012-09-25/docs-2.json @@ -68,6 +68,18 @@ "VideoParameters$DisplayAspectRatio": "

The value that Elastic Transcoder adds to the metadata in the output file.

" } }, + "AudioBitDepth": { + "base": null, + "refs": { + "AudioCodecOptions$BitDepth": "

You can only choose an audio bit depth when you specify flac or pcm for the value of Audio:Codec.

The bit depth of a sample is how many bits of information are included in the audio samples. The higher the bit depth, the better the audio, but the larger the file.

Valid values are 16 and 24.

The most common bit depth is 24.

" + } + }, + "AudioBitOrder": { + "base": null, + "refs": { + "AudioCodecOptions$BitOrder": "

You can only choose an audio bit order when you specify pcm for the value of Audio:Codec.

The order the bits of a PCM sample are stored in.

The supported value is LittleEndian.

" + } + }, "AudioBitRate": { "base": null, "refs": { @@ -77,13 +89,13 @@ "AudioChannels": { "base": null, "refs": { - "AudioParameters$Channels": "

The number of audio channels in the output file. Valid values include:

auto, 0, 1, 2

If you specify auto, Elastic Transcoder automatically detects the number of channels in the input file.

" + "AudioParameters$Channels": "

The number of audio channels in the output file. The following values are valid:

auto, 0, 1, 2

One channel carries the information played by a single speaker. For example, a stereo track with two channels sends one channel to the left speaker, and the other channel to the right speaker. The output channels are organized into tracks. If you want Elastic Transcoder to automatically detect the number of audio channels in the input file and use that value for the output file, select auto.

The output of a specific channel value and inputs are as follows:

For more information about how Elastic Transcoder organizes channels and tracks, see Audio:AudioPackingMode.

" } }, "AudioCodec": { "base": null, "refs": { - "AudioParameters$Codec": "

The audio codec for the output file. Valid values include aac, mp2, mp3, and vorbis.

" + "AudioParameters$Codec": "

The audio codec for the output file. Valid values include aac, flac, mp2, mp3, pcm, and vorbis.

" } }, "AudioCodecOptions": { @@ -98,6 +110,12 @@ "AudioCodecOptions$Profile": "

You can only choose an audio profile when you specify AAC for the value of Audio:Codec.

Specify the AAC profile for the output file. Elastic Transcoder supports the following profiles:

All outputs in a Smooth playlist must have the same value for Profile.

If you created any presets before AAC profiles were added, Elastic Transcoder automatically updated your presets to use AAC-LC. You can change the value as required.

" } }, + "AudioPackingMode": { + "base": null, + "refs": { + "AudioParameters$AudioPackingMode": "

The method of organizing audio channels and tracks. Use Audio:Channels to specify the number of channels in your output, and Audio:AudioPackingMode to specify the number of tracks and their relation to the channels. If you do not specify an Audio:AudioPackingMode, Elastic Transcoder uses SingleTrack.

The following values are valid:

SingleTrack, OneChannelPerTrack, and OneChannelPerTrackWithMosTo8Tracks

When you specify SingleTrack, Elastic Transcoder creates a single track for your output. The track can have up to eight channels. Use SingleTrack for all non-mxf containers.

The outputs of SingleTrack for a specific channel value and inputs are as follows:

When you specify OneChannelPerTrack, Elastic Transcoder creates a new track for every channel in your output. Your output can have up to eight single-channel tracks.

The outputs of OneChannelPerTrack for a specific channel value and inputs are as follows:

When you specify OneChannelPerTrackWithMosTo8Tracks, Elastic Transcoder creates eight single-channel tracks for your output. All tracks that do not contain audio data from an input channel are MOS, or Mit Out Sound, tracks.

The outputs of OneChannelPerTrackWithMosTo8Tracks for a specific channel value and inputs are as follows:

" + } + }, "AudioParameters": { "base": "

Parameters required for transcoding audio.

", "refs": { @@ -111,6 +129,12 @@ "AudioParameters$SampleRate": "

The sample rate of the audio stream in the output file, in Hertz. Valid values include:

auto, 22050, 32000, 44100, 48000, 96000

If you specify auto, Elastic Transcoder automatically detects the sample rate.

" } }, + "AudioSigned": { + "base": null, + "refs": { + "AudioCodecOptions$Signed": "

You can only choose whether an audio sample is signed when you specify pcm for the value of Audio:Codec.

Whether audio samples are represented with negative and positive numbers (signed) or only positive numbers (unsigned).

The supported value is Signed.

" + } + }, "Base64EncodedString": { "base": null, "refs": { @@ -152,7 +176,7 @@ "CaptionFormatFormat": { "base": null, "refs": { - "CaptionFormat$Format": "

The format you specify determines whether Elastic Transcoder generates an embedded or sidecar caption for this output.

" + "CaptionFormat$Format": "

The format you specify determines whether Elastic Transcoder generates an embedded or sidecar caption for this output.

" } }, "CaptionFormatPattern": { @@ -805,8 +829,8 @@ "PresetContainer": { "base": null, "refs": { - "CreatePresetRequest$Container": "

The container type for the output file. Valid values include flv, fmp4, gif, mp3, mp4, mpg, ogg, ts, and webm.

", - "Preset$Container": "

The container type for the output file. Valid values include flv, fmp4, gif, mp3, mp4, mpg, ogg, ts, and webm.

" + "CreatePresetRequest$Container": "

The container type for the output file. Valid values include flac, flv, fmp4, gif, mp3, mp4, mpg, mxf, oga, ogg, ts, and webm.

", + "Preset$Container": "

The container type for the output file. Valid values include flac, flv, fmp4, gif, mp3, mp4, mpg, mxf, oga, ogg, ts, and webm.

" } }, "PresetType": { diff --git a/src/data/glacier/2012-06-01/api-2.json b/src/data/glacier/2012-06-01/api-2.json index 5ad00f2cd3..d440ab380b 100644 --- a/src/data/glacier/2012-06-01/api-2.json +++ b/src/data/glacier/2012-06-01/api-2.json @@ -1,4 +1,5 @@ { + "version":"2.0", "metadata":{ "apiVersion":"2012-06-01", "checksumFormat":"sha256", @@ -165,6 +166,37 @@ } ] }, + "DeleteVaultAccessPolicy":{ + "name":"DeleteVaultAccessPolicy", + "http":{ + "method":"DELETE", + "requestUri":"/{accountId}/vaults/{vaultName}/access-policy", + "responseCode":204 + }, + "input":{"shape":"DeleteVaultAccessPolicyInput"}, + "errors":[ + { + "shape":"ResourceNotFoundException", + "error":{"httpStatusCode":404}, + "exception":true + }, + { + "shape":"InvalidParameterValueException", + "error":{"httpStatusCode":400}, + "exception":true + }, + { + "shape":"MissingParameterValueException", + "error":{"httpStatusCode":400}, + "exception":true + }, + { + "shape":"ServiceUnavailableException", + "error":{"httpStatusCode":500}, + "exception":true + } + ] + }, "DeleteVaultNotifications":{ "name":"DeleteVaultNotifications", "http":{ @@ -315,6 +347,37 @@ } ] }, + "GetVaultAccessPolicy":{ + "name":"GetVaultAccessPolicy", + "http":{ + "method":"GET", + "requestUri":"/{accountId}/vaults/{vaultName}/access-policy" + }, + "input":{"shape":"GetVaultAccessPolicyInput"}, + "output":{"shape":"GetVaultAccessPolicyOutput"}, + "errors":[ + { + "shape":"ResourceNotFoundException", + "error":{"httpStatusCode":404}, + "exception":true + }, + { + "shape":"InvalidParameterValueException", + "error":{"httpStatusCode":400}, + "exception":true + }, + { + "shape":"MissingParameterValueException", + "error":{"httpStatusCode":400}, + "exception":true + }, + { + "shape":"ServiceUnavailableException", + "error":{"httpStatusCode":500}, + "exception":true + } + ] + }, "GetVaultNotifications":{ "name":"GetVaultNotifications", "http":{ @@ -565,6 +628,37 @@ } ] }, + "SetVaultAccessPolicy":{ + "name":"SetVaultAccessPolicy", + "http":{ + "method":"PUT", + "requestUri":"/{accountId}/vaults/{vaultName}/access-policy", + "responseCode":204 + }, + "input":{"shape":"SetVaultAccessPolicyInput"}, + "errors":[ + { + "shape":"ResourceNotFoundException", + "error":{"httpStatusCode":404}, + "exception":true + }, + { + "shape":"InvalidParameterValueException", + "error":{"httpStatusCode":400}, + "exception":true + }, + { + "shape":"MissingParameterValueException", + "error":{"httpStatusCode":400}, + "exception":true + }, + { + "shape":"ServiceUnavailableException", + "error":{"httpStatusCode":500}, + "exception":true + } + ] + }, "SetVaultNotifications":{ "name":"SetVaultNotifications", "http":{ @@ -831,6 +925,25 @@ "archiveId" ] }, + "DeleteVaultAccessPolicyInput":{ + "type":"structure", + "members":{ + "accountId":{ + "shape":"string", + "location":"uri", + "locationName":"accountId" + }, + "vaultName":{ + "shape":"string", + "location":"uri", + "locationName":"vaultName" + } + }, + "required":[ + "accountId", + "vaultName" + ] + }, "DeleteVaultInput":{ "type":"structure", "members":{ @@ -1007,6 +1120,32 @@ }, "payload":"body" }, + "GetVaultAccessPolicyInput":{ + "type":"structure", + "members":{ + "accountId":{ + "shape":"string", + "location":"uri", + "locationName":"accountId" + }, + "vaultName":{ + "shape":"string", + "location":"uri", + "locationName":"vaultName" + } + }, + "required":[ + "accountId", + "vaultName" + ] + }, + "GetVaultAccessPolicyOutput":{ + "type":"structure", + "members":{ + "policy":{"shape":"VaultAccessPolicy"} + }, + 
"payload":"policy" + }, "GetVaultNotificationsInput":{ "type":"structure", "members":{ @@ -1425,6 +1564,27 @@ }, "required":["accountId"] }, + "SetVaultAccessPolicyInput":{ + "type":"structure", + "members":{ + "accountId":{ + "shape":"string", + "location":"uri", + "locationName":"accountId" + }, + "vaultName":{ + "shape":"string", + "location":"uri", + "locationName":"vaultName" + }, + "policy":{"shape":"VaultAccessPolicy"} + }, + "required":[ + "accountId", + "vaultName" + ], + "payload":"policy" + }, "SetVaultNotificationsInput":{ "type":"structure", "members":{ @@ -1551,6 +1711,12 @@ "type":"list", "member":{"shape":"UploadListElement"} }, + "VaultAccessPolicy":{ + "type":"structure", + "members":{ + "Policy":{"shape":"string"} + } + }, "VaultList":{ "type":"list", "member":{"shape":"DescribeVaultOutput"} diff --git a/src/data/glacier/2012-06-01/docs-2.json b/src/data/glacier/2012-06-01/docs-2.json index 88ee9fc73f..48e5639086 100644 --- a/src/data/glacier/2012-06-01/docs-2.json +++ b/src/data/glacier/2012-06-01/docs-2.json @@ -1,15 +1,18 @@ { + "version": "2.0", "operations": { "AbortMultipartUpload": "

This operation aborts a multipart upload identified by the upload ID.

After the Abort Multipart Upload request succeeds, you cannot upload any more parts to the multipart upload or complete the multipart upload. Aborting a completed upload fails. However, aborting an already-aborted upload will succeed, for a short time. For more information about uploading a part and completing a multipart upload, see UploadMultipartPart and CompleteMultipartUpload.

This operation is idempotent.

An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM).

For conceptual information and underlying REST API, go to Working with Archives in Amazon Glacier and Abort Multipart Upload in the Amazon Glacier Developer Guide.

", "CompleteMultipartUpload": "

You call this operation to inform Amazon Glacier that all the archive parts have been uploaded and that Amazon Glacier can now assemble the archive from the uploaded parts. After assembling and saving the archive to the vault, Amazon Glacier returns the URI path of the newly created archive resource. Using the URI path, you can then access the archive. After you upload an archive, you should save the archive ID returned to retrieve the archive at a later point. You can also get the vault inventory to obtain a list of archive IDs in a vault. For more information, see InitiateJob.

In the request, you must include the computed SHA256 tree hash of the entire archive you have uploaded. For information about computing a SHA256 tree hash, see Computing Checksums. On the server side, Amazon Glacier also constructs the SHA256 tree hash of the assembled archive. If the values match, Amazon Glacier saves the archive to the vault; otherwise, it returns an error, and the operation fails. The ListParts operation returns a list of parts uploaded for a specific multipart upload. It includes checksum information for each uploaded part that can be used to debug a bad checksum issue.

Additionally, Amazon Glacier also checks for any missing content ranges when assembling the archive, if missing content ranges are found, Amazon Glacier returns an error and the operation fails.

Complete Multipart Upload is an idempotent operation. After your first successful complete multipart upload, if you call the operation again within a short period, the operation will succeed and return the same archive ID. This is useful in the event you experience a network issue that causes an aborted connection or receive a 500 server error, in which case you can repeat your Complete Multipart Upload request and get the same archive ID without creating duplicate archives. Note, however, that after the multipart upload completes, you cannot call the List Parts operation and the multipart upload will not appear in List Multipart Uploads response, even if idempotent complete is possible.

An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM).

For conceptual information and underlying REST API, go to Uploading Large Archives in Parts (Multipart Upload) and Complete Multipart Upload in the Amazon Glacier Developer Guide.

", "CreateVault": "

This operation creates a new vault with the specified name. The name of the vault must be unique within a region for an AWS account. You can create up to 1,000 vaults per account. If you need to create more vaults, contact Amazon Glacier.

You must use the following guidelines when naming a vault.

This operation is idempotent.

An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM).

For conceptual information and underlying REST API, go to Creating a Vault in Amazon Glacier and Create Vault in the Amazon Glacier Developer Guide.

", "DeleteArchive": "

This operation deletes an archive from a vault. Subsequent requests to initiate a retrieval of this archive will fail. Archive retrievals that are in progress for this archive ID may or may not succeed according to the following scenarios:

This operation is idempotent. Attempting to delete an already-deleted archive does not result in an error.

An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM).

For conceptual information and underlying REST API, go to Deleting an Archive in Amazon Glacier and Delete Archive in the Amazon Glacier Developer Guide.

", "DeleteVault": "

This operation deletes a vault. Amazon Glacier will delete a vault only if there are no archives in the vault as of the last inventory and there have been no writes to the vault since the last inventory. If either of these conditions is not satisfied, the vault deletion fails (that is, the vault is not removed) and Amazon Glacier returns an error. You can use DescribeVault to return the number of archives in a vault, and you can use Initiate a Job (POST jobs) to initiate a new inventory retrieval for a vault. The inventory contains the archive IDs you use to delete archives using Delete Archive (DELETE archive).

This operation is idempotent.

An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM).

For conceptual information and underlying REST API, go to Deleting a Vault in Amazon Glacier and Delete Vault in the Amazon Glacier Developer Guide.

", + "DeleteVaultAccessPolicy": "

This operation deletes the access policy associated with the specified vault. The operation is eventually consistent; that is, it might take some time for Amazon Glacier to completely remove the access policy, and you might still see the effect of the policy for a short time after you send the delete request.

This operation is idempotent. You can invoke delete multiple times, even if there is no policy associated with the vault. For more information about vault access policies, see Amazon Glacier Access Control with Vault Access Policies.

", "DeleteVaultNotifications": "

This operation deletes the notification configuration set for a vault. The operation is eventually consistent; that is, it might take some time for Amazon Glacier to completely disable the notifications and you might still receive some notifications for a short time after you send the delete request.

An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM).

For conceptual information and underlying REST API, go to Configuring Vault Notifications in Amazon Glacier and Delete Vault Notification Configuration in the Amazon Glacier Developer Guide.

", "DescribeJob": "

This operation returns information about a job you previously initiated, including the job initiation date, the user who initiated the job, the job status code/message and the Amazon SNS topic to notify after Amazon Glacier completes the job. For more information about initiating a job, see InitiateJob.

This operation enables you to check the status of your job. However, it is strongly recommended that you set up an Amazon SNS topic and specify it in your initiate job request so that Amazon Glacier can notify the topic after it completes the job.

A job ID will not expire for at least 24 hours after Amazon Glacier completes the job.

An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM).

For information about the underlying REST API, go to Working with Archives in Amazon Glacier in the Amazon Glacier Developer Guide.

", "DescribeVault": "

This operation returns information about a vault, including the vault's Amazon Resource Name (ARN), the date the vault was created, the number of archives it contains, and the total size of all the archives in the vault. The number of archives and their total size are as of the last inventory generation. This means that if you add or remove an archive from a vault, and then immediately use Describe Vault, the change in contents will not be immediately reflected. If you want to retrieve the latest inventory of the vault, use InitiateJob. Amazon Glacier generates vault inventories approximately daily. For more information, see Downloading a Vault Inventory in Amazon Glacier.

An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM).

For conceptual information and underlying REST API, go to Retrieving Vault Metadata in Amazon Glacier and Describe Vault in the Amazon Glacier Developer Guide.

", "GetDataRetrievalPolicy": "

This operation returns the current data retrieval policy for the account and region specified in the GET request. For more information about data retrieval policies, see Amazon Glacier Data Retrieval Policies.

", "GetJobOutput": "

This operation downloads the output of the job you initiated using InitiateJob. Depending on the job type you specified when you initiated the job, the output will be either the content of an archive or a vault inventory.

A job ID will not expire for at least 24 hours after Amazon Glacier completes the job. That is, you can download the job output within the 24 hours period after Amazon Glacier completes the job.

If the job output is large, then you can use the Range request header to retrieve a portion of the output. This allows you to download the entire output in smaller chunks of bytes. For example, suppose you have 1 GB of job output you want to download and you decide to download 128 MB chunks of data at a time, which is a total of eight Get Job Output requests. You use the following process to download the job output:

  1. Download a 128 MB chunk of output by specifying the appropriate byte range using the Range header.

  2. Along with the data, the response includes a SHA256 tree hash of the payload. You compute the checksum of the payload on the client and compare it with the checksum you received in the response to ensure you received all the expected data.

  3. Repeat steps 1 and 2 for all the eight 128 MB chunks of output data, each time specifying the appropriate byte range.

  4. After downloading all the parts of the job output, you have a list of eight checksum values. Compute the tree hash of these values to find the checksum of the entire output. Using the DescribeJob API, obtain job information of the job that provided you the output. The response includes the checksum of the entire archive stored in Amazon Glacier. You compare this value with the checksum you computed to ensure you have downloaded the entire archive content with no errors.

An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM).

For conceptual information and the underlying REST API, go to Downloading a Vault Inventory, Downloading an Archive, and Get Job Output

", + "GetVaultAccessPolicy": "

This operation retrieves the access-policy subresource set on the vault; for more information on setting this subresource, see Set Vault Access Policy (PUT access-policy). If there is no access policy set on the vault, the operation returns a 404 Not found error. For more information about vault access policies, see Amazon Glacier Access Control with Vault Access Policies.

", "GetVaultNotifications": "

This operation retrieves the notification-configuration subresource of the specified vault.

For information about setting a notification configuration on a vault, see SetVaultNotifications. If a notification configuration for a vault is not set, the operation returns a 404 Not Found error. For more information about vault notifications, see Configuring Vault Notifications in Amazon Glacier.

An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM).

For conceptual information and underlying REST API, go to Configuring Vault Notifications in Amazon Glacier and Get Vault Notification Configuration in the Amazon Glacier Developer Guide.

", "InitiateJob": "

This operation initiates a job of the specified type. In this release, you can initiate a job to retrieve either an archive or a vault inventory (a list of archives in a vault).

Retrieving data from Amazon Glacier is a two-step process:

  1. Initiate a retrieval job.

    A data retrieval policy can cause your initiate retrieval job request to fail with a PolicyEnforcedException exception. For more information about data retrieval policies, see Amazon Glacier Data Retrieval Policies. For more information about the PolicyEnforcedException exception, see Error Responses.

  2. After the job completes, download the bytes.

The retrieval request is executed asynchronously. When you initiate a retrieval job, Amazon Glacier creates a job and returns a job ID in the response. When Amazon Glacier completes the job, you can get the job output (archive or inventory data). For information about getting job output, see GetJobOutput operation.

The job must complete before you can get its output. To determine when a job is complete, you have the following options:

The information you get via the notification is the same as what you get by calling DescribeJob.

If, for a specific event, you both add a notification configuration on the vault and specify an SNS topic in your initiate job request, Amazon Glacier sends both notifications. For more information, see SetVaultNotifications.

An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM).

About the Vault Inventory

Amazon Glacier prepares an inventory for each vault periodically, every 24 hours. When you initiate a job for a vault inventory, Amazon Glacier returns the last inventory it generated for the vault, so the inventory data you get might be up to a day or two old. Also, the initiate inventory job might take some time to complete before you can download the vault inventory, so you should not retrieve a vault inventory for every vault operation. However, in some scenarios the vault inventory is useful. For example, when you upload an archive you can provide an archive description but not an archive name; Amazon Glacier gives you only a unique archive ID, an opaque string of characters. You might therefore maintain your own database that maps archive names to their corresponding Amazon Glacier-assigned archive IDs, and use the vault inventory when you need to reconcile the information in your database with the actual contents of the vault.

Range Inventory Retrieval

You can limit the number of inventory items retrieved by filtering on the archive creation date or by setting a limit.

Filtering by Archive Creation Date

You can retrieve inventory items for archives created between StartDate and EndDate by specifying values for these parameters in the InitiateJob request. Archives created on or after the StartDate and before the EndDate will be returned. If you only provide the StartDate without the EndDate, you will retrieve the inventory for all archives created on or after the StartDate. If you only provide the EndDate without the StartDate, you will get back the inventory for all archives created before the EndDate.

Limiting Inventory Items per Retrieval

You can limit the number of inventory items returned by setting the Limit parameter in the InitiateJob request. The inventory job output will contain inventory items up to the specified Limit. If there are more inventory items available, the result is paginated. After a job is complete you can use the DescribeJob operation to get a marker that you use in a subsequent InitiateJob request. The marker will indicate the starting point to retrieve the next set of inventory items. You can page through your entire inventory by repeatedly making InitiateJob requests with the marker from the previous DescribeJob output, until you get a marker from DescribeJob that returns null, indicating that there are no more inventory items available.

You can use the Limit parameter together with the date range parameters.
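A hedged sketch of a range inventory retrieval that combines the date filter and the Limit: the dates, vault name, and limit are placeholders, and the nesting of InventoryRetrievalParameters under jobParameters is taken from the API model and should be treated as an assumption.

```php
<?php
require 'vendor/autoload.php';

use Aws\Glacier\GlacierClient;

$glacier = new GlacierClient(['region' => 'us-east-1', 'version' => 'latest']);

$job = $glacier->initiateJob([
    'accountId'     => '-',
    'vaultName'     => 'examplevault',
    'jobParameters' => [
        'Type'   => 'inventory-retrieval',
        'Format' => 'JSON',
        'InventoryRetrievalParameters' => [
            'StartDate' => '2015-01-01T00:00:00Z',  // archives created on or after
            'EndDate'   => '2015-04-01T00:00:00Z',  // and strictly before
            'Limit'     => '1000',                  // at most 1,000 inventory items
        ],
    ],
]);

// If the inventory is paginated, DescribeJob on this job ID returns a marker;
// pass it as InventoryRetrievalParameters['Marker'] in the next InitiateJob call.
echo $job['jobId'], PHP_EOL;
```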

About Ranged Archive Retrieval

You can initiate an archive retrieval for the whole archive or for a range of the archive. In the case of ranged archive retrieval, you specify a byte range to return. The range specified must be megabyte (MB) aligned; that is, the range start value must be divisible by 1 MB, and the range end value plus 1 must be divisible by 1 MB or be equal to the end of the archive. If the ranged archive retrieval is not megabyte aligned, this operation returns a 400 response. Furthermore, to ensure that you get checksum values for the data you download using the Get Job Output API, the range must be tree-hash aligned.
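For example, a megabyte-aligned range can be built as follows; the sizes are placeholders, and the resulting string would be passed as the RetrievalByteRange job parameter.

```php
<?php
// Build a megabyte-aligned byte range: the start must be divisible by 1 MB and
// (end + 1) must be divisible by 1 MB or equal the archive size.
$mb          = 1024 * 1024;
$archiveSize = 300 * $mb;     // assumed archive size
$start       = 128 * $mb;     // begin at the 128 MB boundary
$length      = 64 * $mb;      // retrieve 64 MB

$end = min($start + $length, $archiveSize) - 1;

$retrievalByteRange = "$start-$end";   // "134217728-201326591"
// Pass this as jobParameters['RetrievalByteRange'] in the InitiateJob request.
```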

An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM).

For conceptual information and the underlying REST API, go to Initiate a Job and Downloading a Vault Inventory

", "InitiateMultipartUpload": "

This operation initiates a multipart upload. Amazon Glacier creates a multipart upload resource and returns its ID in the response. The multipart upload ID is used in subsequent requests to upload parts of an archive (see UploadMultipartPart).

When you initiate a multipart upload, you specify the part size in number of bytes. The part size must be a megabyte (1024 KB) multiplied by a power of 2, for example 1048576 (1 MB), 2097152 (2 MB), 4194304 (4 MB), 8388608 (8 MB), and so on. The minimum allowable part size is 1 MB, and the maximum is 4 GB.

Every part you upload to this resource (see UploadMultipartPart), except the last one, must have the same size. The last one can be the same size or smaller. For example, suppose you want to upload a 16.2 MB file. If you initiate the multipart upload with a part size of 4 MB, you will upload four parts of 4 MB each and one part of 0.2 MB.
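As a minimal sketch of starting such an upload with the AWS SDK for PHP v3, the example below uses a 4 MB part size and placeholder vault and description values.

```php
<?php
require 'vendor/autoload.php';

use Aws\Glacier\GlacierClient;

$glacier = new GlacierClient(['region' => 'us-east-1', 'version' => 'latest']);

$partSize = 4 * 1024 * 1024;   // 4194304 bytes = 1 MB * 2^2

$upload = $glacier->initiateMultipartUpload([
    'accountId'          => '-',
    'vaultName'          => 'examplevault',
    'archiveDescription' => 'photos-2015.tar',
    'partSize'           => (string) $partSize,
]);

// The upload ID identifies this multipart upload in UploadMultipartPart,
// CompleteMultipartUpload, and ListParts requests.
$uploadId = $upload['uploadId'];
```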

You don't need to know the size of the archive when you start a multipart upload because Amazon Glacier does not require you to specify the overall archive size.

After you complete the multipart upload, Amazon Glacier removes the multipart upload resource referenced by the ID. Amazon Glacier also removes the multipart upload resource if you cancel the multipart upload, and the resource may also be removed if there is no activity on it for a period of 24 hours.

An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM).

For conceptual information and underlying REST API, go to Uploading Large Archives in Parts (Multipart Upload) and Initiate Multipart Upload in the Amazon Glacier Developer Guide.

", @@ -18,6 +21,7 @@ "ListParts": "

This operation lists the parts of an archive that have been uploaded in a specific multipart upload. You can make this request at any time during an in-progress multipart upload before you complete the upload (see CompleteMultipartUpload). List Parts returns an error for completed uploads. The list returned in the List Parts response is sorted by part range.

The List Parts operation supports pagination. By default, this operation returns up to 1,000 uploaded parts in the response. You should always check the response for a marker at which to continue the list; if there are no more items, the marker is null. To return a list of parts that begins at a specific part, set the marker request parameter to the value you obtained from a previous List Parts request. You can also limit the number of parts returned in the response by specifying the limit parameter in the request.
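A hedged sketch of that pagination loop with the AWS SDK for PHP v3: the upload ID, vault name, and page size are placeholders, and the Parts, RangeInBytes, SHA256TreeHash, and Marker result keys are assumed from the API model.

```php
<?php
require 'vendor/autoload.php';

use Aws\Glacier\GlacierClient;

$glacier  = new GlacierClient(['region' => 'us-east-1', 'version' => 'latest']);
$uploadId = 'EXAMPLE_UPLOAD_ID';

$args = [
    'accountId' => '-',
    'vaultName' => 'examplevault',
    'uploadId'  => $uploadId,
    'limit'     => '100',          // page size; the service caps pages at 1,000
];

do {
    $result = $glacier->listParts($args);
    foreach ($result['Parts'] ?? [] as $part) {
        echo $part['RangeInBytes'], ' ', $part['SHA256TreeHash'], PHP_EOL;
    }
    // A null marker means there are no more parts to list.
    $args['marker'] = $result['Marker'];
} while (!empty($args['marker']));
```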

An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM).

For conceptual information and the underlying REST API, go to Working with Archives in Amazon Glacier and List Parts in the Amazon Glacier Developer Guide.

", "ListVaults": "

This operation lists all vaults owned by the calling user's account. The list returned in the response is ASCII-sorted by vault name.

By default, this operation returns up to 1,000 items. If there are more vaults to list, the response marker field contains the vault Amazon Resource Name (ARN) at which to continue the list with a new List Vaults request; otherwise, the marker field is null. To return a list of vaults that begins at a specific vault, set the marker request parameter to the vault ARN you obtained from a previous List Vaults request. You can also limit the number of vaults returned in the response by specifying the limit parameter in the request.

An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM).

For conceptual information and underlying REST API, go to Retrieving Vault Metadata in Amazon Glacier and List Vaults in the Amazon Glacier Developer Guide.

", "SetDataRetrievalPolicy": "

This operation sets and then enacts a data retrieval policy in the region specified in the PUT request. You can set one policy per region for an AWS account. The policy is enacted within a few minutes of a successful PUT operation.

The set policy operation does not affect retrieval jobs that were in progress before the policy was enacted. For more information about data retrieval policies, see Amazon Glacier Data Retrieval Policies.

", + "SetVaultAccessPolicy": "

This operation configures an access policy for a vault and will overwrite an existing policy. To configure a vault access policy, send a PUT request to the access-policy subresource of the vault. An access policy is specific to a vault and is also called a vault subresource. You can set one access policy per vault and the policy can be up to 20 KB in size. For more information about vault access policies, see Amazon Glacier Access Control with Vault Access Policies.
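A hedged sketch of setting such a policy with the AWS SDK for PHP v3: the account IDs, vault ARN, and statement are placeholders, and the 'policy' => ['Policy' => ...] wrapping follows the VaultAccessPolicy shape described later in this file.

```php
<?php
require 'vendor/autoload.php';

use Aws\Glacier\GlacierClient;

$glacier = new GlacierClient(['region' => 'us-east-1', 'version' => 'latest']);

// Example policy: let another account read job output from this vault.
$policy = json_encode([
    'Version'   => '2012-10-17',
    'Statement' => [[
        'Sid'       => 'cross-account-read',
        'Effect'    => 'Allow',
        'Principal' => ['AWS' => 'arn:aws:iam::111122223333:root'],
        'Action'    => 'glacier:GetJobOutput',
        'Resource'  => 'arn:aws:glacier:us-east-1:444455556666:vaults/examplevault',
    ]],
]);

$glacier->setVaultAccessPolicy([
    'accountId' => '-',
    'vaultName' => 'examplevault',
    'policy'    => ['Policy' => $policy],   // JSON string, up to 20 KB
]);
```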

", "SetVaultNotifications": "

This operation configures notifications that will be sent when specific events happen to a vault. By default, you don't get any notifications.

To configure vault notifications, send a PUT request to the notification-configuration subresource of the vault. The request should include a JSON document that provides an Amazon SNS topic and specific events for which you want Amazon Glacier to send notifications to the topic.
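A hedged sketch of that PUT with the AWS SDK for PHP v3: the topic ARN is a placeholder that must already allow the vault to publish, and the vaultNotificationConfig wrapper and event names are assumptions taken from the Glacier API model.

```php
<?php
require 'vendor/autoload.php';

use Aws\Glacier\GlacierClient;

$glacier = new GlacierClient(['region' => 'us-east-1', 'version' => 'latest']);

$glacier->setVaultNotifications([
    'accountId'               => '-',
    'vaultName'               => 'examplevault',
    'vaultNotificationConfig' => [
        'SNSTopic' => 'arn:aws:sns:us-east-1:111122223333:glacier-notify',
        'Events'   => ['ArchiveRetrievalCompleted', 'InventoryRetrievalCompleted'],
    ],
]);
```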

Amazon SNS topics must grant permission to the vault to be allowed to publish notifications to the topic. You can configure a vault to publish a notification for the following vault events:

An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM).

For conceptual information and underlying REST API, go to Configuring Vault Notifications in Amazon Glacier and Set Vault Notification Configuration in the Amazon Glacier Developer Guide.

", "UploadArchive": "

This operation adds an archive to a vault. This is a synchronous operation, and for a successful upload, your data is durably persisted. Amazon Glacier returns the archive ID in the x-amz-archive-id header of the response.

You must use the archive ID to access your data in Amazon Glacier. After you upload an archive, you should save the archive ID returned so that you can retrieve or delete the archive later. Besides saving the archive ID, you can also index it and give it a friendly name to allow for better searching. You can also use the optional archive description field to specify how the archive is referred to in an external index of archives, such as you might create in Amazon DynamoDB. You can also get the vault inventory to obtain a list of archive IDs in a vault. For more information, see InitiateJob.

You must provide a SHA256 tree hash of the data you are uploading. For information about computing a SHA256 tree hash, see Computing Checksums.
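A minimal sketch of a single-request upload: for payloads of 1 MB or less the SHA256 tree hash is simply the SHA-256 of the data, so the example sidesteps the full tree-hash computation. The file and vault names are placeholders, and the 'body' parameter is assumed from the API model.

```php
<?php
require 'vendor/autoload.php';

use Aws\Glacier\GlacierClient;

$glacier = new GlacierClient(['region' => 'us-east-1', 'version' => 'latest']);

$data = file_get_contents('notes.txt');   // assumed to be 1 MB or smaller

$result = $glacier->uploadArchive([
    'accountId'          => '-',
    'vaultName'          => 'examplevault',
    'archiveDescription' => 'notes.txt uploaded 2015-04-01',
    'checksum'           => hash('sha256', $data),  // tree hash == SHA-256 for <= 1 MB
    'body'               => $data,
]);

// Save the archive ID; it is the only handle on this data from now on.
echo $result['archiveId'], PHP_EOL;
```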

You can optionally specify an archive description of up to 1,024 printable ASCII characters. You can get the archive description when you either retrieve the archive or get the vault inventory. For more information, see InitiateJob. Amazon Glacier does not interpret the description in any way. An archive description does not need to be unique. You cannot use the description to retrieve or sort the archive list.

Archives are immutable. After you upload an archive, you cannot edit the archive or its description.

An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM).

For conceptual information and underlying REST API, go to Uploading an Archive in Amazon Glacier and Upload Archive in the Amazon Glacier Developer Guide.

", "UploadMultipartPart": "

This operation uploads a part of an archive. You can upload archive parts in any order. You can also upload them in parallel. You can upload up to 10,000 parts for a multipart upload.

Amazon Glacier rejects your upload part request if any of the following conditions is true:

This operation is idempotent. If you upload the same part multiple times, the data included in the most recent request overwrites the previously uploaded data.
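A hedged sketch of uploading the parts and then completing the upload: it reuses the treeHash() helper sketched under GetJobOutput above, reads the whole file into memory for brevity (a real implementation would stream), and uses placeholder names and upload ID.

```php
<?php
require 'vendor/autoload.php';

use Aws\Glacier\GlacierClient;

$glacier  = new GlacierClient(['region' => 'us-east-1', 'version' => 'latest']);
$uploadId = 'EXAMPLE_UPLOAD_ID';          // from InitiateMultipartUpload
$partSize = 4 * 1024 * 1024;              // must match the initiated part size

$data  = file_get_contents('photos-2015.tar');
$total = strlen($data);

foreach (str_split($data, $partSize) as $i => $part) {
    $start = $i * $partSize;
    $end   = $start + strlen($part) - 1;
    $glacier->uploadMultipartPart([
        'accountId' => '-',
        'vaultName' => 'examplevault',
        'uploadId'  => $uploadId,
        'checksum'  => treeHash($part),        // helper sketched earlier, not SDK code
        'range'     => "bytes $start-$end/*",  // Content-Range style, per the docs
        'body'      => $part,
    ]);
}

$glacier->completeMultipartUpload([
    'accountId'   => '-',
    'vaultName'   => 'examplevault',
    'uploadId'    => $uploadId,
    'archiveSize' => (string) $total,
    'checksum'    => treeHash($data),          // tree hash of the entire archive
]);
```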

An AWS account has full permission to perform all operations (actions). However, AWS Identity and Access Management (IAM) users don't have any permissions by default. You must grant them explicit permission to perform specific actions. For more information, see Access Control Using AWS Identity and Access Management (IAM).

For conceptual information and underlying REST API, go to Uploading Large Archives in Parts (Multipart Upload) and Upload Part in the Amazon Glacier Developer Guide.

" @@ -86,6 +90,11 @@ "refs": { } }, + "DeleteVaultAccessPolicyInput": { + "base": "

DeleteVaultAccessPolicy input.

", + "refs": { + } + }, "DeleteVaultInput": { "base": "

Provides options for deleting a vault from Amazon Glacier.

", "refs": { @@ -132,6 +141,16 @@ "refs": { } }, + "GetVaultAccessPolicyInput": { + "base": "

Input for GetVaultAccessPolicy.

", + "refs": { + } + }, + "GetVaultAccessPolicyOutput": { + "base": "

Output for GetVaultAccessPolicy.

", + "refs": { + } + }, "GetVaultNotificationsInput": { "base": "

Provides options for retrieving the notification configuration set on an Amazon Glacier vault.

", "refs": { @@ -296,6 +315,11 @@ "refs": { } }, + "SetVaultAccessPolicyInput": { + "base": "

SetVaultAccessPolicy input.

", + "refs": { + } + }, "SetVaultNotificationsInput": { "base": "

Provides options to configure notifications that will be sent when specific events happen to a vault.

", "refs": { @@ -349,6 +373,13 @@ "ListMultipartUploadsOutput$UploadsList": "

A list of in-progress multipart uploads.

" } }, + "VaultAccessPolicy": { + "base": "

Contains the vault access policy.

", + "refs": { + "GetVaultAccessPolicyOutput$policy": "

Contains the returned vault access policy as a JSON string.

", + "SetVaultAccessPolicyInput$policy": "

The vault access policy as a JSON string.

" + } + }, "VaultList": { "base": null, "refs": { @@ -386,39 +417,41 @@ "string": { "base": null, "refs": { - "AbortMultipartUploadInput$accountId": "

The AccountId is the AWS Account ID. You can specify either the AWS Account ID or optionally a '-', in which case Amazon Glacier uses the AWS Account ID associated with the credentials used to sign the request. If you specify your Account ID, do not include hyphens in it.

", + "AbortMultipartUploadInput$accountId": "

The AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.

", "AbortMultipartUploadInput$vaultName": "

The name of the vault.

", "AbortMultipartUploadInput$uploadId": "

The upload ID of the multipart upload to delete.

", "ArchiveCreationOutput$location": "

The relative URI path of the newly added archive resource.

", "ArchiveCreationOutput$checksum": "

The checksum of the archive computed by Amazon Glacier.

", "ArchiveCreationOutput$archiveId": "

The ID of the archive. This value is also included as part of the location.

", - "CompleteMultipartUploadInput$accountId": "

The AccountId is the AWS Account ID. You can specify either the AWS Account ID or optionally a '-', in which case Amazon Glacier uses the AWS Account ID associated with the credentials used to sign the request. If you specify your Account ID, do not include hyphens in it.

", + "CompleteMultipartUploadInput$accountId": "

The AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.

", "CompleteMultipartUploadInput$vaultName": "

The name of the vault.

", "CompleteMultipartUploadInput$uploadId": "

The upload ID of the multipart upload.

", "CompleteMultipartUploadInput$archiveSize": "

The total size, in bytes, of the entire archive. This value should be the sum of all the sizes of the individual parts that you uploaded.

", "CompleteMultipartUploadInput$checksum": "

The SHA256 tree hash of the entire archive. It is the tree hash of the SHA256 tree hashes of the individual parts. If the value you specify in the request does not match the SHA256 tree hash of the final assembled archive as computed by Amazon Glacier, Amazon Glacier returns an error and the request fails.

", - "CreateVaultInput$accountId": "

The AccountId is the AWS Account ID. You can specify either the AWS Account ID or optionally a '-', in which case Amazon Glacier uses the AWS Account ID associated with the credentials used to sign the request. If you specify your Account ID, do not include hyphens in it.

", + "CreateVaultInput$accountId": "

The AccountId value is the AWS account ID. This value must match the AWS account ID associated with the credentials used to sign the request. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you specify your Account ID, do not include any hyphens ('-') in the ID.

", "CreateVaultInput$vaultName": "

The name of the vault.

", "CreateVaultOutput$location": "

The URI of the vault that was created.

", "DataRetrievalRule$Strategy": "

The type of data retrieval policy to set.

Valid values: BytesPerHour|FreeTier|None

", - "DeleteArchiveInput$accountId": "

The AccountId is the AWS Account ID. You can specify either the AWS Account ID or optionally a '-', in which case Amazon Glacier uses the AWS Account ID associated with the credentials used to sign the request. If you specify your Account ID, do not include hyphens in it.

", + "DeleteArchiveInput$accountId": "

The AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.

", "DeleteArchiveInput$vaultName": "

The name of the vault.

", "DeleteArchiveInput$archiveId": "

The ID of the archive to delete.

", - "DeleteVaultInput$accountId": "

The AccountId is the AWS Account ID. You can specify either the AWS Account ID or optionally a '-', in which case Amazon Glacier uses the AWS Account ID associated with the credentials used to sign the request. If you specify your Account ID, do not include hyphens in it.

", + "DeleteVaultAccessPolicyInput$accountId": "

The AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.

", + "DeleteVaultAccessPolicyInput$vaultName": "

The name of the vault.

", + "DeleteVaultInput$accountId": "

The AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.

", "DeleteVaultInput$vaultName": "

The name of the vault.

", - "DeleteVaultNotificationsInput$accountId": "

The AccountId is the AWS Account ID. You can specify either the AWS Account ID or optionally a '-', in which case Amazon Glacier uses the AWS Account ID associated with the credentials used to sign the request. If you specify your Account ID, do not include hyphens in it.

", + "DeleteVaultNotificationsInput$accountId": "

The AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.

", "DeleteVaultNotificationsInput$vaultName": "

The name of the vault.

", - "DescribeJobInput$accountId": "

The AccountId is the AWS Account ID. You can specify either the AWS Account ID or optionally a '-', in which case Amazon Glacier uses the AWS Account ID associated with the credentials used to sign the request. If you specify your Account ID, do not include hyphens in it.

", + "DescribeJobInput$accountId": "

The AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.

", "DescribeJobInput$vaultName": "

The name of the vault.

", "DescribeJobInput$jobId": "

The ID of the job to describe.

", - "DescribeVaultInput$accountId": "

The AccountId is the AWS Account ID. You can specify either the AWS Account ID or optionally a '-', in which case Amazon Glacier uses the AWS Account ID associated with the credentials used to sign the request. If you specify your Account ID, do not include hyphens in it.

", + "DescribeVaultInput$accountId": "

The AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.

", "DescribeVaultInput$vaultName": "

The name of the vault.

", "DescribeVaultOutput$VaultARN": "

The Amazon Resource Name (ARN) of the vault.

", "DescribeVaultOutput$VaultName": "

The name of the vault.

", "DescribeVaultOutput$CreationDate": "

The UTC date when the vault was created. A string representation of ISO 8601 date format, for example, \"2012-03-20T17:03:43.221Z\".

", "DescribeVaultOutput$LastInventoryDate": "

The UTC date when Amazon Glacier completed the last vault inventory. A string representation of ISO 8601 date format, for example, \"2012-03-20T17:03:43.221Z\".

", - "GetDataRetrievalPolicyInput$accountId": "

The AccountId is the AWS Account ID. You can specify either the AWS Account ID or optionally a '-', in which case Amazon Glacier uses the AWS Account ID associated with the credentials used to sign the request. If you specify your Account ID, do not include the dashes hyphens in it.

", - "GetJobOutputInput$accountId": "

The AccountId is the AWS Account ID. You can specify either the AWS Account ID or optionally a '-', in which case Amazon Glacier uses the AWS Account ID associated with the credentials used to sign the request. If you specify your Account ID, do not include hyphens in it.

", + "GetDataRetrievalPolicyInput$accountId": "

The AccountId value is the AWS account ID. This value must match the AWS account ID associated with the credentials used to sign the request. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you specify your Account ID, do not include any hyphens ('-') in the ID.

", + "GetJobOutputInput$accountId": "

The AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.

", "GetJobOutputInput$vaultName": "

The name of the vault.

", "GetJobOutputInput$jobId": "

The job ID whose data is downloaded.

", "GetJobOutputInput$range": "

The range of bytes to retrieve from the output. For example, if you want to download the first 1,048,576 bytes, specify \"Range: bytes=0-1048575\". By default, this operation downloads the entire output.

", @@ -427,7 +460,9 @@ "GetJobOutputOutput$acceptRanges": "

Indicates the range units accepted. For more information, go to RFC2616.

", "GetJobOutputOutput$contentType": "

The Content-Type depends on whether the job output is an archive or a vault inventory. For archive data, the Content-Type is application/octet-stream. For vault inventory, if you requested CSV format when you initiated the job, the Content-Type is text/csv. Otherwise, by default, vault inventory is returned as JSON, and the Content-Type is application/json.

", "GetJobOutputOutput$archiveDescription": "

The description of an archive.

", - "GetVaultNotificationsInput$accountId": "

The AccountId is the AWS Account ID. You can specify either the AWS Account ID or optionally a '-', in which case Amazon Glacier uses the AWS Account ID associated with the credentials used to sign the request. If you specify your Account ID, do not include hyphens in it.

", + "GetVaultAccessPolicyInput$accountId": "

The AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.

", + "GetVaultAccessPolicyInput$vaultName": "

The name of the vault.

", + "GetVaultNotificationsInput$accountId": "

The AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.

", "GetVaultNotificationsInput$vaultName": "

The name of the vault.

", "GlacierJobDescription$JobId": "

An opaque string that identifies an Amazon Glacier job.

", "GlacierJobDescription$JobDescription": "

The job description you provided when you initiated the job.

", @@ -440,11 +475,11 @@ "GlacierJobDescription$SHA256TreeHash": "

For an ArchiveRetrieval job, it is the checksum of the archive. Otherwise, the value is null.

The SHA256 tree hash value for the requested range of an archive. If the Initiate a Job request for an archive specified a tree-hash aligned range, then this field returns a value.

For the specific case when the whole archive is retrieved, this value is the same as the ArchiveSHA256TreeHash value.

This field is null in the following situations:

", "GlacierJobDescription$ArchiveSHA256TreeHash": "

The SHA256 tree hash of the entire archive for an archive retrieval. For inventory retrieval jobs, this field is null.

", "GlacierJobDescription$RetrievalByteRange": "

The retrieved byte range for archive retrieval jobs, in the form \"StartByteValue-EndByteValue\". If no range was specified in the archive retrieval, then the whole archive is retrieved and StartByteValue equals 0 and EndByteValue equals the size of the archive minus 1. For inventory retrieval jobs, this field is null.

", - "InitiateJobInput$accountId": "

The AccountId is the AWS Account ID. You can specify either the AWS Account ID or optionally a '-', in which case Amazon Glacier uses the AWS Account ID associated with the credentials used to sign the request. If you specify your Account ID, do not include hyphens in it.

", + "InitiateJobInput$accountId": "

The AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.

", "InitiateJobInput$vaultName": "

The name of the vault.

", "InitiateJobOutput$location": "

The relative URI path of the job.

", "InitiateJobOutput$jobId": "

The ID of the job.

", - "InitiateMultipartUploadInput$accountId": "

The AccountId is the AWS Account ID. You can specify either the AWS Account ID or optionally a '-', in which case Amazon Glacier uses the AWS Account ID associated with the credentials used to sign the request. If you specify your Account ID, do not include hyphens in it.

", + "InitiateMultipartUploadInput$accountId": "

The AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.

", "InitiateMultipartUploadInput$vaultName": "

The name of the vault.

", "InitiateMultipartUploadInput$archiveDescription": "

The archive description that you are uploading in parts.

The part size must be a megabyte (1024 KB) multiplied by a power of 2—for example, 1048576 (1 MB), 2097152 (2 MB), 4194304 (4 MB), 8388608 (8 MB), and so on. The minimum allowable part size is 1 MB, and the maximum is 4 GB (4096 MB).

", "InitiateMultipartUploadInput$partSize": "

The size of each part except the last, in bytes. The last part can be smaller than this part size.

", @@ -469,19 +504,19 @@ "LimitExceededException$type": "

Client

", "LimitExceededException$code": "

400 Bad Request

", "LimitExceededException$message": null, - "ListJobsInput$accountId": "

The AccountId is the AWS Account ID. You can specify either the AWS Account ID or optionally a '-', in which case Amazon Glacier uses the AWS Account ID associated with the credentials used to sign the request. If you specify your Account ID, do not include hyphens in it.

", + "ListJobsInput$accountId": "

The AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.

", "ListJobsInput$vaultName": "

The name of the vault.

", "ListJobsInput$limit": "

Specifies that the response be limited to the specified number of items or fewer. If not specified, the List Jobs operation returns up to 1,000 jobs.

", "ListJobsInput$marker": "

An opaque string used for pagination. This value specifies the job at which the listing of jobs should begin. Get the marker value from a previous List Jobs response. You need only include the marker if you are continuing the pagination of results started in a previous List Jobs request.

", "ListJobsInput$statuscode": "

Specifies the type of job status to return. You can specify the following values: \"InProgress\", \"Succeeded\", or \"Failed\".

", "ListJobsInput$completed": "

Specifies the state of the jobs to return. You can specify true or false.

", "ListJobsOutput$Marker": "

An opaque string that represents where to continue pagination of the results. You use this value in a new List Jobs request to obtain more jobs in the list. If there are no more jobs, this value is null.

", - "ListMultipartUploadsInput$accountId": "

The AccountId is the AWS Account ID. You can specify either the AWS Account ID or optionally a '-', in which case Amazon Glacier uses the AWS Account ID associated with the credentials used to sign the request. If you specify your Account ID, do not include hyphens in it.

", + "ListMultipartUploadsInput$accountId": "

The AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.

", "ListMultipartUploadsInput$vaultName": "

The name of the vault.

", "ListMultipartUploadsInput$marker": "

An opaque string used for pagination. This value specifies the upload at which the listing of uploads should begin. Get the marker value from a previous List Uploads response. You need only include the marker if you are continuing the pagination of results started in a previous List Uploads request.

", "ListMultipartUploadsInput$limit": "

Specifies the maximum number of uploads returned in the response body. If this value is not specified, the List Uploads operation returns up to 1,000 uploads.

", "ListMultipartUploadsOutput$Marker": "

An opaque string that represents where to continue pagination of the results. You use the marker in a new List Multipart Uploads request to obtain more uploads in the list. If there are no more uploads, this value is null.

", - "ListPartsInput$accountId": "

The AccountId is the AWS Account ID. You can specify either the AWS Account ID or optionally a '-', in which case Amazon Glacier uses the AWS Account ID associated with the credentials used to sign the request. If you specify your Account ID, do not include hyphens in it.

", + "ListPartsInput$accountId": "

The AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.

", "ListPartsInput$vaultName": "

The name of the vault.

", "ListPartsInput$uploadId": "

The upload ID of the multipart upload.

", "ListPartsInput$marker": "

An opaque string used for pagination. This value specifies the part at which the listing of parts should begin. Get the marker value from the response of a previous List Parts response. You need only include the marker if you are continuing the pagination of results started in a previous List Parts request.

", @@ -491,7 +526,7 @@ "ListPartsOutput$ArchiveDescription": "

The description of the archive that was specified in the Initiate Multipart Upload request.

", "ListPartsOutput$CreationDate": "

The UTC time at which the multipart upload was initiated.

", "ListPartsOutput$Marker": "

An opaque string that represents where to continue pagination of the results. You use the marker in a new List Parts request to obtain more jobs in the list. If there are no more parts, this value is null.

", - "ListVaultsInput$accountId": "

The AccountId is the AWS Account ID. You can specify either the AWS Account ID or optionally a '-', in which case Amazon Glacier uses the AWS Account ID associated with the credentials used to sign the request. If you specify your Account ID, do not include hyphens in it.

", + "ListVaultsInput$accountId": "

The AccountId value is the AWS account ID. This value must match the AWS account ID associated with the credentials used to sign the request. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you specify your Account ID, do not include any hyphens ('-') in the ID.

", "ListVaultsInput$marker": "

A string used for pagination. The marker specifies the vault ARN after which the listing of vaults should begin.

", "ListVaultsInput$limit": "

The maximum number of items returned in the response. If you don't specify a value, the List Vaults operation returns up to 1,000 items.

", "ListVaultsOutput$Marker": "

The vault ARN at which to continue pagination of the results. You use the marker in another List Vaults request to obtain more vaults in the list.

", @@ -513,23 +548,26 @@ "ServiceUnavailableException$type": "

Server

", "ServiceUnavailableException$code": "

500 Internal Server Error

", "ServiceUnavailableException$message": null, - "SetDataRetrievalPolicyInput$accountId": "

The AccountId is the AWS Account ID. You can specify either the AWS Account ID or optionally a '-', in which case Amazon Glacier uses the AWS Account ID associated with the credentials used to sign the request. If you specify your Account ID, do not include the dashes in it.

", - "SetVaultNotificationsInput$accountId": "

The AccountId is the AWS Account ID. You can specify either the AWS Account ID or optionally a '-', in which case Amazon Glacier uses the AWS Account ID associated with the credentials used to sign the request. If you specify your Account ID, do not include hyphens in it.

", + "SetDataRetrievalPolicyInput$accountId": "

The AccountId value is the AWS account ID. This value must match the AWS account ID associated with the credentials used to sign the request. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you specify your Account ID, do not include any hyphens ('-') in the ID.

", + "SetVaultAccessPolicyInput$accountId": "

The AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.

", + "SetVaultAccessPolicyInput$vaultName": "

The name of the vault.

", + "SetVaultNotificationsInput$accountId": "

The AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.

", "SetVaultNotificationsInput$vaultName": "

The name of the vault.

", "UploadArchiveInput$vaultName": "

The name of the vault.

", - "UploadArchiveInput$accountId": "

The AccountId is the AWS Account ID. You can specify either the AWS Account ID or optionally a '-', in which case Amazon Glacier uses the AWS Account ID associated with the credentials used to sign the request. If you specify your Account ID, do not include hyphens in it.

", + "UploadArchiveInput$accountId": "

The AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.

", "UploadArchiveInput$archiveDescription": "

The optional description of the archive you are uploading.

", "UploadArchiveInput$checksum": "

The SHA256 tree hash of the data being uploaded.

", "UploadListElement$MultipartUploadId": "

The ID of a multipart upload.

", "UploadListElement$VaultARN": "

The Amazon Resource Name (ARN) of the vault that contains the archive.

", "UploadListElement$ArchiveDescription": "

The description of the archive that was specified in the Initiate Multipart Upload request.

", "UploadListElement$CreationDate": "

The UTC time at which the multipart upload was initiated.

", - "UploadMultipartPartInput$accountId": "

The AccountId is the AWS Account ID. You can specify either the AWS Account ID or optionally a '-', in which case Amazon Glacier uses the AWS Account ID associated with the credentials used to sign the request. If you specify your Account ID, do not include hyphens in it.

", + "UploadMultipartPartInput$accountId": "

The AccountId value is the AWS account ID of the account that owns the vault. You can either specify an AWS account ID or optionally a single '-' (hyphen), in which case Amazon Glacier uses the AWS account ID associated with the credentials used to sign the request. If you use an account ID, do not include any hyphens ('-') in the ID.

", "UploadMultipartPartInput$vaultName": "

The name of the vault.

", "UploadMultipartPartInput$uploadId": "

The upload ID of the multipart upload.

", "UploadMultipartPartInput$checksum": "

The SHA256 tree hash of the data being uploaded.

", "UploadMultipartPartInput$range": "

Identifies the range of bytes in the assembled archive that will be uploaded in this part. Amazon Glacier uses this information to assemble the archive in the proper sequence. The format of this header follows RFC 2616. An example header is Content-Range:bytes 0-4194303/*.

", "UploadMultipartPartOutput$checksum": "

The SHA256 tree hash that Amazon Glacier computed for the uploaded part.

", + "VaultAccessPolicy$Policy": "

The vault access policy.

", "VaultNotificationConfig$SNSTopic": "

The Amazon Simple Notification Service (Amazon SNS) topic Amazon Resource Name (ARN).

" } } diff --git a/src/data/iam/2010-05-08/docs-2.json b/src/data/iam/2010-05-08/docs-2.json index 527ba8e94d..9803bc4a6d 100644 --- a/src/data/iam/2010-05-08/docs-2.json +++ b/src/data/iam/2010-05-08/docs-2.json @@ -16,7 +16,7 @@ "CreateOpenIDConnectProvider": "

Creates an IAM entity to describe an identity provider (IdP) that supports OpenID Connect (OIDC).

The OIDC provider that you create with this operation can be used as a principal in a role's trust policy to establish a trust relationship between AWS and the OIDC provider.

When you create the IAM OIDC provider, you specify the URL of the OIDC identity provider (IdP) to trust, a list of client IDs (also known as audiences) that identify the application or applications that are allowed to authenticate using the OIDC provider, and a list of thumbprints of the server certificate(s) that the IdP uses. You get all of this information from the OIDC IdP that you want to use for access to AWS.

Because trust for the OIDC provider is ultimately derived from the IAM provider that this action creates, it is a best practice to limit access to the CreateOpenIDConnectProvider action to highly-privileged users. ", "CreatePolicy": "

Creates a new managed policy for your AWS account.

This operation creates a policy version with a version identifier of v1 and sets v1 as the policy's default version. For more information about policy versions, see Versioning for Managed Policies in the Using IAM guide.

For more information about managed policies in general, refer to Managed Policies and Inline Policies in the Using IAM guide.

", "CreatePolicyVersion": "

Creates a new version of the specified managed policy. To update a managed policy, you create a new policy version. A managed policy can have up to five versions. If the policy has five versions, you must delete an existing version using DeletePolicyVersion before you create a new version.

Optionally, you can set the new version as the policy's default version. The default version is the operative version; that is, the version that is in effect for the IAM users, groups, and roles that the policy is attached to.

For more information about managed policy versions, see Versioning for Managed Policies in the Using IAM guide.

", - "CreateRole": "

Creates a new role for your AWS account. For more information about roles, go to Working with Roles. For information about limitations on role names and the number of roles you can create, go to Limitations on IAM Entities in the Using IAM guide.

The example policy grants permission to an EC2 instance to assume the role. The policy is URL-encoded according to RFC 3986. For more information about RFC 3986, go to http://www.faqs.org/rfcs/rfc3986.html.

", + "CreateRole": "

Creates a new role for your AWS account. For more information about roles, go to Working with Roles. For information about limitations on role names and the number of roles you can create, go to Limitations on IAM Entities in the Using IAM guide.

The policy in the following example grants permission to an EC2 instance to assume the role.
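As a hedged illustration of the kind of policy that sentence refers to, the trust policy below lets EC2 instances assume the role via the AWS SDK for PHP v3; the role name and client wiring are placeholders.

```php
<?php
require 'vendor/autoload.php';

use Aws\Iam\IamClient;

$iam = new IamClient(['region' => 'us-east-1', 'version' => '2010-05-08']);

// Trust policy: allow the EC2 service to call sts:AssumeRole for this role.
$trustPolicy = json_encode([
    'Version'   => '2012-10-17',
    'Statement' => [[
        'Effect'    => 'Allow',
        'Principal' => ['Service' => 'ec2.amazonaws.com'],
        'Action'    => 'sts:AssumeRole',
    ]],
]);

$role = $iam->createRole([
    'RoleName'                 => 'ec2-app-role',
    'AssumeRolePolicyDocument' => $trustPolicy,
]);

echo $role['Role']['Arn'], PHP_EOL;
```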

", "CreateSAMLProvider": "

Creates an IAM entity to describe an identity provider (IdP) that supports SAML 2.0.

The SAML provider that you create with this operation can be used as a principal in a role's trust policy to establish a trust relationship between AWS and a SAML identity provider. You can create an IAM role that supports Web-based single sign-on (SSO) to the AWS Management Console or one that supports API access to AWS.

When you create the SAML provider, you upload a SAML metadata document that you get from your IdP and that includes the issuer's name, expiration information, and keys that can be used to validate the SAML authentication response (assertions) that are received from the IdP. You must generate the metadata document using the identity management software that is used as your organization's IdP.

This operation requires Signature Version 4.

For more information, see Giving Console Access Using SAML and Creating Temporary Security Credentials for SAML Federation in the Using Temporary Credentials guide.

", "CreateUser": "

Creates a new user for your AWS account.

For information about limitations on the number of users you can create, see Limitations on IAM Entities in the Using IAM guide.

", "CreateVirtualMFADevice": "

Creates a new virtual MFA device for the AWS account. After creating the virtual MFA, use EnableMFADevice to attach the MFA device to an IAM user. For more information about creating and working with virtual MFA devices, go to Using a Virtual MFA Device in the Using IAM guide.

For information about limits on the number of MFA devices you can create, see Limitations on Entities in the Using IAM guide.

The seed information contained in the QR code and the Base32 string should be treated like any other secret access information, such as your AWS access keys or your passwords. After you provision your virtual device, you should ensure that the information is destroyed following secure procedures. ", @@ -45,7 +45,7 @@ "EnableMFADevice": "

Enables the specified MFA device and associates it with the specified user name. When enabled, the MFA device is required for every subsequent login by the user name associated with the device.

", "GenerateCredentialReport": "

Generates a credential report for the AWS account. For more information about the credential report, see Getting Credential Reports in the Using IAM guide.

", "GetAccessKeyLastUsed": "

Retrieves information about when the specified access key was last used. The information includes the date and time of last use, along with the AWS service and region that were specified in the last request made with that key.

", - "GetAccountAuthorizationDetails": "

Retrieves information about all IAM users, groups, and roles in your account, including their relationships to one another and their policies. Use this API to obtain a snapshot of the configuration of IAM permissions (users, groups, roles, and policies) in your account.

You can optionally filter the results using the Filter parameter. You can paginate the results using the MaxItems and Marker parameters.

", + "GetAccountAuthorizationDetails": "

Retrieves information about all IAM users, groups, roles, and policies in your account, including their relationships to one another. Use this API to obtain a snapshot of the configuration of IAM permissions (users, groups, roles, and policies) in your account.

You can optionally filter the results using the Filter parameter. You can paginate the results using the MaxItems and Marker parameters.

", "GetAccountPasswordPolicy": "

Retrieves the password policy for the AWS account. For more information about using a password policy, go to Managing an IAM Password Policy.

", "GetAccountSummary": "

Retrieves information about IAM entity usage and IAM quotas in the AWS account.

For information about limitations on IAM entities, see Limitations on IAM Entities in the Using IAM guide.

", "GetCredentialReport": "

Retrieves a credential report for the AWS account. For more information about the credential report, see Getting Credential Reports in the Using IAM guide.

", @@ -56,7 +56,7 @@ "GetOpenIDConnectProvider": "

Returns information about the specified OpenID Connect provider.

", "GetPolicy": "

Retrieves information about the specified managed policy, including the policy's default version and the total number of users, groups, and roles that the policy is attached to. For a list of the specific users, groups, and roles that the policy is attached to, use the ListEntitiesForPolicy API. This API returns metadata about the policy. To retrieve the policy document for a specific version of the policy, use GetPolicyVersion.

This API retrieves information about managed policies. To retrieve information about an inline policy that is embedded with a user, group, or role, use the GetUserPolicy, GetGroupPolicy, or GetRolePolicy API.

For more information about policies, refer to Managed Policies and Inline Policies in the Using IAM guide.

", "GetPolicyVersion": "

Retrieves information about the specified version of the specified managed policy, including the policy document.

To list the available versions for a policy, use ListPolicyVersions.

This API retrieves information about managed policies. To retrieve information about an inline policy that is embedded in a user, group, or role, use the GetUserPolicy, GetGroupPolicy, or GetRolePolicy API.

For more information about the types of policies, refer to Managed Policies and Inline Policies in the Using IAM guide.

", - "GetRole": "

Retrieves information about the specified role, including the role's path, GUID, ARN, and the policy granting permission to assume the role. For more information about ARNs, go to ARNs. For more information about roles, go to Working with Roles.

The returned policy is URL-encoded according to RFC 3986. For more information about RFC 3986, go to http://www.faqs.org/rfcs/rfc3986.html.

", + "GetRole": "

Retrieves information about the specified role, including the role's path, GUID, ARN, and the policy granting permission to assume the role. For more information about ARNs, go to ARNs. For more information about roles, go to Working with Roles.

", "GetRolePolicy": "

Retrieves the specified inline policy document that is embedded with the specified role.

A role can also have managed policies attached to it. To retrieve a managed policy document that is attached to a role, use GetPolicy to determine the policy's default version, then use GetPolicyVersion to retrieve the policy document.

For more information about policies, refer to Managed Policies and Inline Policies in the Using IAM guide.

For more information about roles, go to Using Roles to Delegate Permissions and Federate Identities.

", "GetSAMLProvider": "

Returns the SAML provider metadocument that was uploaded when the provider was created or updated.

This operation requires Signature Version 4. ", "GetServerCertificate": "

Retrieves information about the specified server certificate.

", @@ -78,7 +78,7 @@ "ListPolicies": "

Lists all the managed policies that are available to your account, including your own customer managed policies and all AWS managed policies.

You can filter the list of policies that is returned using the optional OnlyAttached, Scope, and PathPrefix parameters. For example, to list only the customer managed policies in your AWS account, set Scope to Local. To list only AWS managed policies, set Scope to AWS.

You can paginate the results using the MaxItems and Marker parameters.

For more information about managed policies, refer to Managed Policies and Inline Policies in the Using IAM guide.

", "ListPolicyVersions": "

Lists information about the versions of the specified managed policy, including the version that is set as the policy's default version.

For more information about managed policies, refer to Managed Policies and Inline Policies in the Using IAM guide.

", "ListRolePolicies": "

Lists the names of the inline policies that are embedded in the specified role.

A role can also have managed policies attached to it. To list the managed policies that are attached to a role, use ListAttachedRolePolicies. For more information about policies, refer to Managed Policies and Inline Policies in the Using IAM guide.

You can paginate the results using the MaxItems and Marker parameters. If there are no inline policies embedded with the specified role, the action returns an empty list.

", - "ListRoles": "

Lists the roles that have the specified path prefix. If there are none, the action returns an empty list. For more information about roles, go to Working with Roles.

You can paginate the results using the MaxItems and Marker parameters.

The returned policy is URL-encoded according to RFC 3986. For more information about RFC 3986, go to http://www.faqs.org/rfcs/rfc3986.html.

", + "ListRoles": "

Lists the roles that have the specified path prefix. If there are none, the action returns an empty list. For more information about roles, go to Working with Roles.

You can paginate the results using the MaxItems and Marker parameters.

", "ListSAMLProviders": "

Lists the SAML providers in the account.

This operation requires Signature Version 4. ", "ListServerCertificates": "

Lists the server certificates that have the specified path prefix. If none exist, the action returns an empty list.

You can paginate the results using the MaxItems and Marker parameters.

", "ListSigningCertificates": "

Returns information about the signing certificates associated with the specified user. If there are none, the action returns an empty list.

Although each user is limited to a small number of signing certificates, you can still paginate the results using the MaxItems and Marker parameters.

If the UserName field is not specified, the user name is determined implicitly based on the AWS access key ID used to sign the request. Because this action works for access keys under the AWS account, you can use this action to manage root credentials even if the AWS account has no associated users.

", @@ -1873,19 +1873,19 @@ "policyDocumentType": { "base": null, "refs": { - "CreatePolicyRequest$PolicyDocument": "

The policy document.

The policy must be URL-encoded according to RFC 3986.

", - "CreatePolicyVersionRequest$PolicyDocument": "

The policy document.

The policy must be URL-encoded according to RFC 3986.

", + "CreatePolicyRequest$PolicyDocument": "

The policy document.

", + "CreatePolicyVersionRequest$PolicyDocument": "

The policy document.

", "CreateRoleRequest$AssumeRolePolicyDocument": "

The policy that grants an entity permission to assume the role.

", "GetGroupPolicyResponse$PolicyDocument": "

The policy document.

", "GetRolePolicyResponse$PolicyDocument": "

The policy document.

", "GetUserPolicyResponse$PolicyDocument": "

The policy document.

", - "PolicyDetail$PolicyDocument": "

The policy document.

The returned policy is URL-encoded according to RFC 3986.

", - "PolicyVersion$Document": "

The policy document.

The policy document is returned in the response to the GetPolicyVersion operation. It is not included in the response to the ListPolicyVersions or GetAccountAuthorizationDetails operations.

", + "PolicyDetail$PolicyDocument": "

The policy document.

", + "PolicyVersion$Document": "

The policy document.

The policy document is returned in the response to the GetPolicyVersion and GetAccountAuthorizationDetails operations. It is not returned in the response to the CreatePolicyVersion or ListPolicyVersions operations.

", "PutGroupPolicyRequest$PolicyDocument": "

The policy document.

", "PutRolePolicyRequest$PolicyDocument": "

The policy document.

", "PutUserPolicyRequest$PolicyDocument": "

The policy document.

", - "Role$AssumeRolePolicyDocument": "

The policy that grants an entity permission to assume the role.

The returned policy is URL-encoded according to RFC 3986.

", - "RoleDetail$AssumeRolePolicyDocument": "

The trust policy that grants permission to assume the role.

The returned policy is URL-encoded according to RFC 3986.

", + "Role$AssumeRolePolicyDocument": "

The policy that grants an entity permission to assume the role.

", + "RoleDetail$AssumeRolePolicyDocument": "

The trust policy that grants permission to assume the role.

", "UpdateAssumeRolePolicyRequest$PolicyDocument": "

The policy that grants an entity permission to assume the role.

" } }, diff --git a/src/data/iam/2010-05-08/paginators-1.json b/src/data/iam/2010-05-08/paginators-1.json index 7718bfbddd..f302ff03ea 100644 --- a/src/data/iam/2010-05-08/paginators-1.json +++ b/src/data/iam/2010-05-08/paginators-1.json @@ -63,6 +63,13 @@ "limit_key": "MaxItems", "result_key": "MFADevices" }, + "ListPolicies": { + "input_token": "Marker", + "output_token": "Marker", + "more_results": "IsTruncated", + "limit_key": "MaxItems", + "result_key": "Policies" + }, "ListRolePolicies": { "input_token": "Marker", "output_token": "Marker", diff --git a/src/data/kinesis/2013-12-02/api-2.json b/src/data/kinesis/2013-12-02/api-2.json index 66df493114..8d0ad2a1db 100644 --- a/src/data/kinesis/2013-12-02/api-2.json +++ b/src/data/kinesis/2013-12-02/api-2.json @@ -1,4 +1,5 @@ { + "version":"2.0", "metadata":{ "apiVersion":"2013-12-02", "endpointPrefix":"kinesis", @@ -392,7 +393,8 @@ "required":["Records"], "members":{ "Records":{"shape":"RecordList"}, - "NextShardIterator":{"shape":"ShardIterator"} + "NextShardIterator":{"shape":"ShardIterator"}, + "MillisBehindLatest":{"shape":"MillisBehindLatest"} } }, "GetShardIteratorInput":{ @@ -505,6 +507,10 @@ "AdjacentShardToMerge":{"shape":"ShardId"} } }, + "MillisBehindLatest":{ + "type":"long", + "min":0 + }, "PartitionKey":{ "type":"string", "min":1, diff --git a/src/data/kinesis/2013-12-02/docs-2.json b/src/data/kinesis/2013-12-02/docs-2.json index e65e4ff5ef..e6e38f6ba6 100644 --- a/src/data/kinesis/2013-12-02/docs-2.json +++ b/src/data/kinesis/2013-12-02/docs-2.json @@ -1,18 +1,19 @@ { + "version": "2.0", "operations": { "AddTagsToStream": "

Adds or updates tags for the specified Amazon Kinesis stream. Each stream can have up to 10 tags.

If tags have already been assigned to the stream, AddTagsToStream overwrites any existing tags that correspond to the specified tag keys.

", - "CreateStream": "

Creates a Amazon Kinesis stream. A stream captures and transports data records that are continuously emitted from different data sources or producers. Scale-out within an Amazon Kinesis stream is explicitly supported by means of shards, which are uniquely identified groups of data records in an Amazon Kinesis stream.

You specify and control the number of shards that a stream is composed of. Each open shard can support up to 5 read transactions per second, up to a maximum total of 2 MB of data read per second. Each shard can support up to 1000 records written per second, up to a maximum total of 1 MB data written per second. You can add shards to a stream if the amount of data input increases and you can remove shards if the amount of data input decreases.

The stream name identifies the stream. The name is scoped to the AWS account used by the application. It is also scoped by region. That is, two streams in two different accounts can have the same name, and two streams in the same account, but in two different regions, can have the same name.

CreateStream is an asynchronous operation. Upon receiving a CreateStream request, Amazon Kinesis immediately returns and sets the stream status to CREATING. After the stream is created, Amazon Kinesis sets the stream status to ACTIVE. You should perform read and write operations only on an ACTIVE stream.

You receive a LimitExceededException when making a CreateStream request if you try to do one of the following:

The default limit for an AWS account is 10 shards per stream. If you need to create a stream with more than 10 shards, contact AWS Support to increase the limit on your account.

You can use DescribeStream to check the stream status, which is returned in StreamStatus.

CreateStream has a limit of 5 transactions per second per account.

", - "DeleteStream": "

Deletes a stream and all its shards and data. You must shut down any applications that are operating on the stream before you delete the stream. If an application attempts to operate on a deleted stream, it will receive the exception ResourceNotFoundException.

If the stream is in the ACTIVE state, you can delete it. After a DeleteStream request, the specified stream is in the DELETING state until Amazon Kinesis completes the deletion.

Note: Amazon Kinesis might continue to accept data read and write operations, such as PutRecord, PutRecords, and GetRecords, on a stream in the DELETING state until the stream deletion is complete.

When you delete a stream, any shards in that stream are also deleted, and any tags are dissociated from the stream.

You can use the DescribeStream operation to check the state of the stream, which is returned in StreamStatus.

DeleteStream has a limit of 5 transactions per second per account.

", - "DescribeStream": "

Describes the specified stream.

The information about the stream includes its current status, its Amazon Resource Name (ARN), and an array of shard objects. For each shard object, there is information about the hash key and sequence number ranges that the shard spans, and the IDs of any earlier shards that played in a role in creating the shard. A sequence number is the identifier associated with every record ingested in the Amazon Kinesis stream. The sequence number is assigned when a record is put into the stream.

You can limit the number of returned shards using the Limit parameter. The number of shards in a stream may be too large to return from a single call to DescribeStream. You can detect this by using the HasMoreShards flag in the returned output. HasMoreShards is set to true when there is more data available.

DescribeStream is a paginated operation. If there are more shards available, you can request them using the shard ID of the last shard returned. Specify this ID in the ExclusiveStartShardId parameter in a subsequent request to DescribeStream.

DescribeStream has a limit of 10 transactions per second per account.

", - "GetRecords": "

Gets data records from a shard.

Specify a shard iterator using the ShardIterator parameter. The shard iterator specifies the position in the shard from which you want to start reading data records sequentially. If there are no records available in the portion of the shard that the iterator points to, GetRecords returns an empty list. Note that it might take multiple calls to get to a portion of the shard that contains records.

You can scale by provisioning multiple shards. Your application should have one thread per shard, each reading continuously from its stream. To read from a stream continually, call GetRecords in a loop. Use GetShardIterator to get the shard iterator to specify in the first GetRecords call. GetRecords returns a new shard iterator in NextShardIterator. Specify the shard iterator returned in NextShardIterator in subsequent calls to GetRecords. Note that if the shard has been closed, the shard iterator can't return more data and GetRecords returns null in NextShardIterator. You can terminate the loop when the shard is closed, or when the shard iterator reaches the record with the sequence number or other attribute that marks it as the last record to process.

Each data record can be up to 50 KB in size, and each shard can read up to 2 MB per second. You can ensure that your calls don't exceed the maximum supported size or throughput by using the Limit parameter to specify the maximum number of records that GetRecords can return. Consider your average record size when determining this limit. For example, if your average record size is 40 KB, you can limit the data returned to about 1 MB per call by specifying 25 as the limit.

The size of the data returned by GetRecords will vary depending on the utilization of the shard. The maximum size of data that GetRecords can return is 10 MB. If a call returns 10 MB of data, subsequent calls made within the next 5 seconds throw ProvisionedThroughputExceededException. If there is insufficient provisioned throughput on the shard, subsequent calls made within the next 1 second throw ProvisionedThroughputExceededException. Note that GetRecords won't return any data when it throws an exception. For this reason, we recommend that you wait one second between calls to GetRecords; however, it's possible that the application will get exceptions for longer than 1 second.

To detect whether the application is falling behind in processing, add a timestamp to your records and note how long it takes to process them. You can also monitor how much data is in a stream using the CloudWatch metrics for write operations (PutRecord and PutRecords). For more information, see Monitoring Amazon Kinesis with Amazon CloudWatch in the Amazon Kinesis Developer Guide.

", - "GetShardIterator": "

Gets a shard iterator. A shard iterator expires five minutes after it is returned to the requester.

A shard iterator specifies the position in the shard from which to start reading data records sequentially. A shard iterator specifies this position using the sequence number of a data record in a shard. A sequence number is the identifier associated with every record ingested in the Amazon Kinesis stream. The sequence number is assigned when a record is put into the stream.

You must specify the shard iterator type. For example, you can set the ShardIteratorType parameter to read exactly from the position denoted by a specific sequence number by using the AT_SEQUENCE_NUMBER shard iterator type, or right after the sequence number by using the AFTER_SEQUENCE_NUMBER shard iterator type, using sequence numbers returned by earlier calls to PutRecord, PutRecords, GetRecords, or DescribeStream. You can specify the shard iterator type TRIM_HORIZON in the request to cause ShardIterator to point to the last untrimmed record in the shard in the system, which is the oldest data record in the shard. Or you can point to just after the most recent record in the shard, by using the shard iterator type LATEST, so that you always read the most recent data in the shard.

When you repeatedly read from an Amazon Kinesis stream use a GetShardIterator request to get the first shard iterator to to use in your first GetRecords request and then use the shard iterator returned by the GetRecords request in NextShardIterator for subsequent reads. A new shard iterator is returned by every GetRecords request in NextShardIterator, which you use in the ShardIterator parameter of the next GetRecords request.

If a GetShardIterator request is made too often, you receive a ProvisionedThroughputExceededException. For more information about throughput limits, see GetRecords.

If the shard is closed, the iterator can't return more data, and GetShardIterator returns null for its ShardIterator. A shard can be closed using SplitShard or MergeShards.

GetShardIterator has a limit of 5 transactions per second per account per open shard.

", - "ListStreams": "

Lists your streams.

The number of streams may be too large to return from a single call to ListStreams. You can limit the number of returned streams using the Limit parameter. If you do not specify a value for the Limit parameter, Amazon Kinesis uses the default limit, which is currently 10.

You can detect if there are more streams available to list by using the HasMoreStreams flag from the returned output. If there are more streams available, you can request more streams by using the name of the last stream returned by the ListStreams request in the ExclusiveStartStreamName parameter in a subsequent request to ListStreams. The group of stream names returned by the subsequent request is then added to the list. You can continue this process until all the stream names have been collected in the list.

ListStreams has a limit of 5 transactions per second per account.

", + "CreateStream": "

Creates an Amazon Kinesis stream. A stream captures and transports data records that are continuously emitted from different data sources or producers. Scale-out within an Amazon Kinesis stream is explicitly supported by means of shards, which are uniquely identified groups of data records in an Amazon Kinesis stream.

You specify and control the number of shards that a stream is composed of. Each open shard can support up to 5 read transactions per second, up to a maximum total of 2 MB of data read per second. Each shard can support up to 1000 records written per second, up to a maximum total of 1 MB data written per second. You can add shards to a stream if the amount of data input increases and you can remove shards if the amount of data input decreases.

The stream name identifies the stream. The name is scoped to the AWS account used by the application. It is also scoped by region. That is, two streams in two different accounts can have the same name, and two streams in the same account, but in two different regions, can have the same name.

CreateStream is an asynchronous operation. Upon receiving a CreateStream request, Amazon Kinesis immediately returns and sets the stream status to CREATING. After the stream is created, Amazon Kinesis sets the stream status to ACTIVE. You should perform read and write operations only on an ACTIVE stream.

You receive a LimitExceededException when making a CreateStream request if you try to do one of the following:

For the default shard limit for an AWS account, see Amazon Kinesis Limits. If you need to increase this limit, contact AWS Support.

You can use DescribeStream to check the stream status, which is returned in StreamStatus.

CreateStream has a limit of 5 transactions per second per account.
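As a hypothetical sketch of the asynchronous flow described above: create the stream, then poll DescribeStream until StreamStatus is ACTIVE. The stream name, shard count, region, and credentials are placeholder assumptions.

<?php
require 'vendor/autoload.php';

use Aws\Kinesis\KinesisClient;

$kinesis = new KinesisClient(['region' => 'us-east-1', 'version' => '2013-12-02']);

$kinesis->createStream(['StreamName' => 'example-stream', 'ShardCount' => 2]);

// CreateStream returns immediately; wait for the stream to become ACTIVE.
do {
    sleep(5);
    $result = $kinesis->describeStream(['StreamName' => 'example-stream']);
} while ($result['StreamDescription']['StreamStatus'] !== 'ACTIVE');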

", + "DeleteStream": "

Deletes a stream and all its shards and data. You must shut down any applications that are operating on the stream before you delete the stream. If an application attempts to operate on a deleted stream, it will receive the exception ResourceNotFoundException.

If the stream is in the ACTIVE state, you can delete it. After a DeleteStream request, the specified stream is in the DELETING state until Amazon Kinesis completes the deletion.

Note: Amazon Kinesis might continue to accept data read and write operations, such as PutRecord, PutRecords, and GetRecords, on a stream in the DELETING state until the stream deletion is complete.

When you delete a stream, any shards in that stream are also deleted, and any tags are dissociated from the stream.

You can use the DescribeStream operation to check the state of the stream, which is returned in StreamStatus.

DeleteStream has a limit of 5 transactions per second per account.

", + "DescribeStream": "

Describes the specified stream.

The information about the stream includes its current status, its Amazon Resource Name (ARN), and an array of shard objects. For each shard object, there is information about the hash key and sequence number ranges that the shard spans, and the IDs of any earlier shards that played a role in creating the shard. A sequence number is the identifier associated with every record ingested in the Amazon Kinesis stream. The sequence number is assigned when a record is put into the stream.

You can limit the number of returned shards using the Limit parameter. The number of shards in a stream may be too large to return from a single call to DescribeStream. You can detect this by using the HasMoreShards flag in the returned output. HasMoreShards is set to true when there is more data available.

DescribeStream is a paginated operation. If there are more shards available, you can request them using the shard ID of the last shard returned. Specify this ID in the ExclusiveStartShardId parameter in a subsequent request to DescribeStream.

DescribeStream has a limit of 10 transactions per second per account.
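A minimal sketch of the HasMoreShards/ExclusiveStartShardId pagination described above; the stream name and region are placeholders.

<?php
require 'vendor/autoload.php';

use Aws\Kinesis\KinesisClient;

$kinesis = new KinesisClient(['region' => 'us-east-1', 'version' => '2013-12-02']);

$shardIds = [];
$params   = ['StreamName' => 'example-stream', 'Limit' => 100];
do {
    $description = $kinesis->describeStream($params)['StreamDescription'];
    foreach ($description['Shards'] as $shard) {
        $shardIds[] = $shard['ShardId'];
    }
    // Request the next page starting after the last shard returned.
    $params['ExclusiveStartShardId'] = end($shardIds);
} while ($description['HasMoreShards']);

print_r($shardIds);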

", + "GetRecords": "

Gets data records from a shard.

Specify a shard iterator using the ShardIterator parameter. The shard iterator specifies the position in the shard from which you want to start reading data records sequentially. If there are no records available in the portion of the shard that the iterator points to, GetRecords returns an empty list. Note that it might take multiple calls to get to a portion of the shard that contains records.

You can scale by provisioning multiple shards. Your application should have one thread per shard, each reading continuously from its stream. To read from a stream continually, call GetRecords in a loop. Use GetShardIterator to get the shard iterator to specify in the first GetRecords call. GetRecords returns a new shard iterator in NextShardIterator. Specify the shard iterator returned in NextShardIterator in subsequent calls to GetRecords. Note that if the shard has been closed, the shard iterator can't return more data and GetRecords returns null in NextShardIterator. You can terminate the loop when the shard is closed, or when the shard iterator reaches the record with the sequence number or other attribute that marks it as the last record to process.

Each data record can be up to 50 KB in size, and each shard can read up to 2 MB per second. You can ensure that your calls don't exceed the maximum supported size or throughput by using the Limit parameter to specify the maximum number of records that GetRecords can return. Consider your average record size when determining this limit. For example, if your average record size is 40 KB, you can limit the data returned to about 1 MB per call by specifying 25 as the limit.

The size of the data returned by GetRecords will vary depending on the utilization of the shard. The maximum size of data that GetRecords can return is 10 MB. If a call returns this amount of data, subsequent calls made within the next 5 seconds throw ProvisionedThroughputExceededException. If there is insufficient provisioned throughput on the shard, subsequent calls made within the next 1 second throw ProvisionedThroughputExceededException. Note that GetRecords won't return any data when it throws an exception. For this reason, we recommend that you wait one second between calls to GetRecords; however, it's possible that the application will get exceptions for longer than 1 second.

To detect whether the application is falling behind in processing, you can use the MillisBehindLatest response attribute. You can also monitor the amount of data in a stream using the CloudWatch metrics. For more information, see Monitoring Amazon Kinesis with Amazon CloudWatch in the Amazon Kinesis Developer Guide.
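A hypothetical sketch of the read loop described above, including the MillisBehindLatest field added in this changeset. The stream name, shard ID, region, and credentials are placeholder assumptions; TRIM_HORIZON starts at the oldest untrimmed record (see GetShardIterator below).

<?php
require 'vendor/autoload.php';

use Aws\Kinesis\KinesisClient;

$kinesis = new KinesisClient(['region' => 'us-east-1', 'version' => '2013-12-02']);

$iterator = $kinesis->getShardIterator([
    'StreamName'        => 'example-stream',
    'ShardId'           => 'shardId-000000000000',
    'ShardIteratorType' => 'TRIM_HORIZON',
])['ShardIterator'];

while ($iterator) {
    $batch = $kinesis->getRecords(['ShardIterator' => $iterator, 'Limit' => 100]);
    foreach ($batch['Records'] as $record) {
        echo $record['SequenceNumber'], "\n"; // $record['Data'] holds the payload
    }
    // New in this changeset: how far behind the tip of the stream this read is.
    printf("behind latest: %d ms\n", $batch['MillisBehindLatest']);
    $iterator = $batch['NextShardIterator']; // null once the shard is closed
    sleep(1); // the text above recommends about one second between calls
}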

", + "GetShardIterator": "

Gets a shard iterator. A shard iterator expires five minutes after it is returned to the requester.

A shard iterator specifies the position in the shard from which to start reading data records sequentially. A shard iterator specifies this position using the sequence number of a data record in a shard. A sequence number is the identifier associated with every record ingested in the Amazon Kinesis stream. The sequence number is assigned when a record is put into the stream.

You must specify the shard iterator type. For example, you can set the ShardIteratorType parameter to read exactly from the position denoted by a specific sequence number by using the AT_SEQUENCE_NUMBER shard iterator type, or right after the sequence number by using the AFTER_SEQUENCE_NUMBER shard iterator type, using sequence numbers returned by earlier calls to PutRecord, PutRecords, GetRecords, or DescribeStream. You can specify the shard iterator type TRIM_HORIZON in the request to cause ShardIterator to point to the last untrimmed record in the shard in the system, which is the oldest data record in the shard. Or you can point to just after the most recent record in the shard, by using the shard iterator type LATEST, so that you always read the most recent data in the shard.

When you repeatedly read from an Amazon Kinesis stream, use a GetShardIterator request to get the first shard iterator for use in your first GetRecords request, and then use the shard iterator returned by the GetRecords request in NextShardIterator for subsequent reads. A new shard iterator is returned by every GetRecords request in NextShardIterator, which you use in the ShardIterator parameter of the next GetRecords request.

If a GetShardIterator request is made too often, you receive a ProvisionedThroughputExceededException. For more information about throughput limits, see GetRecords.

If the shard is closed, the iterator can't return more data, and GetShardIterator returns null for its ShardIterator. A shard can be closed using SplitShard or MergeShards.

GetShardIterator has a limit of 5 transactions per second per account per open shard.
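To make the iterator types above concrete, a short hypothetical sketch; the stream name, shard ID, and sequence number are placeholder values.

<?php
require 'vendor/autoload.php';

use Aws\Kinesis\KinesisClient;

$kinesis = new KinesisClient(['region' => 'us-east-1', 'version' => '2013-12-02']);
$base    = ['StreamName' => 'example-stream', 'ShardId' => 'shardId-000000000000'];

// Oldest untrimmed record in the shard.
$horizon = $kinesis->getShardIterator($base + ['ShardIteratorType' => 'TRIM_HORIZON']);

// Resume exactly at a previously checkpointed sequence number (placeholder value).
$resume = $kinesis->getShardIterator($base + [
    'ShardIteratorType'      => 'AT_SEQUENCE_NUMBER',
    'StartingSequenceNumber' => '49545115243490985018280067714973144582180062593244200961',
]);

// Only records added after the iterator was issued.
$tip = $kinesis->getShardIterator($base + ['ShardIteratorType' => 'LATEST']);

echo $tip['ShardIterator'], "\n";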

", + "ListStreams": "

Lists your streams.

The number of streams may be too large to return from a single call to ListStreams. You can limit the number of returned streams using the Limit parameter. If you do not specify a value for the Limit parameter, Amazon Kinesis uses the default limit, which is currently 10.

You can detect if there are more streams available to list by using the HasMoreStreams flag from the returned output. If there are more streams available, you can request more streams by using the name of the last stream returned by the ListStreams request in the ExclusiveStartStreamName parameter in a subsequent request to ListStreams. The group of stream names returned by the subsequent request is then added to the list. You can continue this process until all the stream names have been collected in the list.

ListStreams has a limit of 5 transactions per second per account.
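A minimal sketch of the HasMoreStreams/ExclusiveStartStreamName loop described above; the region and Limit value are placeholders.

<?php
require 'vendor/autoload.php';

use Aws\Kinesis\KinesisClient;

$kinesis = new KinesisClient(['region' => 'us-east-1', 'version' => '2013-12-02']);

$streams = [];
$params  = ['Limit' => 10];
do {
    $result  = $kinesis->listStreams($params);
    $streams = array_merge($streams, $result['StreamNames']);
    // Continue after the last stream name returned so far.
    $params['ExclusiveStartStreamName'] = end($streams);
} while ($result['HasMoreStreams']);

print_r($streams);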

", "ListTagsForStream": "

Lists the tags for the specified Amazon Kinesis stream.

", - "MergeShards": "

Merges two adjacent shards in a stream and combines them into a single shard to reduce the stream's capacity to ingest and transport data. Two shards are considered adjacent if the union of the hash key ranges for the two shards form a contiguous set with no gaps. For example, if you have two shards, one with a hash key range of 276...381 and the other with a hash key range of 382...454, then you could merge these two shards into a single shard that would have a hash key range of 276...454. After the merge, the single child shard receives data for all hash key values covered by the two parent shards.

MergeShards is called when there is a need to reduce the overall capacity of a stream because of excess capacity that is not being used. You must specify the shard to be merged and the adjacent shard for a stream. For more information about merging shards, see Merge Two Shards in the Amazon Kinesis Developer Guide.

If the stream is in the ACTIVE state, you can call MergeShards. If a stream is in the CREATING, UPDATING, or DELETING state, MergeShards returns a ResourceInUseException. If the specified stream does not exist, MergeShards returns a ResourceNotFoundException.

You can use DescribeStream to check the state of the stream, which is returned in StreamStatus.

MergeShards is an asynchronous operation. Upon receiving a MergeShards request, Amazon Kinesis immediately returns a response and sets the StreamStatus to UPDATING. After the operation is completed, Amazon Kinesis sets the StreamStatus to ACTIVE. Read and write operations continue to work while the stream is in the UPDATING state.

You use DescribeStream to determine the shard IDs that are specified in the MergeShards request.

If you try to operate on too many streams in parallel using CreateStream, DeleteStream, MergeShards or SplitShard, you will receive a LimitExceededException.

MergeShards has limit of 5 transactions per second per account.

", - "PutRecord": "

Puts (writes) a single data record from a producer into an Amazon Kinesis stream. Call PutRecord to send data from the producer into the Amazon Kinesis stream for real-time ingestion and subsequent processing, one record at a time. Each shard can support up to 1000 records written per second, up to a maximum total of 1 MB data written per second.

You must specify the name of the stream that captures, stores, and transports the data; a partition key; and the data blob itself.

The data blob can be any type of data; for example, a segment from a log file, geographic/location data, website clickstream data, and so on.

The partition key is used by Amazon Kinesis to distribute data across shards. Amazon Kinesis segregates the data records that belong to a data stream into multiple shards, using the partition key associated with each data record to determine which shard a given data record belongs to.

Partition keys are Unicode strings, with a maximum length limit of 256 bytes. An MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards using the hash key ranges of the shards. You can override hashing the partition key to determine the shard by explicitly specifying a hash value using the ExplicitHashKey parameter. For more information, see Partition Key in the Amazon Kinesis Developer Guide.

PutRecord returns the shard ID of where the data record was placed and the sequence number that was assigned to the data record.

Sequence numbers generally increase over time. To guarantee strictly increasing ordering, use the SequenceNumberForOrdering parameter. For more information, see Sequence Number in the Amazon Kinesis Developer Guide.

If a PutRecord request cannot be processed because of insufficient provisioned throughput on the shard involved in the request, PutRecord throws ProvisionedThroughputExceededException.

Data records are accessible for only 24 hours from the time that they are added to an Amazon Kinesis stream.

", - "PutRecords": "

Puts (writes) multiple data records from a producer into an Amazon Kinesis stream in a single call (also referred to as a PutRecords request). Use this operation to send data from a data producer into the Amazon Kinesis stream for real-time ingestion and processing. Each shard can support up to 1000 records written per second, up to a maximum total of 1 MB data written per second.

You must specify the name of the stream that captures, stores, and transports the data; and an array of request Records, with each record in the array requiring a partition key and data blob.

The data blob can be any type of data; for example, a segment from a log file, geographic/location data, website clickstream data, and so on.

The partition key is used by Amazon Kinesis as input to a hash function that maps the partition key and associated data to a specific shard. An MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream. For more information, see Partition Key in the Amazon Kinesis Developer Guide.

Each record in the Records array may include an optional parameter, ExplicitHashKey, which overrides the partition key to shard mapping. This parameter allows a data producer to determine explicitly the shard where the record is stored. For more information, see Adding Multiple Records with PutRecords in the Amazon Kinesis Developer Guide.

The PutRecords response includes an array of response Records. Each record in the response array directly correlates with a record in the request array using natural ordering, from the top to the bottom of the request and response. The response Records array always includes the same number of records as the request array.

The response Records array includes both successfully and unsuccessfully processed records. Amazon Kinesis attempts to process all records in each PutRecords request. A single record failure does not stop the processing of subsequent records.

A successfully-processed record includes ShardId and SequenceNumber values. The ShardId parameter identifies the shard in the stream where the record is stored. The SequenceNumber parameter is an identifier assigned to the put record, unique to all records in the stream.

An unsuccessfully-processed record includes ErrorCode and ErrorMessage values. ErrorCode reflects the type of error and can be one of the following values: ProvisionedThroughputExceededException or InternalFailure. ErrorMessage provides more detailed information about the ProvisionedThroughputExceededException exception including the account ID, stream name, and shard ID of the record that was throttled.

Data records are accessible for only 24 hours from the time that they are added to an Amazon Kinesis stream.

", + "MergeShards": "

Merges two adjacent shards in a stream and combines them into a single shard to reduce the stream's capacity to ingest and transport data. Two shards are considered adjacent if the union of the hash key ranges for the two shards forms a contiguous set with no gaps. For example, if you have two shards, one with a hash key range of 276...381 and the other with a hash key range of 382...454, then you could merge these two shards into a single shard that would have a hash key range of 276...454. After the merge, the single child shard receives data for all hash key values covered by the two parent shards.

MergeShards is called when there is a need to reduce the overall capacity of a stream because of excess capacity that is not being used. You must specify the shard to be merged and the adjacent shard for a stream. For more information about merging shards, see Merge Two Shards in the Amazon Kinesis Developer Guide.

If the stream is in the ACTIVE state, you can call MergeShards. If a stream is in the CREATING, UPDATING, or DELETING state, MergeShards returns a ResourceInUseException. If the specified stream does not exist, MergeShards returns a ResourceNotFoundException.

You can use DescribeStream to check the state of the stream, which is returned in StreamStatus.

MergeShards is an asynchronous operation. Upon receiving a MergeShards request, Amazon Kinesis immediately returns a response and sets the StreamStatus to UPDATING. After the operation is completed, Amazon Kinesis sets the StreamStatus to ACTIVE. Read and write operations continue to work while the stream is in the UPDATING state.

You use DescribeStream to determine the shard IDs that are specified in the MergeShards request.

If you try to operate on too many streams in parallel using CreateStream, DeleteStream, MergeShards or SplitShard, you will receive a LimitExceededException.

MergeShards has a limit of 5 transactions per second per account.
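A hypothetical sketch; the stream name and the two adjacent shard IDs are placeholders that you would normally obtain from DescribeStream, as noted above.

<?php
require 'vendor/autoload.php';

use Aws\Kinesis\KinesisClient;

$kinesis = new KinesisClient(['region' => 'us-east-1', 'version' => '2013-12-02']);

$kinesis->mergeShards([
    'StreamName'           => 'example-stream',
    'ShardToMerge'         => 'shardId-000000000000',
    'AdjacentShardToMerge' => 'shardId-000000000001',
]);

// MergeShards is asynchronous; the stream stays UPDATING until the merge completes.
$status = $kinesis->describeStream(['StreamName' => 'example-stream'])['StreamDescription']['StreamStatus'];
echo $status, "\n";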

", + "PutRecord": "

Puts (writes) a single data record from a producer into an Amazon Kinesis stream. Call PutRecord to send data from the producer into the Amazon Kinesis stream for real-time ingestion and subsequent processing, one record at a time. Each shard can support up to 1000 records written per second, up to a maximum total of 1 MB data written per second.

You must specify the name of the stream that captures, stores, and transports the data; a partition key; and the data blob itself.

The data blob can be any type of data; for example, a segment from a log file, geographic/location data, website clickstream data, and so on.

The partition key is used by Amazon Kinesis to distribute data across shards. Amazon Kinesis segregates the data records that belong to a data stream into multiple shards, using the partition key associated with each data record to determine which shard a given data record belongs to.

Partition keys are Unicode strings, with a maximum length limit of 256 characters for each key. An MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards using the hash key ranges of the shards. You can override hashing the partition key to determine the shard by explicitly specifying a hash value using the ExplicitHashKey parameter. For more information, see Adding Data to a Stream in the Amazon Kinesis Developer Guide.

PutRecord returns the shard ID of where the data record was placed and the sequence number that was assigned to the data record.

Sequence numbers generally increase over time. To guarantee strictly increasing ordering, use the SequenceNumberForOrdering parameter. For more information, see Adding Data to a Stream in the Amazon Kinesis Developer Guide.

If a PutRecord request cannot be processed because of insufficient provisioned throughput on the shard involved in the request, PutRecord throws ProvisionedThroughputExceededException.

Data records are accessible for only 24 hours from the time that they are added to an Amazon Kinesis stream.
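A minimal sketch of PutRecord, including the optional SequenceNumberForOrdering chaining described above. The stream name, partition key, and payload are placeholder assumptions.

<?php
require 'vendor/autoload.php';

use Aws\Kinesis\KinesisClient;

$kinesis = new KinesisClient(['region' => 'us-east-1', 'version' => '2013-12-02']);

$first = $kinesis->putRecord([
    'StreamName'   => 'example-stream',
    'Data'         => json_encode(['event' => 'click', 'at' => time()]),
    'PartitionKey' => 'user-1234', // records with the same key land on the same shard
]);
printf("%s %s\n", $first['ShardId'], $first['SequenceNumber']);

// Optional: guarantee strictly increasing sequence numbers for this partition key
// by chaining each put to the previous record's sequence number.
$kinesis->putRecord([
    'StreamName'                => 'example-stream',
    'Data'                      => 'next event',
    'PartitionKey'              => 'user-1234',
    'SequenceNumberForOrdering' => $first['SequenceNumber'],
]);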

", + "PutRecords": "

Puts (writes) multiple data records from a producer into an Amazon Kinesis stream in a single call (also referred to as a PutRecords request). Use this operation to send data from a data producer into the Amazon Kinesis stream for real-time ingestion and processing. Each shard can support up to 1000 records written per second, up to a maximum total of 1 MB data written per second.

You must specify the name of the stream that captures, stores, and transports the data; and an array of request Records, with each record in the array requiring a partition key and data blob.

The data blob can be any type of data; for example, a segment from a log file, geographic/location data, website clickstream data, and so on.

The partition key is used by Amazon Kinesis as input to a hash function that maps the partition key and associated data to a specific shard. An MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream. For more information, see Adding Data to a Stream in the Amazon Kinesis Developer Guide.

Each record in the Records array may include an optional parameter, ExplicitHashKey, which overrides the partition key to shard mapping. This parameter allows a data producer to determine explicitly the shard where the record is stored. For more information, see Adding Multiple Records with PutRecords in the Amazon Kinesis Developer Guide.

The PutRecords response includes an array of response Records. Each record in the response array directly correlates with a record in the request array using natural ordering, from the top to the bottom of the request and response. The response Records array always includes the same number of records as the request array.

The response Records array includes both successfully and unsuccessfully processed records. Amazon Kinesis attempts to process all records in each PutRecords request. A single record failure does not stop the processing of subsequent records.

A successfully-processed record includes ShardId and SequenceNumber values. The ShardId parameter identifies the shard in the stream where the record is stored. The SequenceNumber parameter is an identifier assigned to the put record, unique to all records in the stream.

An unsuccessfully-processed record includes ErrorCode and ErrorMessage values. ErrorCode reflects the type of error and can be one of the following values: ProvisionedThroughputExceededException or InternalFailure. ErrorMessage provides more detailed information about the ProvisionedThroughputExceededException exception including the account ID, stream name, and shard ID of the record that was throttled. For more information about partially successful responses, see Adding Multiple Records with PutRecords in the Amazon Kinesis Developer Guide.

Data records are accessible for only 24 hours from the time that they are added to an Amazon Kinesis stream.
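A hypothetical sketch of a PutRecords call and the partial-failure handling described above; names and payloads are placeholders.

<?php
require 'vendor/autoload.php';

use Aws\Kinesis\KinesisClient;

$kinesis = new KinesisClient(['region' => 'us-east-1', 'version' => '2013-12-02']);

$result = $kinesis->putRecords([
    'StreamName' => 'example-stream',
    'Records'    => [
        ['Data' => 'first event',  'PartitionKey' => 'user-1234'],
        ['Data' => 'second event', 'PartitionKey' => 'user-5678'],
    ],
]);

// The response preserves request order; failed entries carry ErrorCode/ErrorMessage.
if ($result['FailedRecordCount'] > 0) {
    foreach ($result['Records'] as $i => $entry) {
        if (isset($entry['ErrorCode'])) {
            echo "record $i failed: {$entry['ErrorCode']}\n"; // candidates for retry
        }
    }
}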

", "RemoveTagsFromStream": "

Deletes tags from the specified Amazon Kinesis stream.

If you specify a tag that does not exist, it is ignored.

", - "SplitShard": "

Splits a shard into two new shards in the stream, to increase the stream's capacity to ingest and transport data. SplitShard is called when there is a need to increase the overall capacity of stream because of an expected increase in the volume of data records being ingested.

You can also use SplitShard when a shard appears to be approaching its maximum utilization, for example, when the set of producers sending data into the specific shard are suddenly sending more than previously anticipated. You can also call SplitShard to increase stream capacity, so that more Amazon Kinesis applications can simultaneously read data from the stream for real-time processing.

You must specify the shard to be split and the new hash key, which is the position in the shard where the shard gets split in two. In many cases, the new hash key might simply be the average of the beginning and ending hash key, but it can be any hash key value in the range being mapped into the shard. For more information about splitting shards, see Split a Shard in the Amazon Kinesis Developer Guide.

You can use DescribeStream to determine the shard ID and hash key values for the ShardToSplit and NewStartingHashKey parameters that are specified in the SplitShard request.

SplitShard is an asynchronous operation. Upon receiving a SplitShard request, Amazon Kinesis immediately returns a response and sets the stream status to UPDATING. After the operation is completed, Amazon Kinesis sets the stream status to ACTIVE. Read and write operations continue to work while the stream is in the UPDATING state.

You can use DescribeStream to check the status of the stream, which is returned in StreamStatus. If the stream is in the ACTIVE state, you can call SplitShard. If a stream is in CREATING or UPDATING or DELETING states, DescribeStream returns a ResourceInUseException.

If the specified stream does not exist, DescribeStream returns a ResourceNotFoundException. If you try to create more shards than are authorized for your account, you receive a LimitExceededException.

The default limit for an AWS account is 10 shards per stream. If you need to create a stream with more than 10 shards, contact AWS Support to increase the limit on your account.

If you try to operate on too many streams in parallel using CreateStream, DeleteStream, MergeShards or SplitShard, you receive a LimitExceededException.

SplitShard has limit of 5 transactions per second per account.

" + "SplitShard": "

Splits a shard into two new shards in the stream, to increase the stream's capacity to ingest and transport data. SplitShard is called when there is a need to increase the overall capacity of a stream because of an expected increase in the volume of data records being ingested.

You can also use SplitShard when a shard appears to be approaching its maximum utilization, for example, when the set of producers sending data into the specific shard are suddenly sending more than previously anticipated. You can also call SplitShard to increase stream capacity, so that more Amazon Kinesis applications can simultaneously read data from the stream for real-time processing.

You must specify the shard to be split and the new hash key, which is the position in the shard where the shard gets split in two. In many cases, the new hash key might simply be the average of the beginning and ending hash key, but it can be any hash key value in the range being mapped into the shard. For more information about splitting shards, see Split a Shard in the Amazon Kinesis Developer Guide.

You can use DescribeStream to determine the shard ID and hash key values for the ShardToSplit and NewStartingHashKey parameters that are specified in the SplitShard request.

SplitShard is an asynchronous operation. Upon receiving a SplitShard request, Amazon Kinesis immediately returns a response and sets the stream status to UPDATING. After the operation is completed, Amazon Kinesis sets the stream status to ACTIVE. Read and write operations continue to work while the stream is in the UPDATING state.

You can use DescribeStream to check the status of the stream, which is returned in StreamStatus. If the stream is in the ACTIVE state, you can call SplitShard. If a stream is in the CREATING, UPDATING, or DELETING state, DescribeStream returns a ResourceInUseException.

If the specified stream does not exist, DescribeStream returns a ResourceNotFoundException. If you try to create more shards than are authorized for your account, you receive a LimitExceededException.

For the default shard limit for an AWS account, see Amazon Kinesis Limits. If you need to increase this limit, contact AWS Support.

If you try to operate on too many streams in parallel using CreateStream, DeleteStream, MergeShards or SplitShard, you receive a LimitExceededException.

SplitShard has a limit of 5 transactions per second per account.
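A hypothetical sketch that splits a shard at the midpoint of its hash key range, as suggested above. The stream name and shard choice are placeholders, and the bcmath extension is assumed for the 128-bit hash key arithmetic.

<?php
require 'vendor/autoload.php';

use Aws\Kinesis\KinesisClient;

$kinesis = new KinesisClient(['region' => 'us-east-1', 'version' => '2013-12-02']);

$description = $kinesis->describeStream(['StreamName' => 'example-stream'])['StreamDescription'];
$shard       = $description['Shards'][0]; // placeholder: split the first shard
$range       = $shard['HashKeyRange'];

// Hash keys are 128-bit integers carried as decimal strings, hence bcmath.
$midpoint = bcdiv(bcadd($range['StartingHashKey'], $range['EndingHashKey']), '2');

$kinesis->splitShard([
    'StreamName'         => 'example-stream',
    'ShardToSplit'       => $shard['ShardId'],
    'NewStartingHashKey' => $midpoint,
]);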

" }, "service": "Amazon Kinesis Service API Reference

Amazon Kinesis is a managed service that scales elastically for real time processing of streaming big data.

", "shapes": { @@ -87,18 +88,18 @@ } }, "GetRecordsInput": { - "base": "

Represents the input for GetRecords.

", + "base": "

Represents the input for GetRecords.

", "refs": { } }, "GetRecordsInputLimit": { "base": null, "refs": { - "GetRecordsInput$Limit": "

The maximum number of records to return. Specify a value of up to 10,000. If you specify a value that is greater than 10,000, GetRecords throws InvalidArgumentException.

" + "GetRecordsInput$Limit": "

The maximum number of records to return. Specify a value of up to 10,000. If you specify a value that is greater than 10,000, GetRecords throws InvalidArgumentException.

" } }, "GetRecordsOutput": { - "base": "

Represents the output for GetRecords.

", + "base": "

Represents the output for GetRecords.

", "refs": { } }, @@ -175,18 +176,24 @@ "refs": { } }, + "MillisBehindLatest": { + "base": null, + "refs": { + "GetRecordsOutput$MillisBehindLatest": "

The number of milliseconds the GetRecords response is from the tip of the stream, indicating how far behind current time the consumer is. A value of zero indicates record processing is caught up, and there are no new records to process at this moment.

" + } + }, "PartitionKey": { "base": null, "refs": { - "PutRecordInput$PartitionKey": "

Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 bytes. Amazon Kinesis uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key will map to the same shard within the stream.

", - "PutRecordsRequestEntry$PartitionKey": "

Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 bytes. Amazon Kinesis uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.

", + "PutRecordInput$PartitionKey": "

Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key will map to the same shard within the stream.

", + "PutRecordsRequestEntry$PartitionKey": "

Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.

", "Record$PartitionKey": "

Identifies which shard in the stream the data record is assigned to.

" } }, "PositiveIntegerObject": { "base": null, "refs": { - "CreateStreamInput$ShardCount": "

The number of shards that the stream will use. The throughput of the stream is a function of the number of shards; more shards are required for greater provisioned throughput.

Note: The default limit for an AWS account is 10 shards per stream. If you need to create a stream with more than 10 shards, contact AWS Support to increase the limit on your account.

", + "CreateStreamInput$ShardCount": "

The number of shards that the stream will use. The throughput of the stream is a function of the number of shards; more shards are required for greater provisioned throughput.

For the default shard limit for an AWS account, see Amazon Kinesis Limits. If you need to increase this limit, contact AWS Support.

", "PutRecordsOutput$FailedRecordCount": "

The number of unsuccessfully processed records in a PutRecords request.

" } }, @@ -270,7 +277,7 @@ "base": null, "refs": { "GetShardIteratorInput$StartingSequenceNumber": "

The sequence number of the data record in the shard from which to start reading.

", - "PutRecordInput$SequenceNumberForOrdering": "

Guarantees strictly increasing sequence numbers, for puts from the same client and to the same partition key. Usage: set the SequenceNumberForOrdering of record n to the sequence number of record n-1 (as returned in the PutRecordResult when putting record n-1). If this parameter is not set, records will be coarsely ordered based on arrival time.

", + "PutRecordInput$SequenceNumberForOrdering": "

Guarantees strictly increasing sequence numbers, for puts from the same client and to the same partition key. Usage: set the SequenceNumberForOrdering of record n to the sequence number of record n-1 (as returned in the result when putting record n-1). If this parameter is not set, records will be coarsely ordered based on arrival time.

", "PutRecordOutput$SequenceNumber": "

The sequence number identifier that was assigned to the put data record. The sequence number for the record is unique across all records in the stream. A sequence number is the identifier associated with every record put into the stream.

", "PutRecordsResultEntry$SequenceNumber": "

The sequence number for an individual record result.

", "Record$SequenceNumber": "

The unique identifier for the record in the Amazon Kinesis stream.

", diff --git a/src/data/kms/2014-11-01/api-2.json b/src/data/kms/2014-11-01/api-2.json index bab03b3d76..deb77d3aa6 100644 --- a/src/data/kms/2014-11-01/api-2.json +++ b/src/data/kms/2014-11-01/api-2.json @@ -1,4 +1,5 @@ { + "version":"2.0", "metadata":{ "apiVersion":"2014-11-01", "endpointPrefix":"kms", @@ -427,6 +428,15 @@ }, "exception":true }, + { + "shape":"DisabledException", + "error":{ + "code":"Disabled", + "httpStatusCode":409, + "senderFault":true + }, + "exception":true + }, { "shape":"InvalidArnException", "error":{ @@ -526,6 +536,15 @@ }, "exception":true }, + { + "shape":"DisabledException", + "error":{ + "code":"Disabled", + "httpStatusCode":409, + "senderFault":true + }, + "exception":true + }, { "shape":"InvalidArnException", "error":{ @@ -911,6 +930,15 @@ "exception":true, "fault":true }, + { + "shape":"InvalidMarkerException", + "error":{ + "code":"InvalidMarker", + "httpStatusCode":400, + "senderFault":true + }, + "exception":true + }, { "shape":"KMSInternalException", "error":{ @@ -948,6 +976,15 @@ }, "exception":true }, + { + "shape":"InvalidArnException", + "error":{ + "code":"InvalidArn", + "httpStatusCode":400, + "senderFault":true + }, + "exception":true + }, { "shape":"KMSInternalException", "error":{ @@ -1212,6 +1249,15 @@ }, "exception":true }, + { + "shape":"NotFoundException", + "error":{ + "code":"NotFound", + "httpStatusCode":404, + "senderFault":true + }, + "exception":true + }, { "shape":"DependencyTimeoutException", "error":{ @@ -1257,6 +1303,51 @@ "exception":true, "fault":true }, + { + "shape":"InvalidArnException", + "error":{ + "code":"InvalidArn", + "httpStatusCode":400, + "senderFault":true + }, + "exception":true + }, + { + "shape":"KMSInternalException", + "error":{ + "code":"KMSInternal", + "httpStatusCode":500 + }, + "exception":true + } + ] + }, + "UpdateAlias":{ + "name":"UpdateAlias", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateAliasRequest"}, + "errors":[ + { + "shape":"DependencyTimeoutException", + "error":{ + "code":"DependencyTimeout", + "httpStatusCode":503 + }, + "exception":true, + "fault":true + }, + { + "shape":"NotFoundException", + "error":{ + "code":"NotFound", + "httpStatusCode":404, + "senderFault":true + }, + "exception":true + }, { "shape":"KMSInternalException", "error":{ @@ -1974,9 +2065,10 @@ }, "RetireGrantRequest":{ "type":"structure", - "required":["GrantToken"], "members":{ - "GrantToken":{"shape":"GrantTokenType"} + "GrantToken":{"shape":"GrantTokenType"}, + "KeyId":{"shape":"KeyIdType"}, + "GrantId":{"shape":"GrantIdType"} } }, "RevokeGrantRequest":{ @@ -2002,6 +2094,17 @@ }, "exception":true }, + "UpdateAliasRequest":{ + "type":"structure", + "required":[ + "AliasName", + "TargetKeyId" + ], + "members":{ + "AliasName":{"shape":"AliasNameType"}, + "TargetKeyId":{"shape":"KeyIdType"} + } + }, "UpdateKeyDescriptionRequest":{ "type":"structure", "required":[ diff --git a/src/data/kms/2014-11-01/docs-2.json b/src/data/kms/2014-11-01/docs-2.json index f4b5bf88de..591d0ee450 100644 --- a/src/data/kms/2014-11-01/docs-2.json +++ b/src/data/kms/2014-11-01/docs-2.json @@ -1,18 +1,19 @@ { + "version": "2.0", "operations": { - "CreateAlias": "

Creates a display name for a customer master key. An alias can be used to identify a key and should be unique. The console enforces a one-to-one mapping between the alias and a key. An alias name can contain only alphanumeric characters, forward slashes (/), underscores (_), and dashes (-). An alias must start with the word \"alias\" followed by a forward slash (alias/). An alias that begins with \"aws\" after the forward slash (alias/aws...) is reserved by Amazon Web Services (AWS).

", - "CreateGrant": "

Adds a grant to a key to specify who can access the key and under what conditions. Grants are alternate permission mechanisms to key policies. If absent, access to the key is evaluated based on IAM policies attached to the user. By default, grants do not expire. Grants can be listed, retired, or revoked as indicated by the following APIs. Typically, when you are finished using a grant, you retire it. When you want to end a grant immediately, revoke it. For more information about grants, see Grants.

  1. ListGrants
  2. RetireGrant
  3. RevokeGrant

", + "CreateAlias": "

Creates a display name for a customer master key. An alias can be used to identify a key and should be unique. The console enforces a one-to-one mapping between the alias and a key. An alias name can contain only alphanumeric characters, forward slashes (/), underscores (_), and dashes (-). An alias must start with the word \"alias\" followed by a forward slash (alias/). An alias that begins with \"aws\" after the forward slash (alias/aws...) is reserved by Amazon Web Services (AWS).

To associate an alias with a different key, call UpdateAlias.

Note that you cannot create or update an alias that represents a key in another account.
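A minimal sketch of creating an alias and later repointing it with the UpdateAlias operation added in this changeset; the key IDs, region, and credentials are placeholder assumptions.

<?php
require 'vendor/autoload.php';

use Aws\Kms\KmsClient;

$kms = new KmsClient(['region' => 'us-east-1', 'version' => '2014-11-01']);

// Aliases must start with "alias/"; "alias/aws..." is reserved.
$kms->createAlias([
    'AliasName'   => 'alias/example-app',
    'TargetKeyId' => '12345678-1234-1234-1234-123456789012',
]);

// Repoint the alias at a different key, for example during key rotation.
$kms->updateAlias([
    'AliasName'   => 'alias/example-app',
    'TargetKeyId' => '23456789-2345-2345-2345-234567890123',
]);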

", + "CreateGrant": "

Adds a grant to a key to specify who can access the key and under what conditions. Grants are alternate permission mechanisms to key policies. For more information about grants, see Grants in the developer guide. If a grant is absent, access to the key is evaluated based on IAM policies attached to the user.

  1. ListGrants
  2. RetireGrant
  3. RevokeGrant
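A hypothetical sketch of granting a placeholder IAM role permission to use a key for decryption; the key ID and principal ARN are assumptions.

<?php
require 'vendor/autoload.php';

use Aws\Kms\KmsClient;

$kms = new KmsClient(['region' => 'us-east-1', 'version' => '2014-11-01']);

$grant = $kms->createGrant([
    'KeyId'            => '12345678-1234-1234-1234-123456789012',
    'GranteePrincipal' => 'arn:aws:iam::111122223333:role/ExampleRole',
    'Operations'       => ['Decrypt'],
]);

// Keep these identifiers to list, retire, or revoke the grant later.
echo $grant['GrantId'], "\n";
echo $grant['GrantToken'], "\n";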

", "CreateKey": "

Creates a customer master key. Customer master keys can be used to encrypt small amounts of data (less than 4K) directly, but they are most commonly used to encrypt or envelope data keys that are then used to encrypt customer data. For more information about data keys, see GenerateDataKey and GenerateDataKeyWithoutPlaintext.

", - "Decrypt": "

Decrypts ciphertext. Ciphertext is plaintext that has been previously encrypted by using the Encrypt function.

", - "DeleteAlias": "

Deletes the specified alias.

", + "Decrypt": "

Decrypts ciphertext. Ciphertext is plaintext that has been previously encrypted by using any of the following functions:

Note that if a caller has been granted access permissions to all keys (through, for example, IAM user policies that grant Decrypt permission on all resources), then ciphertext encrypted by using keys in other accounts where the key grants access to the caller can be decrypted. To remedy this, we recommend that you do not grant Decrypt access in an IAM user policy. Instead grant Decrypt access only in key policies. If you must grant Decrypt access in an IAM user policy, you should scope the resource to specific keys or to specific trusted accounts.
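A minimal sketch; the ciphertext is assumed to be a CiphertextBlob previously returned by an encrypt-style call, read here from a placeholder file.

<?php
require 'vendor/autoload.php';

use Aws\Kms\KmsClient;

$kms = new KmsClient(['region' => 'us-east-1', 'version' => '2014-11-01']);

// The blob already identifies the master key, so no KeyId parameter is needed.
$result = $kms->decrypt([
    'CiphertextBlob' => file_get_contents('ciphertext.bin'),
]);

$plaintext = $result['Plaintext']; // handle with care and discard promptly
echo $result['KeyId'], "\n";       // the master key that protected the ciphertext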

", + "DeleteAlias": "

Deletes the specified alias. To associate an alias with a different key, call UpdateAlias.

", "DescribeKey": "

Provides detailed information about the specified customer master key.

", "DisableKey": "

Marks a key as disabled, thereby preventing its use.

", "DisableKeyRotation": "Disables rotation of the specified key.", "EnableKey": "Marks a key as enabled, thereby permitting its use. You can have up to 25 enabled keys at one time.", "EnableKeyRotation": "Enables rotation of the specified customer master key.", - "Encrypt": "

Encrypts plaintext into ciphertext by using a customer master key.

", - "GenerateDataKey": "

Generates a secure data key. Data keys are used to encrypt and decrypt data. They are wrapped by customer master keys.

", - "GenerateDataKeyWithoutPlaintext": "

Returns a key wrapped by a customer master key without the plaintext copy of that key. To retrieve the plaintext, see GenerateDataKey.

", + "Encrypt": "

Encrypts plaintext into ciphertext by using a customer master key. The Encrypt function has two primary use cases:

Unless you are moving encrypted data from one region to another, you don't use this function to encrypt a generated data key within a region. You retrieve data keys already encrypted by calling the GenerateDataKey or GenerateDataKeyWithoutPlaintext function. Data keys don't need to be encrypted again by calling Encrypt.

If you want to encrypt data locally in your application, you can use the GenerateDataKey function to return a plaintext data encryption key and a copy of the key encrypted under the customer master key (CMK) of your choosing.
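A hypothetical sketch of the first use case, encrypting a small secret (up to roughly 4 KB) directly under a master key; the alias and secret are placeholders.

<?php
require 'vendor/autoload.php';

use Aws\Kms\KmsClient;

$kms = new KmsClient(['region' => 'us-east-1', 'version' => '2014-11-01']);

$result = $kms->encrypt([
    'KeyId'     => 'alias/example-app',
    'Plaintext' => 'database-password-placeholder',
]);

// Persist only the ciphertext; pass it back to Decrypt when the secret is needed.
file_put_contents('ciphertext.bin', $result['CiphertextBlob']);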

", + "GenerateDataKey": "

Generates a data key that you can use in your application to locally encrypt data. This call returns a plaintext version of the key in the Plaintext field of the response object and an encrypted copy of the key in the CiphertextBlob field. The key is encrypted by using the master key specified by the KeyId field. To decrypt the encrypted key, pass it to the Decrypt API.

We recommend that you use the following pattern to locally encrypt data: call the GenerateDataKey API, use the key returned in the Plaintext response field to locally encrypt data, and then erase the plaintext data key from memory. Store the encrypted data key (contained in the CiphertextBlob field) alongside of the locally encrypted data.

You should not call the Encrypt function to re-encrypt your data keys within a region. GenerateDataKey always returns the data key encrypted and tied to the customer master key that will be used to decrypt it. There is no need to encrypt it again by calling Encrypt.

If you decide to use the optional EncryptionContext parameter, you must also store the context in full or at least store enough information along with the encrypted data to be able to reconstruct the context when submitting the ciphertext to the Decrypt API. It is a good practice to choose a context that you can reconstruct on the fly to better secure the ciphertext. For more information about how this parameter is used, see Encryption Context.

To decrypt data, pass the encrypted data key to the Decrypt API. Decrypt uses the associated master key to decrypt the encrypted data key and returns it as plaintext. Use the plaintext data key to locally decrypt your data and then erase the key from memory. You must specify the encryption context, if any, that you specified when you generated the key. The encryption context is logged by CloudTrail, and you can use this log to help track the use of particular data.
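A rough sketch of this envelope-encryption pattern using the generated PHP client is shown below; the region, key alias, and the local cipher step are assumptions for illustration only:

```php
<?php
require 'vendor/autoload.php';

use Aws\Kms\KmsClient;

// Assumed region and key alias; substitute your own values.
$kms = new KmsClient(['region' => 'us-east-1', 'version' => 'latest']);

// 1. Ask KMS for a data key: a plaintext copy plus a copy encrypted under the CMK.
$result = $kms->generateDataKey([
    'KeyId'   => 'alias/example-app',   // hypothetical alias
    'KeySpec' => 'AES_256',
]);
$plaintextKey = $result['Plaintext'];
$encryptedKey = $result['CiphertextBlob'];

// 2. Encrypt the payload locally with $plaintextKey (e.g. via openssl_encrypt),
//    then erase the plaintext key from memory and store $encryptedKey
//    alongside the locally encrypted data.

// 3. Later, recover the plaintext data key by passing the stored blob to Decrypt.
$decrypted    = $kms->decrypt(['CiphertextBlob' => $encryptedKey]);
$plaintextKey = $decrypted['Plaintext'];
```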

", + "GenerateDataKeyWithoutPlaintext": "

Returns a data key encrypted by a customer master key without the plaintext copy of that key. Otherwise, this API functions exactly like GenerateDataKey. You can use this API to, for example, satisfy an audit requirement that an encrypted key be made available without exposing the plaintext copy of that key.

", "GenerateRandom": "

Generates an unpredictable byte string.

", "GetKeyPolicy": "

Retrieves a policy attached to the specified key.

", "GetKeyRotationStatus": "Retrieves a Boolean value that indicates whether key rotation is enabled for the specified key.", @@ -21,12 +22,13 @@ "ListKeyPolicies": "

Retrieves a list of policies attached to a key.

", "ListKeys": "

Lists the customer master keys.

", "PutKeyPolicy": "

Attaches a policy to the specified key.

", - "ReEncrypt": "

Encrypts data on the server side with a new customer master key without exposing the plaintext of the data on the client side. The data is first decrypted and then encrypted. This operation can also be used to change the encryption context of a ciphertext.

", - "RetireGrant": "Retires a grant. You can retire a grant when you're done using it to clean up. You should revoke a grant when you intend to actively deny operations that depend on it.", + "ReEncrypt": "

Encrypts data on the server side with a new customer master key without exposing the plaintext of the data on the client side. The data is first decrypted and then encrypted. This operation can also be used to change the encryption context of a ciphertext.

Unlike other actions, ReEncrypt is authorized twice - once as ReEncryptFrom on the source key and once as ReEncryptTo on the destination key. We therefore recommend that you include the \"action\":\"kms:ReEncrypt*\" statement in your key policies to permit re-encryption from or to the key. The statement is included automatically when you authorize use of the key through the console but must be included manually when you set a policy by using the PutKeyPolicy function.
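As a hedged illustration, a key policy statement granting kms:ReEncrypt* could be set with the PutKeyPolicy operation roughly as follows; the principal, key ID, and statement ID are placeholders, and in practice the statement would be merged into the key's full policy document rather than replacing it:

```php
<?php
require 'vendor/autoload.php';

use Aws\Kms\KmsClient;

$kms = new KmsClient(['region' => 'us-east-1', 'version' => 'latest']);

// Hypothetical statement that permits re-encryption from or to this key.
// A real policy would also keep the statements that allow key administration.
$policy = json_encode([
    'Version'   => '2012-10-17',
    'Statement' => [[
        'Sid'       => 'AllowReEncrypt',
        'Effect'    => 'Allow',
        'Principal' => ['AWS' => 'arn:aws:iam::111122223333:root'],
        'Action'    => 'kms:ReEncrypt*',
        'Resource'  => '*',
    ]],
]);

$kms->putKeyPolicy([
    'KeyId'      => '12345678-1234-1234-1234-123456789012', // placeholder key ID
    'PolicyName' => 'default',
    'Policy'     => $policy,
]);
```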

", + "RetireGrant": "

Retires a grant. You can retire a grant when you're done using it to clean up. You should revoke a grant when you intend to actively deny operations that depend on it. The following are permitted to call this API:

The grant to retire must be identified by its grant token or by a combination of the key ARN and the grant ID. A grant token is a unique variable-length base64-encoded string. A grant ID is a 64-character unique identifier of a grant. Both are returned by the CreateGrant function.
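A hedged sketch of identifying a grant by key ARN and grant ID with the generated PHP client; both values are placeholders standing in for what CreateGrant returned earlier:

```php
<?php
require 'vendor/autoload.php';

use Aws\Kms\KmsClient;

$kms = new KmsClient(['region' => 'us-east-1', 'version' => 'latest']);

// Retire a grant by key ARN + grant ID (alternatively, pass only 'GrantToken').
$kms->retireGrant([
    'KeyId'   => 'arn:aws:kms:us-east-1:111122223333:key/12345678-1234-1234-1234-123456789012', // placeholder ARN
    'GrantId' => '0123456789012345678901234567890123456789012345678901234567890123',            // placeholder 64-character grant ID
]);
```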

", "RevokeGrant": "Revokes a grant. You can revoke a grant to actively deny operations that depend on it.", + "UpdateAlias": "

Updates an alias to associate it with a different key.

An alias name can contain only alphanumeric characters, forward slashes (/), underscores (_), and dashes (-). An alias must start with the word \"alias\" followed by a forward slash (alias/). An alias that begins with \"aws\" after the forward slash (alias/aws...) is reserved by Amazon Web Services (AWS).

An alias is not a property of a key. Therefore, an alias can be associated with and disassociated from an existing key without changing the properties of the key.

Note that you cannot create or update an alias that represents a key in another account.
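For example, with the generated PHP client the call might look like the following sketch; the alias name and target key ID are placeholders:

```php
<?php
require 'vendor/autoload.php';

use Aws\Kms\KmsClient;

$kms = new KmsClient(['region' => 'us-east-1', 'version' => 'latest']);

// Point an existing alias at a different customer master key.
$kms->updateAlias([
    'AliasName'   => 'alias/example-app',                    // must begin with "alias/"
    'TargetKeyId' => '12345678-1234-1234-1234-123456789012', // placeholder key ID
]);
```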

", "UpdateKeyDescription": "

Updates the description of a key.

" }, - "service": "AWS Key Management Service

AWS Key Management Service (KMS) is an encryption and key management web service. This guide describes the KMS actions that you can call programmatically. For general information about KMS, see (need an address here). For the KMS developer guide, see (need address here).

AWS provides SDKs that consist of libraries and sample code for various programming languages and platforms (Java, Ruby, .Net, iOS, Android, etc.). The SDKs provide a convenient way to create programmatic access to KMS and AWS. For example, the SDKs take care of tasks such as signing requests (see below), managing errors, and retrying requests automatically. For more information about the AWS SDKs, including how to download and install them, see Tools for Amazon Web Services.

We recommend that you use the AWS SDKs to make programmatic API calls to KMS. However, you can also use the KMS Query API to make to make direct calls to the KMS web service.

Signing Requests

Requests must be signed by using an access key ID and a secret access key. We strongly recommend that you do not use your AWS account access key ID and secret key for everyday work with KMS. Instead, use the access key ID and secret access key for an IAM user, or you can use the AWS Security Token Service to generate temporary security credentials that you can use to sign requests.

All KMS operations require Signature Version 4.

Recording API Requests

KMS supports AWS CloudTrail, a service that records AWS API calls and related events for your AWS account and delivers them to an Amazon S3 bucket that you specify. By using the information collected by CloudTrail, you can determine what requests were made to KMS, who made the request, when it was made, and so on. To learn more about CloudTrail, including how to turn it on and find your log files, see the AWS CloudTrail User Guide

Additional Resources

For more information about credentials and request signing, see the following:

", + "service": "AWS Key Management Service

AWS Key Management Service (KMS) is an encryption and key management web service. This guide describes the KMS actions that you can call programmatically. For general information about KMS, see the AWS Key Management Service Developer Guide.

AWS provides SDKs that consist of libraries and sample code for various programming languages and platforms (Java, Ruby, .Net, iOS, Android, etc.). The SDKs provide a convenient way to create programmatic access to KMS and AWS. For example, the SDKs take care of tasks such as signing requests (see below), managing errors, and retrying requests automatically. For more information about the AWS SDKs, including how to download and install them, see Tools for Amazon Web Services.

We recommend that you use the AWS SDKs to make programmatic API calls to KMS.

Clients must support TLS (Transport Layer Security) 1.0. We recommend TLS 1.2. Clients must also support cipher suites with Perfect Forward Secrecy (PFS) such as Ephemeral Diffie-Hellman (DHE) or Elliptic Curve Ephemeral Diffie-Hellman (ECDHE). Most modern systems such as Java 7 and later support these modes.

Signing Requests

Requests must be signed by using an access key ID and a secret access key. We strongly recommend that you do not use your AWS account access key ID and secret key for everyday work with KMS. Instead, use the access key ID and secret access key for an IAM user, or you can use the AWS Security Token Service to generate temporary security credentials that you can use to sign requests.

All KMS operations require Signature Version 4.

Recording API Requests

KMS supports AWS CloudTrail, a service that records AWS API calls and related events for your AWS account and delivers them to an Amazon S3 bucket that you specify. By using the information collected by CloudTrail, you can determine what requests were made to KMS, who made the request, when it was made, and so on. To learn more about CloudTrail, including how to turn it on and find your log files, see the AWS CloudTrail User Guide.

Additional Resources

For more information about credentials and request signing, see the following:

Commonly Used APIs

Of the APIs discussed in this guide, the following will prove the most useful for most applications. You will likely perform actions other than these, such as creating keys and assigning policies, by using the console.

", "shapes": { "AWSAccountIdType": { "base": null, @@ -50,8 +52,9 @@ "base": null, "refs": { "AliasListEntry$AliasName": "

String that contains the alias.

", - "CreateAliasRequest$AliasName": "

String that contains the display name. Aliases that begin with AWS are reserved.

", - "DeleteAliasRequest$AliasName": "

The alias to be deleted.

" + "CreateAliasRequest$AliasName": "

String that contains the display name. The name must start with the word \"alias\" followed by a forward slash (alias/). Aliases that begin with \"alias/AWS\" are reserved.

", + "DeleteAliasRequest$AliasName": "

The alias to be deleted. The name must start with the word \"alias\" followed by a forward slash (alias/). Aliases that begin with \"alias/AWS\" are reserved.

", + "UpdateAliasRequest$AliasName": "String that contains the name of the alias to be modifed. The name must start with the word \"alias\" followed by a forward slash (alias/). Aliases that begin with \"alias/AWS\" are reserved." } }, "AlreadyExistsException": { @@ -81,12 +84,12 @@ "CiphertextType": { "base": null, "refs": { - "DecryptRequest$CiphertextBlob": "

Ciphertext including metadata.

", - "EncryptResponse$CiphertextBlob": "

The encrypted plaintext.

", - "GenerateDataKeyResponse$CiphertextBlob": "

Ciphertext that contains the wrapped key. You must store the blob and encryption context so that the ciphertext can be decrypted. You must provide both the ciphertext blob and the encryption context.

", - "GenerateDataKeyWithoutPlaintextResponse$CiphertextBlob": "

Ciphertext that contains the wrapped key. You must store the blob and encryption context so that the key can be used in a future operation.

", + "DecryptRequest$CiphertextBlob": "

Ciphertext to be decrypted. The blob includes metadata.

", + "EncryptResponse$CiphertextBlob": "

The encrypted plaintext. If you are using the CLI, the value is Base64 encoded. Otherwise, it is not encoded.

", + "GenerateDataKeyResponse$CiphertextBlob": "

Ciphertext that contains the encrypted data key. You must store the blob and enough information to reconstruct the encryption context so that the data encrypted by using the key can later be decrypted. You must provide both the ciphertext blob and the encryption context to the Decrypt API to recover the plaintext data key and decrypt the object.

If you are using the CLI, the value is Base64 encoded. Otherwise, it is not encoded.

", + "GenerateDataKeyWithoutPlaintextResponse$CiphertextBlob": "

Ciphertext that contains the wrapped data key. You must store the blob and encryption context so that the key can be used in a future decrypt operation.

If you are using the CLI, the value is Base64 encoded. Otherwise, it is not encoded.

", "ReEncryptRequest$CiphertextBlob": "

Ciphertext of the data to re-encrypt.

", - "ReEncryptResponse$CiphertextBlob": "

The re-encrypted data.

" + "ReEncryptResponse$CiphertextBlob": "

The re-encrypted data. If you are using the CLI, the value is Base64 encoded. Otherwise, it is not encoded.

" } }, "CreateAliasRequest": { @@ -210,7 +213,7 @@ "base": null, "refs": { "DecryptRequest$EncryptionContext": "

The encryption context. If this was specified in the Encrypt function, it must be specified here or the decryption operation will fail. For more information, see Encryption Context.

", - "EncryptRequest$EncryptionContext": "

Name:value pair that specifies the encryption context to be used for authenticated encryption. For more information, see Authenticated Encryption.

", + "EncryptRequest$EncryptionContext": "

Name/value pair that specifies the encryption context to be used for authenticated encryption. If used here, the same value must be supplied to the Decrypt API or decryption will fail. For more information, see Encryption Context.

", "GenerateDataKeyRequest$EncryptionContext": "

Name/value pair that contains additional data to be authenticated during the encryption and decryption processes that use the key. This value is logged by AWS CloudTrail to provide context around the data encrypted by the key.

", "GenerateDataKeyWithoutPlaintextRequest$EncryptionContext": "

Name:value pair that contains additional data to be authenticated during the encryption and decryption processes.

", "GrantConstraints$EncryptionContextSubset": "The constraint equals the full encryption context.", @@ -307,6 +310,7 @@ "refs": { "CreateGrantResponse$GrantId": "

Unique grant identifier. You can use the GrantId value to revoke a grant.

", "GrantListEntry$GrantId": "

Unique grant identifier.

", + "RetireGrantRequest$GrantId": "

Unique identifier of the grant to be retired. The grant ID is returned by the CreateGrant function.

", "RevokeGrantRequest$GrantId": "

Identifier of the grant to be revoked.

" } }, @@ -331,25 +335,25 @@ "GrantOperationList": { "base": null, "refs": { - "CreateGrantRequest$Operations": "

List of operations permitted by the grant. This can be any combination of one or more of the following values:

  1. Decrypt
  2. Encrypt
  3. GenerateDataKey
  4. GenerateDataKeyWithoutPlaintext
  5. ReEncryptFrom
  6. ReEncryptTo
  7. CreateGrant

", + "CreateGrantRequest$Operations": "

List of operations permitted by the grant. This can be any combination of one or more of the following values:

  1. Decrypt
  2. Encrypt
  3. GenerateDataKey
  4. GenerateDataKeyWithoutPlaintext
  5. ReEncryptFrom
  6. ReEncryptTo
  7. CreateGrant
  8. RetireGrant

", "GrantListEntry$Operations": "

List of operations permitted by the grant. This can be any combination of one or more of the following values:

  1. Decrypt
  2. Encrypt
  3. GenerateDataKey
  4. GenerateDataKeyWithoutPlaintext
  5. ReEncryptFrom
  6. ReEncryptTo
  7. CreateGrant

" } }, "GrantTokenList": { "base": null, "refs": { - "CreateGrantRequest$GrantTokens": "

List of grant tokens.

", - "DecryptRequest$GrantTokens": "

A list of grant tokens that represent grants which can be used to provide long term permissions to perform decryption.

", - "EncryptRequest$GrantTokens": "

A list of grant tokens that represent grants which can be used to provide long term permissions to perform encryption.

", - "GenerateDataKeyRequest$GrantTokens": "

A list of grant tokens that represent grants which can be used to provide long term permissions to generate a key.

", - "GenerateDataKeyWithoutPlaintextRequest$GrantTokens": "

A list of grant tokens that represent grants which can be used to provide long term permissions to generate a key.

", - "ReEncryptRequest$GrantTokens": "

Grant tokens that identify the grants that have permissions for the encryption and decryption process.

" + "CreateGrantRequest$GrantTokens": "

For more information, see Grant Tokens.

", + "DecryptRequest$GrantTokens": "

For more information, see Grant Tokens.

", + "EncryptRequest$GrantTokens": "

For more information, see Grant Tokens.

", + "GenerateDataKeyRequest$GrantTokens": "

For more information, see Grant Tokens.

", + "GenerateDataKeyWithoutPlaintextRequest$GrantTokens": "

For more information, see Grant Tokens.

", + "ReEncryptRequest$GrantTokens": "

For more information, see Grant Tokens.

" } }, "GrantTokenType": { "base": null, "refs": { - "CreateGrantResponse$GrantToken": "

The grant token. A grant token is a string that identifies a grant and which can be used to make a grant take effect immediately. A token contains all of the information necessary to create a grant.

", + "CreateGrantResponse$GrantToken": "

For more information, see Grant Tokens.

", "GrantTokenList$member": null, "RetireGrantRequest$GrantToken": "

Token that identifies the grant to be retired.

" } @@ -385,7 +389,7 @@ } }, "KMSInternalException": { - "base": "The request was rejected because an internal exception occurred. This error can be retried.", + "base": "

The request was rejected because an internal exception occurred. This error can be retried.

", "refs": { } }, @@ -393,32 +397,34 @@ "base": null, "refs": { "AliasListEntry$TargetKeyId": "

String that contains the key identifier pointed to by the alias.

", - "CreateAliasRequest$TargetKeyId": "

An identifier of the key for which you are creating the alias. This value cannot be another alias.

", - "CreateGrantRequest$KeyId": "

A unique key identifier for a customer master key. This value can be a globally unique identifier, an ARN, or an alias.

", - "DecryptResponse$KeyId": "

Unique identifier created by the system for the key. This value is always returned as long as no errors are encountered during the operation.

", - "DescribeKeyRequest$KeyId": "

Unique identifier of the customer master key to be described. This can be an ARN, an alias, or a globally unique identifier.

", - "DisableKeyRequest$KeyId": "

Unique identifier of the customer master key to be disabled. This can be an ARN, an alias, or a globally unique identifier.

", - "DisableKeyRotationRequest$KeyId": "

Unique identifier of the customer master key for which rotation is to be disabled. This can be an ARN, an alias, or a globally unique identifier.

", - "EnableKeyRequest$KeyId": "

Unique identifier of the customer master key to be enabled. This can be an ARN, an alias, or a globally unique identifier.

", - "EnableKeyRotationRequest$KeyId": "

Unique identifier of the customer master key for which rotation is to be enabled. This can be an ARN, an alias, or a globally unique identifier.

", - "EncryptRequest$KeyId": "

Unique identifier of the customer master. This can be an ARN, an alias, or the Key ID.

", + "CreateAliasRequest$TargetKeyId": "

An identifier of the key for which you are creating the alias. This value cannot be another alias but can be a globally unique identifier or a fully specified ARN to a key.

", + "CreateGrantRequest$KeyId": "

A unique identifier for the customer master key. This value can be a globally unique identifier or the fully specified ARN to a key.

", + "DecryptResponse$KeyId": "

ARN of the key used to perform the decryption. This value is returned if no errors are encountered during the operation.

", + "DescribeKeyRequest$KeyId": "

A unique identifier for the customer master key. This value can be a globally unique identifier, a fully specified ARN to either an alias or a key, or an alias name prefixed by \"alias/\".

", + "DisableKeyRequest$KeyId": "

A unique identifier for the customer master key. This value can be a globally unique identifier or the fully specified ARN to a key.

", + "DisableKeyRotationRequest$KeyId": "

A unique identifier for the customer master key. This value can be a globally unique identifier or the fully specified ARN to a key.

", + "EnableKeyRequest$KeyId": "

A unique identifier for the customer master key. This value can be a globally unique identifier or the fully specified ARN to a key.

", + "EnableKeyRotationRequest$KeyId": "

A unique identifier for the customer master key. This value can be a globally unique identifier or the fully specified ARN to a key.

", + "EncryptRequest$KeyId": "

A unique identifier for the customer master key. This value can be a globally unique identifier, a fully specified ARN to either an alias or a key, or an alias name prefixed by \"alias/\".

", "EncryptResponse$KeyId": "

The ID of the key used during encryption.

", - "GenerateDataKeyRequest$KeyId": "

Unique identifier of the key. This can be an ARN, an alias, or a globally unique identifier.

", - "GenerateDataKeyResponse$KeyId": "

System generated unique identifier for the key.

", - "GenerateDataKeyWithoutPlaintextRequest$KeyId": "

Unique identifier of the key. This can be an ARN, an alias, or a globally unique identifier.

", - "GenerateDataKeyWithoutPlaintextResponse$KeyId": "

System generated unique identifier for the key.

", - "GetKeyPolicyRequest$KeyId": "

Unique identifier of the key. This can be an ARN, an alias, or a globally unique identifier.

", - "GetKeyRotationStatusRequest$KeyId": "

Unique identifier of the key. This can be an ARN, an alias, or a globally unique identifier.

", + "GenerateDataKeyRequest$KeyId": "

A unique identifier for the customer master key. This value can be a globally unique identifier, a fully specified ARN to either an alias or a key, or an alias name prefixed by \"alias/\".

", + "GenerateDataKeyResponse$KeyId": "

System generated unique identifier of the key to be used to decrypt the encrypted copy of the data key.

", + "GenerateDataKeyWithoutPlaintextRequest$KeyId": "

A unique identifier for the customer master key. This value can be a globally unique identifier, a fully specified ARN to either an alias or a key, or an alias name prefixed by \"alias/\".

", + "GenerateDataKeyWithoutPlaintextResponse$KeyId": "

System generated unique identifier of the key to be used to decrypt the encrypted copy of the data key.

", + "GetKeyPolicyRequest$KeyId": "

A unique identifier for the customer master key. This value can be a globally unique identifier or the fully specified ARN to a key.

", + "GetKeyRotationStatusRequest$KeyId": "

A unique identifier for the customer master key. This value can be a globally unique identifier or the fully specified ARN to a key.

", "KeyListEntry$KeyId": "

Unique identifier of the key.

", "KeyMetadata$KeyId": "

Unique identifier for the key.

", - "ListGrantsRequest$KeyId": "

Unique identifier of the key. This can be an ARN, an alias, or a globally unique identifier.

", - "ListKeyPoliciesRequest$KeyId": "

Unique identifier of the key. This can be an ARN, an alias, or a globally unique identifier.

", - "PutKeyPolicyRequest$KeyId": "

Unique identifier of the key. This can be an ARN, an alias, or a globally unique identifier.

", - "ReEncryptRequest$DestinationKeyId": "

Key identifier of the key used to re-encrypt the data.

", + "ListGrantsRequest$KeyId": "

A unique identifier for the customer master key. This value can be a globally unique identifier or the fully specified ARN to a key.

", + "ListKeyPoliciesRequest$KeyId": "

A unique identifier for the customer master key. This value can be a globally unique identifier, a fully specified ARN to either an alias or a key, or an alias name prefixed by \"alias/\".

", + "PutKeyPolicyRequest$KeyId": "

A unique identifier for the customer master key. This value can be a globally unique identifier or the fully specified ARN to a key.

", + "ReEncryptRequest$DestinationKeyId": "

A unique identifier for the customer master key used to re-encrypt the data. This value can be a globally unique identifier, a fully specified ARN to either an alias or a key, or an alias name prefixed by \"alias/\".

", "ReEncryptResponse$SourceKeyId": "

Unique identifier of the key used to originally encrypt the data.

", "ReEncryptResponse$KeyId": "

Unique identifier of the key used to re-encrypt the data.

", - "RevokeGrantRequest$KeyId": "

Unique identifier of the key associated with the grant.

", - "UpdateKeyDescriptionRequest$KeyId": "

Unique value that identifies the key for which the description is to be changed.

" + "RetireGrantRequest$KeyId": "

A unique identifier for the customer master key associated with the grant. This value can be a globally unique identifier or a fully specified ARN of the key.

", + "RevokeGrantRequest$KeyId": "

A unique identifier for the customer master key associated with the grant. This value can be a globally unique identifier or the fully specified ARN to a key.

", + "UpdateAliasRequest$TargetKeyId": "

Unique identifier of the customer master key to be associated with the alias. This value can be a globally unique identifier or the fully specified ARN of a key.

", + "UpdateKeyDescriptionRequest$KeyId": "

A unique identifier for the customer master key. This value can be a globally unique identifier or the fully specified ARN to a key.

" } }, "KeyList": { @@ -532,8 +538,8 @@ "NumberOfBytesType": { "base": null, "refs": { - "GenerateDataKeyRequest$NumberOfBytes": "

Integer that contains the number of bytes to generate. Common values are 128, 256, 512, 1024 and so on. 1024 is the current limit.

", - "GenerateDataKeyWithoutPlaintextRequest$NumberOfBytes": "

Integer that contains the number of bytes to generate. Common values are 128, 256, 512, 1024 and so on.

", + "GenerateDataKeyRequest$NumberOfBytes": "

Integer that contains the number of bytes to generate. Common values are 128, 256, 512, and 1024. 1024 is the current limit. We recommend that you use the KeySpec parameter instead.

", + "GenerateDataKeyWithoutPlaintextRequest$NumberOfBytes": "

Integer that contains the number of bytes to generate. Common values are 128, 256, 512, 1024 and so on. We recommend that you use the KeySpec parameter instead.

", "GenerateRandomRequest$NumberOfBytes": "

Integer that contains the number of bytes to generate. Common values are 128, 256, 512, 1024 and so on. The current limit is 1024 bytes.

" } }, @@ -542,7 +548,7 @@ "refs": { "DecryptResponse$Plaintext": "

Decrypted plaintext data. This value may not be returned if the customer master key is not available or if you didn't have permission to use it.

", "EncryptRequest$Plaintext": "

Data to be encrypted.

", - "GenerateDataKeyResponse$Plaintext": "

Plaintext that contains the unwrapped key. Use this for encryption and decryption and then remove it from memory as soon as possible.

", + "GenerateDataKeyResponse$Plaintext": "

Plaintext that contains the data key. Use this for encryption and decryption and then remove it from memory as soon as possible.

", "GenerateRandomResponse$Plaintext": "

Plaintext that contains the unpredictable byte string.

" } }, @@ -608,6 +614,11 @@ "refs": { } }, + "UpdateAliasRequest": { + "base": null, + "refs": { + } + }, "UpdateKeyDescriptionRequest": { "base": null, "refs": { diff --git a/src/data/logs/2014-03-28/api-2.json b/src/data/logs/2014-03-28/api-2.json index f8272c8a47..7354603e76 100644 --- a/src/data/logs/2014-03-28/api-2.json +++ b/src/data/logs/2014-03-28/api-2.json @@ -244,6 +244,30 @@ } ] }, + "FilterLogEvents":{ + "name":"FilterLogEvents", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"FilterLogEventsRequest"}, + "output":{"shape":"FilterLogEventsResponse"}, + "errors":[ + { + "shape":"InvalidParameterException", + "exception":true + }, + { + "shape":"ResourceNotFoundException", + "exception":true + }, + { + "shape":"ServiceUnavailableException", + "exception":true, + "fault":true + } + ] + }, "GetLogEvents":{ "name":"GetLogEvents", "http":{ @@ -504,6 +528,7 @@ "nextToken":{"shape":"NextToken"} } }, + "EventId":{"type":"string"}, "EventMessage":{ "type":"string", "min":1 @@ -520,6 +545,28 @@ "value":{"shape":"Value"} }, "FilterCount":{"type":"integer"}, + "FilterLogEventsRequest":{ + "type":"structure", + "required":["logGroupName"], + "members":{ + "logGroupName":{"shape":"LogGroupName"}, + "logStreamNames":{"shape":"InputLogStreamNames"}, + "startTime":{"shape":"Timestamp"}, + "endTime":{"shape":"Timestamp"}, + "filterPattern":{"shape":"FilterPattern"}, + "nextToken":{"shape":"NextToken"}, + "limit":{"shape":"EventsLimit"}, + "interleaved":{"shape":"Interleaved"} + } + }, + "FilterLogEventsResponse":{ + "type":"structure", + "members":{ + "events":{"shape":"FilteredLogEvents"}, + "searchedLogStreams":{"shape":"SearchedLogStreams"}, + "nextToken":{"shape":"NextToken"} + } + }, "FilterName":{ "type":"string", "min":1, @@ -531,6 +578,20 @@ "min":0, "max":512 }, + "FilteredLogEvent":{ + "type":"structure", + "members":{ + "logStreamName":{"shape":"LogStreamName"}, + "timestamp":{"shape":"Timestamp"}, + "message":{"shape":"EventMessage"}, + "ingestionTime":{"shape":"Timestamp"}, + "eventId":{"shape":"EventId"} + } + }, + "FilteredLogEvents":{ + "type":"list", + "member":{"shape":"FilteredLogEvent"} + }, "GetLogEventsRequest":{ "type":"structure", "required":[ @@ -572,6 +633,13 @@ "min":1, "max":10000 }, + "InputLogStreamNames":{ + "type":"list", + "member":{"shape":"LogStreamName"}, + "min":1, + "max":100 + }, + "Interleaved":{"type":"boolean"}, "InvalidParameterException":{ "type":"structure", "members":{ @@ -632,6 +700,7 @@ "max":512, "pattern":"[^:*]*" }, + "LogStreamSearchedCompletely":{"type":"boolean"}, "LogStreams":{ "type":"list", "member":{"shape":"LogStream"} @@ -790,6 +859,17 @@ }, "exception":true }, + "SearchedLogStream":{ + "type":"structure", + "members":{ + "logStreamName":{"shape":"LogStreamName"}, + "searchedCompletely":{"shape":"LogStreamSearchedCompletely"} + } + }, + "SearchedLogStreams":{ + "type":"list", + "member":{"shape":"SearchedLogStream"} + }, "SequenceToken":{ "type":"string", "min":1 diff --git a/src/data/logs/2014-03-28/docs-2.json b/src/data/logs/2014-03-28/docs-2.json index 3b55bd2fa1..0add26c49b 100644 --- a/src/data/logs/2014-03-28/docs-2.json +++ b/src/data/logs/2014-03-28/docs-2.json @@ -10,6 +10,7 @@ "DescribeLogGroups": "

Returns all the log groups that are associated with the AWS account making the request. The list returned in the response is ASCII-sorted by log group name.

By default, this operation returns up to 50 log groups. If there are more log groups to list, the response would contain a nextToken value in the response body. You can also limit the number of log groups returned in the response by specifying the limit parameter in the request.

", "DescribeLogStreams": "

Returns all the log streams that are associated with the specified log group. The list returned in the response is ASCII-sorted by log stream name.

By default, this operation returns up to 50 log streams. If there are more log streams to list, the response would contain a nextToken value in the response body. You can also limit the number of log streams returned in the response by specifying the limit parameter in the request. This operation has a limit of five transactions per second, after which transactions are throttled.

", "DescribeMetricFilters": "

Returns all the metrics filters associated with the specified log group. The list returned in the response is ASCII-sorted by filter name.

By default, this operation returns up to 50 metric filters. If there are more metric filters to list, the response would contain a nextToken value in the response body. You can also limit the number of metric filters returned in the response by specifying the limit parameter in the request.

", + "FilterLogEvents": "

Retrieves log events from the specified log group, optionally filtered by a filter pattern. You can provide an optional time range to filter the results on the event timestamp. You can limit the streams searched to an explicit list of logStreamNames.

By default, this operation returns as many matching log events as can fit in a response size of 1MB, up to 10,000 log events, or all the events found within a time-bounded scan window. If the response includes a nextToken, then there is more data to search, and the search can be resumed with a new request providing the nextToken. The response will contain a list of searchedLogStreams that contains information about which streams were searched in the request and whether they have been searched completely or require further pagination. The limit parameter in the request can be used to specify the maximum number of events to return in a page.
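A rough pagination sketch with the generated PHP client, assuming a hypothetical log group name and filter pattern:

```php
<?php
require 'vendor/autoload.php';

use Aws\CloudWatchLogs\CloudWatchLogsClient;

$logs = new CloudWatchLogsClient(['region' => 'us-east-1', 'version' => 'latest']);

$params = [
    'logGroupName'  => 'my-log-group',   // hypothetical log group
    'filterPattern' => 'ERROR',          // hypothetical filter pattern
    'interleaved'   => true,
];

while (true) {
    $page = $logs->filterLogEvents($params);
    foreach ($page->get('events') ?: [] as $event) {
        echo $event['logStreamName'] . ': ' . $event['message'] . PHP_EOL;
    }
    $next = $page->get('nextToken');     // null when the search is complete
    if ($next === null) {
        break;
    }
    $params['nextToken'] = $next;        // resume the search from this token
}
```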

", "GetLogEvents": "

Retrieves log events from the specified log stream. You can provide an optional time range to filter the results on the event timestamp.

By default, this operation returns as many log events as can fit in a response size of 1MB, up to 10,000 log events. The response will always include a nextForwardToken and a nextBackwardToken in the response body. You can use any of these tokens in subsequent GetLogEvents requests to paginate through events in either forward or backward direction. You can also limit the number of log events returned in the response by specifying the limit parameter in the request.

", "PutLogEvents": "

Uploads a batch of log events to the specified log stream.

Every PutLogEvents request must include the sequenceToken obtained from the response of the previous request. An upload in a newly created log stream does not require a sequenceToken.

The batch of events must satisfy the following constraints:

", "PutMetricFilter": "

Creates or updates a metric filter and associates it with the specified log group. Metric filters allow you to configure rules to extract metric data from log events ingested through PutLogEvents requests.

", @@ -111,9 +112,16 @@ "refs": { } }, + "EventId": { + "base": null, + "refs": { + "FilteredLogEvent$eventId": "A unique identifier for this event." + } + }, "EventMessage": { "base": null, "refs": { + "FilteredLogEvent$message": null, "InputLogEvent$message": null, "MetricFilterMatchRecord$eventMessage": null, "OutputLogEvent$message": null, @@ -129,7 +137,8 @@ "EventsLimit": { "base": "The maximum number of events to return.", "refs": { - "GetLogEventsRequest$limit": "

The maximum number of log events returned in the response. If you don't specify a value, the request would return as much log events as can fit in a response size of 1MB, up to 10,000 log events.

" + "FilterLogEventsRequest$limit": "

The maximum number of events to return in a page of results. Default is 10,000 events.

", + "GetLogEventsRequest$limit": "

The maximum number of log events returned in the response. If you don't specify a value, the request would return as many log events as can fit in a response size of 1MB, up to 10,000 log events.

" } }, "ExtractedValues": { @@ -144,6 +153,16 @@ "LogGroup$metricFilterCount": null } }, + "FilterLogEventsRequest": { + "base": null, + "refs": { + } + }, + "FilterLogEventsResponse": { + "base": null, + "refs": { + } + }, "FilterName": { "base": "The name of the metric filter.", "refs": { @@ -156,11 +175,24 @@ "FilterPattern": { "base": "A symbolic description of how Amazon CloudWatch Logs should interpret the data in each log entry. For example, a log entry may contain timestamps, IP addresses, strings, and so on. You use the pattern to specify what to look for in the log stream.", "refs": { + "FilterLogEventsRequest$filterPattern": "

A valid CloudWatch Logs filter pattern to use for filtering the response. If not provided, all the events are matched.

", "MetricFilter$filterPattern": null, "PutMetricFilterRequest$filterPattern": null, "TestMetricFilterRequest$filterPattern": null } }, + "FilteredLogEvent": { + "base": "Represents a matched event from a FilterLogEvents request.", + "refs": { + "FilteredLogEvents$member": null + } + }, + "FilteredLogEvents": { + "base": "A list of matched FilteredLogEvent objects returned from a FilterLogEvents request.", + "refs": { + "FilterLogEventsResponse$events": "

A list of FilteredLogEvent objects representing the matched events from the request.

" + } + }, "GetLogEventsRequest": { "base": null, "refs": { @@ -183,6 +215,18 @@ "PutLogEventsRequest$logEvents": null } }, + "InputLogStreamNames": { + "base": "A list of log stream names.", + "refs": { + "FilterLogEventsRequest$logStreamNames": "

Optional list of log stream names within the specified log group to search. Defaults to all the log streams in the log group.

" + } + }, + "Interleaved": { + "base": null, + "refs": { + "FilterLogEventsRequest$interleaved": "

If provided, the API will make a best effort to provide responses that contain events from multiple log streams within the log group, interleaved in a single response. If not provided, all the matching log events in the first log stream are returned first, then those in the next log stream, and so on.

" + } + }, "InvalidParameterException": { "base": "

Returned if a parameter of the request is incorrectly specified.

", "refs": { @@ -224,6 +268,7 @@ "DescribeLogGroupsRequest$logGroupNamePrefix": null, "DescribeLogStreamsRequest$logGroupName": null, "DescribeMetricFiltersRequest$logGroupName": null, + "FilterLogEventsRequest$logGroupName": "

The name of the log group to query.

", "GetLogEventsRequest$logGroupName": null, "LogGroup$logGroupName": null, "PutLogEventsRequest$logGroupName": null, @@ -249,9 +294,18 @@ "CreateLogStreamRequest$logStreamName": null, "DeleteLogStreamRequest$logStreamName": null, "DescribeLogStreamsRequest$logStreamNamePrefix": "

Will only return log streams that match the provided logStreamNamePrefix. If you don't specify a value, no prefix filter is applied.

", + "FilteredLogEvent$logStreamName": "The name of the log stream this event belongs to.", "GetLogEventsRequest$logStreamName": null, + "InputLogStreamNames$member": null, "LogStream$logStreamName": null, - "PutLogEventsRequest$logStreamName": null + "PutLogEventsRequest$logStreamName": null, + "SearchedLogStream$logStreamName": "The name of the log stream." + } + }, + "LogStreamSearchedCompletely": { + "base": null, + "refs": { + "SearchedLogStream$searchedCompletely": "Indicates whether all the events in this log stream were searched or more data exists to search by paginating further." } }, "LogStreams": { @@ -324,6 +378,8 @@ "DescribeLogStreamsResponse$nextToken": null, "DescribeMetricFiltersRequest$nextToken": "

A string token used for pagination that points to the next page of results. It must be a value obtained from the response of the previous DescribeMetricFilters request.

", "DescribeMetricFiltersResponse$nextToken": null, + "FilterLogEventsRequest$nextToken": "

A pagination token obtained from a FilterLogEvents response to continue paginating the FilterLogEvents results.

", + "FilterLogEventsResponse$nextToken": "

A pagination token obtained from a FilterLogEvents response to continue paginating the FilterLogEvents results.

", "GetLogEventsRequest$nextToken": "

A string token used for pagination that points to the next page of results. It must be a value obtained from the nextForwardToken or nextBackwardToken fields in the response of the previous GetLogEvents request.

", "GetLogEventsResponse$nextForwardToken": null, "GetLogEventsResponse$nextBackwardToken": null @@ -388,6 +444,18 @@ "refs": { } }, + "SearchedLogStream": { + "base": "An object indicating the search status of a log stream in a FilterLogEvents request.", + "refs": { + "SearchedLogStreams$member": null + } + }, + "SearchedLogStreams": { + "base": "A list of SearchedLogStream objects indicating the search status for log streams in a FilterLogEvents request.", + "refs": { + "FilterLogEventsResponse$searchedLogStreams": "

A list of SearchedLogStream objects indicating which log streams have been searched in this request and whether each has been searched completely or still has more to be paginated.

" + } + }, "SequenceToken": { "base": "A string token used for making PutLogEvents requests. A sequenceToken can only be used once, and PutLogEvents requests must include the sequenceToken obtained from the response of the previous request.", "refs": { @@ -435,6 +503,10 @@ "Timestamp": { "base": "A point in time expressed as the number milliseconds since Jan 1, 1970 00:00:00 UTC.", "refs": { + "FilterLogEventsRequest$startTime": "

A unix timestamp indicating the start time of the range for the request. If provided, events with a timestamp prior to this time will not be returned.

", + "FilterLogEventsRequest$endTime": "

A unix timestamp indicating the end time of the range for the request. If provided, events with a timestamp later than this time will not be returned.

", + "FilteredLogEvent$timestamp": null, + "FilteredLogEvent$ingestionTime": null, "GetLogEventsRequest$startTime": null, "GetLogEventsRequest$endTime": null, "InputLogEvent$timestamp": null, diff --git a/src/data/manifest.json b/src/data/manifest.json index dfe6b51e95..6e38fde3f9 100644 --- a/src/data/manifest.json +++ b/src/data/manifest.json @@ -90,6 +90,13 @@ "2012-10-25": "2012-10-25" } }, + "ds": { + "namespace": "DirectoryService", + "versions": { + "latest": "2015-04-16", + "2015-04-16": "2015-04-16" + } + }, "dynamodb": { "namespace": "DynamoDb", "versions": { @@ -100,8 +107,8 @@ "ec2": { "namespace": "Ec2", "versions": { - "latest": "2015-03-01", - "2015-03-01": "2015-03-01" + "latest": "2015-04-15", + "2015-04-15": "2015-04-15" } }, "ecs": { @@ -125,6 +132,13 @@ "2010-12-01": "2010-12-01" } }, + "elasticfilesystem": { + "namespace": "Efs", + "versions": { + "latest": "2015-02-01", + "2015-02-01": "2015-02-01" + } + }, "elasticloadbalancing": { "namespace": "ElasticLoadBalancing", "versions": { diff --git a/src/data/opsworks/2013-02-18/api-2.json b/src/data/opsworks/2013-02-18/api-2.json index 11961a4460..493cc0d066 100644 --- a/src/data/opsworks/2013-02-18/api-2.json +++ b/src/data/opsworks/2013-02-18/api-2.json @@ -770,6 +770,25 @@ } ] }, + "GrantAccess":{ + "name":"GrantAccess", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GrantAccessRequest"}, + "output":{"shape":"GrantAccessResult"}, + "errors":[ + { + "shape":"ValidationException", + "exception":true + }, + { + "shape":"ResourceNotFoundException", + "exception":true + } + ] + }, "RebootInstance":{ "name":"RebootInstance", "http":{ @@ -1285,7 +1304,8 @@ "IgnoreMetricsTime":{"shape":"Minute"}, "CpuThreshold":{"shape":"Double"}, "MemoryThreshold":{"shape":"Double"}, - "LoadThreshold":{"shape":"Double"} + "LoadThreshold":{"shape":"Double"}, + "Alarms":{"shape":"Strings"} } }, "AutoScalingType":{ @@ -2013,6 +2033,20 @@ "Hostname":{"shape":"String"} } }, + "GrantAccessRequest":{ + "type":"structure", + "required":["InstanceId"], + "members":{ + "InstanceId":{"shape":"String"}, + "ValidForInMinutes":{"shape":"ValidForInMinutes"} + } + }, + "GrantAccessResult":{ + "type":"structure", + "members":{ + "TemporaryCredential":{"shape":"TemporaryCredential"} + } + }, "Hour":{"type":"string"}, "Instance":{ "type":"structure", @@ -2545,6 +2579,15 @@ "member":{"shape":"String"} }, "Switch":{"type":"string"}, + "TemporaryCredential":{ + "type":"structure", + "members":{ + "Username":{"shape":"String"}, + "Password":{"shape":"String"}, + "ValidForInMinutes":{"shape":"Integer"}, + "InstanceId":{"shape":"String"} + } + }, "TimeBasedAutoScalingConfiguration":{ "type":"structure", "members":{ @@ -2704,6 +2747,11 @@ "type":"list", "member":{"shape":"UserProfile"} }, + "ValidForInMinutes":{ + "type":"integer", + "min":60, + "max":1440 + }, "ValidationException":{ "type":"structure", "members":{ diff --git a/src/data/opsworks/2013-02-18/docs-2.json b/src/data/opsworks/2013-02-18/docs-2.json index 4dc8563c63..0a051fbcb4 100644 --- a/src/data/opsworks/2013-02-18/docs-2.json +++ b/src/data/opsworks/2013-02-18/docs-2.json @@ -1,7 +1,7 @@ { "version": "2.0", "operations": { - "AssignInstance": "

Assign a registered instance to a custom layer. You cannot use this action with instances that were created with AWS OpsWorks.

Required Permissions: To use this action, an IAM user must have a Manage permissions level for the stack or an attached policy that explicitly grants permissions. For more information on user permissions, see Managing User Permissions.

", + "AssignInstance": "

Assign a registered instance to a layer.

Required Permissions: To use this action, an IAM user must have a Manage permissions level for the stack or an attached policy that explicitly grants permissions. For more information on user permissions, see Managing User Permissions.

", "AssignVolume": "

Assigns one of the stack's registered Amazon EBS volumes to a specified instance. The volume must first be registered with the stack by calling RegisterVolume. After you register the volume, you must call UpdateVolume to specify a mount point before calling AssignVolume. For more information, see Resource Management.

Required Permissions: To use this action, an IAM user must have a Manage permissions level for the stack, or an attached policy that explicitly grants permissions. For more information on user permissions, see Managing User Permissions.

", "AssociateElasticIp": "

Associates one of the stack's registered Elastic IP addresses with a specified instance. The address must first be registered with the stack by calling RegisterElasticIp. For more information, see Resource Management.

Required Permissions: To use this action, an IAM user must have a Manage permissions level for the stack, or an attached policy that explicitly grants permissions. For more information on user permissions, see Managing User Permissions.

", "AttachElasticLoadBalancer": "

Attaches an Elastic Load Balancing load balancer to a specified layer. For more information, see Elastic Load Balancing.

You must create the Elastic Load Balancing instance separately, by using the Elastic Load Balancing console, API, or CLI. For more information, see Elastic Load Balancing Developer Guide.

Required Permissions: To use this action, an IAM user must have a Manage permissions level for the stack, or an attached policy that explicitly grants permissions. For more information on user permissions, see Managing User Permissions.

", @@ -43,6 +43,7 @@ "DetachElasticLoadBalancer": "

Detaches a specified Elastic Load Balancing instance from its layer.

Required Permissions: To use this action, an IAM user must have a Manage permissions level for the stack, or an attached policy that explicitly grants permissions. For more information on user permissions, see Managing User Permissions.

", "DisassociateElasticIp": "

Disassociates an Elastic IP address from its instance. The address remains registered with the stack. For more information, see Resource Management.

Required Permissions: To use this action, an IAM user must have a Manage permissions level for the stack, or an attached policy that explicitly grants permissions. For more information on user permissions, see Managing User Permissions.

", "GetHostnameSuggestion": "

Gets a generated host name for the specified layer, based on the current host name theme.

Required Permissions: To use this action, an IAM user must have a Manage permissions level for the stack, or an attached policy that explicitly grants permissions. For more information on user permissions, see Managing User Permissions.

", + "GrantAccess": "This API can be used only with Windows stacks.

Grants RDP access to a Windows instance for a specified time period.
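A hedged sketch of calling this operation from the generated PHP client; the instance ID is a placeholder:

```php
<?php
require 'vendor/autoload.php';

use Aws\OpsWorks\OpsWorksClient;

// AWS OpsWorks exposes a single endpoint in us-east-1.
$opsworks = new OpsWorksClient(['region' => 'us-east-1', 'version' => 'latest']);

$result = $opsworks->grantAccess([
    'InstanceId'        => 'instance-id-placeholder',
    'ValidForInMinutes' => 60,   // minimum allowed by the ValidForInMinutes shape
]);

// Temporary RDP credentials for the Windows instance.
$credential = $result['TemporaryCredential'];
echo $credential['Username'] . ' / ' . $credential['Password'] . PHP_EOL;
```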

", "RebootInstance": "

Reboots a specified instance. For more information, see Starting, Stopping, and Rebooting Instances.

Required Permissions: To use this action, an IAM user must have a Manage permissions level for the stack, or an attached policy that explicitly grants permissions. For more information on user permissions, see Managing User Permissions.

", "RegisterElasticIp": "

Registers an Elastic IP address with a specified stack. An address can be registered with only one stack at a time. If the address is already registered, you must first deregister it by calling DeregisterElasticIp. For more information, see Resource Management.

Required Permissions: To use this action, an IAM user must have a Manage permissions level for the stack, or an attached policy that explicitly grants permissions. For more information on user permissions, see Managing User Permissions.

", "RegisterInstance": "

Registers instances with a specified stack that were created outside of AWS OpsWorks.

We do not recommend using this action to register instances. The complete registration operation has two primary steps, installing the AWS OpsWorks agent on the instance and registering the instance with the stack. RegisterInstance handles only the second step. You should instead use the AWS CLI register command, which performs the entire registration operation.

Required Permissions: To use this action, an IAM user must have a Manage permissions level for the stack or an attached policy that explicitly grants permissions. For more information on user permissions, see Managing User Permissions.

", @@ -67,7 +68,7 @@ "UpdateUserProfile": "

Updates a specified user profile.

Required Permissions: To use this action, an IAM user must have an attached policy that explicitly grants permissions. For more information on user permissions, see Managing User Permissions.

", "UpdateVolume": "

Updates an Amazon EBS volume's name or mount point. For more information, see Resource Management.

Required Permissions: To use this action, an IAM user must have a Manage permissions level for the stack, or an attached policy that explicitly grants permissions. For more information on user permissions, see Managing User Permissions.

" }, - "service": "AWS OpsWorks

Welcome to the AWS OpsWorks API Reference. This guide provides descriptions, syntax, and usage examples about AWS OpsWorks actions and data types, including common parameters and error codes.

AWS OpsWorks is an application management service that provides an integrated experience for overseeing the complete application lifecycle. For information about this product, go to the AWS OpsWorks details page.

SDKs and CLI

The most common way to use the AWS OpsWorks API is by using the AWS Command Line Interface (CLI) or by using one of the AWS SDKs to implement applications in your preferred language. For more information, see:

Endpoints

AWS OpsWorks supports only one endpoint, opsworks.us-east-1.amazonaws.com (HTTPS), so you must connect to that endpoint. You can then use the API to direct AWS OpsWorks to create stacks in any AWS Region.

Chef Versions

When you call CreateStack, CloneStack, or UpdateStack we recommend you use the ConfigurationManager parameter to specify the Chef version, 0.9, 11.4, or 11.10. The default value is currently 11.10. For more information, see Chef Versions.

You can still specify Chef 0.9 for your stack, but new features are not available for Chef 0.9 stacks, and support is scheduled to end on July 24, 2014. We do not recommend using Chef 0.9 for new stacks, and we recommend migrating your existing Chef 0.9 stacks to Chef 11.10 as soon as possible.", + "service": "AWS OpsWorks

Welcome to the AWS OpsWorks API Reference. This guide provides descriptions, syntax, and usage examples about AWS OpsWorks actions and data types, including common parameters and error codes.

AWS OpsWorks is an application management service that provides an integrated experience for overseeing the complete application lifecycle. For information about this product, go to the AWS OpsWorks details page.

SDKs and CLI

The most common way to use the AWS OpsWorks API is by using the AWS Command Line Interface (CLI) or by using one of the AWS SDKs to implement applications in your preferred language. For more information, see:

Endpoints

AWS OpsWorks supports only one endpoint, opsworks.us-east-1.amazonaws.com (HTTPS), so you must connect to that endpoint. You can then use the API to direct AWS OpsWorks to create stacks in any AWS Region.

Chef Versions

When you call CreateStack, CloneStack, or UpdateStack we recommend you use the ConfigurationManager parameter to specify the Chef version, 0.9, 11.4, or 11.10. The default value is currently 11.10. For more information, see Chef Versions.

You can still specify Chef 0.9 for your stack, but new features are not available for Chef 0.9 stacks, and support is scheduled to end on July 24, 2014. We do not recommend using Chef 0.9 for new stacks, and we recommend migrating your existing Chef 0.9 stacks to Chef 11.10 as soon as possible.", "shapes": { "App": { "base": "

A description of the app.

", @@ -93,7 +94,7 @@ "base": null, "refs": { "App$Type": "

The app type.

", - "CreateAppRequest$Type": "

The app type. Each supported type is associated with a particular layer. For example, PHP applications are associated with a PHP layer. AWS OpsWorks deploys an application to those instances that are members of the corresponding layer.

", + "CreateAppRequest$Type": "

The app type. Each supported type is associated with a particular layer. For example, PHP applications are associated with a PHP layer. AWS OpsWorks deploys an application to those instances that are members of the corresponding layer. If your app isn't one of the standard types, or you prefer to implement your own Deploy recipes, specify other.

", "UpdateAppRequest$Type": "

The app type.

" } }, @@ -143,9 +144,9 @@ "AutoScalingType": { "base": null, "refs": { - "CreateInstanceRequest$AutoScalingType": "

For load-based or time-based instances, the type.

", + "CreateInstanceRequest$AutoScalingType": "

For load-based or time-based instances, the type. Windows stacks can use only time-based instances.

", "Instance$AutoScalingType": "

For load-based or time-based instances, the type.

", - "UpdateInstanceRequest$AutoScalingType": "

For load-based or time-based instances, the type.

" + "UpdateInstanceRequest$AutoScalingType": "

For load-based or time-based instances, the type. Windows stacks can use only time-based instances.

" } }, "BlockDeviceMapping": { @@ -183,7 +184,7 @@ "DeleteInstanceRequest$DeleteElasticIp": "

Whether to delete the instance Elastic IP address.

", "DeleteInstanceRequest$DeleteVolumes": "

Whether to delete the instance's Amazon EBS volumes.

", "EbsBlockDevice$DeleteOnTermination": "

Whether the volume is deleted on instance termination.

", - "EnvironmentVariable$Secure": "

(Optional) Whether the variable's value will be returned by the DescribeApps action. To conceal an environment variable's value, set Secure to true. DescribeApps then returns **Filtered** instead of the actual value. The default value for Secure is false.

", + "EnvironmentVariable$Secure": "

(Optional) Whether the variable's value will be returned by the DescribeApps action. To conceal an environment variable's value, set Secure to true. DescribeApps then returns *****FILTERED***** instead of the actual value. The default value for Secure is false.

", "Instance$InstallUpdatesOnBoot": "

Whether to install operating system and package updates when the instance boots. The default value is true. If this value is set to false, you must then update your instances manually by using CreateDeployment to run the update_dependencies stack command or manually running yum (Amazon Linux) or apt-get (Ubuntu) on the instances.

We strongly recommend using the default value of true, to ensure that your instances have the latest security updates.

", "Instance$EbsOptimized": "

Whether this is an Amazon EBS-optimized instance.

", "Layer$EnableAutoHealing": "

Whether auto healing is disabled for the layer.

", @@ -388,13 +389,13 @@ "DeploymentCommandArgs": { "base": null, "refs": { - "DeploymentCommand$Args": "

The arguments of those commands that take arguments. It should be set to a JSON object with the following format:

{\"arg_name1\" : [\"value1\", \"value2\", ...], \"arg_name2\" : [\"value1\", \"value2\", ...], ...}

The update_dependencies command takes two arguments:

For example, to upgrade an instance to Amazon Linux 2014.09, set Args to the following.

{ \"upgrade_os_to\":[\"Amazon Linux 2014.09\"], \"allow_reboot\":[\"true\"] } " + "DeploymentCommand$Args": "

The arguments of those commands that take arguments. It should be set to a JSON object with the following format:

{\"arg_name1\" : [\"value1\", \"value2\", ...], \"arg_name2\" : [\"value1\", \"value2\", ...], ...}

The update_dependencies command takes two arguments:

For example, to upgrade an instance to Amazon Linux 2014.09, set Args to the following.

{ \"upgrade_os_to\":[\"Amazon Linux 2014.09\"], \"allow_reboot\":[\"true\"] } " } }, "DeploymentCommandName": { "base": null, "refs": { - "DeploymentCommand$Name": "

Specifies the operation. You can specify only one command.

For stacks, the following commands are available:

For apps, the following commands are available:

" + "DeploymentCommand$Name": "

Specifies the operation. You can specify only one command.

For stacks, the following commands are available:

The update_dependencies and install_dependencies commands are supported only for Linux instances. You can run the commands successfully on Windows instances, but they do nothing.

For apps, the following commands are available:

" } }, "Deployments": { @@ -680,6 +681,16 @@ "refs": { } }, + "GrantAccessRequest": { + "base": null, + "refs": { + } + }, + "GrantAccessResult": { + "base": "

Contains the response to a GrantAccess request.

", + "refs": { + } + }, "Hour": { "base": null, "refs": { @@ -744,6 +755,7 @@ "ShutdownEventConfiguration$ExecutionTimeout": "

The time, in seconds, that AWS OpsWorks will wait after triggering a Shutdown event before shutting down an instance.

", "StackSummary$LayersCount": "

The number of layers.

", "StackSummary$AppsCount": "

The number of apps.

", + "TemporaryCredential$ValidForInMinutes": "

The length of time (in minutes) that the grant is valid. When the grant expires at the end of this period, the user will no longer be able to use the credentials to log in. If they are logged in at the time, they will be automatically logged out.

", "Volume$Size": "

The volume size.

", "Volume$Iops": "

For PIOPS volumes, the IOPS per disk.

", "VolumeConfiguration$RaidLevel": "

The volume RAID level.

", @@ -762,7 +774,7 @@ "base": null, "refs": { "CreateLayerRequest$Attributes": "

One or more user-defined key/value pairs to be added to the stack attributes.

", - "Layer$Attributes": "

The layer attributes.

", + "Layer$Attributes": "

The layer attributes.

For the HaproxyStatsPassword, MysqlRootPassword, and GangliaPassword attributes, AWS OpsWorks returns *****FILTERED***** instead of the actual value.", "UpdateLayerRequest$Attributes": "

One or more user-defined key/value pairs to be added to the stack attributes.

" } }, @@ -809,7 +821,7 @@ "base": null, "refs": { "AutoScalingThresholds$ThresholdsWaitTime": "

The amount of time, in minutes, that the load must exceed a threshold before more instances are added or removed.

", - "AutoScalingThresholds$IgnoreMetricsTime": "

The amount of time (in minutes) after a scaling event occurs that AWS OpsWorks should ignore metrics and not raise any additional scaling events. For example, AWS OpsWorks adds new instances following an upscaling event but the instances won't start reducing the load until they have been booted and configured. There is no point in raising additional scaling events during that operation, which typically takes several minutes. IgnoreMetricsTime allows you to direct AWS OpsWorks to not raise any scaling events long enough to get the new instances online.

" + "AutoScalingThresholds$IgnoreMetricsTime": "

The amount of time (in minutes) after a scaling event occurs that AWS OpsWorks should ignore metrics and suppress additional scaling events. For example, AWS OpsWorks adds new instances following an upscaling event but the instances won't start reducing the load until they have been booted and configured. There is no point in raising additional scaling events during that operation, which typically takes several minutes. IgnoreMetricsTime allows you to direct AWS OpsWorks to suppress scaling events long enough to get the new instances online.

" } }, "Parameters": { @@ -1079,11 +1091,11 @@ "CloneStackRequest$VpcId": "

The ID of the VPC that the cloned stack is to be launched into. It must be in the specified region. All instances are launched into this VPC, and you cannot change the ID later.

If the VPC ID corresponds to a default VPC and you have specified either the DefaultAvailabilityZone or the DefaultSubnetId parameter only, AWS OpsWorks infers the value of the other parameter. If you specify neither parameter, AWS OpsWorks sets these parameters to the first valid Availability Zone for the specified region and the corresponding default VPC subnet ID, respectively.

If you specify a nondefault VPC ID, note the following:

For more information on how to use AWS OpsWorks with a VPC, see Running a Stack in a VPC. For more information on default VPC and EC2 Classic, see Supported Platforms.

", "CloneStackRequest$ServiceRoleArn": "

The stack AWS Identity and Access Management (IAM) role, which allows AWS OpsWorks to work with AWS resources on your behalf. You must set this parameter to the Amazon Resource Name (ARN) for an existing IAM role. If you create a stack by using the AWS OpsWorks console, it creates the role for you. You can obtain an existing stack's IAM ARN programmatically by calling DescribePermissions. For more information about IAM ARNs, see Using Identifiers.

You must set this parameter to a valid service role ARN or the action will fail; there is no default value. You can specify the source stack's service role ARN, if you prefer, but you must do so explicitly.

", "CloneStackRequest$DefaultInstanceProfileArn": "

The ARN of an IAM profile that is the default profile for all of the stack's EC2 instances. For more information about IAM ARNs, see Using Identifiers.

", - "CloneStackRequest$DefaultOs": "

The stack's operating system, which must be set to one of the following.

The default option is the current Amazon Linux version.

", - "CloneStackRequest$HostnameTheme": "

The stack's host name theme, with spaces are replaced by underscores. The theme is used to generate host names for the stack's instances. By default, HostnameTheme is set to Layer_Dependent, which creates host names by appending integers to the layer's short name. The other themes are:

To obtain a generated host name, call GetHostNameSuggestion, which returns a host name based on the current theme.

", + "CloneStackRequest$DefaultOs": "

The stack's operating system, which must be set to one of the following.

The default option is the current Amazon Linux version.

", + "CloneStackRequest$HostnameTheme": "

The stack's host name theme, with spaces replaced by underscores. The theme is used to generate host names for the stack's instances. By default, HostnameTheme is set to Layer_Dependent, which creates host names by appending integers to the layer's short name. The other themes are:

To obtain a generated host name, call GetHostNameSuggestion, which returns a host name based on the current theme.

", "CloneStackRequest$DefaultAvailabilityZone": "

The cloned stack's default Availability Zone, which must be in the specified region. For more information, see Regions and Endpoints. If you also specify a value for DefaultSubnetId, the subnet must be in the same zone. For more information, see the VpcId parameter description.

", "CloneStackRequest$DefaultSubnetId": "

The stack's default VPC subnet ID. This parameter is required if you specify a value for the VpcId parameter. All instances are launched into this subnet unless you specify otherwise when you create the instance. If you also specify a value for DefaultAvailabilityZone, the subnet must be in that zone. For information on default values and when this parameter is required, see the VpcId parameter description.

", - "CloneStackRequest$CustomJson": "

A string that contains user-defined, custom JSON. It is used to override the corresponding default stack configuration JSON values. The string should be in the following format and must escape characters such as '\"'.:

\"{\\\"key1\\\": \\\"value1\\\", \\\"key2\\\": \\\"value2\\\",...}\"

For more information on custom JSON, see Use Custom JSON to Modify the Stack Configuration JSON

", + "CloneStackRequest$CustomJson": "

A string that contains user-defined, custom JSON. It is used to override the corresponding default stack configuration JSON values. The string should be in the following format and must escape characters such as '\"':

\"{\\\"key1\\\": \\\"value1\\\", \\\"key2\\\": \\\"value2\\\",...}\"

For more information on custom JSON, see Use Custom JSON to Modify the Stack Configuration Attributes.

", "CloneStackRequest$DefaultSshKeyName": "

A default Amazon EC2 key pair name. The default value is none. If you specify a key pair name, AWS OpsWorks installs the public key on the instance and you can use the private key with an SSH client to log in to the instance. For more information, see Using SSH to Communicate with an Instance and Managing SSH Access. You can override this setting by specifying a different key pair, or no key pair, when you create an instance.

", "CloneStackResult$StackId": "

The cloned stack ID.

", "Command$CommandId": "

The command ID.

", @@ -1091,7 +1103,7 @@ "Command$DeploymentId": "

The command deployment ID.

", "Command$Status": "

The command status:

", "Command$LogUrl": "

The URL of the command log.

", - "Command$Type": "

The command type:

", + "Command$Type": "

The command type:

", "CreateAppRequest$StackId": "

The stack ID.

", "CreateAppRequest$Shortname": "

The app's short name.

", "CreateAppRequest$Name": "

The app name.

", @@ -1100,12 +1112,12 @@ "CreateDeploymentRequest$StackId": "

The stack ID.

", "CreateDeploymentRequest$AppId": "

The app ID. This parameter is required for app deployments, but not for other deployment commands.

", "CreateDeploymentRequest$Comment": "

A user-defined comment.

", - "CreateDeploymentRequest$CustomJson": "

A string that contains user-defined, custom JSON. It is used to override the corresponding default stack configuration JSON values. The string should be in the following format and must escape characters such as '\"'.:

\"{\\\"key1\\\": \\\"value1\\\", \\\"key2\\\": \\\"value2\\\",...}\"

For more information on custom JSON, see Use Custom JSON to Modify the Stack Configuration JSON.

", + "CreateDeploymentRequest$CustomJson": "

A string that contains user-defined, custom JSON. It is used to override the corresponding default stack configuration JSON values. The string should be in the following format and must escape characters such as '\"':

\"{\\\"key1\\\": \\\"value1\\\", \\\"key2\\\": \\\"value2\\\",...}\"

For more information on custom JSON, see Use Custom JSON to Modify the Stack Configuration Attributes.

", "CreateDeploymentResult$DeploymentId": "

The deployment ID, which can be used with other requests to identify the deployment.

", "CreateInstanceRequest$StackId": "

The stack ID.

", "CreateInstanceRequest$InstanceType": "

The instance type. AWS OpsWorks supports all instance types except Cluster Compute, Cluster GPU, and High Memory Cluster. For more information, see Instance Families and Types. The parameter values that you use to specify the various types are in the API Name column of the Available Instance Types table.

", "CreateInstanceRequest$Hostname": "

The instance host name.

", - "CreateInstanceRequest$Os": "

The instance's operating system, which must be set to one of the following.

The default option is the current Amazon Linux version. If you set this parameter to Custom, you must use the CreateInstance action's AmiId parameter to specify the custom AMI that you want to use. For more information on the standard operating systems, see Operating SystemsFor more information on how to use custom AMIs with OpsWorks, see Using Custom AMIs.

", + "CreateInstanceRequest$Os": "

The instance's operating system, which must be set to one of the following.

For Windows stacks: Microsoft Windows Server 2012 R2.

For Linux stacks:

The default option is the current Amazon Linux version. If you set this parameter to Custom, you must use the CreateInstance action's AmiId parameter to specify the custom AMI that you want to use. For more information on the standard operating systems, see Operating Systems. For more information on how to use custom AMIs with OpsWorks, see Using Custom AMIs.

", "CreateInstanceRequest$AmiId": "

A custom AMI ID to be used to create the instance. The AMI should be based on one of the standard AWS OpsWorks AMIs: Amazon Linux, Ubuntu 12.04 LTS, or Ubuntu 14.04 LTS. For more information, see Instances.

If you specify a custom AMI, you must set Os to Custom.", "CreateInstanceRequest$SshKeyName": "

The instance's Amazon EC2 key pair name.

", "CreateInstanceRequest$AvailabilityZone": "

The instance Availability Zone. For more information, see Regions and Endpoints.

", @@ -1122,11 +1134,11 @@ "CreateStackRequest$VpcId": "

The ID of the VPC that the stack is to be launched into. It must be in the specified region. All instances are launched into this VPC, and you cannot change the ID later.

If the VPC ID corresponds to a default VPC and you have specified either the DefaultAvailabilityZone or the DefaultSubnetId parameter only, AWS OpsWorks infers the value of the other parameter. If you specify neither parameter, AWS OpsWorks sets these parameters to the first valid Availability Zone for the specified region and the corresponding default VPC subnet ID, respectively.

If you specify a nondefault VPC ID, note the following:

For more information on how to use AWS OpsWorks with a VPC, see Running a Stack in a VPC. For more information on default VPC and EC2 Classic, see Supported Platforms.

", "CreateStackRequest$ServiceRoleArn": "

The stack AWS Identity and Access Management (IAM) role, which allows AWS OpsWorks to work with AWS resources on your behalf. You must set this parameter to the Amazon Resource Name (ARN) for an existing IAM role. For more information about IAM ARNs, see Using Identifiers.

", "CreateStackRequest$DefaultInstanceProfileArn": "

The ARN of an IAM profile that is the default profile for all of the stack's EC2 instances. For more information about IAM ARNs, see Using Identifiers.

", - "CreateStackRequest$DefaultOs": "

The stack's operating system, which must be set to one of the following.

The default option is the current Amazon Linux version.

", - "CreateStackRequest$HostnameTheme": "

The stack's host name theme, with spaces are replaced by underscores. The theme is used to generate host names for the stack's instances. By default, HostnameTheme is set to Layer_Dependent, which creates host names by appending integers to the layer's short name. The other themes are:

To obtain a generated host name, call GetHostNameSuggestion, which returns a host name based on the current theme.

", + "CreateStackRequest$DefaultOs": "

The stack's operating system, which must be set to one of the following.

The default option is the current Amazon Linux version.

", + "CreateStackRequest$HostnameTheme": "

The stack's host name theme, with spaces replaced by underscores. The theme is used to generate host names for the stack's instances. By default, HostnameTheme is set to Layer_Dependent, which creates host names by appending integers to the layer's short name. The other themes are:

To obtain a generated host name, call GetHostNameSuggestion, which returns a host name based on the current theme.

", "CreateStackRequest$DefaultAvailabilityZone": "

The stack's default Availability Zone, which must be in the specified region. For more information, see Regions and Endpoints. If you also specify a value for DefaultSubnetId, the subnet must be in the same zone. For more information, see the VpcId parameter description.

", "CreateStackRequest$DefaultSubnetId": "

The stack's default VPC subnet ID. This parameter is required if you specify a value for the VpcId parameter. All instances are launched into this subnet unless you specify otherwise when you create the instance. If you also specify a value for DefaultAvailabilityZone, the subnet must be in that zone. For information on default values and when this parameter is required, see the VpcId parameter description.

", - "CreateStackRequest$CustomJson": "

A string that contains user-defined, custom JSON. It is used to override the corresponding default stack configuration JSON values. The string should be in the following format and must escape characters such as '\"'.:

\"{\\\"key1\\\": \\\"value1\\\", \\\"key2\\\": \\\"value2\\\",...}\"

For more information on custom JSON, see Use Custom JSON to Modify the Stack Configuration JSON.

", + "CreateStackRequest$CustomJson": "

A string that contains user-defined, custom JSON. It can be used to override the corresponding default stack configuration attribute values, or to pass data to recipes. The string should be in the following format and must escape characters such as '\"':

\"{\\\"key1\\\": \\\"value1\\\", \\\"key2\\\": \\\"value2\\\",...}\"

For more information on custom JSON, see Use Custom JSON to Modify the Stack Configuration Attributes.
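As a rough sketch, assuming the OpsWorks client from the earlier example and existing IAM ARNs, the custom JSON string can be built with json_encode so that the escaping described above is produced automatically:

    // Sketch only: create a stack with custom JSON; the name and ARNs are placeholders.
    $client->createStack([
        'Name'                      => 'MyStack',
        'Region'                    => 'us-east-1',
        'ServiceRoleArn'            => $serviceRoleArn,            // placeholder ARN
        'DefaultInstanceProfileArn' => $defaultInstanceProfileArn, // placeholder ARN
        'CustomJson'                => json_encode([
            'key1' => 'value1',
            'key2' => 'value2',
        ]),
    ]);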

", "CreateStackRequest$DefaultSshKeyName": "

A default Amazon EC2 key pair name. The default value is none. If you specify a key pair name, AWS OpsWorks installs the public key on the instance and you can use the private key with an SSH client to log in to the instance. For more information, see Using SSH to Communicate with an Instance and Managing SSH Access. You can override this setting by specifying a different key pair, or no key pair, when you create an instance.

", "CreateStackResult$StackId": "

The stack ID, which is an opaque string that you use to identify the stack when performing actions such as DescribeStacks.

", "CreateUserProfileRequest$IamUserArn": "

The user's IAM ARN.

", @@ -1147,7 +1159,7 @@ "Deployment$IamUserArn": "

The user's IAM ARN.

", "Deployment$Comment": "

A user-defined comment.

", "Deployment$Status": "

The deployment status:

", - "Deployment$CustomJson": "

A string that contains user-defined custom JSON. It is used to override the corresponding default stack configuration JSON values for stack. The string should be in the following format and must escape characters such as '\"'.:

\"{\\\"key1\\\": \\\"value1\\\", \\\"key2\\\": \\\"value2\\\",...}\"

For more information on custom JSON, see Use Custom JSON to Modify the Stack Configuration JSON.

", + "Deployment$CustomJson": "

A string that contains user-defined custom JSON. It can be used to override the corresponding default stack configuration attribute values for the stack or to pass data to recipes. The string should be in the following format and must escape characters such as '\"':

\"{\\\"key1\\\": \\\"value1\\\", \\\"key2\\\": \\\"value2\\\",...}\"

For more information on custom JSON, see Use Custom JSON to Modify the Stack Configuration Attributes.

", "DeploymentCommandArgs$key": null, "DeregisterElasticIpRequest$ElasticIp": "

The Elastic IP address.

", "DeregisterInstanceRequest$InstanceId": "

The instance ID.

", @@ -1197,13 +1209,14 @@ "GetHostnameSuggestionRequest$LayerId": "

The layer ID.

", "GetHostnameSuggestionResult$LayerId": "

The layer ID.

", "GetHostnameSuggestionResult$Hostname": "

The generated host name.

", + "GrantAccessRequest$InstanceId": "

The instance's AWS OpsWorks ID.

", "Instance$InstanceId": "

The instance ID.

", "Instance$Ec2InstanceId": "

The ID of the associated Amazon EC2 instance.

", "Instance$Hostname": "

The instance host name.

", "Instance$StackId": "

The stack ID.

", "Instance$InstanceType": "

The instance type. AWS OpsWorks supports all instance types except Cluster Compute, Cluster GPU, and High Memory Cluster. For more information, see Instance Families and Types. The parameter values that specify the various types are in the API Name column of the Available Instance Types table.

", "Instance$InstanceProfileArn": "

The ARN of the instance's IAM profile. For more information about IAM ARNs, see Using Identifiers.

", - "Instance$Status": "

The instance status:

", + "Instance$Status": "

The instance status:

", "Instance$Os": "

The instance's operating system.

", "Instance$AmiId": "

A custom AMI ID to be used to create the instance. The AMI should be based on one of the standard AWS OpsWorks AMIs: Amazon Linux, Ubuntu 12.04 LTS, or Ubuntu 14.04 LTS. For more information, see Instances.

", "Instance$AvailabilityZone": "

The instance Availability Zone. For more information, see Regions and Endpoints.

", @@ -1234,7 +1247,7 @@ "Parameters$value": null, "Permission$StackId": "

A stack ID.

", "Permission$IamUserArn": "

The Amazon Resource Name (ARN) for an AWS Identity and Access Management (IAM) role. For more information about IAM ARNs, see Using Identifiers.

", - "Permission$Level": "

The user's permission level, which must be the following:

For more information on the permissions associated with these levels, see Managing User Permissions

", + "Permission$Level": "

The user's permission level, which must be one of the following:

For more information on the permissions associated with these levels, see Managing User Permissions.

", "RaidArray$RaidArrayId": "

The array ID.

", "RaidArray$InstanceId": "

The instance ID.

", "RaidArray$Name": "

The array name.

", @@ -1246,7 +1259,7 @@ "RdsDbInstance$RdsDbInstanceArn": "

The instance's ARN.

", "RdsDbInstance$DbInstanceIdentifier": "

The DB instance identifier.

", "RdsDbInstance$DbUser": "

The master user name.

", - "RdsDbInstance$DbPassword": "

The database password.

", + "RdsDbInstance$DbPassword": "

AWS OpsWorks returns *****FILTERED***** instead of the actual value.

", "RdsDbInstance$Region": "

The instance's AWS region.

", "RdsDbInstance$Address": "

The instance's address.

", "RdsDbInstance$Engine": "

The instance's database engine.

", @@ -1285,12 +1298,12 @@ "SetLoadBasedAutoScalingRequest$LayerId": "

The layer ID.

", "SetPermissionRequest$StackId": "

The stack ID.

", "SetPermissionRequest$IamUserArn": "

The user's IAM ARN.

", - "SetPermissionRequest$Level": "

The user's permission level, which must be set to one of the following strings. You cannot set your own permissions level.

For more information on the permissions associated with these levels, see Managing User Permissions

", + "SetPermissionRequest$Level": "

The user's permission level, which must be set to one of the following strings. You cannot set your own permissions level.

For more information on the permissions associated with these levels, see Managing User Permissions.

", "SetTimeBasedAutoScalingRequest$InstanceId": "

The instance ID.

", "Source$Url": "

The source URL.

", "Source$Username": "

This parameter depends on the repository type.

", - "Source$Password": "

This parameter depends on the repository type.

For more information on how to safely handle IAM credentials, see .

", - "Source$SshKey": "

The repository's SSH key.

", + "Source$Password": "

When included in a request, the parameter depends on the repository type.

For more information on how to safely handle IAM credentials, see .

In responses, AWS OpsWorks returns *****FILTERED***** instead of the actual value.

", + "Source$SshKey": "

In requests, the repository's SSH key.

In responses, AWS OpsWorks returns *****FILTERED***** instead of the actual value.

", "Source$Revision": "

The application's version. AWS OpsWorks enables you to easily deploy new versions of an application. One of the simplest approaches is to have branches or revisions in your repository that represent different versions that can potentially be deployed.

", "SslConfiguration$Certificate": "

The contents of the certificate's domain.crt file.

", "SslConfiguration$PrivateKey": "

The private key; the contents of the certificate's domain.kex file.

", @@ -1306,7 +1319,7 @@ "Stack$HostnameTheme": "

The stack host name theme, with spaces replaced by underscores.

", "Stack$DefaultAvailabilityZone": "

The stack's default Availability Zone. For more information, see Regions and Endpoints.

", "Stack$DefaultSubnetId": "

The default subnet ID, if the stack is running in a VPC.

", - "Stack$CustomJson": "

A string that contains user-defined, custom JSON. It is used to override the corresponding default stack configuration JSON values. The string should be in the following format and must escape characters such as '\"'.:

\"{\\\"key1\\\": \\\"value1\\\", \\\"key2\\\": \\\"value2\\\",...}\"

For more information on custom JSON, see Use Custom JSON to Modify the Stack Configuration JSON.

", + "Stack$CustomJson": "

A string that contains user-defined, custom JSON. It can be used to override the corresponding default stack configuration JSON values or to pass data to recipes. The string should be in the following format and must escape characters such as '\"':

\"{\\\"key1\\\": \\\"value1\\\", \\\"key2\\\": \\\"value2\\\",...}\"

For more information on custom JSON, see Use Custom JSON to Modify the Stack Configuration Attributes.

", "Stack$DefaultSshKeyName": "

A default Amazon EC2 key pair for the stack's instances. You can override this value when you create or update an instance.

", "StackAttributes$value": null, "StackConfigurationManager$Name": "

The name. This parameter must be set to \"Chef\".

", @@ -1319,6 +1332,9 @@ "StopInstanceRequest$InstanceId": "

The instance ID.

", "StopStackRequest$StackId": "

The stack ID.

", "Strings$member": null, + "TemporaryCredential$Username": "

The user name.

", + "TemporaryCredential$Password": "

The password.

", + "TemporaryCredential$InstanceId": "

The instance's AWS OpsWorks ID.

", "TimeBasedAutoScalingConfiguration$InstanceId": "

The instance ID.

", "UnassignInstanceRequest$InstanceId": "

The instance ID.

", "UnassignVolumeRequest$VolumeId": "

The volume ID.

", @@ -1330,7 +1346,7 @@ "UpdateInstanceRequest$InstanceId": "

The instance ID.

", "UpdateInstanceRequest$InstanceType": "

The instance type. AWS OpsWorks supports all instance types except Cluster Compute, Cluster GPU, and High Memory Cluster. For more information, see Instance Families and Types. The parameter values that you use to specify the various types are in the API Name column of the Available Instance Types table.

", "UpdateInstanceRequest$Hostname": "

The instance host name.

", - "UpdateInstanceRequest$Os": "

The instance's operating system, which must be set to one of the following.

The default option is the current Amazon Linux version, such as Amazon Linux 2014.09. If you set this parameter to Custom, you must use the CreateInstance action's AmiId parameter to specify the custom AMI that you want to use. For more information on the standard operating systems, see Operating SystemsFor more information on how to use custom AMIs with OpsWorks, see Using Custom AMIs.

", + "UpdateInstanceRequest$Os": "

The instance's operating system, which must be set to one of the following.

For Windows stacks: Microsoft Windows Server 2012 R2.

For Linux stacks:

The default option is the current Amazon Linux version. If you set this parameter to Custom, you must use the CreateInstance action's AmiId parameter to specify the custom AMI that you want to use. For more information on the standard operating systems, see Operating Systems. For more information on how to use custom AMIs with OpsWorks, see Using Custom AMIs.

", "UpdateInstanceRequest$AmiId": "

A custom AMI ID to be used to create the instance. The AMI should be based on one of the standard AWS OpsWorks AMIs: Amazon Linux, Ubuntu 12.04 LTS, or Ubuntu 14.04 LTS. For more information, see Instances

If you specify a custom AMI, you must set Os to Custom.", "UpdateInstanceRequest$SshKeyName": "

The instance's Amazon EC2 key name.

", "UpdateLayerRequest$LayerId": "

The layer ID.

", @@ -1345,11 +1361,11 @@ "UpdateStackRequest$Name": "

The stack's new name.

", "UpdateStackRequest$ServiceRoleArn": "

The stack AWS Identity and Access Management (IAM) role, which allows AWS OpsWorks to work with AWS resources on your behalf. You must set this parameter to the Amazon Resource Name (ARN) for an existing IAM role. For more information about IAM ARNs, see Using Identifiers.

You must set this parameter to a valid service role ARN or the action will fail; there is no default value. You can specify the stack's current service role ARN, if you prefer, but you must do so explicitly.

", "UpdateStackRequest$DefaultInstanceProfileArn": "

The ARN of an IAM profile that is the default profile for all of the stack's EC2 instances. For more information about IAM ARNs, see Using Identifiers.

", - "UpdateStackRequest$DefaultOs": "

The stack's operating system, which must be set to one of the following.

The default option is the current Amazon Linux version.

", - "UpdateStackRequest$HostnameTheme": "

The stack's new host name theme, with spaces are replaced by underscores. The theme is used to generate host names for the stack's instances. By default, HostnameTheme is set to Layer_Dependent, which creates host names by appending integers to the layer's short name. The other themes are:

To obtain a generated host name, call GetHostNameSuggestion, which returns a host name based on the current theme.

", + "UpdateStackRequest$DefaultOs": "

The stack's operating system, which must be set to one of the following.

The default option is the current Amazon Linux version.

", + "UpdateStackRequest$HostnameTheme": "

The stack's new host name theme, with spaces replaced by underscores. The theme is used to generate host names for the stack's instances. By default, HostnameTheme is set to Layer_Dependent, which creates host names by appending integers to the layer's short name. The other themes are:

To obtain a generated host name, call GetHostNameSuggestion, which returns a host name based on the current theme.

", "UpdateStackRequest$DefaultAvailabilityZone": "

The stack's default Availability Zone, which must be in the specified region. For more information, see Regions and Endpoints. If you also specify a value for DefaultSubnetId, the subnet must be in the same zone. For more information, see CreateStack.

", "UpdateStackRequest$DefaultSubnetId": "

The stack's default VPC subnet ID. This parameter is required if you specify a value for the VpcId parameter. All instances are launched into this subnet unless you specify otherwise when you create the instance. If you also specify a value for DefaultAvailabilityZone, the subnet must be in that zone. For information on default values and when this parameter is required, see the VpcId parameter description.

", - "UpdateStackRequest$CustomJson": "

A string that contains user-defined, custom JSON. It is used to override the corresponding default stack configuration JSON values. The string should be in the following format and must escape characters such as '\"'.:

\"{\\\"key1\\\": \\\"value1\\\", \\\"key2\\\": \\\"value2\\\",...}\"

For more information on custom JSON, see Use Custom JSON to Modify the Stack Configuration JSON.

", + "UpdateStackRequest$CustomJson": "

A string that contains user-defined, custom JSON. It can be used to override the corresponding default stack configuration JSON values or to pass data to recipes. The string should be in the following format and must escape characters such as '\"':

\"{\\\"key1\\\": \\\"value1\\\", \\\"key2\\\": \\\"value2\\\",...}\"

For more information on custom JSON, see Use Custom JSON to Modify the Stack Configuration Attributes.

", "UpdateStackRequest$DefaultSshKeyName": "

A default Amazon EC2 key pair name. The default value is none. If you specify a key pair name, AWS OpsWorks installs the public key on the instance and you can use the private key with an SSH client to log in to the instance. For more information, see Using SSH to Communicate with an Instance and Managing SSH Access. You can override this setting by specifying a different key pair, or no key pair, when you create an instance.

", "UpdateUserProfileRequest$IamUserArn": "

The user IAM ARN.

", "UpdateUserProfileRequest$SshUsername": "

The user's SSH user name. The allowable characters are [a-z], [A-Z], [0-9], '-', and '_'. If the specified name includes other punctuation marks, AWS OpsWorks removes them. For example, my.name will be changed to myname. If you do not specify an SSH user name, AWS OpsWorks generates one from the IAM user name.

", @@ -1382,6 +1398,7 @@ "refs": { "App$Domains": "

The app vhost settings with multiple domains separated by commas. For example: 'www.example.com, example.com'

", "AssignInstanceRequest$LayerIds": "

The layer ID, which must correspond to a custom layer. You cannot assign a registered instance to a built-in layer.

", + "AutoScalingThresholds$Alarms": "

Custom CloudWatch auto scaling alarms, to be used as thresholds. This parameter takes a list of up to five alarm names, which are case sensitive and must be in the same region as the stack.

To use custom alarms, you must update your service role to allow cloudwatch:DescribeAlarms. You can either have AWS OpsWorks update the role for you when you first use this feature or you can edit the role manually. For more information, see Allowing AWS OpsWorks to Act on Your Behalf.", "CloneStackRequest$CloneAppIds": "

A list of source stack app IDs to be included in the cloned stack.

", "CreateAppRequest$Domains": "

The app virtual host settings, with multiple domains separated by commas. For example: 'www.example.com, example.com'

", "CreateDeploymentRequest$InstanceIds": "

The instance IDs for the deployment targets.

", @@ -1430,6 +1447,12 @@ "DailyAutoScalingSchedule$value": null } }, + "TemporaryCredential": { + "base": "

Contains the data needed by RDP clients such as the Microsoft Remote Desktop Connection to log in to the instance.

", + "refs": { + "GrantAccessResult$TemporaryCredential": "

A TemporaryCredential object that contains the data that RDP clients, such as the Microsoft Remote Desktop Connection, need to log in to the instance.

" + } + }, "TimeBasedAutoScalingConfiguration": { "base": "

Describes an instance's time-based auto scaling configuration.

", "refs": { @@ -1509,6 +1532,12 @@ "DescribeUserProfilesResult$UserProfiles": "

A Users object that describes the specified users.

" } }, + "ValidForInMinutes": { + "base": null, + "refs": { + "GrantAccessRequest$ValidForInMinutes": "

The length of time (in minutes) that the grant is valid. When the grant expires at the end of this period, the user will no longer be able to use the credentials to log in. If the user is logged in at the time, he or she will be automatically logged out.
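As a rough sketch, assuming the same OpsWorks client and a placeholder instance ID, the grant might be requested and its temporary credential read like this:

    // Sketch only: request a 60-minute RDP grant for a Windows instance.
    $result = $client->grantAccess([
        'InstanceId'        => $instanceId, // the instance's AWS OpsWorks ID (placeholder)
        'ValidForInMinutes' => 60,
    ]);

    $credential = $result['TemporaryCredential'];
    // $credential['Username'], $credential['Password'], and $credential['InstanceId']
    // hold the values described by the TemporaryCredential shape.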

" + } + }, "ValidationException": { "base": "

Indicates that a request was invalid.

", "refs": { @@ -1553,7 +1582,7 @@ } }, "WeeklyAutoScalingSchedule": { - "base": "

Describes a time-based instance's auto scaling schedule. The schedule consists of a set of key-value pairs.

The default setting for all time periods is off, so you use the following parameters primarily to specify the online periods. You don't have to explicitly specify offline periods unless you want to change an online period to an offline period.

The following example specifies that the instance should be online for four hours, from UTC 1200 - 1600. It will be off for the remainder of the day.

{ \"12\":\"on\", \"13\":\"on\", \"14\":\"on\", \"15\":\"on\" }

", + "base": "

Describes a time-based instance's auto scaling schedule. The schedule consists of a set of key-value pairs.

The default setting for all time periods is off, so you use the following parameters primarily to specify the online periods. You don't have to explicitly specify offline periods unless you want to change an online period to an offline period.

The following example specifies that the instance should be online for four hours, from UTC 1200 - 1600. It will be off for the remainder of the day.

{ \"12\":\"on\", \"13\":\"on\", \"14\":\"on\", \"15\":\"on\" }

", "refs": { "SetTimeBasedAutoScalingRequest$AutoScalingSchedule": "

An AutoScalingSchedule with the instance schedule.

", "TimeBasedAutoScalingConfiguration$AutoScalingSchedule": "

A WeeklyAutoScalingSchedule object with the instance schedule.

" diff --git a/src/data/route53domains/2014-05-15/api-2.json b/src/data/route53domains/2014-05-15/api-2.json index deea521fd1..40359319f5 100644 --- a/src/data/route53domains/2014-05-15/api-2.json +++ b/src/data/route53domains/2014-05-15/api-2.json @@ -49,6 +49,11 @@ "shape":"OperationLimitExceeded", "error":{"httpStatusCode":400}, "exception":true + }, + { + "shape":"UnsupportedTLD", + "error":{"httpStatusCode":400}, + "exception":true } ] }, @@ -65,6 +70,11 @@ "shape":"InvalidInput", "error":{"httpStatusCode":400}, "exception":true + }, + { + "shape":"UnsupportedTLD", + "error":{"httpStatusCode":400}, + "exception":true } ] }, @@ -96,6 +106,11 @@ "shape":"OperationLimitExceeded", "error":{"httpStatusCode":400}, "exception":true + }, + { + "shape":"UnsupportedTLD", + "error":{"httpStatusCode":400}, + "exception":true } ] }, @@ -112,6 +127,11 @@ "shape":"InvalidInput", "error":{"httpStatusCode":400}, "exception":true + }, + { + "shape":"UnsupportedTLD", + "error":{"httpStatusCode":400}, + "exception":true } ] }, @@ -143,6 +163,11 @@ "shape":"OperationLimitExceeded", "error":{"httpStatusCode":400}, "exception":true + }, + { + "shape":"UnsupportedTLD", + "error":{"httpStatusCode":400}, + "exception":true } ] }, @@ -159,6 +184,11 @@ "shape":"InvalidInput", "error":{"httpStatusCode":400}, "exception":true + }, + { + "shape":"UnsupportedTLD", + "error":{"httpStatusCode":400}, + "exception":true } ] }, @@ -228,6 +258,11 @@ "shape":"OperationLimitExceeded", "error":{"httpStatusCode":400}, "exception":true + }, + { + "shape":"UnsupportedTLD", + "error":{"httpStatusCode":400}, + "exception":true } ] }, @@ -285,6 +320,11 @@ "shape":"InvalidInput", "error":{"httpStatusCode":400}, "exception":true + }, + { + "shape":"UnsupportedTLD", + "error":{"httpStatusCode":400}, + "exception":true } ] }, @@ -357,6 +397,11 @@ "shape":"OperationLimitExceeded", "error":{"httpStatusCode":400}, "exception":true + }, + { + "shape":"UnsupportedTLD", + "error":{"httpStatusCode":400}, + "exception":true } ] }, @@ -388,6 +433,11 @@ "shape":"OperationLimitExceeded", "error":{"httpStatusCode":400}, "exception":true + }, + { + "shape":"UnsupportedTLD", + "error":{"httpStatusCode":400}, + "exception":true } ] }, @@ -419,6 +469,11 @@ "shape":"OperationLimitExceeded", "error":{"httpStatusCode":400}, "exception":true + }, + { + "shape":"UnsupportedTLD", + "error":{"httpStatusCode":400}, + "exception":true } ] }, @@ -440,6 +495,11 @@ "shape":"OperationLimitExceeded", "error":{"httpStatusCode":400}, "exception":true + }, + { + "shape":"UnsupportedTLD", + "error":{"httpStatusCode":400}, + "exception":true } ] } @@ -798,7 +858,8 @@ "UNAVAILABLE", "UNAVAILABLE_PREMIUM", "UNAVAILABLE_RESTRICTED", - "RESERVED" + "RESERVED", + "DONT_KNOW" ] }, "DomainLimitExceeded":{ diff --git a/src/data/route53domains/2014-05-15/docs-2.json b/src/data/route53domains/2014-05-15/docs-2.json index 33e44f1885..17f9937771 100644 --- a/src/data/route53domains/2014-05-15/docs-2.json +++ b/src/data/route53domains/2014-05-15/docs-2.json @@ -158,7 +158,7 @@ "DomainAvailability": { "base": null, "refs": { - "CheckDomainAvailabilityResponse$Availability": "

Whether the domain name is available for registering.

You can only register domains designated as AVAILABLE.

Type: String

Valid values:

" + "CheckDomainAvailabilityResponse$Availability": "

Whether the domain name is available for registering.

You can only register domains designated as AVAILABLE.

Type: String

Valid values:

" } }, "DomainLimitExceeded": { @@ -521,7 +521,7 @@ "TagKey": { "base": null, "refs": { - "Tag$Key": "

The key (name) of a tag.

Type: String

Default: None

Valid values: A-Z, a-z, 0-9, space, \".:/=+\\-%@\"

Constraints: Each key can be 1-128 characters long.

Required: Yes

", + "Tag$Key": "

The key (name) of a tag.

Type: String

Default: None

Valid values: A-Z, a-z, 0-9, space, \".:/=+\\-@\"

Constraints: Each key can be 1-128 characters long.

Required: Yes

", "TagKeyList$member": null } }, @@ -535,13 +535,13 @@ "base": null, "refs": { "ListTagsForDomainResponse$TagList": "

A list of the tags that are associated with the specified domain.

Type: A complex type containing a list of tags

Each tag includes the following elements.

", - "UpdateTagsForDomainRequest$TagsToUpdate": "

A list of the tag keys and values that you want to add or update. If you specify a key that already exists, the corresponding value will be replaced.

Type: A complex type containing a list of tags

Default: None

Required: No


Each tag includes the following elements:

" + "UpdateTagsForDomainRequest$TagsToUpdate": "

A list of the tag keys and values that you want to add or update. If you specify a key that already exists, the corresponding value will be replaced.

Type: A complex type containing a list of tags

Default: None

Required: No


Each tag includes the following elements:
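As a rough sketch, assuming an AWS SDK for PHP v3 Route 53 Domains client and placeholder domain, keys, and values, the list might be passed like this:

    // Sketch only: add or update two tags on a domain.
    use Aws\Route53Domains\Route53DomainsClient;

    $r53Domains = new Route53DomainsClient([
        'region'  => 'us-east-1',
        'version' => 'latest',
    ]);

    $r53Domains->updateTagsForDomain([
        'DomainName'   => 'example.com', // placeholder domain
        'TagsToUpdate' => [
            ['Key' => 'Environment', 'Value' => 'Test'],
            ['Key' => 'Owner',       'Value' => 'backend-team'],
        ],
    ]);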

" } }, "TagValue": { "base": null, "refs": { - "Tag$Value": "

The value of a tag.

Type: String

Default: None

Valid values: A-Z, a-z, 0-9, space, \".:/=+\\-%@\"

Constraints: Each value can be 0-256 characters long.

Required: Yes

" + "Tag$Value": "

The value of a tag.

Type: String

Default: None

Valid values: A-Z, a-z, 0-9, space, \".:/=+\\-@\"

Constraints: Each value can be 0-256 characters long.

Required: Yes

" } }, "Timestamp": { diff --git a/src/data/s3/2006-03-01/waiters-2.json b/src/data/s3/2006-03-01/waiters-2.json index 754fb5f66d..b508a8f5b0 100644 --- a/src/data/s3/2006-03-01/waiters-2.json +++ b/src/data/s3/2006-03-01/waiters-2.json @@ -11,6 +11,11 @@ "matcher": "status", "state": "success" }, + { + "expected": 301, + "matcher": "status", + "state": "success" + }, { "expected": 403, "matcher": "status", diff --git a/src/data/sts/2011-06-15/api-2.json b/src/data/sts/2011-06-15/api-2.json index f9ea82cd1f..309a2225d1 100644 --- a/src/data/sts/2011-06-15/api-2.json +++ b/src/data/sts/2011-06-15/api-2.json @@ -1,4 +1,5 @@ { + "version":"2.0", "metadata":{ "apiVersion":"2011-06-15", "endpointPrefix":"sts", @@ -531,8 +532,8 @@ "externalIdType":{ "type":"string", "min":2, - "max":96, - "pattern":"[\\w+=,.@:-]*" + "max":1224, + "pattern":"[\\w+=,.@:\\/-]*" }, "federatedIdType":{ "type":"string", diff --git a/src/data/sts/2011-06-15/docs-2.json b/src/data/sts/2011-06-15/docs-2.json index 12d62d55d4..f1121a9e87 100644 --- a/src/data/sts/2011-06-15/docs-2.json +++ b/src/data/sts/2011-06-15/docs-2.json @@ -1,13 +1,14 @@ { + "version": "2.0", "operations": { "AssumeRole": "

Returns a set of temporary security credentials (consisting of an access key ID, a secret access key, and a security token) that you can use to access AWS resources that you might not normally have access to. Typically, you use AssumeRole for cross-account access or federation.

Important: You cannot call AssumeRole by using AWS account credentials; access will be denied. You must use IAM user credentials or temporary security credentials to call AssumeRole.

For cross-account access, imagine that you own multiple accounts and need to access resources in each account. You could create long-term credentials in each account to access those resources. However, managing all those credentials and remembering which one can access which account can be time consuming. Instead, you can create one set of long-term credentials in one account and then use temporary security credentials to access all the other accounts by assuming roles in those accounts. For more information about roles, see Roles in Using IAM.

For federation, you can, for example, grant single sign-on access to the AWS Management Console. If you already have an identity and authentication system in your corporate network, you don't have to recreate user identities in AWS in order to grant those user identities access to AWS. Instead, after a user has been authenticated, you call AssumeRole (and specify the role with the appropriate permissions) to get temporary security credentials for that user. With those temporary security credentials, you construct a sign-in URL that users can use to access the console. For more information, see Scenarios for Granting Temporary Access in Using Temporary Security Credentials.

The temporary security credentials are valid for the duration that you specified when calling AssumeRole, which can be from 900 seconds (15 minutes) to 3600 seconds (1 hour). The default is 1 hour.

Optionally, you can pass an IAM access policy to this operation. If you choose not to pass a policy, the temporary security credentials that are returned by the operation have the permissions that are defined in the access policy of the role that is being assumed. If you pass a policy to this operation, the temporary security credentials that are returned by the operation have the permissions that are allowed by both the access policy of the role that is being assumed, and the policy that you pass. This gives you a way to further restrict the permissions for the resulting temporary security credentials. You cannot use the passed policy to grant permissions that are in excess of those allowed by the access policy of the role that is being assumed. For more information, see Permissions for AssumeRole in Using Temporary Security Credentials.

To assume a role, your AWS account must be trusted by the role. The trust relationship is defined in the role's trust policy when the role is created. You must also have a policy that allows you to call sts:AssumeRole.

Using MFA with AssumeRole

You can optionally include multi-factor authentication (MFA) information when you call AssumeRole. This is useful for cross-account scenarios in which you want to make sure that the user who is assuming the role has been authenticated using an AWS MFA device. In that scenario, the trust policy of the role being assumed includes a condition that tests for MFA authentication; if the caller does not include valid MFA information, the request to assume the role is denied. The condition in a trust policy that tests for MFA authentication might look like the following example.

\"Condition\": {\"Null\": {\"aws:MultiFactorAuthAge\": false}}

For more information, see Configuring MFA-Protected API Access in the Using IAM guide.

To use MFA with AssumeRole, you pass values for the SerialNumber and TokenCode parameters. The SerialNumber value identifies the user's hardware or virtual MFA device. The TokenCode is the time-based one-time password (TOTP) that the MFA device produces.
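As a rough sketch, assuming an AWS SDK for PHP v3 STS client and placeholder role and MFA values, those parameters map to the call like this:

    // Sketch only: assume a role whose trust policy requires MFA.
    use Aws\Sts\StsClient;

    $sts = new StsClient([
        'region'  => 'us-east-1',
        'version' => 'latest',
    ]);

    $result = $sts->assumeRole([
        'RoleArn'         => 'arn:aws:iam::123456789012:role/example-role', // placeholder
        'RoleSessionName' => 'mfa-session',
        'SerialNumber'    => $mfaSerialNumber, // hardware or virtual MFA device (placeholder)
        'TokenCode'       => $totpCode,        // current one-time password (placeholder)
        'DurationSeconds' => 3600,
    ]);

    $credentials = $result['Credentials']; // AccessKeyId, SecretAccessKey, SessionToken, Expiration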

", - "AssumeRoleWithSAML": "

Returns a set of temporary security credentials for users who have been authenticated via a SAML authentication response. This operation provides a mechanism for tying an enterprise identity store or directory to role-based AWS access without user-specific credentials or configuration.

The temporary security credentials returned by this operation consist of an access key ID, a secret access key, and a security token. Applications can use these temporary security credentials to sign calls to AWS services. The credentials are valid for the duration that you specified when calling AssumeRoleWithSAML, which can be up to 3600 seconds (1 hour) or until the time specified in the SAML authentication response's NotOnOrAfter value, whichever is shorter.

Optionally, you can pass an IAM access policy to this operation. If you choose not to pass a policy, the temporary security credentials that are returned by the operation have the permissions that are defined in the access policy of the role that is being assumed. If you pass a policy to this operation, the temporary security credentials that are returned by the operation have the permissions that are allowed by both the access policy of the role that is being assumed, and the policy that you pass. This gives you a way to further restrict the permissions for the resulting temporary security credentials. You cannot use the passed policy to grant permissions that are in excess of those allowed by the access policy of the role that is being assumed. For more information, see Permissions for AssumeRoleWithSAML in Using Temporary Security Credentials.

Before your application can call AssumeRoleWithSAML, you must configure your SAML identity provider (IdP) to issue the claims required by AWS. Additionally, you must use AWS Identity and Access Management (IAM) to create a SAML provider entity in your AWS account that represents your identity provider, and create an IAM role that specifies this SAML provider in its trust policy.

Calling AssumeRoleWithSAML does not require the use of AWS security credentials. The identity of the caller is validated by using keys in the metadata document that is uploaded for the SAML provider entity for your identity provider.

For more information, see the following resources:

", - "AssumeRoleWithWebIdentity": "

Returns a set of temporary security credentials for users who have been authenticated in a mobile or web application with a web identity provider, such as Login with Amazon, Amazon Cognito, Facebook, or Google.

Calling AssumeRoleWithWebIdentity does not require the use of AWS security credentials. Therefore, you can distribute an application (for example, on mobile devices) that requests temporary security credentials without including long-term AWS credentials in the application, and without deploying server-based proxy services that use long-term AWS credentials. Instead, the identity of the caller is validated by using a token from the web identity provider.

The temporary security credentials returned by this API consist of an access key ID, a secret access key, and a security token. Applications can use these temporary security credentials to sign calls to AWS service APIs. The credentials are valid for the duration that you specified when calling AssumeRoleWithWebIdentity, which can be from 900 seconds (15 minutes) to 3600 seconds (1 hour). By default, the temporary security credentials are valid for 1 hour.

Optionally, you can pass an IAM access policy to this operation. If you choose not to pass a policy, the temporary security credentials that are returned by the operation have the permissions that are defined in the access policy of the role that is being assumed. If you pass a policy to this operation, the temporary security credentials that are returned by the operation have the permissions that are allowed by both the access policy of the role that is being assumed, and the policy that you pass. This gives you a way to further restrict the permissions for the resulting temporary security credentials. You cannot use the passed policy to grant permissions that are in excess of those allowed by the access policy of the role that is being assumed. For more information, see Permissions for AssumeRoleWithWebIdentity in Using Temporary Security Credentials.

Before your application can call AssumeRoleWithWebIdentity, you must have an identity token from a supported identity provider and create a role that the application can assume. The role that your application assumes must trust the identity provider that is associated with the identity token. In other words, the identity provider must be specified in the role's trust policy.

For more information about how to use web identity federation and the AssumeRoleWithWebIdentity, see the following resources:

", - "DecodeAuthorizationMessage": "

Decodes additional information about the authorization status of a request from an encoded message returned in response to an AWS request.

For example, if a user is not authorized to perform an action that he or she has requested, the request returns a Client.UnauthorizedOperation response (an HTTP 403 response). Some AWS actions additionally return an encoded message that can provide details about this authorization failure.

The message is encoded because the details of the authorization status can constitute privileged information that the user who requested the action should not see. To decode an authorization status message, a user must be granted permissions via an IAM policy to request the DecodeAuthorizationMessage (sts:DecodeAuthorizationMessage) action.

The decoded message includes the following type of information:

", - "GetFederationToken": "

Returns a set of temporary security credentials (consisting of an access key ID, a secret access key, and a security token) for a federated user. A typical use is in a proxy application that gets temporary security credentials on behalf of distributed applications inside a corporate network. Because you must call the GetFederationToken action using the long-term security credentials of an IAM user, this call is appropriate in contexts where those credentials can be safely stored, usually in a server-based application.

Note: Do not use this call in mobile applications or client-based web applications that directly get temporary security credentials. For those types of applications, use AssumeRoleWithWebIdentity.

The GetFederationToken action must be called by using the long-term AWS security credentials of an IAM user. You can also call GetFederationToken using the security credentials of an AWS account (root), but this is not recommended. Instead, we recommend that you create an IAM user for the purpose of the proxy application and then attach a policy to the IAM user that limits federated users to only the actions and resources they need access to. For more information, see IAM Best Practices in Using IAM.

The temporary security credentials that are obtained by using the long-term credentials of an IAM user are valid for the specified duration, between 900 seconds (15 minutes) and 129600 seconds (36 hours). Temporary credentials that are obtained by using AWS account (root) credentials have a maximum duration of 3600 seconds (1 hour)

Permissions

The permissions for the temporary security credentials returned by GetFederationToken are determined by a combination of the following:

The passed policy is attached to the temporary security credentials that result from the GetFederationToken API call--that is, to the federated user. When the federated user makes an AWS request, AWS evaluates the policy attached to the federated user in combination with the policy or policies attached to the IAM user whose credentials were used to call GetFederationToken. AWS allows the federated user's request only when both the federated user and the IAM user are explicitly allowed to perform the requested action. The passed policy cannot grant more permissions than those that are defined in the IAM user policy.

A typical use case is that the permissions of the IAM user whose credentials are used to call GetFederationToken are designed to allow access to all the actions and resources that any federated user will need. Then, for individual users, you pass a policy to the operation that scopes down the permissions to a level that's appropriate to that individual user, using a policy that allows only a subset of permissions that are granted to the IAM user.

If you do not pass a policy, the resulting temporary security credentials have no effective permissions. The only exception is when the temporary security credentials are used to access a resource that has a resource-based policy that specifically allows the federated user to access the resource.

For more information about how permissions work, see Permissions for GetFederationToken in Using Temporary Security Credentials. For information about using GetFederationToken to create temporary security credentials, see Creating Temporary Credentials to Enable Access for Federated Users in Using Temporary Security Credentials.

", - "GetSessionToken": "

Returns a set of temporary credentials for an AWS account or IAM user. The credentials consist of an access key ID, a secret access key, and a security token. Typically, you use GetSessionToken if you want to use MFA to protect programmatic calls to specific AWS APIs like Amazon EC2 StopInstances. MFA-enabled IAM users would need to call GetSessionToken and submit an MFA code that is associated with their MFA device. Using the temporary security credentials that are returned from the call, IAM users can then make programmatic calls to APIs that require MFA authentication.

The GetSessionToken action must be called by using the long-term AWS security credentials of the AWS account or an IAM user. Credentials that are created by IAM users are valid for the duration that you specify, between 900 seconds (15 minutes) and 129600 seconds (36 hours); credentials that are created by using account credentials have a maximum duration of 3600 seconds (1 hour).

The permissions associated with the temporary security credentials returned by GetSessionToken are based on the permissions associated with account or IAM user whose credentials are used to call the action. If GetSessionToken is called using root account credentials, the temporary credentials have root account permissions. Similarly, if GetSessionToken is called using the credentials of an IAM user, the temporary credentials have the same permissions as the IAM user.

For more information about using GetSessionToken to create temporary credentials, go to Creating Temporary Credentials to Enable Access for IAM Users in Using Temporary Security Credentials.

" + "AssumeRoleWithSAML": "

Returns a set of temporary security credentials for users who have been authenticated via a SAML authentication response. This operation provides a mechanism for tying an enterprise identity store or directory to role-based AWS access without user-specific credentials or configuration.

The temporary security credentials returned by this operation consist of an access key ID, a secret access key, and a security token. Applications can use these temporary security credentials to sign calls to AWS services. The credentials are valid for the duration that you specified when calling AssumeRoleWithSAML, which can be up to 3600 seconds (1 hour) or until the time specified in the SAML authentication response's SessionNotOnOrAfter value, whichever is shorter.

The maximum duration for a session is 1 hour, and the minimum duration is 15 minutes, even if values outside this range are specified.

Optionally, you can pass an IAM access policy to this operation. If you choose not to pass a policy, the temporary security credentials that are returned by the operation have the permissions that are defined in the access policy of the role that is being assumed. If you pass a policy to this operation, the temporary security credentials that are returned by the operation have the permissions that are allowed by both the access policy of the role that is being assumed, and the policy that you pass. This gives you a way to further restrict the permissions for the resulting temporary security credentials. You cannot use the passed policy to grant permissions that are in excess of those allowed by the access policy of the role that is being assumed. For more information, see Permissions for AssumeRoleWithSAML in Using Temporary Security Credentials.

Before your application can call AssumeRoleWithSAML, you must configure your SAML identity provider (IdP) to issue the claims required by AWS. Additionally, you must use AWS Identity and Access Management (IAM) to create a SAML provider entity in your AWS account that represents your identity provider, and create an IAM role that specifies this SAML provider in its trust policy.

Calling AssumeRoleWithSAML does not require the use of AWS security credentials. The identity of the caller is validated by using keys in the metadata document that is uploaded for the SAML provider entity for your identity provider.
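
As a rough sketch only (assuming the AWS SDK for PHP v3 StsClient; the ARNs, assertion, and duration below are placeholders), a call might look like the following:

    <?php
    require 'vendor/autoload.php';

    use Aws\Sts\StsClient;

    // No AWS credentials are required; the SAML assertion authenticates the caller.
    $sts = new StsClient([
        'version'     => '2011-06-15',
        'region'      => 'us-east-1',
        'credentials' => false,
    ]);

    // Placeholder values; obtain the base64-encoded assertion from your IdP.
    $result = $sts->assumeRoleWithSAML([
        'RoleArn'         => 'arn:aws:iam::123456789012:role/ExampleSAMLRole',
        'PrincipalArn'    => 'arn:aws:iam::123456789012:saml-provider/ExampleIdP',
        'SAMLAssertion'   => '...base64-encoded SAML assertion...',
        'DurationSeconds' => 3600,
    ]);

    // Temporary credentials: AccessKeyId, SecretAccessKey, SessionToken, Expiration.
    $credentials = $result['Credentials'];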

For more information, see the following resources:

", + "AssumeRoleWithWebIdentity": "

Returns a set of temporary security credentials for users who have been authenticated in a mobile or web application with a web identity provider, such as Amazon Cognito, Login with Amazon, Facebook, Google, or any OpenID Connect-compatible identity provider.

For mobile applications, we recommend that you use Amazon Cognito. You can use Amazon Cognito with the AWS SDK for iOS and the AWS SDK for Android to uniquely identify a user and supply the user with a consistent identity throughout the lifetime of an application.

To learn more about Amazon Cognito, see Amazon Cognito Overview in the AWS SDK for Android Developer Guide and Amazon Cognito Overview in the AWS SDK for iOS Developer Guide.

Calling AssumeRoleWithWebIdentity does not require the use of AWS security credentials. Therefore, you can distribute an application (for example, on mobile devices) that requests temporary security credentials without including long-term AWS credentials in the application, and without deploying server-based proxy services that use long-term AWS credentials. Instead, the identity of the caller is validated by using a token from the web identity provider.

The temporary security credentials returned by this API consist of an access key ID, a secret access key, and a security token. Applications can use these temporary security credentials to sign calls to AWS service APIs. The credentials are valid for the duration that you specified when calling AssumeRoleWithWebIdentity, which can be from 900 seconds (15 minutes) to 3600 seconds (1 hour). By default, the temporary security credentials are valid for 1 hour.

Optionally, you can pass an IAM access policy to this operation. If you choose not to pass a policy, the temporary security credentials that are returned by the operation have the permissions that are defined in the access policy of the role that is being assumed. If you pass a policy to this operation, the temporary security credentials that are returned by the operation have the permissions that are allowed by both the access policy of the role that is being assumed, and the policy that you pass. This gives you a way to further restrict the permissions for the resulting temporary security credentials. You cannot use the passed policy to grant permissions that are in excess of those allowed by the access policy of the role that is being assumed. For more information, see Permissions for AssumeRoleWithWebIdentity in Using Temporary Security Credentials.

Before your application can call AssumeRoleWithWebIdentity, you must have an identity token from a supported identity provider and create a role that the application can assume. The role that your application assumes must trust the identity provider that is associated with the identity token. In other words, the identity provider must be specified in the role's trust policy.
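
As a rough sketch (again assuming the AWS SDK for PHP v3 StsClient; the role ARN, session name, and token below are placeholders), an unsigned call might look like the following:

    <?php
    require 'vendor/autoload.php';

    use Aws\Sts\StsClient;

    // No long-term AWS credentials are embedded; the web identity token authenticates the caller.
    $sts = new StsClient([
        'version'     => '2011-06-15',
        'region'      => 'us-east-1',
        'credentials' => false,
    ]);

    // Placeholder values; the token is the OpenID Connect ID token (or OAuth 2.0 access token)
    // returned by the identity provider.
    $result = $sts->assumeRoleWithWebIdentity([
        'RoleArn'          => 'arn:aws:iam::123456789012:role/ExampleWebIdentityRole',
        'RoleSessionName'  => 'app-user-123',
        'WebIdentityToken' => '...token from the identity provider...',
        'DurationSeconds'  => 3600,
    ]);

    $credentials = $result['Credentials'];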

For more information about how to use web identity federation and the AssumeRoleWithWebIdentity API, see the following resources:

", + "DecodeAuthorizationMessage": "

Decodes additional information about the authorization status of a request from an encoded message returned in response to an AWS request.

For example, if a user is not authorized to perform an action that he or she has requested, the request returns a Client.UnauthorizedOperation response (an HTTP 403 response). Some AWS actions additionally return an encoded message that can provide details about this authorization failure.

Only certain AWS actions return an encoded authorization message. The documentation for an individual action indicates whether that action returns an encoded message in addition to returning an HTTP code.

The message is encoded because the details of the authorization status can constitute privileged information that the user who requested the action should not see. To decode an authorization status message, a user must be granted permissions via an IAM policy to request the DecodeAuthorizationMessage (sts:DecodeAuthorizationMessage) action.
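
For illustration (assuming the AWS SDK for PHP v3 StsClient; the encoded message is a placeholder), decoding such a message might look like the following:

    <?php
    require 'vendor/autoload.php';

    use Aws\Sts\StsClient;

    $sts = new StsClient(['version' => '2011-06-15', 'region' => 'us-east-1']);

    // The calling identity must be allowed the sts:DecodeAuthorizationMessage action.
    $result = $sts->decodeAuthorizationMessage([
        'EncodedMessage' => '...encoded message from a Client.UnauthorizedOperation response...',
    ]);

    // A JSON document describing why the original request was denied.
    echo $result['DecodedMessage'];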

The decoded message includes the following type of information:

", + "GetFederationToken": "

Returns a set of temporary security credentials (consisting of an access key ID, a secret access key, and a security token) for a federated user. A typical use is in a proxy application that gets temporary security credentials on behalf of distributed applications inside a corporate network. Because you must call the GetFederationToken action using the long-term security credentials of an IAM user, this call is appropriate in contexts where those credentials can be safely stored, usually in a server-based application.

If you are creating a mobile-based or browser-based app that can authenticate users using a web identity provider like Login with Amazon, Facebook, Google, or an OpenID Connect-compatible identity provider, we recommend that you use Amazon Cognito or AssumeRoleWithWebIdentity. For more information, see Creating Temporary Security Credentials for Mobile Apps Using Identity Providers in Using Temporary Security Credentials.

The GetFederationToken action must be called by using the long-term AWS security credentials of an IAM user. You can also call GetFederationToken using the security credentials of an AWS account (root), but this is not recommended. Instead, we recommend that you create an IAM user for the purpose of the proxy application and then attach a policy to the IAM user that limits federated users to only the actions and resources they need access to. For more information, see IAM Best Practices in Using IAM.

The temporary security credentials that are obtained by using the long-term credentials of an IAM user are valid for the specified duration, between 900 seconds (15 minutes) and 129600 seconds (36 hours). Temporary credentials that are obtained by using AWS account (root) credentials have a maximum duration of 3600 seconds (1 hour).

Permissions

The permissions for the temporary security credentials returned by GetFederationToken are determined by a combination of the following:

The passed policy is attached to the temporary security credentials that result from the GetFederationToken API call--that is, to the federated user. When the federated user makes an AWS request, AWS evaluates the policy attached to the federated user in combination with the policy or policies attached to the IAM user whose credentials were used to call GetFederationToken. AWS allows the federated user's request only when both the federated user and the IAM user are explicitly allowed to perform the requested action. The passed policy cannot grant more permissions than those that are defined in the IAM user policy.

A typical use case is that the permissions of the IAM user whose credentials are used to call GetFederationToken are designed to allow access to all the actions and resources that any federated user will need. Then, for individual users, you pass a policy to the operation that scopes down the permissions to a level that's appropriate to that individual user, using a policy that allows only a subset of permissions that are granted to the IAM user.
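
As a rough sketch (assuming the AWS SDK for PHP v3 StsClient; the federated user name, bucket ARN, and scope-down policy are hypothetical), passing such a policy might look like the following:

    <?php
    require 'vendor/autoload.php';

    use Aws\Sts\StsClient;

    // Called with the long-term credentials of an IAM user (picked up from the environment here).
    $sts = new StsClient(['version' => '2011-06-15', 'region' => 'us-east-1']);

    // Hypothetical scope-down policy: the federated user gets at most these permissions,
    // and only where the calling IAM user's own policy also allows them.
    $scopeDownPolicy = json_encode([
        'Version'   => '2012-10-17',
        'Statement' => [[
            'Effect'   => 'Allow',
            'Action'   => 's3:GetObject',
            'Resource' => 'arn:aws:s3:::example-bucket/*',
        ]],
    ]);

    $result = $sts->getFederationToken([
        'Name'            => 'federated-user-1',
        'Policy'          => $scopeDownPolicy,
        'DurationSeconds' => 43200,
    ]);

    $credentials = $result['Credentials'];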

If you do not pass a policy, the resulting temporary security credentials have no effective permissions. The only exception is when the temporary security credentials are used to access a resource that has a resource-based policy that specifically allows the federated user to access the resource.

For more information about how permissions work, see Permissions for GetFederationToken in Using Temporary Security Credentials. For information about using GetFederationToken to create temporary security credentials, see Creating Temporary Credentials to Enable Access for Federated Users in Using Temporary Security Credentials.

", + "GetSessionToken": "

Returns a set of temporary credentials for an AWS account or IAM user. The credentials consist of an access key ID, a secret access key, and a security token. Typically, you use GetSessionToken if you want to use MFA to protect programmatic calls to specific AWS APIs like Amazon EC2 StopInstances. MFA-enabled IAM users would need to call GetSessionToken and submit an MFA code that is associated with their MFA device. Using the temporary security credentials that are returned from the call, IAM users can then make programmatic calls to APIs that require MFA authentication.

The GetSessionToken action must be called by using the long-term AWS security credentials of the AWS account or an IAM user. Credentials that are created by IAM users are valid for the duration that you specify, between 900 seconds (15 minutes) and 129600 seconds (36 hours); credentials that are created by using account credentials have a maximum duration of 3600 seconds (1 hour).
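
As a rough sketch (assuming the AWS SDK for PHP v3 StsClient; the MFA device ARN and token code are placeholders), an MFA-protected call might look like the following:

    <?php
    require 'vendor/autoload.php';

    use Aws\Sts\StsClient;

    // Called with the long-term credentials of an MFA-enabled IAM user.
    $sts = new StsClient(['version' => '2011-06-15', 'region' => 'us-east-1']);

    $result = $sts->getSessionToken([
        'DurationSeconds' => 3600,
        'SerialNumber'    => 'arn:aws:iam::123456789012:mfa/user', // placeholder MFA device ARN
        'TokenCode'       => '123456',                             // current code from the device
    ]);

    // Temporary credentials: AccessKeyId, SecretAccessKey, SessionToken, Expiration.
    $credentials = $result['Credentials'];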

We recommend that you do not call GetSessionToken with root account credentials. Instead, follow our best practices by creating one or more IAM users, giving them the necessary permissions, and using IAM users for everyday interaction with AWS.

The permissions associated with the temporary security credentials returned by GetSessionToken are based on the permissions associated with the account or IAM user whose credentials are used to call the action. If GetSessionToken is called using root account credentials, the temporary credentials have root account permissions. Similarly, if GetSessionToken is called using the credentials of an IAM user, the temporary credentials have the same permissions as the IAM user.

For more information about using GetSessionToken to create temporary credentials, go to Creating Temporary Credentials to Enable Access for IAM Users in Using Temporary Security Credentials.

" }, - "service": "AWS Security Token Service

The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). This guide provides descriptions of the STS API. For more detailed information about using this service, go to Using Temporary Security Credentials.

For information about setting up signatures and authorization through the API, go to Signing AWS API Requests in the AWS General Reference. For general information about the Query API, go to Making Query Requests in Using IAM. For information about using security tokens with other AWS products, go to Using Temporary Security Credentials to Access AWS in Using Temporary Security Credentials.

If you're new to AWS and need additional technical information about a specific AWS product, you can find the product's technical documentation at http://aws.amazon.com/documentation/.

Endpoints

For information about STS endpoints, see Regions and Endpoints in the AWS General Reference.

Recording API requests

STS supports AWS CloudTrail, which is a service that records AWS calls for your AWS account and delivers log files to an Amazon S3 bucket. By using information collected by CloudTrail, you can determine what requests were successfully made to STS, who made the request, when it was made, and so on. To learn more about CloudTrail, including how to turn it on and find your log files, see the AWS CloudTrail User Guide.

", + "service": "AWS Security Token Service

The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). This guide provides descriptions of the STS API. For more detailed information about using this service, go to Using Temporary Security Credentials.

As an alternative to using the API, you can use one of the AWS SDKs, which consist of libraries and sample code for various programming languages and platforms (Java, Ruby, .NET, iOS, Android, etc.). The SDKs provide a convenient way to create programmatic access to STS. For example, the SDKs take care of cryptographically signing requests, managing errors, and retrying requests automatically. For information about the AWS SDKs, including how to download and install them, see the Tools for Amazon Web Services page.

For information about setting up signatures and authorization through the API, go to Signing AWS API Requests in the AWS General Reference. For general information about the Query API, go to Making Query Requests in Using IAM. For information about using security tokens with other AWS products, go to Using Temporary Security Credentials to Access AWS in Using Temporary Security Credentials.

If you're new to AWS and need additional technical information about a specific AWS product, you can find the product's technical documentation at http://aws.amazon.com/documentation/.

Endpoints

The AWS Security Token Service (STS) has a default endpoint of https://sts.amazonaws.com that maps to the US East (N. Virginia) region. Additional regions are available, but must first be activated in the AWS Management Console before you can use a different region's endpoint. For more information about activating a region for STS, see Activating STS in a New Region in the Using Temporary Security Credentials guide.
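
As a rough sketch (assuming the AWS SDK for PHP v3 StsClient; the region shown is a placeholder), explicitly targeting an activated regional endpoint might look like the following:

    <?php
    require 'vendor/autoload.php';

    use Aws\Sts\StsClient;

    // By default, requests go to the global endpoint, https://sts.amazonaws.com.
    $globalSts = new StsClient(['version' => '2011-06-15', 'region' => 'us-east-1']);

    // After activating a region in the console, its regional endpoint can be targeted explicitly.
    $regionalSts = new StsClient([
        'version'  => '2011-06-15',
        'region'   => 'eu-central-1',
        'endpoint' => 'https://sts.eu-central-1.amazonaws.com',
    ]);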

For information about STS endpoints, see Regions and Endpoints in the AWS General Reference.

Recording API requests

STS supports AWS CloudTrail, which is a service that records AWS calls for your AWS account and delivers log files to an Amazon S3 bucket. By using information collected by CloudTrail, you can determine what requests were successfully made to STS, who made the request, when it was made, and so on. To learn more about CloudTrail, including how to turn it on and find your log files, see the AWS CloudTrail User Guide.

", "shapes": { "AssumeRoleRequest": { "base": null, @@ -15,7 +16,7 @@ } }, "AssumeRoleResponse": { - "base": "

Contains the result of a successful call to the AssumeRole action, including temporary AWS credentials that can be used to make AWS requests.

", + "base": "

Contains the response to a successful AssumeRole request, including temporary AWS credentials that can be used to make AWS requests.

", "refs": { } }, @@ -25,7 +26,7 @@ } }, "AssumeRoleWithSAMLResponse": { - "base": "

Contains the result of a successful call to the AssumeRoleWithSAML action, including temporary AWS credentials that can be used to make AWS requests.

", + "base": "

Contains the response to a successful AssumeRoleWithSAML request, including temporary AWS credentials that can be used to make AWS requests.

", "refs": { } }, @@ -35,7 +36,7 @@ } }, "AssumeRoleWithWebIdentityResponse": { - "base": "

Contains the result of a successful call to the AssumeRoleWithWebIdentity action, including temporary AWS credentials that can be used to make AWS requests.

", + "base": "

Contains the response to a successful AssumeRoleWithWebIdentity request, including temporary AWS credentials that can be used to make AWS requests.

", "refs": { } }, @@ -51,7 +52,7 @@ "base": null, "refs": { "AssumeRoleWithSAMLResponse$Audience": "

The value of the Recipient attribute of the SubjectConfirmationData element of the SAML assertion.

", - "AssumeRoleWithWebIdentityResponse$Audience": "

The intended audience of the web identity token. This is traditionally the client identifier issued to the application that requested the web identity token.

" + "AssumeRoleWithWebIdentityResponse$Audience": "

The intended audience (also known as client ID) of the web identity token. This is traditionally the client identifier issued to the application that requested the web identity token.

" } }, "Credentials": { @@ -91,7 +92,7 @@ } }, "GetFederationTokenResponse": { - "base": "

Contains the result of a successful call to the GetFederationToken action, including temporary AWS credentials that can be used to make AWS requests.

", + "base": "

Contains the response to a successful GetFederationToken request, including temporary AWS credentials that can be used to make AWS requests.

", "refs": { } }, @@ -101,7 +102,7 @@ } }, "GetSessionTokenResponse": { - "base": "

Contains the result of a successful call to the GetSessionToken action, including temporary AWS credentials that can be used to make AWS requests.

", + "base": "

Contains the response to a successful GetSessionToken request, including temporary AWS credentials that can be used to make AWS requests.

", "refs": { } }, @@ -129,7 +130,7 @@ "base": null, "refs": { "AssumeRoleWithSAMLResponse$Issuer": "

The value of the Issuer element of the SAML assertion.

", - "AssumeRoleWithWebIdentityResponse$Provider": "

The issuing authority of the web identity token presented. For OpenID Connect ID Tokens this contains the value of the iss field. For OAuth 2.0 Access Tokens, this contains the value of the ProviderId parameter that was passed in the AssumeRoleWithWebIdentity request.

" + "AssumeRoleWithWebIdentityResponse$Provider": "

The issuing authority of the web identity token presented. For OpenID Connect ID Tokens this contains the value of the iss field. For OAuth 2.0 access tokens, this contains the value of the ProviderId parameter that was passed in the AssumeRoleWithWebIdentity request.

" } }, "MalformedPolicyDocumentException": { @@ -216,7 +217,7 @@ "durationSecondsType": { "base": null, "refs": { - "AssumeRoleWithSAMLRequest$DurationSeconds": "

The duration, in seconds, of the role session. The value can range from 900 seconds (15 minutes) to 3600 seconds (1 hour). By default, the value is set to 3600 seconds. An expiration can also be specified in the SAML authentication response's NotOnOrAfter value. The actual expiration time is whichever value is shorter.

", + "AssumeRoleWithSAMLRequest$DurationSeconds": "

The duration, in seconds, of the role session. The value can range from 900 seconds (15 minutes) to 3600 seconds (1 hour). By default, the value is set to 3600 seconds. An expiration can also be specified in the SAML authentication response's SessionNotOnOrAfter value. The actual expiration time is whichever value is shorter.

The maximum duration for a session is 1 hour, and the minimum duration is 15 minutes, even if values outside this range are specified. ", "AssumeRoleWithWebIdentityRequest$DurationSeconds": "

The duration, in seconds, of the role session. The value can range from 900 seconds (15 minutes) to 3600 seconds (1 hour). By default, the value is set to 3600 seconds.

", "GetFederationTokenRequest$DurationSeconds": "

The duration, in seconds, that the session should last. Acceptable durations for federation sessions range from 900 seconds (15 minutes) to 129600 seconds (36 hours), with 43200 seconds (12 hours) as the default. Sessions obtained using AWS account (root) credentials are restricted to a maximum of 3600 seconds (one hour). If the specified duration is longer than one hour, the session obtained by using AWS account (root) credentials defaults to one hour.

", "GetSessionTokenRequest$DurationSeconds": "

The duration, in seconds, that the credentials should remain valid. Acceptable durations for IAM user sessions range from 900 seconds (15 minutes) to 129600 seconds (36 hours), with 43200 seconds (12 hours) as the default. Sessions for AWS account owners are restricted to a maximum of 3600 seconds (one hour). If the duration is longer than one hour, the session for AWS account owners defaults to one hour.

" @@ -237,7 +238,7 @@ "externalIdType": { "base": null, "refs": { - "AssumeRoleRequest$ExternalId": "

A unique identifier that is used by third parties to assume a role in their customers' accounts. For each role that the third party can assume, they should instruct their customers to create a role with the external ID that the third party generated. Each time the third party assumes the role, they must pass the customer's external ID. The external ID is useful in order to help third parties bind a role to the customer who created it. For more information about the external ID, see About the External ID in Using Temporary Security Credentials.

" + "AssumeRoleRequest$ExternalId": "

A unique identifier that is used by third parties to assume a role in their customers' accounts. For each role that the third party can assume, they should instruct their customers to create a role with the external ID that the third party generated. Each time the third party assumes the role, they must pass the customer's external ID. The external ID is useful in order to help third parties bind a role to the customer who created it. For more information about the external ID, see About the External ID in Using Temporary Security Credentials.

" } }, "federatedIdType": { @@ -261,7 +262,7 @@ "invalidAuthorizationMessage": { "base": null, "refs": { - "InvalidAuthorizationMessageException$message": "

The error message associated with the error.

" + "InvalidAuthorizationMessageException$message": null } }, "invalidIdentityTokenMessage": { @@ -300,7 +301,7 @@ "serialNumberType": { "base": null, "refs": { - "AssumeRoleRequest$SerialNumber": "

The identification number of the MFA device that is associated with the user who is making the AssumeRole call. Specify this value if the trust policy of the role being assumed includes a condition that requires MFA authentication. The value is either the serial number for a hardware device (such as GAHT12345678) or an Amazon Resource Name (ARN) for a virtual device (such as arn:aws:iam::123456789012:mfa/user).

", + "AssumeRoleRequest$SerialNumber": "

The identification number of the MFA device that is associated with the user who is making the AssumeRole call. Specify this value if the trust policy of the role being assumed includes a condition that requires MFA authentication. The value is either the serial number for a hardware device (such as GAHT12345678) or an Amazon Resource Name (ARN) for a virtual device (such as arn:aws:iam::123456789012:mfa/user).

", "GetSessionTokenRequest$SerialNumber": "

The identification number of the MFA device that is associated with the IAM user who is making the GetSessionToken call. Specify this value if the IAM user has a policy that requires MFA authentication. The value is either the serial number for a hardware device (such as GAHT12345678) or an Amazon Resource Name (ARN) for a virtual device (such as arn:aws:iam::123456789012:mfa/user). You can find the device for an IAM user by going to the AWS Management Console and viewing the user's security credentials.

" } }, @@ -308,7 +309,7 @@ "base": null, "refs": { "AssumeRoleRequest$Policy": "

An IAM policy in JSON format.

The policy parameter is optional. If you pass a policy, the temporary security credentials that are returned by the operation have the permissions that are allowed by both the access policy of the role that is being assumed, and the policy that you pass. This gives you a way to further restrict the permissions for the resulting temporary security credentials. You cannot use the passed policy to grant permissions that are in excess of those allowed by the access policy of the role that is being assumed. For more information, see Permissions for AssumeRole in Using Temporary Security Credentials.

", - "AssumeRoleWithSAMLRequest$Policy": "

An IAM policy in JSON format.

The policy parameter is optional. If you pass a policy, the temporary security credentials that are returned by the operation have the permissions that are allowed by both the access policy of the role that is being assumed, and the policy that you pass. This gives you a way to further restrict the permissions for the resulting temporary security credentials. You cannot use the passed policy to grant permissions that are in excess of those allowed by the access policy of the role that is being assumed. For more information, see Permissions for AssumeRoleWithSAML in Using Temporary Security Credentials.

", + "AssumeRoleWithSAMLRequest$Policy": "

An IAM policy in JSON format.

The policy parameter is optional. If you pass a policy, the temporary security credentials that are returned by the operation have the permissions that are allowed by both the access policy of the role that is being assumed, and the policy that you pass. This gives you a way to further restrict the permissions for the resulting temporary security credentials. You cannot use the passed policy to grant permissions that are in excess of those allowed by the access policy of the role that is being assumed. For more information, see Permissions for AssumeRoleWithSAML in Using Temporary Security Credentials.

The policy must be 2048 bytes or shorter, and its packed size must be less than 450 bytes.", "AssumeRoleWithWebIdentityRequest$Policy": "

An IAM policy in JSON format.

The policy parameter is optional. If you pass a policy, the temporary security credentials that are returned by the operation have the permissions that are allowed by both the access policy of the role that is being assumed, and the policy that you pass. This gives you a way to further restrict the permissions for the resulting temporary security credentials. You cannot use the passed policy to grant permissions that are in excess of those allowed by the access policy of the role that is being assumed. For more information, see Permissions for AssumeRoleWithWebIdentity in Using Temporary Security Credentials.

", "GetFederationTokenRequest$Policy": "

An IAM policy in JSON format that is passed with the GetFederationToken call and evaluated along with the policy or policies that are attached to the IAM user whose credentials are used to call GetFederationToken. The passed policy is used to scope down the permissions that are available to the IAM user, by allowing only a subset of the permissions that are granted to the IAM user. The passed policy cannot grant more permissions than those granted to the IAM user. The final permissions for the federated user are the most restrictive set based on the intersection of the passed policy and the IAM user policy.

If you do not pass a policy, the resulting temporary security credentials have no effective permissions. The only exception is when the temporary security credentials are used to access a resource that has a resource-based policy that specifically allows the federated user to access the resource.

For more information about how permissions work, see Permissions for GetFederationToken in Using Temporary Security Credentials.

" } @@ -316,7 +317,7 @@ "tokenCodeType": { "base": null, "refs": { - "AssumeRoleRequest$TokenCode": "

The value provided by the MFA device, if the trust policy of the role being assumed requires MFA (that is, if the policy includes a condition that tests for MFA). If the role being assumed requires MFA and if the TokenCode value is missing or expired, the AssumeRole call returns an \"access denied\" error.

", + "AssumeRoleRequest$TokenCode": "

The value provided by the MFA device, if the trust policy of the role being assumed requires MFA (that is, if the policy includes a condition that tests for MFA). If the role being assumed requires MFA and if the TokenCode value is missing or expired, the AssumeRole call returns an \"access denied\" error.

", "GetSessionTokenRequest$TokenCode": "

The value provided by the MFA device, if MFA is required. If any policy requires the IAM user to submit an MFA code, specify this value. If MFA authentication is required, and the user does not provide a code when requesting a set of temporary security credentials, the user will receive an \"access denied\" response when requesting resources that require MFA authentication.

" } }, @@ -329,7 +330,7 @@ "urlType": { "base": null, "refs": { - "AssumeRoleWithWebIdentityRequest$ProviderId": "

The fully-qualified host component of the domain name of the identity provider. Specify this value only for OAuth access tokens. Do not specify this value for OpenID Connect ID tokens, such as accounts.google.com. Do not include URL schemes and port numbers. Currently, www.amazon.com and graph.facebook.com are supported.

" + "AssumeRoleWithWebIdentityRequest$ProviderId": "

The fully qualified host component of the domain name of the identity provider.

Specify this value only for OAuth 2.0 access tokens. Currently, www.amazon.com and graph.facebook.com are the only supported identity providers for OAuth 2.0 access tokens. Do not include URL schemes and port numbers.

Do not specify this value for OpenID Connect ID tokens.

" } }, "userNameType": { @@ -343,7 +344,7 @@ "webIdentitySubjectType": { "base": null, "refs": { - "AssumeRoleWithWebIdentityResponse$SubjectFromWebIdentityToken": "

The unique user identifier that is returned by the identity provider. This identifier is associated with the WebIdentityToken that was submitted with the AssumeRoleWithWebIdentity call. The identifier is typically unique to the user and the application that acquired the WebIdentityToken (pairwise identifier). If an OpenID Connect ID token was submitted in the WebIdentityToken, this value is returned by the identity provider as the token's sub (Subject) claim.

" + "AssumeRoleWithWebIdentityResponse$SubjectFromWebIdentityToken": "

The unique user identifier that is returned by the identity provider. This identifier is associated with the WebIdentityToken that was submitted with the AssumeRoleWithWebIdentity call. The identifier is typically unique to the user and the application that acquired the WebIdentityToken (pairwise identifier). For OpenID Connect ID tokens, this field contains the value returned by the identity provider as the token's sub (Subject) claim.

" } } }