diff --git a/aws-cpp-sdk-application-autoscaling/include/aws/application-autoscaling/ApplicationAutoScalingClient.h b/aws-cpp-sdk-application-autoscaling/include/aws/application-autoscaling/ApplicationAutoScalingClient.h
index 46a442a1d3d..8da56af0c6c 100644
--- a/aws-cpp-sdk-application-autoscaling/include/aws/application-autoscaling/ApplicationAutoScalingClient.h
+++ b/aws-cpp-sdk-application-autoscaling/include/aws/application-autoscaling/ApplicationAutoScalingClient.h
@@ -126,14 +126,19 @@ namespace Model
  * Aurora Replicas
  * Amazon SageMaker endpoint variants
  * Custom resources provided by your own applications or services
  * API Summary
- * The Application Auto Scaling service API includes two key sets of actions:
+ * The Application Auto Scaling service API includes three key sets of actions:
  * Register and manage scalable targets - Register AWS or custom resources as
  * scalable targets (a resource that Application Auto Scaling can scale), set
  * minimum and maximum capacity limits, and retrieve information on existing
  * scalable targets.
  * Configure and manage automatic scaling - Define scaling policies to
  * dynamically scale your resources in response to CloudWatch alarms, schedule
  * one-time or recurring scaling actions, and retrieve your recent scaling
  * activity history.
+ * Suspend and resume scaling - Temporarily suspend and later resume automatic
+ * scaling by calling the RegisterScalableTarget action for any Application Auto
+ * Scaling scalable target. You can suspend and resume, individually or in
+ * combination, scale-out activities triggered by a scaling policy, scale-in
+ * activities triggered by a scaling policy, and scheduled scaling.
+ * To learn more about Application Auto Scaling,
* including information about granting IAM users required permissions for
* Application Auto Scaling actions, see the Application
diff --git a/aws-cpp-sdk-application-autoscaling/include/aws/application-autoscaling/model/RegisterScalableTargetRequest.h b/aws-cpp-sdk-application-autoscaling/include/aws/application-autoscaling/model/RegisterScalableTargetRequest.h
index e8a507f9d4f..55afa280c31 100644
--- a/aws-cpp-sdk-application-autoscaling/include/aws/application-autoscaling/model/RegisterScalableTargetRequest.h
+++ b/aws-cpp-sdk-application-autoscaling/include/aws/application-autoscaling/model/RegisterScalableTargetRequest.h
@@ -19,6 +19,7 @@
+#include

+  /**
+   * An embedded object that contains attributes and attribute values that are
+   * used to suspend and resume automatic scaling. Setting the value of an
+   * attribute to true suspends the specified scaling activities. Setting it to
+   * false (default) resumes the specified scaling activities.
+   *
+   * Suspension Outcomes
+   *
+   * For DynamicScalingInSuspended, while a suspension is in effect, all
+   * scale-in activities that are triggered by a scaling policy are suspended.
+   *
+   * For DynamicScalingOutSuspended, while a suspension is in effect, all
+   * scale-out activities that are triggered by a scaling policy are suspended.
+   *
+   * For ScheduledScalingSuspended, while a suspension is in effect, all scaling
+   * activities that involve scheduled actions are suspended.
+   *
+   * For more information, see Suspend and Resume Application Auto Scaling in
+   * the Application Auto Scaling User Guide.
+   */

+  /**
+   * Specifies whether the scaling activities for a scalable target are in a
+   * suspended state. true suspends the specified scaling activities. Setting it
+   * to false (default) resumes the specified scaling activities.
+   *
+   * For DynamicScalingInSuspended, while a suspension is in effect, all
+   * scale-in activities that are triggered by a scaling policy are suspended.
+   *
+   * For DynamicScalingOutSuspended, while a suspension is in effect, all
+   * scale-out activities that are triggered by a scaling policy are suspended.
+   *
+   * For ScheduledScalingSuspended, while a suspension is in effect, all scaling
+   * activities that involve scheduled actions are suspended.
+   *
+   * See Also: AWS API Reference
+   */
+  /**
+   * Whether scale in by a target tracking scaling policy or a step scaling
+   * policy is suspended. Set the value to true if you don't want Application
+   * Auto Scaling to remove capacity when a scaling policy is triggered. The
+   * default is false.
+   */
+  /**
+   * Whether scale out by a target tracking scaling policy or a step scaling
+   * policy is suspended. Set the value to true if you don't want Application
+   * Auto Scaling to add capacity when a scaling policy is triggered. The
+   * default is false.
+   */
+  /**
+   * Whether scheduled scaling is suspended. Set the value to true if you don't
+   * want Application Auto Scaling to add or remove capacity by initiating
+   * scheduled actions. The default is false.
+   */
+  /**
+   * Creates a pipeline.
+   *
+   * In the pipeline structure, you must include either artifactStore or
+   * artifactStores in your pipeline, but you cannot use both. If you create a
+   * cross-region action in your pipeline, you must use artifactStores.
+   */
   /**
-   * The configuration information for the action type.
+   * Specifies the action type and the provider of the action.
    */
   inline const ActionTypeId& GetActionTypeId() const{ return m_actionTypeId; }

   /**
-   * The configuration information for the action type.
+   * Specifies the action type and the provider of the action.
    */
   inline bool ActionTypeIdHasBeenSet() const { return m_actionTypeIdHasBeenSet; }

   /**
-   * The configuration information for the action type.
+   * Specifies the action type and the provider of the action.
    */
   inline void SetActionTypeId(const ActionTypeId& value) { m_actionTypeIdHasBeenSet = true; m_actionTypeId = value; }

   /**
-   * The configuration information for the action type.
+   * Specifies the action type and the provider of the action.
    */
   inline void SetActionTypeId(ActionTypeId&& value) { m_actionTypeIdHasBeenSet = true; m_actionTypeId = std::move(value); }

   /**
-   * The configuration information for the action type.
+   * Specifies the action type and the provider of the action.
    */
   inline ActionDeclaration& WithActionTypeId(const ActionTypeId& value) { SetActionTypeId(value); return *this;}

   /**
-   * The configuration information for the action type.
+   * Specifies the action type and the provider of the action.
    */
   inline ActionDeclaration& WithActionTypeId(ActionTypeId&& value) { SetActionTypeId(std::move(value)); return *this;}

@@ -147,67 +147,236 @@ namespace Model
   /**
-   * The action declaration's configuration.
+   * The action's configuration. These are key-value pairs that specify input
+   * values for an action. For more information, see Action Structure
+   * Requirements in CodePipeline. For the list of configuration properties for
+   * the AWS CloudFormation action type in CodePipeline, see Configuration
+   * Properties Reference in the AWS CloudFormation User Guide. For template
+   * snippets with examples, see Using Parameter Override Functions with
+   * CodePipeline Pipelines in the AWS CloudFormation User Guide.
+   *
+   * The values can be represented in either JSON or YAML format. For example,
+   * the JSON configuration item format is as follows:
+   *
+   * JSON:
+   *
+   * "Configuration" : { Key : Value },
    */
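To make the quoted `"Configuration" : { Key : Value }` format concrete, here is what such a block might look like for the AWS CloudFormation action type. The property names are taken from the Configuration Properties Reference the comment points to; the values are placeholders, not a definitive configuration.

```json
"Configuration" : {
    "ActionMode" : "CREATE_UPDATE",
    "StackName" : "MyStack",
    "TemplatePath" : "SourceArtifact::template.yaml"
}
```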
   /**
    * The Amazon S3 bucket where artifacts are stored for the pipeline.
+   *
+   * You must include either artifactStore or artifactStores in your pipeline,
+   * but you cannot use both. If you create a cross-region action in your
+   * pipeline, you must use artifactStores.
    */
   /**
-   * The ID used to identify the key. For an AWS KMS key, this is the key ID or
-   * key ARN.
+   * The ID used to identify the key. For an AWS KMS key, you can use the key
+   * ID, the key ARN, or the alias ARN.
+   *
+   * Aliases are recognized only in the account that created the customer
+   * master key (CMK). For cross-account actions, you can only use the key ID
+   * or key ARN to identify the key.
    */
   /**
-   * The interaction or event that started a pipeline execution.
+   * The type of change-detection method, command, or user interaction that
+   * started a pipeline execution.
    */
+  inline const TriggerType& GetTriggerType() const{ return m_triggerType; }
+
+  /**
+   * The type of change-detection method, command, or user interaction that
+   * started a pipeline execution.
+   */
+  inline bool TriggerTypeHasBeenSet() const { return m_triggerTypeHasBeenSet; }
+
+  /**
+   * The type of change-detection method, command, or user interaction that
+   * started a pipeline execution.
+   */
+  inline void SetTriggerType(const TriggerType& value) { m_triggerTypeHasBeenSet = true; m_triggerType = value; }
+
+  /**
+   * The type of change-detection method, command, or user interaction that
+   * started a pipeline execution.
+   */
+  inline void SetTriggerType(TriggerType&& value) { m_triggerTypeHasBeenSet = true; m_triggerType = std::move(value); }
+
+  /**
+   * The type of change-detection method, command, or user interaction that
+   * started a pipeline execution.
+   */
+  inline ExecutionTrigger& WithTriggerType(const TriggerType& value) { SetTriggerType(value); return *this;}
+
+  /**
+   * The type of change-detection method, command, or user interaction that
+   * started a pipeline execution.
+   */
+  inline ExecutionTrigger& WithTriggerType(TriggerType&& value) { SetTriggerType(std::move(value)); return *this;}
+
+  /**
+   * Detail related to the event that started a pipeline execution, such as the
+   * webhook ARN of the webhook that triggered the pipeline execution or the
+   * user ARN for a user-initiated start-pipeline-execution CLI command.
+   */
   /**
    * Represents information about the Amazon S3 bucket where artifacts are
    * stored for the pipeline.
+   *
+   * You must include either artifactStore or artifactStores in your pipeline,
+   * but you cannot use both. If you create a cross-region action in your
+   * pipeline, you must use artifactStores.
    */
   /**
    * A mapping of artifactStore objects and their corresponding regions. There
    * must be an artifact store for the pipeline region and for each
-   * cross-region action within the pipeline. You can only use either
-   * artifactStore or artifactStores, not both.
-   *
-   * If you create a cross-region action in your pipeline, you must use
-   * artifactStores.
+   * cross-region action within the pipeline.
+   *
+   * You must include either artifactStore or artifactStores in your pipeline,
+   * but you cannot use both. If you create a cross-region action in your
+   * pipeline, you must use artifactStores.
    */
.
You must include either
+ * artifactStore
or artifactStores
in your pipeline, but
+ * you cannot use both. If you create a cross-region action in your pipeline, you
+ * must use artifactStores
.
A mapping of artifactStore
objects and their corresponding
* regions. There must be an artifact store for the pipeline region and for each
- * cross-region action within the pipeline. You can only use either
- * artifactStore
or artifactStores
, not both.
If
- * you create a cross-region action in your pipeline, you must use
- * artifactStores
.
You must include either
+ * artifactStore
or artifactStores
in your pipeline, but
+ * you cannot use both. If you create a cross-region action in your pipeline, you
+ * must use artifactStores
.
A mapping of artifactStore
objects and their corresponding
* regions. There must be an artifact store for the pipeline region and for each
- * cross-region action within the pipeline. You can only use either
- * artifactStore
or artifactStores
, not both.
If
- * you create a cross-region action in your pipeline, you must use
- * artifactStores
.
You must include either
+ * artifactStore
or artifactStores
in your pipeline, but
+ * you cannot use both. If you create a cross-region action in your pipeline, you
+ * must use artifactStores
.
A mapping of artifactStore
objects and their corresponding
* regions. There must be an artifact store for the pipeline region and for each
- * cross-region action within the pipeline. You can only use either
- * artifactStore
or artifactStores
, not both.
If
- * you create a cross-region action in your pipeline, you must use
- * artifactStores
.
You must include either
+ * artifactStore
or artifactStores
in your pipeline, but
+ * you cannot use both. If you create a cross-region action in your pipeline, you
+ * must use artifactStores
.
A mapping of artifactStore
objects and their corresponding
* regions. There must be an artifact store for the pipeline region and for each
- * cross-region action within the pipeline. You can only use either
- * artifactStore
or artifactStores
, not both.
If
- * you create a cross-region action in your pipeline, you must use
- * artifactStores
.
You must include either
+ * artifactStore
or artifactStores
in your pipeline, but
+ * you cannot use both. If you create a cross-region action in your pipeline, you
+ * must use artifactStores
.
A mapping of artifactStore
objects and their corresponding
* regions. There must be an artifact store for the pipeline region and for each
- * cross-region action within the pipeline. You can only use either
- * artifactStore
or artifactStores
, not both.
If
- * you create a cross-region action in your pipeline, you must use
- * artifactStores
.
You must include either
+ * artifactStore
or artifactStores
in your pipeline, but
+ * you cannot use both. If you create a cross-region action in your pipeline, you
+ * must use artifactStores
.
A mapping of artifactStore
objects and their corresponding
* regions. There must be an artifact store for the pipeline region and for each
- * cross-region action within the pipeline. You can only use either
- * artifactStore
or artifactStores
, not both.
If
- * you create a cross-region action in your pipeline, you must use
- * artifactStores
.
You must include either
+ * artifactStore
or artifactStores
in your pipeline, but
+ * you cannot use both. If you create a cross-region action in your pipeline, you
+ * must use artifactStores
.
The interaction or event that started a pipeline execution, such as automated
+ * change detection or a StartPipelineExecution
API call.
The interaction or event that started a pipeline execution, such as automated
+ * change detection or a StartPipelineExecution
API call.
The interaction or event that started a pipeline execution, such as automated
+ * change detection or a StartPipelineExecution
API call.
The interaction or event that started a pipeline execution, such as automated
+ * change detection or a StartPipelineExecution
API call.
The interaction or event that started a pipeline execution, such as automated
+ * change detection or a StartPipelineExecution
API call.
The interaction or event that started a pipeline execution, such as automated
+ * change detection or a StartPipelineExecution
API call.
Runs and maintains a desired number of tasks from a specified task
* definition. If the number of tasks running in a service drops below the
- * desiredCount
, Amazon ECS spawns another copy of the task in the
+ * desiredCount
, Amazon ECS runs another copy of the task in the
* specified cluster. To update an existing service, see UpdateService.
In addition to maintaining the desired count of tasks in your service, you - * can optionally run your service behind a load balancer. The load balancer - * distributes traffic across the tasks that are associated with the service. For - * more information, see Service * Load Balancing in the Amazon Elastic Container Service Developer * Guide.
Tasks for services that do not use a load balancer are @@ -493,12 +493,12 @@ namespace Model /** *
Runs and maintains a desired number of tasks from a specified task
* definition. If the number of tasks running in a service drops below the
- * desiredCount
, Amazon ECS spawns another copy of the task in the
+ * desiredCount
, Amazon ECS runs another copy of the task in the
* specified cluster. To update an existing service, see UpdateService.
In addition to maintaining the desired count of tasks in your service, you - * can optionally run your service behind a load balancer. The load balancer - * distributes traffic across the tasks that are associated with the service. For - * more information, see Service * Load Balancing in the Amazon Elastic Container Service Developer * Guide.
Tasks for services that do not use a load balancer are @@ -592,12 +592,12 @@ namespace Model /** *
Runs and maintains a desired number of tasks from a specified task
* definition. If the number of tasks running in a service drops below the
- * desiredCount
, Amazon ECS spawns another copy of the task in the
+ * desiredCount
, Amazon ECS runs another copy of the task in the
* specified cluster. To update an existing service, see UpdateService.
In addition to maintaining the desired count of tasks in your service, you - * can optionally run your service behind a load balancer. The load balancer - * distributes traffic across the tasks that are associated with the service. For - * more information, see Service * Load Balancing in the Amazon Elastic Container Service Developer * Guide.
Tasks for services that do not use a load balancer are diff --git a/aws-cpp-sdk-ecs/include/aws/ecs/model/Container.h b/aws-cpp-sdk-ecs/include/aws/ecs/model/Container.h index b7d614ecd67..1d8d74632f5 100644 --- a/aws-cpp-sdk-ecs/include/aws/ecs/model/Container.h +++ b/aws-cpp-sdk-ecs/include/aws/ecs/model/Container.h @@ -174,6 +174,47 @@ namespace Model inline Container& WithName(const char* value) { SetName(value); return *this;} + /** + *
The ID of the Docker container.
+ */ + inline const Aws::String& GetRuntimeId() const{ return m_runtimeId; } + + /** + *The ID of the Docker container.
+ */ + inline bool RuntimeIdHasBeenSet() const { return m_runtimeIdHasBeenSet; } + + /** + *The ID of the Docker container.
+ */ + inline void SetRuntimeId(const Aws::String& value) { m_runtimeIdHasBeenSet = true; m_runtimeId = value; } + + /** + *The ID of the Docker container.
+ */ + inline void SetRuntimeId(Aws::String&& value) { m_runtimeIdHasBeenSet = true; m_runtimeId = std::move(value); } + + /** + *The ID of the Docker container.
+ */ + inline void SetRuntimeId(const char* value) { m_runtimeIdHasBeenSet = true; m_runtimeId.assign(value); } + + /** + *The ID of the Docker container.
+ */ + inline Container& WithRuntimeId(const Aws::String& value) { SetRuntimeId(value); return *this;} + + /** + *The ID of the Docker container.
+ */ + inline Container& WithRuntimeId(Aws::String&& value) { SetRuntimeId(std::move(value)); return *this;} + + /** + *The ID of the Docker container.
+ */ + inline Container& WithRuntimeId(const char* value) { SetRuntimeId(value); return *this;} + + /** *The last known status of the container.
*/ @@ -605,6 +646,9 @@ namespace Model Aws::String m_name; bool m_nameHasBeenSet; + Aws::String m_runtimeId; + bool m_runtimeIdHasBeenSet; + Aws::String m_lastStatus; bool m_lastStatusHasBeenSet; diff --git a/aws-cpp-sdk-ecs/include/aws/ecs/model/ContainerDefinition.h b/aws-cpp-sdk-ecs/include/aws/ecs/model/ContainerDefinition.h index 4e0b34709e1..9c16f60c786 100644 --- a/aws-cpp-sdk-ecs/include/aws/ecs/model/ContainerDefinition.h +++ b/aws-cpp-sdk-ecs/include/aws/ecs/model/ContainerDefinition.h @@ -2276,11 +2276,11 @@ namespace Model /** - *Time duration to wait before giving up on resolving dependencies for a
- * container. For example, you specify two containers in a task definition with
- * containerA having a dependency on containerB reaching a COMPLETE
,
- * SUCCESS
, or HEALTHY
status. If a
- * startTimeout
value is specified for containerB and it does not
+ *
Time duration (in seconds) to wait before giving up on resolving dependencies
+ * for a container. For example, you specify two containers in a task definition
+ * with containerA having a dependency on containerB reaching a
+ * COMPLETE
, SUCCESS
, or HEALTHY
status. If
+ * a startTimeout
value is specified for containerB and it does not
 * reach the desired status within that time, then containerA will give up and not
* start. This results in the task transitioning to a STOPPED
* state.
For tasks using the EC2 launch type, the container instances @@ -2304,11 +2304,11 @@ namespace Model inline int GetStartTimeout() const{ return m_startTimeout; } /** - *
Time duration to wait before giving up on resolving dependencies for a
- * container. For example, you specify two containers in a task definition with
- * containerA having a dependency on containerB reaching a COMPLETE
,
- * SUCCESS
, or HEALTHY
status. If a
- * startTimeout
value is specified for containerB and it does not
+ *
Time duration (in seconds) to wait before giving up on resolving dependencies
+ * for a container. For example, you specify two containers in a task definition
+ * with containerA having a dependency on containerB reaching a
+ * COMPLETE
, SUCCESS
, or HEALTHY
status. If
+ * a startTimeout
value is specified for containerB and it does not
 * reach the desired status within that time, then containerA will give up and not
* start. This results in the task transitioning to a STOPPED
* state.
For tasks using the EC2 launch type, the container instances @@ -2332,11 +2332,11 @@ namespace Model inline bool StartTimeoutHasBeenSet() const { return m_startTimeoutHasBeenSet; } /** - *
Time duration to wait before giving up on resolving dependencies for a
- * container. For example, you specify two containers in a task definition with
- * containerA having a dependency on containerB reaching a COMPLETE
,
- * SUCCESS
, or HEALTHY
status. If a
- * startTimeout
value is specified for containerB and it does not
+ *
Time duration (in seconds) to wait before giving up on resolving dependencies
+ * for a container. For example, you specify two containers in a task definition
+ * with containerA having a dependency on containerB reaching a
+ * COMPLETE
, SUCCESS
, or HEALTHY
status. If
+ * a startTimeout
value is specified for containerB and it does not
 * reach the desired status within that time, then containerA will give up and not
* start. This results in the task transitioning to a STOPPED
* state.
For tasks using the EC2 launch type, the container instances @@ -2360,11 +2360,11 @@ namespace Model inline void SetStartTimeout(int value) { m_startTimeoutHasBeenSet = true; m_startTimeout = value; } /** - *
Time duration to wait before giving up on resolving dependencies for a
- * container. For example, you specify two containers in a task definition with
- * containerA having a dependency on containerB reaching a COMPLETE
,
- * SUCCESS
, or HEALTHY
status. If a
- * startTimeout
value is specified for containerB and it does not
+ *
Time duration (in seconds) to wait before giving up on resolving dependencies
+ * for a container. For example, you specify two containers in a task definition
+ * with containerA having a dependency on containerB reaching a
+ * COMPLETE
, SUCCESS
, or HEALTHY
status. If
+ * a startTimeout
value is specified for containerB and it does not
 * reach the desired status within that time, then containerA will give up and not
* start. This results in the task transitioning to a STOPPED
* state.
For tasks using the EC2 launch type, the container instances @@ -2389,17 +2389,17 @@ namespace Model /** - *
Time duration to wait before the container is forcefully killed if it doesn't
- * exit normally on its own. For tasks using the Fargate launch type, the max
- * stopTimeout
value is 2 minutes. This parameter is available for
- * tasks using the Fargate launch type in the Ohio (us-east-2) region only and the
- * task or service requires platform version 1.3.0 or later.
For tasks using
- * the EC2 launch type, the stop timeout value for the container takes precedence
- * over the ECS_CONTAINER_STOP_TIMEOUT
container agent configuration
- * parameter, if used. Container instances require at least version 1.26.0 of the
- * container agent to enable a container stop timeout value. However, we recommend
- * using the latest container agent version. For information about checking your
- * agent version and updating to the latest version, see Time duration (in seconds) to wait before the container is forcefully killed
+ * if it doesn't exit normally on its own. For tasks using the Fargate launch type,
+ * the max stopTimeout
value is 2 minutes. This parameter is available
+ * for tasks using the Fargate launch type in the Ohio (us-east-2) region only and
+ * the task or service requires platform version 1.3.0 or later.
For tasks
+ * using the EC2 launch type, the stop timeout value for the container takes
+ * precedence over the ECS_CONTAINER_STOP_TIMEOUT
container agent
+ * configuration parameter, if used. Container instances require at least version
+ * 1.26.0 of the container agent to enable a container stop timeout value. However,
+ * we recommend using the latest container agent version. For information about
+ * checking your agent version and updating to the latest version, see Updating
* the Amazon ECS Container Agent in the Amazon Elastic Container Service
* Developer Guide. If you are using an Amazon ECS-optimized Linux AMI, your
@@ -2414,17 +2414,17 @@ namespace Model
inline int GetStopTimeout() const{ return m_stopTimeout; }
/**
- *
Time duration to wait before the container is forcefully killed if it doesn't
- * exit normally on its own. For tasks using the Fargate launch type, the max
- * stopTimeout
value is 2 minutes. This parameter is available for
- * tasks using the Fargate launch type in the Ohio (us-east-2) region only and the
- * task or service requires platform version 1.3.0 or later.
For tasks using
- * the EC2 launch type, the stop timeout value for the container takes precedence
- * over the ECS_CONTAINER_STOP_TIMEOUT
container agent configuration
- * parameter, if used. Container instances require at least version 1.26.0 of the
- * container agent to enable a container stop timeout value. However, we recommend
- * using the latest container agent version. For information about checking your
- * agent version and updating to the latest version, see Time duration (in seconds) to wait before the container is forcefully killed
+ * if it doesn't exit normally on its own. For tasks using the Fargate launch type,
+ * the max stopTimeout
value is 2 minutes. This parameter is available
+ * for tasks using the Fargate launch type in the Ohio (us-east-2) region only and
+ * the task or service requires platform version 1.3.0 or later.
For tasks
+ * using the EC2 launch type, the stop timeout value for the container takes
+ * precedence over the ECS_CONTAINER_STOP_TIMEOUT
container agent
+ * configuration parameter, if used. Container instances require at least version
+ * 1.26.0 of the container agent to enable a container stop timeout value. However,
+ * we recommend using the latest container agent version. For information about
+ * checking your agent version and updating to the latest version, see Updating
* the Amazon ECS Container Agent in the Amazon Elastic Container Service
* Developer Guide. If you are using an Amazon ECS-optimized Linux AMI, your
@@ -2439,17 +2439,17 @@ namespace Model
inline bool StopTimeoutHasBeenSet() const { return m_stopTimeoutHasBeenSet; }
/**
- *
Time duration to wait before the container is forcefully killed if it doesn't
- * exit normally on its own. For tasks using the Fargate launch type, the max
- * stopTimeout
value is 2 minutes. This parameter is available for
- * tasks using the Fargate launch type in the Ohio (us-east-2) region only and the
- * task or service requires platform version 1.3.0 or later.
For tasks using
- * the EC2 launch type, the stop timeout value for the container takes precedence
- * over the ECS_CONTAINER_STOP_TIMEOUT
container agent configuration
- * parameter, if used. Container instances require at least version 1.26.0 of the
- * container agent to enable a container stop timeout value. However, we recommend
- * using the latest container agent version. For information about checking your
- * agent version and updating to the latest version, see Time duration (in seconds) to wait before the container is forcefully killed
+ * if it doesn't exit normally on its own. For tasks using the Fargate launch type,
+ * the max stopTimeout
value is 2 minutes. This parameter is available
+ * for tasks using the Fargate launch type in the Ohio (us-east-2) region only and
+ * the task or service requires platform version 1.3.0 or later.
For tasks
+ * using the EC2 launch type, the stop timeout value for the container takes
+ * precedence over the ECS_CONTAINER_STOP_TIMEOUT
container agent
+ * configuration parameter, if used. Container instances require at least version
+ * 1.26.0 of the container agent to enable a container stop timeout value. However,
+ * we recommend using the latest container agent version. For information about
+ * checking your agent version and updating to the latest version, see Updating
* the Amazon ECS Container Agent in the Amazon Elastic Container Service
* Developer Guide. If you are using an Amazon ECS-optimized Linux AMI, your
@@ -2464,17 +2464,17 @@ namespace Model
inline void SetStopTimeout(int value) { m_stopTimeoutHasBeenSet = true; m_stopTimeout = value; }
/**
- *
Time duration to wait before the container is forcefully killed if it doesn't
- * exit normally on its own. For tasks using the Fargate launch type, the max
- * stopTimeout
value is 2 minutes. This parameter is available for
- * tasks using the Fargate launch type in the Ohio (us-east-2) region only and the
- * task or service requires platform version 1.3.0 or later.
For tasks using
- * the EC2 launch type, the stop timeout value for the container takes precedence
- * over the ECS_CONTAINER_STOP_TIMEOUT
container agent configuration
- * parameter, if used. Container instances require at least version 1.26.0 of the
- * container agent to enable a container stop timeout value. However, we recommend
- * using the latest container agent version. For information about checking your
- * agent version and updating to the latest version, see Time duration (in seconds) to wait before the container is forcefully killed
+ * if it doesn't exit normally on its own. For tasks using the Fargate launch type,
+ * the max stopTimeout
value is 2 minutes. This parameter is available
+ * for tasks using the Fargate launch type in the Ohio (us-east-2) region only and
+ * the task or service requires platform version 1.3.0 or later.
For tasks
+ * using the EC2 launch type, the stop timeout value for the container takes
+ * precedence over the ECS_CONTAINER_STOP_TIMEOUT
container agent
+ * configuration parameter, if used. Container instances require at least version
+ * 1.26.0 of the container agent to enable a container stop timeout value. However,
+ * we recommend using the latest container agent version. For information about
+ * checking your agent version and updating to the latest version, see Updating
* the Amazon ECS Container Agent in the Amazon Elastic Container Service
* Developer Guide. If you are using an Amazon ECS-optimized Linux AMI, your
diff --git a/aws-cpp-sdk-ecs/include/aws/ecs/model/ContainerStateChange.h b/aws-cpp-sdk-ecs/include/aws/ecs/model/ContainerStateChange.h
index 712610075c1..ab2f38ee0b0 100644
--- a/aws-cpp-sdk-ecs/include/aws/ecs/model/ContainerStateChange.h
+++ b/aws-cpp-sdk-ecs/include/aws/ecs/model/ContainerStateChange.h
@@ -91,6 +91,47 @@ namespace Model
inline ContainerStateChange& WithContainerName(const char* value) { SetContainerName(value); return *this;}
+ /**
+ *
The ID of the Docker container.
+ */ + inline const Aws::String& GetRuntimeId() const{ return m_runtimeId; } + + /** + *The ID of the Docker container.
+ */ + inline bool RuntimeIdHasBeenSet() const { return m_runtimeIdHasBeenSet; } + + /** + *The ID of the Docker container.
+ */ + inline void SetRuntimeId(const Aws::String& value) { m_runtimeIdHasBeenSet = true; m_runtimeId = value; } + + /** + *The ID of the Docker container.
+ */ + inline void SetRuntimeId(Aws::String&& value) { m_runtimeIdHasBeenSet = true; m_runtimeId = std::move(value); } + + /** + *The ID of the Docker container.
+ */ + inline void SetRuntimeId(const char* value) { m_runtimeIdHasBeenSet = true; m_runtimeId.assign(value); } + + /** + *The ID of the Docker container.
+ */ + inline ContainerStateChange& WithRuntimeId(const Aws::String& value) { SetRuntimeId(value); return *this;} + + /** + *The ID of the Docker container.
+ */ + inline ContainerStateChange& WithRuntimeId(Aws::String&& value) { SetRuntimeId(std::move(value)); return *this;} + + /** + *The ID of the Docker container.
+ */ + inline ContainerStateChange& WithRuntimeId(const char* value) { SetRuntimeId(value); return *this;} + + /** *The exit code for the container, if the state change is a result of the * container exiting.
@@ -243,6 +284,9 @@ namespace Model Aws::String m_containerName; bool m_containerNameHasBeenSet; + Aws::String m_runtimeId; + bool m_runtimeIdHasBeenSet; + int m_exitCode; bool m_exitCodeHasBeenSet; diff --git a/aws-cpp-sdk-ecs/include/aws/ecs/model/CreateServiceRequest.h b/aws-cpp-sdk-ecs/include/aws/ecs/model/CreateServiceRequest.h index c1fe43945b2..f6b0080e984 100644 --- a/aws-cpp-sdk-ecs/include/aws/ecs/model/CreateServiceRequest.h +++ b/aws-cpp-sdk-ecs/include/aws/ecs/model/CreateServiceRequest.h @@ -252,16 +252,20 @@ namespace Model /** - *A load balancer object representing the load balancer to use with your - * service.
If the service is using the ECS
deployment
- * controller, you are limited to one load balancer or target group.
If the
- * service is using the CODE_DEPLOY
deployment controller, the service
- * is required to use either an Application Load Balancer or Network Load Balancer.
- * When creating an AWS CodeDeploy deployment group, you specify two target groups
- * (referred to as a targetGroupPair
). During a deployment, AWS
- * CodeDeploy determines which task set in your service has the status
- * PRIMARY
and associates one target group with it, and then
- * associates the other target group with the replacement task set. The load
+ *
A load balancer object representing the load balancers to use with your + * service. For more information, see Service + * Load Balancing in the Amazon Elastic Container Service Developer + * Guide.
If the service is using the rolling update (ECS
)
+ * deployment controller and using either an Application Load Balancer or Network
+ * Load Balancer, you can specify multiple target groups to attach to the
+ * service.
If the service is using the CODE_DEPLOY
deployment
+ * controller, the service is required to use either an Application Load Balancer
+ * or Network Load Balancer. When creating an AWS CodeDeploy deployment group, you
+ * specify two target groups (referred to as a targetGroupPair
).
+ * During a deployment, AWS CodeDeploy determines which task set in your service
+ * has the status PRIMARY
and associates one target group with it, and
+ * then associates the other target group with the replacement task set. The load
* balancer can also have up to two listeners: a required listener for production
* traffic and an optional listener that allows you perform validation tests with
* Lambda functions before routing production traffic to it.
After you
@@ -269,16 +273,16 @@ namespace Model
* balancer name or target group ARN, container name, and container port specified
* in the service definition are immutable. If you are using the
* CODE_DEPLOY
deployment controller, these values can be changed when
- * updating the service.
For Classic Load Balancers, this object must - * contain the load balancer name, the container name (as it appears in a container - * definition), and the container port to access from the load balancer. When a - * task from this service is placed on a container instance, the container instance - * is registered with the load balancer specified here.
For Application Load - * Balancers and Network Load Balancers, this object must contain the load balancer - * target group ARN, the container name (as it appears in a container definition), - * and the container port to access from the load balancer. When a task from this - * service is placed on a container instance, the container instance and port - * combination is registered as a target in the target group specified here.
+ * updating the service.For Application Load Balancers and Network Load + * Balancers, this object must contain the load balancer target group ARN, the + * container name (as it appears in a container definition), and the container port + * to access from the load balancer. When a task from this service is placed on a + * container instance, the container instance and port combination is registered as + * a target in the target group specified here.
For Classic Load Balancers, + * this object must contain the load balancer name, the container name (as it + * appears in a container definition), and the container port to access from the + * load balancer. When a task from this service is placed on a container instance, + * the container instance is registered with the load balancer specified here.
*Services with tasks that use the A load balancer object representing the load balancer to use with your
- * service. If the service is using the If the
- * service is using the A load balancer object representing the load balancers to use with your
+ * service. For more information, see Service
+ * Load Balancing in the Amazon Elastic Container Service Developer
+ * Guide. If the service is using the rolling update ( If the service is using the After you
@@ -307,16 +315,16 @@ namespace Model
* balancer name or target group ARN, container name, and container port specified
* in the service definition are immutable. If you are using the
* For Classic Load Balancers, this object must
- * contain the load balancer name, the container name (as it appears in a container
- * definition), and the container port to access from the load balancer. When a
- * task from this service is placed on a container instance, the container instance
- * is registered with the load balancer specified here. For Application Load
- * Balancers and Network Load Balancers, this object must contain the load balancer
- * target group ARN, the container name (as it appears in a container definition),
- * and the container port to access from the load balancer. When a task from this
- * service is placed on a container instance, the container instance and port
- * combination is registered as a target in the target group specified here.awsvpc
network mode (for
* example, those with the Fargate launch type) only support Application Load
* Balancers and Network Load Balancers. Classic Load Balancers are not supported.
@@ -290,16 +294,20 @@ namespace Model
inline const Aws::VectorECS
deployment
- * controller, you are limited to one load balancer or target group.CODE_DEPLOY
deployment controller, the service
- * is required to use either an Application Load Balancer or Network Load Balancer.
- * When creating an AWS CodeDeploy deployment group, you specify two target groups
- * (referred to as a targetGroupPair
). During a deployment, AWS
- * CodeDeploy determines which task set in your service has the status
- * PRIMARY
and associates one target group with it, and then
- * associates the other target group with the replacement task set. The load
+ * ECS
)
+ * deployment controller and using either an Application Load Balancer or Network
+ * Load Balancer, you can specify multiple target groups to attach to the
+ * service.CODE_DEPLOY
deployment
+ * controller, the service is required to use either an Application Load Balancer
+ * or Network Load Balancer. When creating an AWS CodeDeploy deployment group, you
+ * specify two target groups (referred to as a targetGroupPair
).
+ * During a deployment, AWS CodeDeploy determines which task set in your service
+ * has the status PRIMARY
and associates one target group with it, and
+ * then associates the other target group with the replacement task set. The load
* balancer can also have up to two listeners: a required listener for production
 * traffic and an optional listener that allows you to perform validation tests with
* Lambda functions before routing production traffic to it.CODE_DEPLOY
deployment controller, these values can be changed when
- * updating the service.
For Application Load Balancers and Network Load + * Balancers, this object must contain the load balancer target group ARN, the + * container name (as it appears in a container definition), and the container port + * to access from the load balancer. When a task from this service is placed on a + * container instance, the container instance and port combination is registered as + * a target in the target group specified here.
For Classic Load Balancers, + * this object must contain the load balancer name, the container name (as it + * appears in a container definition), and the container port to access from the + * load balancer. When a task from this service is placed on a container instance, + * the container instance is registered with the load balancer specified here.
*Services with tasks that use the awsvpc
network mode (for
* example, those with the Fargate launch type) only support Application Load
* Balancers and Network Load Balancers. Classic Load Balancers are not supported.
@@ -328,16 +336,20 @@ namespace Model
inline bool LoadBalancersHasBeenSet() const { return m_loadBalancersHasBeenSet; }
/**
-     * <p>A load balancer object representing the load balancer to use with your
-     * service.</p> <p>If the service is using the <code>ECS</code> deployment
-     * controller, you are limited to one load balancer or target group.</p> <p>If the
-     * service is using the <code>CODE_DEPLOY</code> deployment controller, the service
-     * is required to use either an Application Load Balancer or Network Load Balancer.
-     * When creating an AWS CodeDeploy deployment group, you specify two target groups
-     * (referred to as a <code>targetGroupPair</code>). During a deployment, AWS
-     * CodeDeploy determines which task set in your service has the status
-     * <code>PRIMARY</code> and associates one target group with it, and then
-     * associates the other target group with the replacement task set. The load
+     * <p>A load balancer object representing the load balancers to use with your
+     * service. For more information, see <a
+     * href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html">Service
+     * Load Balancing</a> in the <i>Amazon Elastic Container Service Developer
+     * Guide</i>.</p> <p>If the service is using the rolling update (<code>ECS</code>)
+     * deployment controller and using either an Application Load Balancer or Network
+     * Load Balancer, you can specify multiple target groups to attach to the
+     * service.</p> <p>If the service is using the <code>CODE_DEPLOY</code> deployment
+     * controller, the service is required to use either an Application Load Balancer
+     * or Network Load Balancer. When creating an AWS CodeDeploy deployment group, you
+     * specify two target groups (referred to as a <code>targetGroupPair</code>).
+     * During a deployment, AWS CodeDeploy determines which task set in your service
+     * has the status <code>PRIMARY</code> and associates one target group with it, and
+     * then associates the other target group with the replacement task set. The load
      * balancer can also have up to two listeners: a required listener for production
      * traffic and an optional listener that allows you perform validation tests with
      * Lambda functions before routing production traffic to it.</p> <p>After you
@@ -345,16 +357,16 @@ namespace Model
* balancer name or target group ARN, container name, and container port specified
* in the service definition are immutable. If you are using the
     * <code>CODE_DEPLOY</code> deployment controller, these values can be changed when
-     * updating the service.</p> <p>For Classic Load Balancers, this object must
-     * contain the load balancer name, the container name (as it appears in a container
-     * definition), and the container port to access from the load balancer. When a
-     * task from this service is placed on a container instance, the container instance
-     * is registered with the load balancer specified here.</p> <p>For Application Load
-     * Balancers and Network Load Balancers, this object must contain the load balancer
-     * target group ARN, the container name (as it appears in a container definition),
-     * and the container port to access from the load balancer. When a task from this
-     * service is placed on a container instance, the container instance and port
-     * combination is registered as a target in the target group specified here.</p>
+     * updating the service.</p> <p>For Application Load Balancers and Network Load
+     * Balancers, this object must contain the load balancer target group ARN, the
+     * container name (as it appears in a container definition), and the container port
+     * to access from the load balancer. When a task from this service is placed on a
+     * container instance, the container instance and port combination is registered as
+     * a target in the target group specified here.</p> <p>For Classic Load Balancers,
+     * this object must contain the load balancer name, the container name (as it
+     * appears in a container definition), and the container port to access from the
+     * load balancer. When a task from this service is placed on a container instance,
+     * the container instance is registered with the load balancer specified here.</p>
     * <p>Services with tasks that use the <code>awsvpc</code> network mode (for
     * example, those with the Fargate launch type) only support Application Load
     * Balancers and Network Load Balancers. Classic Load Balancers are not supported.
@@ -366,16 +378,20 @@ namespace Model
     inline void SetLoadBalancers(const Aws::Vector<LoadBalancer>& value) { m_loadBalancersHasBeenSet = true; m_loadBalancers = value; }
 
     /**
-     * <p>A load balancer object representing the load balancer to use with your
-     * service.</p> <p>If the service is using the <code>ECS</code> deployment
-     * controller, you are limited to one load balancer or target group.</p> <p>If the
-     * service is using the <code>CODE_DEPLOY</code> deployment controller, the service
-     * is required to use either an Application Load Balancer or Network Load Balancer.
-     * When creating an AWS CodeDeploy deployment group, you specify two target groups
-     * (referred to as a <code>targetGroupPair</code>). During a deployment, AWS
-     * CodeDeploy determines which task set in your service has the status
-     * <code>PRIMARY</code> and associates one target group with it, and then
-     * associates the other target group with the replacement task set. The load
+     * <p>A load balancer object representing the load balancers to use with your
+     * service. For more information, see <a
+     * href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html">Service
+     * Load Balancing</a> in the <i>Amazon Elastic Container Service Developer
+     * Guide</i>.</p> <p>If the service is using the rolling update (<code>ECS</code>)
+     * deployment controller and using either an Application Load Balancer or Network
+     * Load Balancer, you can specify multiple target groups to attach to the
+     * service.</p> <p>If the service is using the <code>CODE_DEPLOY</code> deployment
+     * controller, the service is required to use either an Application Load Balancer
+     * or Network Load Balancer. When creating an AWS CodeDeploy deployment group, you
+     * specify two target groups (referred to as a <code>targetGroupPair</code>).
+     * During a deployment, AWS CodeDeploy determines which task set in your service
+     * has the status <code>PRIMARY</code> and associates one target group with it, and
+     * then associates the other target group with the replacement task set. The load
      * balancer can also have up to two listeners: a required listener for production
      * traffic and an optional listener that allows you perform validation tests with
      * Lambda functions before routing production traffic to it.</p> <p>After you
@@ -383,16 +399,16 @@ namespace Model
      * balancer name or target group ARN, container name, and container port specified
      * in the service definition are immutable. If you are using the
      * <code>CODE_DEPLOY</code> deployment controller, these values can be changed when
-     * updating the service.</p> <p>For Classic Load Balancers, this object must
-     * contain the load balancer name, the container name (as it appears in a container
-     * definition), and the container port to access from the load balancer. When a
-     * task from this service is placed on a container instance, the container instance
-     * is registered with the load balancer specified here.</p> <p>For Application Load
-     * Balancers and Network Load Balancers, this object must contain the load balancer
-     * target group ARN, the container name (as it appears in a container definition),
-     * and the container port to access from the load balancer. When a task from this
-     * service is placed on a container instance, the container instance and port
-     * combination is registered as a target in the target group specified here.</p>
+     * updating the service.</p> <p>For Application Load Balancers and Network Load
+     * Balancers, this object must contain the load balancer target group ARN, the
+     * container name (as it appears in a container definition), and the container port
+     * to access from the load balancer. When a task from this service is placed on a
+     * container instance, the container instance and port combination is registered as
+     * a target in the target group specified here.</p> <p>For Classic Load Balancers,
+     * this object must contain the load balancer name, the container name (as it
+     * appears in a container definition), and the container port to access from the
+     * load balancer. When a task from this service is placed on a container instance,
+     * the container instance is registered with the load balancer specified here.</p>
      * <p>Services with tasks that use the <code>awsvpc</code> network mode (for
     * example, those with the Fargate launch type) only support Application Load
     * Balancers and Network Load Balancers. Classic Load Balancers are not supported.
@@ -404,16 +420,20 @@ namespace Model
     inline void SetLoadBalancers(Aws::Vector<LoadBalancer>&& value) { m_loadBalancersHasBeenSet = true; m_loadBalancers = std::move(value); }
 
     /**
-     * <p>A load balancer object representing the load balancer to use with your
-     * service.</p> <p>If the service is using the <code>ECS</code> deployment
-     * controller, you are limited to one load balancer or target group.</p> <p>If the
-     * service is using the <code>CODE_DEPLOY</code> deployment controller, the service
-     * is required to use either an Application Load Balancer or Network Load Balancer.
-     * When creating an AWS CodeDeploy deployment group, you specify two target groups
-     * (referred to as a <code>targetGroupPair</code>). During a deployment, AWS
-     * CodeDeploy determines which task set in your service has the status
-     * <code>PRIMARY</code> and associates one target group with it, and then
-     * associates the other target group with the replacement task set. The load
+     * <p>A load balancer object representing the load balancers to use with your
+     * service. For more information, see <a
+     * href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html">Service
+     * Load Balancing</a> in the <i>Amazon Elastic Container Service Developer
+     * Guide</i>.</p> <p>If the service is using the rolling update (<code>ECS</code>)
+     * deployment controller and using either an Application Load Balancer or Network
+     * Load Balancer, you can specify multiple target groups to attach to the
+     * service.</p> <p>If the service is using the <code>CODE_DEPLOY</code> deployment
+     * controller, the service is required to use either an Application Load Balancer
+     * or Network Load Balancer. When creating an AWS CodeDeploy deployment group, you
+     * specify two target groups (referred to as a <code>targetGroupPair</code>).
+     * During a deployment, AWS CodeDeploy determines which task set in your service
+     * has the status <code>PRIMARY</code> and associates one target group with it, and
+     * then associates the other target group with the replacement task set. The load
      * balancer can also have up to two listeners: a required listener for production
      * traffic and an optional listener that allows you perform validation tests with
      * Lambda functions before routing production traffic to it.</p> <p>After you
@@ -421,16 +441,16 @@ namespace Model
      * balancer name or target group ARN, container name, and container port specified
      * in the service definition are immutable. If you are using the
      * <code>CODE_DEPLOY</code> deployment controller, these values can be changed when
-     * updating the service.</p> <p>For Classic Load Balancers, this object must
-     * contain the load balancer name, the container name (as it appears in a container
-     * definition), and the container port to access from the load balancer. When a
-     * task from this service is placed on a container instance, the container instance
-     * is registered with the load balancer specified here.</p> <p>For Application Load
-     * Balancers and Network Load Balancers, this object must contain the load balancer
-     * target group ARN, the container name (as it appears in a container definition),
-     * and the container port to access from the load balancer. When a task from this
-     * service is placed on a container instance, the container instance and port
-     * combination is registered as a target in the target group specified here.</p>
+     * updating the service.</p> <p>For Application Load Balancers and Network Load
+     * Balancers, this object must contain the load balancer target group ARN, the
+     * container name (as it appears in a container definition), and the container port
+     * to access from the load balancer. When a task from this service is placed on a
+     * container instance, the container instance and port combination is registered as
+     * a target in the target group specified here.</p> <p>For Classic Load Balancers,
+     * this object must contain the load balancer name, the container name (as it
+     * appears in a container definition), and the container port to access from the
+     * load balancer. When a task from this service is placed on a container instance,
+     * the container instance is registered with the load balancer specified here.</p>
      * <p>Services with tasks that use the <code>awsvpc</code> network mode (for
     * example, those with the Fargate launch type) only support Application Load
     * Balancers and Network Load Balancers. Classic Load Balancers are not supported.
@@ -442,16 +462,20 @@ namespace Model
     inline CreateServiceRequest& WithLoadBalancers(const Aws::Vector<LoadBalancer>& value) { SetLoadBalancers(value); return *this;}
 
     /**
-     * <p>A load balancer object representing the load balancer to use with your
-     * service.</p> <p>If the service is using the <code>ECS</code> deployment
-     * controller, you are limited to one load balancer or target group.</p> <p>If the
-     * service is using the <code>CODE_DEPLOY</code> deployment controller, the service
-     * is required to use either an Application Load Balancer or Network Load Balancer.
-     * When creating an AWS CodeDeploy deployment group, you specify two target groups
-     * (referred to as a <code>targetGroupPair</code>). During a deployment, AWS
-     * CodeDeploy determines which task set in your service has the status
-     * <code>PRIMARY</code> and associates one target group with it, and then
-     * associates the other target group with the replacement task set. The load
+     * <p>A load balancer object representing the load balancers to use with your
+     * service. For more information, see <a
+     * href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html">Service
+     * Load Balancing</a> in the <i>Amazon Elastic Container Service Developer
+     * Guide</i>.</p> <p>If the service is using the rolling update (<code>ECS</code>)
+     * deployment controller and using either an Application Load Balancer or Network
+     * Load Balancer, you can specify multiple target groups to attach to the
+     * service.</p> <p>If the service is using the <code>CODE_DEPLOY</code> deployment
+     * controller, the service is required to use either an Application Load Balancer
+     * or Network Load Balancer. When creating an AWS CodeDeploy deployment group, you
+     * specify two target groups (referred to as a <code>targetGroupPair</code>).
+     * During a deployment, AWS CodeDeploy determines which task set in your service
+     * has the status <code>PRIMARY</code> and associates one target group with it, and
+     * then associates the other target group with the replacement task set. The load
      * balancer can also have up to two listeners: a required listener for production
      * traffic and an optional listener that allows you perform validation tests with
      * Lambda functions before routing production traffic to it.</p> <p>After you
@@ -459,16 +483,16 @@ namespace Model
      * balancer name or target group ARN, container name, and container port specified
      * in the service definition are immutable. If you are using the
      * <code>CODE_DEPLOY</code> deployment controller, these values can be changed when
-     * updating the service.</p> <p>For Classic Load Balancers, this object must
-     * contain the load balancer name, the container name (as it appears in a container
-     * definition), and the container port to access from the load balancer. When a
-     * task from this service is placed on a container instance, the container instance
-     * is registered with the load balancer specified here.</p> <p>For Application Load
-     * Balancers and Network Load Balancers, this object must contain the load balancer
-     * target group ARN, the container name (as it appears in a container definition),
-     * and the container port to access from the load balancer. When a task from this
-     * service is placed on a container instance, the container instance and port
-     * combination is registered as a target in the target group specified here.</p>
+     * updating the service.</p> <p>For Application Load Balancers and Network Load
+     * Balancers, this object must contain the load balancer target group ARN, the
+     * container name (as it appears in a container definition), and the container port
+     * to access from the load balancer. When a task from this service is placed on a
+     * container instance, the container instance and port combination is registered as
+     * a target in the target group specified here.</p> <p>For Classic Load Balancers,
+     * this object must contain the load balancer name, the container name (as it
+     * appears in a container definition), and the container port to access from the
+     * load balancer. When a task from this service is placed on a container instance,
+     * the container instance is registered with the load balancer specified here.</p>
      * <p>Services with tasks that use the <code>awsvpc</code> network mode (for
     * example, those with the Fargate launch type) only support Application Load
     * Balancers and Network Load Balancers. Classic Load Balancers are not supported.
@@ -480,16 +504,20 @@ namespace Model
     inline CreateServiceRequest& WithLoadBalancers(Aws::Vector<LoadBalancer>&& value) { SetLoadBalancers(std::move(value)); return *this;}
 
     /**
-     * <p>A load balancer object representing the load balancer to use with your
-     * service.</p> <p>If the service is using the <code>ECS</code> deployment
-     * controller, you are limited to one load balancer or target group.</p> <p>If the
-     * service is using the <code>CODE_DEPLOY</code> deployment controller, the service
-     * is required to use either an Application Load Balancer or Network Load Balancer.
-     * When creating an AWS CodeDeploy deployment group, you specify two target groups
-     * (referred to as a <code>targetGroupPair</code>). During a deployment, AWS
-     * CodeDeploy determines which task set in your service has the status
-     * <code>PRIMARY</code> and associates one target group with it, and then
-     * associates the other target group with the replacement task set. The load
+     * <p>A load balancer object representing the load balancers to use with your
+     * service. For more information, see <a
+     * href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html">Service
+     * Load Balancing</a> in the <i>Amazon Elastic Container Service Developer
+     * Guide</i>.</p> <p>If the service is using the rolling update (<code>ECS</code>)
+     * deployment controller and using either an Application Load Balancer or Network
+     * Load Balancer, you can specify multiple target groups to attach to the
+     * service.</p> <p>If the service is using the <code>CODE_DEPLOY</code> deployment
+     * controller, the service is required to use either an Application Load Balancer
+     * or Network Load Balancer. When creating an AWS CodeDeploy deployment group, you
+     * specify two target groups (referred to as a <code>targetGroupPair</code>).
+     * During a deployment, AWS CodeDeploy determines which task set in your service
+     * has the status <code>PRIMARY</code> and associates one target group with it, and
+     * then associates the other target group with the replacement task set. The load
      * balancer can also have up to two listeners: a required listener for production
      * traffic and an optional listener that allows you perform validation tests with
      * Lambda functions before routing production traffic to it.</p> <p>After you
@@ -497,16 +525,16 @@ namespace Model
      * balancer name or target group ARN, container name, and container port specified
      * in the service definition are immutable. If you are using the
      * <code>CODE_DEPLOY</code> deployment controller, these values can be changed when
-     * updating the service.</p> <p>For Classic Load Balancers, this object must
-     * contain the load balancer name, the container name (as it appears in a container
-     * definition), and the container port to access from the load balancer. When a
-     * task from this service is placed on a container instance, the container instance
-     * is registered with the load balancer specified here.</p> <p>For Application Load
-     * Balancers and Network Load Balancers, this object must contain the load balancer
-     * target group ARN, the container name (as it appears in a container definition),
-     * and the container port to access from the load balancer. When a task from this
-     * service is placed on a container instance, the container instance and port
-     * combination is registered as a target in the target group specified here.</p>
+     * updating the service.</p> <p>For Application Load Balancers and Network Load
+     * Balancers, this object must contain the load balancer target group ARN, the
+     * container name (as it appears in a container definition), and the container port
+     * to access from the load balancer. When a task from this service is placed on a
+     * container instance, the container instance and port combination is registered as
+     * a target in the target group specified here.</p> <p>For Classic Load Balancers,
+     * this object must contain the load balancer name, the container name (as it
+     * appears in a container definition), and the container port to access from the
+     * load balancer. When a task from this service is placed on a container instance,
+     * the container instance is registered with the load balancer specified here.</p>
      * <p>Services with tasks that use the <code>awsvpc</code> network mode (for
     * example, those with the Fargate launch type) only support Application Load
     * Balancers and Network Load Balancers. Classic Load Balancers are not supported.
@@ -518,16 +546,20 @@ namespace Model
inline CreateServiceRequest& AddLoadBalancers(const LoadBalancer& value) { m_loadBalancersHasBeenSet = true; m_loadBalancers.push_back(value); return *this; }
/**
-     * <p>A load balancer object representing the load balancer to use with your
-     * service.</p> <p>If the service is using the <code>ECS</code> deployment
-     * controller, you are limited to one load balancer or target group.</p> <p>If the
-     * service is using the <code>CODE_DEPLOY</code> deployment controller, the service
-     * is required to use either an Application Load Balancer or Network Load Balancer.
-     * When creating an AWS CodeDeploy deployment group, you specify two target groups
-     * (referred to as a <code>targetGroupPair</code>). During a deployment, AWS
-     * CodeDeploy determines which task set in your service has the status
-     * <code>PRIMARY</code> and associates one target group with it, and then
-     * associates the other target group with the replacement task set. The load
+     * <p>A load balancer object representing the load balancers to use with your
+     * service. For more information, see <a
+     * href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html">Service
+     * Load Balancing</a> in the <i>Amazon Elastic Container Service Developer
+     * Guide</i>.</p> <p>If the service is using the rolling update (<code>ECS</code>)
+     * deployment controller and using either an Application Load Balancer or Network
+     * Load Balancer, you can specify multiple target groups to attach to the
+     * service.</p> <p>If the service is using the <code>CODE_DEPLOY</code> deployment
+     * controller, the service is required to use either an Application Load Balancer
+     * or Network Load Balancer. When creating an AWS CodeDeploy deployment group, you
+     * specify two target groups (referred to as a <code>targetGroupPair</code>).
+     * During a deployment, AWS CodeDeploy determines which task set in your service
+     * has the status <code>PRIMARY</code> and associates one target group with it, and
+     * then associates the other target group with the replacement task set. The load
      * balancer can also have up to two listeners: a required listener for production
      * traffic and an optional listener that allows you perform validation tests with
      * Lambda functions before routing production traffic to it.</p> <p>After you
@@ -535,16 +567,16 @@ namespace Model
* balancer name or target group ARN, container name, and container port specified
* in the service definition are immutable. If you are using the
     * <code>CODE_DEPLOY</code> deployment controller, these values can be changed when
-     * updating the service.</p> <p>For Classic Load Balancers, this object must
-     * contain the load balancer name, the container name (as it appears in a container
-     * definition), and the container port to access from the load balancer. When a
-     * task from this service is placed on a container instance, the container instance
-     * is registered with the load balancer specified here.</p> <p>For Application Load
-     * Balancers and Network Load Balancers, this object must contain the load balancer
-     * target group ARN, the container name (as it appears in a container definition),
-     * and the container port to access from the load balancer. When a task from this
-     * service is placed on a container instance, the container instance and port
-     * combination is registered as a target in the target group specified here.</p>
+     * updating the service.</p> <p>For Application Load Balancers and Network Load
+     * Balancers, this object must contain the load balancer target group ARN, the
+     * container name (as it appears in a container definition), and the container port
+     * to access from the load balancer. When a task from this service is placed on a
+     * container instance, the container instance and port combination is registered as
+     * a target in the target group specified here.</p> <p>For Classic Load Balancers,
+     * this object must contain the load balancer name, the container name (as it
+     * appears in a container definition), and the container port to access from the
+     * load balancer. When a task from this service is placed on a container instance,
+     * the container instance is registered with the load balancer specified here.</p>
     * <p>Services with tasks that use the <code>awsvpc</code> network mode (for
* example, those with the Fargate launch type) only support Application Load
* Balancers and Network Load Balancers. Classic Load Balancers are not supported.
diff --git a/aws-cpp-sdk-ecs/include/aws/ecs/model/LoadBalancer.h b/aws-cpp-sdk-ecs/include/aws/ecs/model/LoadBalancer.h
index b910bb4d69e..95664a96e72 100644
--- a/aws-cpp-sdk-ecs/include/aws/ecs/model/LoadBalancer.h
+++ b/aws-cpp-sdk-ecs/include/aws/ecs/model/LoadBalancer.h
@@ -34,24 +34,9 @@ namespace Model
{
/**
-   * <p>Details on a load balancer to be used with a service or task set.</p> <p>If
-   * the service is using the <code>ECS</code> deployment controller, you are limited
-   * to one load balancer or target group.</p> <p>If the service is using the
-   * <code>CODE_DEPLOY</code> deployment controller, the service is required to use
-   * either an Application Load Balancer or Network Load Balancer. When you are
-   * creating an AWS CodeDeploy deployment group, you specify two target groups
-   * (referred to as a <code>targetGroupPair</code>). Each target group binds to a
-   * separate task set in the deployment. The load balancer can also have up to two
-   * listeners, a required listener for production traffic and an optional listener
-   * that allows you to test new revisions of the service before routing production
-   * traffic to it.</p> <p>Services with tasks that use the <code>awsvpc</code>
-   * network mode (for example, those with the Fargate launch type) only support
-   * Application Load Balancers and Network Load Balancers. Classic Load Balancers
-   * are not supported. Also, when you create any target groups for these services,
-   * you must choose <code>ip</code> as the target type, not <code>instance</code>.
-   * Tasks that use the <code>awsvpc</code> network mode are associated with an
-   * elastic network interface, not an Amazon EC2 instance.</p>
+   * <p>Details on the load balancer or load balancers to use with a service or task
+   * set.</p>
   */
  class AWS_ECS_API LoadBalancer
     * <p>The full Amazon Resource Name (ARN) of the Elastic Load Balancing target
     * group or groups associated with a service or task set.</p> <p>A target group ARN
-     * is only specified when using an application load balancer or a network load
-     * balancer. If you are using a classic load balancer this should be omitted.</p>
-     * <p>For services using the <code>ECS</code> deployment controller, you are
-     * limited to one target group. For services using the <code>CODE_DEPLOY</code>
-     * deployment controller, you are required to define two target groups for the load
-     * balancer.</p> <p>If your service's task definition uses the
-     * <code>awsvpc</code> network mode (which is required for the Fargate launch
+     * is only specified when using an Application Load Balancer or Network Load
+     * Balancer. If you are using a Classic Load Balancer this should be omitted.</p>
+     * <p>For services using the <code>ECS</code> deployment controller, you can
+     * specify one or multiple target groups. For more information, see <a
+     * href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/register-multiple-targetgroups.html">Registering
+     * Multiple Target Groups with a Service</a> in the <i>Amazon Elastic Container
+     * Service Developer Guide</i>.</p> <p>For services using the
+     * <code>CODE_DEPLOY</code> deployment controller, you are required to define two
+     * target groups for the load balancer. For more information, see <a
+     * href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-type-bluegreen.html">Blue/Green
+     * Deployment with CodeDeploy</a> in the <i>Amazon Elastic Container Service
+     * Developer Guide</i>.</p> <p>If your service's task definition uses
+     * the <code>awsvpc</code> network mode (which is required for the Fargate launch
      * type), you must choose <code>ip</code> as the target type, not
-     * <code>instance</code>, because tasks that use the <code>awsvpc</code> network
-     * mode are associated with an elastic network interface, not an Amazon EC2
-     * instance.</p>
+     * <code>instance</code>, when creating your target groups because tasks that use
+     * the <code>awsvpc</code> network mode are associated with an elastic network
+     * interface, not an Amazon EC2 instance.</p>
     * <p>The full Amazon Resource Name (ARN) of the Elastic Load Balancing target
     * group or groups associated with a service or task set.</p> <p>A target group ARN
-     * is only specified when using an application load balancer or a network load
-     * balancer. If you are using a classic load balancer this should be omitted.</p>
-     * <p>For services using the <code>ECS</code> deployment controller, you are
-     * limited to one target group. For services using the <code>CODE_DEPLOY</code>
-     * deployment controller, you are required to define two target groups for the load
-     * balancer.</p> <p>If your service's task definition uses the
-     * <code>awsvpc</code> network mode (which is required for the Fargate launch
+     * is only specified when using an Application Load Balancer or Network Load
+     * Balancer. If you are using a Classic Load Balancer this should be omitted.</p>
+     * <p>For services using the <code>ECS</code> deployment controller, you can
+     * specify one or multiple target groups. For more information, see <a
+     * href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/register-multiple-targetgroups.html">Registering
+     * Multiple Target Groups with a Service</a> in the <i>Amazon Elastic Container
+     * Service Developer Guide</i>.</p> <p>For services using the
+     * <code>CODE_DEPLOY</code> deployment controller, you are required to define two
+     * target groups for the load balancer. For more information, see <a
+     * href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-type-bluegreen.html">Blue/Green
+     * Deployment with CodeDeploy</a> in the <i>Amazon Elastic Container Service
+     * Developer Guide</i>.</p> <p>If your service's task definition uses
+     * the <code>awsvpc</code> network mode (which is required for the Fargate launch
      * type), you must choose <code>ip</code> as the target type, not
-     * <code>instance</code>, because tasks that use the <code>awsvpc</code> network
-     * mode are associated with an elastic network interface, not an Amazon EC2
-     * instance.</p>
+     * <code>instance</code>, when creating your target groups because tasks that use
+     * the <code>awsvpc</code> network mode are associated with an elastic network
+     * interface, not an Amazon EC2 instance.</p>
     * <p>The full Amazon Resource Name (ARN) of the Elastic Load Balancing target
     * group or groups associated with a service or task set.</p> <p>A target group ARN
-     * is only specified when using an application load balancer or a network load
-     * balancer. If you are using a classic load balancer this should be omitted.</p>
-     * <p>For services using the <code>ECS</code> deployment controller, you are
-     * limited to one target group. For services using the <code>CODE_DEPLOY</code>
-     * deployment controller, you are required to define two target groups for the load
-     * balancer.</p> <p>If your service's task definition uses the
-     * <code>awsvpc</code> network mode (which is required for the Fargate launch
+     * is only specified when using an Application Load Balancer or Network Load
+     * Balancer. If you are using a Classic Load Balancer this should be omitted.</p>
+     * <p>For services using the <code>ECS</code> deployment controller, you can
+     * specify one or multiple target groups. For more information, see <a
+     * href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/register-multiple-targetgroups.html">Registering
+     * Multiple Target Groups with a Service</a> in the <i>Amazon Elastic Container
+     * Service Developer Guide</i>.</p> <p>For services using the
+     * <code>CODE_DEPLOY</code> deployment controller, you are required to define two
+     * target groups for the load balancer. For more information, see <a
+     * href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-type-bluegreen.html">Blue/Green
+     * Deployment with CodeDeploy</a> in the <i>Amazon Elastic Container Service
+     * Developer Guide</i>.</p> <p>If your service's task definition uses
+     * the <code>awsvpc</code> network mode (which is required for the Fargate launch
      * type), you must choose <code>ip</code> as the target type, not
-     * <code>instance</code>, because tasks that use the <code>awsvpc</code> network
-     * mode are associated with an elastic network interface, not an Amazon EC2
-     * instance.</p>
+     * <code>instance</code>, when creating your target groups because tasks that use
+     * the <code>awsvpc</code> network mode are associated with an elastic network
+     * interface, not an Amazon EC2 instance.</p>
     * <p>The full Amazon Resource Name (ARN) of the Elastic Load Balancing target
     * group or groups associated with a service or task set.</p> <p>A target group ARN
-     * is only specified when using an application load balancer or a network load
-     * balancer. If you are using a classic load balancer this should be omitted.</p>
-     * <p>For services using the <code>ECS</code> deployment controller, you are
-     * limited to one target group. For services using the <code>CODE_DEPLOY</code>
-     * deployment controller, you are required to define two target groups for the load
-     * balancer.</p> <p>If your service's task definition uses the
-     * <code>awsvpc</code> network mode (which is required for the Fargate launch
+     * is only specified when using an Application Load Balancer or Network Load
+     * Balancer. If you are using a Classic Load Balancer this should be omitted.</p>
+     * <p>For services using the <code>ECS</code> deployment controller, you can
+     * specify one or multiple target groups. For more information, see <a
+     * href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/register-multiple-targetgroups.html">Registering
+     * Multiple Target Groups with a Service</a> in the <i>Amazon Elastic Container
+     * Service Developer Guide</i>.</p> <p>For services using the
+     * <code>CODE_DEPLOY</code> deployment controller, you are required to define two
+     * target groups for the load balancer. For more information, see <a
+     * href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-type-bluegreen.html">Blue/Green
+     * Deployment with CodeDeploy</a> in the <i>Amazon Elastic Container Service
+     * Developer Guide</i>.</p> <p>If your service's task definition uses
+     * the <code>awsvpc</code> network mode (which is required for the Fargate launch
      * type), you must choose <code>ip</code> as the target type, not
-     * <code>instance</code>, because tasks that use the <code>awsvpc</code> network
-     * mode are associated with an elastic network interface, not an Amazon EC2
-     * instance.</p>
+     * <code>instance</code>, when creating your target groups because tasks that use
+     * the <code>awsvpc</code> network mode are associated with an elastic network
+     * interface, not an Amazon EC2 instance.</p>
     * <p>The full Amazon Resource Name (ARN) of the Elastic Load Balancing target
     * group or groups associated with a service or task set.
A target group ARN - * is only specified when using an application load balancer or a network load - * balancer. If you are using a classic load balancer this should be omitted.
- *For services using the ECS
deployment controller, you are
- * limited to one target group. For services using the CODE_DEPLOY
- * deployment controller, you are required to define two target groups for the load
- * balancer.
If your service's task definition uses the
- * awsvpc
network mode (which is required for the Fargate launch
+ * is only specified when using an Application Load Balancer or Network Load
+ * Balancer. If you are using a Classic Load Balancer this should be omitted.
For services using the ECS
deployment controller, you can
+ * specify one or multiple target groups. For more information, see Registering
+ * Multiple Target Groups with a Service in the Amazon Elastic Container
+ * Service Developer Guide.
For services using the
+ * CODE_DEPLOY
deployment controller, you are required to define two
+ * target groups for the load balancer. For more information, see Blue/Green
+ * Deployment with CodeDeploy in the Amazon Elastic Container Service
+ * Developer Guide.
If your service's task definition uses
+ * the awsvpc
network mode (which is required for the Fargate launch
* type), you must choose ip
as the target type, not
- * instance
, because tasks that use the awsvpc
network
- * mode are associated with an elastic network interface, not an Amazon EC2
- * instance.
instance
, when creating your target groups because tasks that use
+ * the awsvpc
network mode are associated with an elastic network
+ * interface, not an Amazon EC2 instance. The full Amazon Resource Name (ARN) of the Elastic Load Balancing target * group or groups associated with a service or task set.
A target group ARN - * is only specified when using an application load balancer or a network load - * balancer. If you are using a classic load balancer this should be omitted.
- *For services using the ECS
deployment controller, you are
- * limited to one target group. For services using the CODE_DEPLOY
- * deployment controller, you are required to define two target groups for the load
- * balancer.
If your service's task definition uses the
- * awsvpc
network mode (which is required for the Fargate launch
+ * is only specified when using an Application Load Balancer or Network Load
+ * Balancer. If you are using a Classic Load Balancer this should be omitted.
For services using the ECS
deployment controller, you can
+ * specify one or multiple target groups. For more information, see Registering
+ * Multiple Target Groups with a Service in the Amazon Elastic Container
+ * Service Developer Guide.
For services using the
+ * CODE_DEPLOY
deployment controller, you are required to define two
+ * target groups for the load balancer. For more information, see Blue/Green
+ * Deployment with CodeDeploy in the Amazon Elastic Container Service
+ * Developer Guide.
If your service's task definition uses
+ * the awsvpc
network mode (which is required for the Fargate launch
* type), you must choose ip
as the target type, not
- * instance
, because tasks that use the awsvpc
network
- * mode are associated with an elastic network interface, not an Amazon EC2
- * instance.
instance
, when creating your target groups because tasks that use
+ * the awsvpc
network mode are associated with an elastic network
+ * interface, not an Amazon EC2 instance. The full Amazon Resource Name (ARN) of the Elastic Load Balancing target * group or groups associated with a service or task set.
A target group ARN - * is only specified when using an application load balancer or a network load - * balancer. If you are using a classic load balancer this should be omitted.
- *For services using the ECS
deployment controller, you are
- * limited to one target group. For services using the CODE_DEPLOY
- * deployment controller, you are required to define two target groups for the load
- * balancer.
If your service's task definition uses the
- * awsvpc
network mode (which is required for the Fargate launch
+ * is only specified when using an Application Load Balancer or Network Load
+ * Balancer. If you are using a Classic Load Balancer this should be omitted.
For services using the ECS
deployment controller, you can
+ * specify one or multiple target groups. For more information, see Registering
+ * Multiple Target Groups with a Service in the Amazon Elastic Container
+ * Service Developer Guide.
For services using the
+ * CODE_DEPLOY
deployment controller, you are required to define two
+ * target groups for the load balancer. For more information, see Blue/Green
+ * Deployment with CodeDeploy in the Amazon Elastic Container Service
+ * Developer Guide.
If your service's task definition uses
+ * the awsvpc
network mode (which is required for the Fargate launch
* type), you must choose ip
as the target type, not
- * instance
, because tasks that use the awsvpc
network
- * mode are associated with an elastic network interface, not an Amazon EC2
- * instance.
instance
, when creating your target groups because tasks that use
+ * the awsvpc
network mode are associated with an elastic network
+ * interface, not an Amazon EC2 instance. The full Amazon Resource Name (ARN) of the Elastic Load Balancing target * group or groups associated with a service or task set.
A target group ARN - * is only specified when using an application load balancer or a network load - * balancer. If you are using a classic load balancer this should be omitted.
- *For services using the ECS
deployment controller, you are
- * limited to one target group. For services using the CODE_DEPLOY
- * deployment controller, you are required to define two target groups for the load
- * balancer.
If your service's task definition uses the
- * awsvpc
network mode (which is required for the Fargate launch
+ * is only specified when using an Application Load Balancer or Network Load
+ * Balancer. If you are using a Classic Load Balancer this should be omitted.
For services using the ECS
deployment controller, you can
+ * specify one or multiple target groups. For more information, see Registering
+ * Multiple Target Groups with a Service in the Amazon Elastic Container
+ * Service Developer Guide.
For services using the
+ * CODE_DEPLOY
deployment controller, you are required to define two
+ * target groups for the load balancer. For more information, see Blue/Green
+ * Deployment with CodeDeploy in the Amazon Elastic Container Service
+ * Developer Guide.
If your service's task definition uses
+ * the awsvpc
network mode (which is required for the Fargate launch
* type), you must choose ip
as the target type, not
- * instance
, because tasks that use the awsvpc
network
- * mode are associated with an elastic network interface, not an Amazon EC2
- * instance.
instance
, when creating your target groups because tasks that use
+ * the awsvpc
network mode are associated with an elastic network
+ * interface, not an Amazon EC2 instance. The name of the load balancer to associate with the Amazon ECS service or - * task set.
A load balancer name is only specified when using a classic - * load balancer. If you are using an application load balancer or a network load - * balancer this should be omitted.
+ * task set.A load balancer name is only specified when using a Classic + * Load Balancer. If you are using an Application Load Balancer or a Network Load + * Balancer this should be omitted.
*/ inline const Aws::String& GetLoadBalancerName() const{ return m_loadBalancerName; } /** *The name of the load balancer to associate with the Amazon ECS service or - * task set.
A load balancer name is only specified when using a classic - * load balancer. If you are using an application load balancer or a network load - * balancer this should be omitted.
+ * task set.A load balancer name is only specified when using a Classic + * Load Balancer. If you are using an Application Load Balancer or a Network Load + * Balancer this should be omitted.
*/ inline bool LoadBalancerNameHasBeenSet() const { return m_loadBalancerNameHasBeenSet; } /** *The name of the load balancer to associate with the Amazon ECS service or - * task set.
A load balancer name is only specified when using a classic - * load balancer. If you are using an application load balancer or a network load - * balancer this should be omitted.
+ * task set.A load balancer name is only specified when using a Classic + * Load Balancer. If you are using an Application Load Balancer or a Network Load + * Balancer this should be omitted.
*/ inline void SetLoadBalancerName(const Aws::String& value) { m_loadBalancerNameHasBeenSet = true; m_loadBalancerName = value; } /** *The name of the load balancer to associate with the Amazon ECS service or - * task set.
A load balancer name is only specified when using a classic - * load balancer. If you are using an application load balancer or a network load - * balancer this should be omitted.
+ * task set.A load balancer name is only specified when using a Classic + * Load Balancer. If you are using an Application Load Balancer or a Network Load + * Balancer this should be omitted.
*/ inline void SetLoadBalancerName(Aws::String&& value) { m_loadBalancerNameHasBeenSet = true; m_loadBalancerName = std::move(value); } /** *The name of the load balancer to associate with the Amazon ECS service or - * task set.
A load balancer name is only specified when using a classic - * load balancer. If you are using an application load balancer or a network load - * balancer this should be omitted.
+ * task set.A load balancer name is only specified when using a Classic + * Load Balancer. If you are using an Application Load Balancer or a Network Load + * Balancer this should be omitted.
*/ inline void SetLoadBalancerName(const char* value) { m_loadBalancerNameHasBeenSet = true; m_loadBalancerName.assign(value); } /** *The name of the load balancer to associate with the Amazon ECS service or - * task set.
A load balancer name is only specified when using a classic - * load balancer. If you are using an application load balancer or a network load - * balancer this should be omitted.
+ * task set.A load balancer name is only specified when using a Classic + * Load Balancer. If you are using an Application Load Balancer or a Network Load + * Balancer this should be omitted.
*/ inline LoadBalancer& WithLoadBalancerName(const Aws::String& value) { SetLoadBalancerName(value); return *this;} /** *The name of the load balancer to associate with the Amazon ECS service or - * task set.
A load balancer name is only specified when using a classic - * load balancer. If you are using an application load balancer or a network load - * balancer this should be omitted.
+ * task set.A load balancer name is only specified when using a Classic + * Load Balancer. If you are using an Application Load Balancer or a Network Load + * Balancer this should be omitted.
*/ inline LoadBalancer& WithLoadBalancerName(Aws::String&& value) { SetLoadBalancerName(std::move(value)); return *this;} /** *The name of the load balancer to associate with the Amazon ECS service or - * task set.
A load balancer name is only specified when using a classic - * load balancer. If you are using an application load balancer or a network load - * balancer this should be omitted.
+ * task set.A load balancer name is only specified when using a Classic + * Load Balancer. If you are using an Application Load Balancer or a Network Load + * Balancer this should be omitted.
*/ inline LoadBalancer& WithLoadBalancerName(const char* value) { SetLoadBalancerName(value); return *this;} @@ -316,33 +349,37 @@ namespace Model /** *The port on the container to associate with the load balancer. This port must
- * correspond to a containerPort
in the service's task definition.
- * Your container instances must allow ingress traffic on the hostPort
- * of the port mapping.
containerPort
in the task definition the tasks in
+ * the service are using. For tasks that use the EC2 launch type, the container
+ * instance they are launched on must allow ingress traffic on the
+ * hostPort
of the port mapping.
*/
inline int GetContainerPort() const{ return m_containerPort; }
/**
* The port on the container to associate with the load balancer. This port must
- * correspond to a containerPort
in the service's task definition.
- * Your container instances must allow ingress traffic on the hostPort
- * of the port mapping.
containerPort
in the task definition the tasks in
+ * the service are using. For tasks that use the EC2 launch type, the container
+ * instance they are launched on must allow ingress traffic on the
+ * hostPort
of the port mapping.
*/
inline bool ContainerPortHasBeenSet() const { return m_containerPortHasBeenSet; }
/**
* The port on the container to associate with the load balancer. This port must
- * correspond to a containerPort
in the service's task definition.
- * Your container instances must allow ingress traffic on the hostPort
- * of the port mapping.
containerPort
in the task definition the tasks in
+ * the service are using. For tasks that use the EC2 launch type, the container
+ * instance they are launched on must allow ingress traffic on the
+ * hostPort
of the port mapping.
*/
inline void SetContainerPort(int value) { m_containerPortHasBeenSet = true; m_containerPort = value; }
/**
* The port on the container to associate with the load balancer. This port must
- * correspond to a containerPort
in the service's task definition.
- * Your container instances must allow ingress traffic on the hostPort
- * of the port mapping.
containerPort
in the task definition the tasks in
+ * the service are using. For tasks that use the EC2 launch type, the container
+ * instance they are launched on must allow ingress traffic on the
+ * hostPort
of the port mapping.
*/
inline LoadBalancer& WithContainerPort(int value) { SetContainerPort(value); return *this;}
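The setters above follow the SDK's recurring optional-field pattern, which the hunks below extend for `runtimeId`: each member is paired with an `m_…HasBeenSet` flag, and `Jsonize()`/`SerializePayload()` emits only fields that were explicitly set. A minimal self-contained sketch of that pattern (the `MiniLoadBalancer` class and its JSON output are illustrative, not the real SDK types):

```cpp
#include <sstream>
#include <string>

// Illustrative mini-model of the generated-class pattern: every optional
// field carries a "HasBeenSet" flag, and serialization includes a field
// only when its flag is true, so unset members never reach the wire.
class MiniLoadBalancer
{
public:
    MiniLoadBalancer()
        : m_containerPort(0), m_containerPortHasBeenSet(false),
          m_containerNameHasBeenSet(false) {}

    void SetContainerPort(int value) { m_containerPortHasBeenSet = true; m_containerPort = value; }
    void SetContainerName(const std::string& value) { m_containerNameHasBeenSet = true; m_containerName = value; }

    // Builds a JSON payload containing only the fields that were set,
    // mirroring the conditional WithString/WithInteger calls in Jsonize().
    std::string Jsonize() const
    {
        std::ostringstream payload;
        payload << '{';
        bool first = true;
        if (m_containerNameHasBeenSet)
        {
            payload << "\"containerName\":\"" << m_containerName << '"';
            first = false;
        }
        if (m_containerPortHasBeenSet)
        {
            if (!first) payload << ',';
            payload << "\"containerPort\":" << m_containerPort;
        }
        payload << '}';
        return payload.str();
    }

private:
    int m_containerPort;
    bool m_containerPortHasBeenSet;
    std::string m_containerName;
    bool m_containerNameHasBeenSet;
};
```

A default-constructed object serializes to `{}`; calling a setter flips the flag so the field appears in the payload. This is why the `.cpp` hunks in this diff add both the `m_runtimeIdHasBeenSet(false)` initializer and the `if(m_runtimeIdHasBeenSet)` guard together.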
diff --git a/aws-cpp-sdk-ecs/include/aws/ecs/model/Service.h b/aws-cpp-sdk-ecs/include/aws/ecs/model/Service.h
index ac685027e04..65c37c288c6 100644
--- a/aws-cpp-sdk-ecs/include/aws/ecs/model/Service.h
+++ b/aws-cpp-sdk-ecs/include/aws/ecs/model/Service.h
@@ -245,112 +245,56 @@ namespace Model
/**
* A list of Elastic Load Balancing load balancer objects, containing the load * balancer name, the container name (as it appears in a container definition), and - * the container port to access from the load balancer.
Services with tasks
- * that use the awsvpc
network mode (for example, those with the
- * Fargate launch type) only support Application Load Balancers and Network Load
- * Balancers. Classic Load Balancers are not supported. Also, when you create any
- * target groups for these services, you must choose ip
as the target
- * type, not instance
. Tasks that use the awsvpc
network
- * mode are associated with an elastic network interface, not an Amazon EC2
- * instance.
A list of Elastic Load Balancing load balancer objects, containing the load * balancer name, the container name (as it appears in a container definition), and - * the container port to access from the load balancer.
Services with tasks
- * that use the awsvpc
network mode (for example, those with the
- * Fargate launch type) only support Application Load Balancers and Network Load
- * Balancers. Classic Load Balancers are not supported. Also, when you create any
- * target groups for these services, you must choose ip
as the target
- * type, not instance
. Tasks that use the awsvpc
network
- * mode are associated with an elastic network interface, not an Amazon EC2
- * instance.
A list of Elastic Load Balancing load balancer objects, containing the load * balancer name, the container name (as it appears in a container definition), and - * the container port to access from the load balancer.
Services with tasks
- * that use the awsvpc
network mode (for example, those with the
- * Fargate launch type) only support Application Load Balancers and Network Load
- * Balancers. Classic Load Balancers are not supported. Also, when you create any
- * target groups for these services, you must choose ip
as the target
- * type, not instance
. Tasks that use the awsvpc
network
- * mode are associated with an elastic network interface, not an Amazon EC2
- * instance.
A list of Elastic Load Balancing load balancer objects, containing the load * balancer name, the container name (as it appears in a container definition), and - * the container port to access from the load balancer.
Services with tasks
- * that use the awsvpc
network mode (for example, those with the
- * Fargate launch type) only support Application Load Balancers and Network Load
- * Balancers. Classic Load Balancers are not supported. Also, when you create any
- * target groups for these services, you must choose ip
as the target
- * type, not instance
. Tasks that use the awsvpc
network
- * mode are associated with an elastic network interface, not an Amazon EC2
- * instance.
A list of Elastic Load Balancing load balancer objects, containing the load * balancer name, the container name (as it appears in a container definition), and - * the container port to access from the load balancer.
Services with tasks
- * that use the awsvpc
network mode (for example, those with the
- * Fargate launch type) only support Application Load Balancers and Network Load
- * Balancers. Classic Load Balancers are not supported. Also, when you create any
- * target groups for these services, you must choose ip
as the target
- * type, not instance
. Tasks that use the awsvpc
network
- * mode are associated with an elastic network interface, not an Amazon EC2
- * instance.
A list of Elastic Load Balancing load balancer objects, containing the load * balancer name, the container name (as it appears in a container definition), and - * the container port to access from the load balancer.
Services with tasks
- * that use the awsvpc
network mode (for example, those with the
- * Fargate launch type) only support Application Load Balancers and Network Load
- * Balancers. Classic Load Balancers are not supported. Also, when you create any
- * target groups for these services, you must choose ip
as the target
- * type, not instance
. Tasks that use the awsvpc
network
- * mode are associated with an elastic network interface, not an Amazon EC2
- * instance.
A list of Elastic Load Balancing load balancer objects, containing the load * balancer name, the container name (as it appears in a container definition), and - * the container port to access from the load balancer.
Services with tasks
- * that use the awsvpc
network mode (for example, those with the
- * Fargate launch type) only support Application Load Balancers and Network Load
- * Balancers. Classic Load Balancers are not supported. Also, when you create any
- * target groups for these services, you must choose ip
as the target
- * type, not instance
. Tasks that use the awsvpc
network
- * mode are associated with an elastic network interface, not an Amazon EC2
- * instance.
A list of Elastic Load Balancing load balancer objects, containing the load * balancer name, the container name (as it appears in a container definition), and - * the container port to access from the load balancer.
Services with tasks
- * that use the awsvpc
network mode (for example, those with the
- * Fargate launch type) only support Application Load Balancers and Network Load
- * Balancers. Classic Load Balancers are not supported. Also, when you create any
- * target groups for these services, you must choose ip
as the target
- * type, not instance
. Tasks that use the awsvpc
network
- * mode are associated with an elastic network interface, not an Amazon EC2
- * instance.
The ID of the Docker container.
+ */ + inline const Aws::String& GetRuntimeId() const{ return m_runtimeId; } + + /** + *The ID of the Docker container.
+ */ + inline bool RuntimeIdHasBeenSet() const { return m_runtimeIdHasBeenSet; } + + /** + *The ID of the Docker container.
+ */ + inline void SetRuntimeId(const Aws::String& value) { m_runtimeIdHasBeenSet = true; m_runtimeId = value; } + + /** + *The ID of the Docker container.
+ */ + inline void SetRuntimeId(Aws::String&& value) { m_runtimeIdHasBeenSet = true; m_runtimeId = std::move(value); } + + /** + *The ID of the Docker container.
+ */ + inline void SetRuntimeId(const char* value) { m_runtimeIdHasBeenSet = true; m_runtimeId.assign(value); } + + /** + *The ID of the Docker container.
+ */ + inline SubmitContainerStateChangeRequest& WithRuntimeId(const Aws::String& value) { SetRuntimeId(value); return *this;} + + /** + *The ID of the Docker container.
+ */ + inline SubmitContainerStateChangeRequest& WithRuntimeId(Aws::String&& value) { SetRuntimeId(std::move(value)); return *this;} + + /** + *The ID of the Docker container.
+ */ + inline SubmitContainerStateChangeRequest& WithRuntimeId(const char* value) { SetRuntimeId(value); return *this;} + + /** *The status of the state change request.
*/ @@ -331,6 +372,9 @@ namespace Model Aws::String m_containerName; bool m_containerNameHasBeenSet; + Aws::String m_runtimeId; + bool m_runtimeIdHasBeenSet; + Aws::String m_status; bool m_statusHasBeenSet; diff --git a/aws-cpp-sdk-ecs/source/model/Container.cpp b/aws-cpp-sdk-ecs/source/model/Container.cpp index 3c09af419fe..7bd16d948de 100644 --- a/aws-cpp-sdk-ecs/source/model/Container.cpp +++ b/aws-cpp-sdk-ecs/source/model/Container.cpp @@ -32,6 +32,7 @@ Container::Container() : m_containerArnHasBeenSet(false), m_taskArnHasBeenSet(false), m_nameHasBeenSet(false), + m_runtimeIdHasBeenSet(false), m_lastStatusHasBeenSet(false), m_exitCode(0), m_exitCodeHasBeenSet(false), @@ -51,6 +52,7 @@ Container::Container(JsonView jsonValue) : m_containerArnHasBeenSet(false), m_taskArnHasBeenSet(false), m_nameHasBeenSet(false), + m_runtimeIdHasBeenSet(false), m_lastStatusHasBeenSet(false), m_exitCode(0), m_exitCodeHasBeenSet(false), @@ -90,6 +92,13 @@ Container& Container::operator =(JsonView jsonValue) m_nameHasBeenSet = true; } + if(jsonValue.ValueExists("runtimeId")) + { + m_runtimeId = jsonValue.GetString("runtimeId"); + + m_runtimeIdHasBeenSet = true; + } + if(jsonValue.ValueExists("lastStatus")) { m_lastStatus = jsonValue.GetString("lastStatus"); @@ -194,6 +203,12 @@ JsonValue Container::Jsonize() const } + if(m_runtimeIdHasBeenSet) + { + payload.WithString("runtimeId", m_runtimeId); + + } + if(m_lastStatusHasBeenSet) { payload.WithString("lastStatus", m_lastStatus); diff --git a/aws-cpp-sdk-ecs/source/model/ContainerStateChange.cpp b/aws-cpp-sdk-ecs/source/model/ContainerStateChange.cpp index 9789a244b29..caee0be523b 100644 --- a/aws-cpp-sdk-ecs/source/model/ContainerStateChange.cpp +++ b/aws-cpp-sdk-ecs/source/model/ContainerStateChange.cpp @@ -30,6 +30,7 @@ namespace Model ContainerStateChange::ContainerStateChange() : m_containerNameHasBeenSet(false), + m_runtimeIdHasBeenSet(false), m_exitCode(0), m_exitCodeHasBeenSet(false), m_networkBindingsHasBeenSet(false), @@ 
-40,6 +41,7 @@ ContainerStateChange::ContainerStateChange() : ContainerStateChange::ContainerStateChange(JsonView jsonValue) : m_containerNameHasBeenSet(false), + m_runtimeIdHasBeenSet(false), m_exitCode(0), m_exitCodeHasBeenSet(false), m_networkBindingsHasBeenSet(false), @@ -58,6 +60,13 @@ ContainerStateChange& ContainerStateChange::operator =(JsonView jsonValue) m_containerNameHasBeenSet = true; } + if(jsonValue.ValueExists("runtimeId")) + { + m_runtimeId = jsonValue.GetString("runtimeId"); + + m_runtimeIdHasBeenSet = true; + } + if(jsonValue.ValueExists("exitCode")) { m_exitCode = jsonValue.GetInteger("exitCode"); @@ -102,6 +111,12 @@ JsonValue ContainerStateChange::Jsonize() const } + if(m_runtimeIdHasBeenSet) + { + payload.WithString("runtimeId", m_runtimeId); + + } + if(m_exitCodeHasBeenSet) { payload.WithInteger("exitCode", m_exitCode); diff --git a/aws-cpp-sdk-ecs/source/model/SubmitContainerStateChangeRequest.cpp b/aws-cpp-sdk-ecs/source/model/SubmitContainerStateChangeRequest.cpp index 1d0294adaba..1d59aa39313 100644 --- a/aws-cpp-sdk-ecs/source/model/SubmitContainerStateChangeRequest.cpp +++ b/aws-cpp-sdk-ecs/source/model/SubmitContainerStateChangeRequest.cpp @@ -26,6 +26,7 @@ SubmitContainerStateChangeRequest::SubmitContainerStateChangeRequest() : m_clusterHasBeenSet(false), m_taskHasBeenSet(false), m_containerNameHasBeenSet(false), + m_runtimeIdHasBeenSet(false), m_statusHasBeenSet(false), m_exitCode(0), m_exitCodeHasBeenSet(false), @@ -56,6 +57,12 @@ Aws::String SubmitContainerStateChangeRequest::SerializePayload() const } + if(m_runtimeIdHasBeenSet) + { + payload.WithString("runtimeId", m_runtimeId); + + } + if(m_statusHasBeenSet) { payload.WithString("status", m_status); diff --git a/aws-cpp-sdk-elasticache/include/aws/elasticache/ElastiCacheClient.h b/aws-cpp-sdk-elasticache/include/aws/elasticache/ElastiCacheClient.h index 92d4b3d3dae..d742dd9078d 100644 --- a/aws-cpp-sdk-elasticache/include/aws/elasticache/ElastiCacheClient.h +++ 
b/aws-cpp-sdk-elasticache/include/aws/elasticache/ElastiCacheClient.h @@ -1718,9 +1718,9 @@ namespace Model /** *Lists all available node types that you can scale your Redis cluster's or - * replication group's current node type up to.
When you use the + * replication group's current node type to.</p> <p>
When you use the
* ModifyCacheCluster
or ModifyReplicationGroup
- * operations to scale up your cluster or replication group, the value of the
+ * operations to scale your cluster or replication group, the value of the
* CacheNodeType
parameter must be one of the node types returned by
* this operation.
Lists all available node types that you can scale your Redis cluster's or - * replication group's current node type up to.
When you use the + * replication group's current node type to.</p> <p>
When you use the
* ModifyCacheCluster
or ModifyReplicationGroup
- * operations to scale up your cluster or replication group, the value of the
+ * operations to scale your cluster or replication group, the value of the
* CacheNodeType
parameter must be one of the node types returned by
* this operation.
Lists all available node types that you can scale your Redis cluster's or - * replication group's current node type up to.
When you use the + * replication group's current node type to.</p> <p>
When you use the
* ModifyCacheCluster
or ModifyReplicationGroup
- * operations to scale up your cluster or replication group, the value of the
+ * operations to scale your cluster or replication group, the value of the
* CacheNodeType
parameter must be one of the node types returned by
* this operation.
The ID of the KMS key used to encrypt the target snapshot.
+ */ + inline const Aws::String& GetKmsKeyId() const{ return m_kmsKeyId; } + + /** + *The ID of the KMS key used to encrypt the target snapshot.
+ */ + inline bool KmsKeyIdHasBeenSet() const { return m_kmsKeyIdHasBeenSet; } + + /** + *The ID of the KMS key used to encrypt the target snapshot.
+ */ + inline void SetKmsKeyId(const Aws::String& value) { m_kmsKeyIdHasBeenSet = true; m_kmsKeyId = value; } + + /** + *The ID of the KMS key used to encrypt the target snapshot.
+ */ + inline void SetKmsKeyId(Aws::String&& value) { m_kmsKeyIdHasBeenSet = true; m_kmsKeyId = std::move(value); } + + /** + *The ID of the KMS key used to encrypt the target snapshot.
+ */ + inline void SetKmsKeyId(const char* value) { m_kmsKeyIdHasBeenSet = true; m_kmsKeyId.assign(value); } + + /** + *The ID of the KMS key used to encrypt the target snapshot.
+ */ + inline CopySnapshotRequest& WithKmsKeyId(const Aws::String& value) { SetKmsKeyId(value); return *this;} + + /** + *The ID of the KMS key used to encrypt the target snapshot.
+ */ + inline CopySnapshotRequest& WithKmsKeyId(Aws::String&& value) { SetKmsKeyId(std::move(value)); return *this;} + + /** + *The ID of the KMS key used to encrypt the target snapshot.
+ */ + inline CopySnapshotRequest& WithKmsKeyId(const char* value) { SetKmsKeyId(value); return *this;} + private: Aws::String m_sourceSnapshotName; @@ -262,6 +303,9 @@ namespace Model Aws::String m_targetBucket; bool m_targetBucketHasBeenSet; + + Aws::String m_kmsKeyId; + bool m_kmsKeyIdHasBeenSet; }; } // namespace Model diff --git a/aws-cpp-sdk-elasticache/include/aws/elasticache/model/CreateCacheClusterRequest.h b/aws-cpp-sdk-elasticache/include/aws/elasticache/model/CreateCacheClusterRequest.h index 3bed3745831..5266c86d066 100644 --- a/aws-cpp-sdk-elasticache/include/aws/elasticache/model/CreateCacheClusterRequest.h +++ b/aws-cpp-sdk-elasticache/include/aws/elasticache/model/CreateCacheClusterRequest.h @@ -56,7 +56,7 @@ namespace Model /** *The node group (shard) identifier. This parameter is stored as a lowercase * string.
Constraints:
A name must contain from 1 - * to 20 alphanumeric characters or hyphens.
The first character + * to 50 alphanumeric characters or hyphens.
The first character * must be a letter.
A name cannot end with a hyphen or contain * two consecutive hyphens.
The node group (shard) identifier. This parameter is stored as a lowercase * string.
Constraints:
A name must contain from 1 - * to 20 alphanumeric characters or hyphens.
The first character + * to 50 alphanumeric characters or hyphens.
The first character * must be a letter.
A name cannot end with a hyphen or contain * two consecutive hyphens.
The node group (shard) identifier. This parameter is stored as a lowercase * string.
Constraints:
A name must contain from 1 - * to 20 alphanumeric characters or hyphens.
The first character + * to 50 alphanumeric characters or hyphens.
The first character * must be a letter.
A name cannot end with a hyphen or contain * two consecutive hyphens.
The node group (shard) identifier. This parameter is stored as a lowercase * string.
Constraints:
A name must contain from 1 - * to 20 alphanumeric characters or hyphens.
The first character + * to 50 alphanumeric characters or hyphens.
The first character * must be a letter.
A name cannot end with a hyphen or contain * two consecutive hyphens.
The node group (shard) identifier. This parameter is stored as a lowercase * string.
Constraints:
A name must contain from 1 - * to 20 alphanumeric characters or hyphens.
The first character + * to 50 alphanumeric characters or hyphens.
The first character * must be a letter.
A name cannot end with a hyphen or contain * two consecutive hyphens.
The node group (shard) identifier. This parameter is stored as a lowercase * string.
Constraints:
A name must contain from 1 - * to 20 alphanumeric characters or hyphens.
The first character + * to 50 alphanumeric characters or hyphens.
The first character * must be a letter.
A name cannot end with a hyphen or contain * two consecutive hyphens.
The node group (shard) identifier. This parameter is stored as a lowercase * string.
Constraints:
A name must contain from 1 - * to 20 alphanumeric characters or hyphens.
The first character + * to 50 alphanumeric characters or hyphens.
The first character * must be a letter.
A name cannot end with a hyphen or contain * two consecutive hyphens.
The node group (shard) identifier. This parameter is stored as a lowercase * string.
Constraints:
A name must contain from 1 - * to 20 alphanumeric characters or hyphens.
The first character + * to 50 alphanumeric characters or hyphens.
The first character * must be a letter.
A name cannot end with a hyphen or contain * two consecutive hyphens.
The replication group identifier. This parameter is stored as a lowercase - * string.
Constraints:
A name must contain from 1 to 20 + * string.
Constraints:
A name must contain from 1 to 40 * alphanumeric characters or hyphens.
The first character must * be a letter.
A name cannot end with a hyphen or contain two * consecutive hyphens.
The replication group identifier. This parameter is stored as a lowercase - * string.
Constraints:
A name must contain from 1 to 20 + * string.
Constraints:
A name must contain from 1 to 40 * alphanumeric characters or hyphens.
The first character must * be a letter.
A name cannot end with a hyphen or contain two * consecutive hyphens.
The replication group identifier. This parameter is stored as a lowercase - * string.
Constraints:
A name must contain from 1 to 20 + * string.
Constraints:
A name must contain from 1 to 40 * alphanumeric characters or hyphens.
The first character must * be a letter.
A name cannot end with a hyphen or contain two * consecutive hyphens.
The replication group identifier. This parameter is stored as a lowercase - * string.
Constraints:
A name must contain from 1 to 20 + * string.
Constraints:
A name must contain from 1 to 40 * alphanumeric characters or hyphens.
The first character must * be a letter.
A name cannot end with a hyphen or contain two * consecutive hyphens.
The replication group identifier. This parameter is stored as a lowercase - * string.
Constraints:
A name must contain from 1 to 20 + * string.
Constraints:
A name must contain from 1 to 40 * alphanumeric characters or hyphens.
The first character must * be a letter.
A name cannot end with a hyphen or contain two * consecutive hyphens.
The replication group identifier. This parameter is stored as a lowercase - * string.
Constraints:
A name must contain from 1 to 20 + * string.
Constraints:
A name must contain from 1 to 40 * alphanumeric characters or hyphens.
The first character must * be a letter.
A name cannot end with a hyphen or contain two * consecutive hyphens.
The replication group identifier. This parameter is stored as a lowercase - * string.
Constraints:
A name must contain from 1 to 20 + * string.
Constraints:
A name must contain from 1 to 40 * alphanumeric characters or hyphens.
The first character must * be a letter.
A name cannot end with a hyphen or contain two * consecutive hyphens.
The replication group identifier. This parameter is stored as a lowercase - * string.
Constraints:
A name must contain from 1 to 20 + * string.
Constraints:
A name must contain from 1 to 40 * alphanumeric characters or hyphens.
The first character must * be a letter.
A name cannot end with a hyphen or contain two * consecutive hyphens.
TransitEncryptionEnabled
to true
when you create a
* cluster. This parameter is valid only if the Engine
* parameter is redis
, the EngineVersion
parameter is
- * 3.2.6
or 4.x
, and the cluster is being created in an
- * Amazon VPC.
If you enable in-transit encryption, you must also specify a
- * value for CacheSubnetGroup
.
Required: Only available
- * when creating a replication group in an Amazon VPC using redis version
+ * 3.2.6
, 4.x
or later, and the cluster is being created
+ * in an Amazon VPC.
If you enable in-transit encryption, you must also
+ * specify a value for CacheSubnetGroup
.
Required: Only
+ * available when creating a replication group in an Amazon VPC using redis version
* 3.2.6
, 4.x
or later.
Default:
* false
For HIPAA compliance, you must specify
* TransitEncryptionEnabled
as true
, an
@@ -2211,10 +2211,10 @@ namespace Model
* set TransitEncryptionEnabled
to true
when you create a
* cluster.
This parameter is valid only if the Engine
* parameter is redis
, the EngineVersion
parameter is
- * 3.2.6
or 4.x
, and the cluster is being created in an
- * Amazon VPC.
If you enable in-transit encryption, you must also specify a
- * value for CacheSubnetGroup
.
Required: Only available
- * when creating a replication group in an Amazon VPC using redis version
+ * 3.2.6
, 4.x
or later, and the cluster is being created
+ * in an Amazon VPC.
If you enable in-transit encryption, you must also
+ * specify a value for CacheSubnetGroup
.
Required: Only
+ * available when creating a replication group in an Amazon VPC using redis version
* 3.2.6
, 4.x
or later.
Default:
* false
For HIPAA compliance, you must specify
* TransitEncryptionEnabled
as true
, an
@@ -2229,10 +2229,10 @@ namespace Model
* set TransitEncryptionEnabled
to true
when you create a
* cluster.
This parameter is valid only if the Engine
* parameter is redis
, the EngineVersion
parameter is
- * 3.2.6
or 4.x
, and the cluster is being created in an
- * Amazon VPC.
If you enable in-transit encryption, you must also specify a
- * value for CacheSubnetGroup
.
Required: Only available
- * when creating a replication group in an Amazon VPC using redis version
+ * 3.2.6
, 4.x
or later, and the cluster is being created
+ * in an Amazon VPC.
If you enable in-transit encryption, you must also
+ * specify a value for CacheSubnetGroup
.
Required: Only
+ * available when creating a replication group in an Amazon VPC using redis version
* 3.2.6
, 4.x
or later.
Default:
* false
For HIPAA compliance, you must specify
* TransitEncryptionEnabled
as true
, an
@@ -2247,10 +2247,10 @@ namespace Model
* set TransitEncryptionEnabled
to true
when you create a
* cluster.
This parameter is valid only if the Engine
* parameter is redis
, the EngineVersion
parameter is
- * 3.2.6
or 4.x
, and the cluster is being created in an
- * Amazon VPC.
If you enable in-transit encryption, you must also specify a
- * value for CacheSubnetGroup
.
Required: Only available
- * when creating a replication group in an Amazon VPC using redis version
+ * 3.2.6
, 4.x
or later, and the cluster is being created
+ * in an Amazon VPC.
If you enable in-transit encryption, you must also
+ * specify a value for CacheSubnetGroup
.
Required: Only
+ * available when creating a replication group in an Amazon VPC using redis version
* 3.2.6
, 4.x
or later.
Default:
* false
For HIPAA compliance, you must specify
* TransitEncryptionEnabled
as true
, an
@@ -2307,6 +2307,47 @@ namespace Model
*/
inline CreateReplicationGroupRequest& WithAtRestEncryptionEnabled(bool value) { SetAtRestEncryptionEnabled(value); return *this;}
+
+ /**
+ *
The ID of the KMS key used to encrypt the disk on the cluster.
+ */ + inline const Aws::String& GetKmsKeyId() const{ return m_kmsKeyId; } + + /** + *The ID of the KMS key used to encrypt the disk on the cluster.
+ */ + inline bool KmsKeyIdHasBeenSet() const { return m_kmsKeyIdHasBeenSet; } + + /** + *The ID of the KMS key used to encrypt the disk on the cluster.
+ */ + inline void SetKmsKeyId(const Aws::String& value) { m_kmsKeyIdHasBeenSet = true; m_kmsKeyId = value; } + + /** + *The ID of the KMS key used to encrypt the disk on the cluster.
+ */ + inline void SetKmsKeyId(Aws::String&& value) { m_kmsKeyIdHasBeenSet = true; m_kmsKeyId = std::move(value); } + + /** + *The ID of the KMS key used to encrypt the disk on the cluster.
+ */ + inline void SetKmsKeyId(const char* value) { m_kmsKeyIdHasBeenSet = true; m_kmsKeyId.assign(value); } + + /** + *The ID of the KMS key used to encrypt the disk on the cluster.
+ */ + inline CreateReplicationGroupRequest& WithKmsKeyId(const Aws::String& value) { SetKmsKeyId(value); return *this;} + + /** + *The ID of the KMS key used to encrypt the disk on the cluster.
+ */ + inline CreateReplicationGroupRequest& WithKmsKeyId(Aws::String&& value) { SetKmsKeyId(std::move(value)); return *this;} + + /** + *The ID of the KMS key used to encrypt the disk on the cluster.
+ */ + inline CreateReplicationGroupRequest& WithKmsKeyId(const char* value) { SetKmsKeyId(value); return *this;} + private: Aws::String m_replicationGroupId; @@ -2392,6 +2433,9 @@ namespace Model bool m_atRestEncryptionEnabled; bool m_atRestEncryptionEnabledHasBeenSet; + + Aws::String m_kmsKeyId; + bool m_kmsKeyIdHasBeenSet; }; } // namespace Model diff --git a/aws-cpp-sdk-elasticache/include/aws/elasticache/model/CreateSnapshotRequest.h b/aws-cpp-sdk-elasticache/include/aws/elasticache/model/CreateSnapshotRequest.h index e0d8144a8b2..8daf4180da0 100644 --- a/aws-cpp-sdk-elasticache/include/aws/elasticache/model/CreateSnapshotRequest.h +++ b/aws-cpp-sdk-elasticache/include/aws/elasticache/model/CreateSnapshotRequest.h @@ -188,6 +188,47 @@ namespace Model */ inline CreateSnapshotRequest& WithSnapshotName(const char* value) { SetSnapshotName(value); return *this;} + + /** + *The ID of the KMS key used to encrypt the snapshot.
+ */ + inline const Aws::String& GetKmsKeyId() const{ return m_kmsKeyId; } + + /** + *The ID of the KMS key used to encrypt the snapshot.
+ */ + inline bool KmsKeyIdHasBeenSet() const { return m_kmsKeyIdHasBeenSet; } + + /** + *The ID of the KMS key used to encrypt the snapshot.
+ */ + inline void SetKmsKeyId(const Aws::String& value) { m_kmsKeyIdHasBeenSet = true; m_kmsKeyId = value; } + + /** + *The ID of the KMS key used to encrypt the snapshot.
+ */ + inline void SetKmsKeyId(Aws::String&& value) { m_kmsKeyIdHasBeenSet = true; m_kmsKeyId = std::move(value); } + + /** + *The ID of the KMS key used to encrypt the snapshot.
+ */ + inline void SetKmsKeyId(const char* value) { m_kmsKeyIdHasBeenSet = true; m_kmsKeyId.assign(value); } + + /** + *The ID of the KMS key used to encrypt the snapshot.
+ */ + inline CreateSnapshotRequest& WithKmsKeyId(const Aws::String& value) { SetKmsKeyId(value); return *this;} + + /** + *The ID of the KMS key used to encrypt the snapshot.
+ */ + inline CreateSnapshotRequest& WithKmsKeyId(Aws::String&& value) { SetKmsKeyId(std::move(value)); return *this;} + + /** + *The ID of the KMS key used to encrypt the snapshot.
+ */ + inline CreateSnapshotRequest& WithKmsKeyId(const char* value) { SetKmsKeyId(value); return *this;} + private: Aws::String m_replicationGroupId; @@ -198,6 +239,9 @@ namespace Model Aws::String m_snapshotName; bool m_snapshotNameHasBeenSet; + + Aws::String m_kmsKeyId; + bool m_kmsKeyIdHasBeenSet; }; } // namespace Model diff --git a/aws-cpp-sdk-elasticache/include/aws/elasticache/model/ReplicationGroup.h b/aws-cpp-sdk-elasticache/include/aws/elasticache/model/ReplicationGroup.h index 5f1e01f1935..b38e9565d41 100644 --- a/aws-cpp-sdk-elasticache/include/aws/elasticache/model/ReplicationGroup.h +++ b/aws-cpp-sdk-elasticache/include/aws/elasticache/model/ReplicationGroup.h @@ -798,6 +798,47 @@ namespace Model */ inline ReplicationGroup& WithAtRestEncryptionEnabled(bool value) { SetAtRestEncryptionEnabled(value); return *this;} + + /** + *The ID of the KMS key used to encrypt the disk in the cluster.
+ */ + inline const Aws::String& GetKmsKeyId() const{ return m_kmsKeyId; } + + /** + *The ID of the KMS key used to encrypt the disk in the cluster.
+ */ + inline bool KmsKeyIdHasBeenSet() const { return m_kmsKeyIdHasBeenSet; } + + /** + *The ID of the KMS key used to encrypt the disk in the cluster.
+ */ + inline void SetKmsKeyId(const Aws::String& value) { m_kmsKeyIdHasBeenSet = true; m_kmsKeyId = value; } + + /** + *The ID of the KMS key used to encrypt the disk in the cluster.
+ */ + inline void SetKmsKeyId(Aws::String&& value) { m_kmsKeyIdHasBeenSet = true; m_kmsKeyId = std::move(value); } + + /** + *The ID of the KMS key used to encrypt the disk in the cluster.
+ */ + inline void SetKmsKeyId(const char* value) { m_kmsKeyIdHasBeenSet = true; m_kmsKeyId.assign(value); } + + /** + *The ID of the KMS key used to encrypt the disk in the cluster.
+ */ + inline ReplicationGroup& WithKmsKeyId(const Aws::String& value) { SetKmsKeyId(value); return *this;} + + /** + *The ID of the KMS key used to encrypt the disk in the cluster.
+ */ + inline ReplicationGroup& WithKmsKeyId(Aws::String&& value) { SetKmsKeyId(std::move(value)); return *this;} + + /** + *The ID of the KMS key used to encrypt the disk in the cluster.
+ */ + inline ReplicationGroup& WithKmsKeyId(const char* value) { SetKmsKeyId(value); return *this;} + private: Aws::String m_replicationGroupId; @@ -847,6 +888,9 @@ namespace Model bool m_atRestEncryptionEnabled; bool m_atRestEncryptionEnabledHasBeenSet; + + Aws::String m_kmsKeyId; + bool m_kmsKeyIdHasBeenSet; }; } // namespace Model diff --git a/aws-cpp-sdk-elasticache/include/aws/elasticache/model/Snapshot.h b/aws-cpp-sdk-elasticache/include/aws/elasticache/model/Snapshot.h index c9872b2b129..9b528c7ce5b 100644 --- a/aws-cpp-sdk-elasticache/include/aws/elasticache/model/Snapshot.h +++ b/aws-cpp-sdk-elasticache/include/aws/elasticache/model/Snapshot.h @@ -1437,6 +1437,47 @@ namespace Model */ inline Snapshot& AddNodeSnapshots(NodeSnapshot&& value) { m_nodeSnapshotsHasBeenSet = true; m_nodeSnapshots.push_back(std::move(value)); return *this; } + + /** + *The ID of the KMS key used to encrypt the snapshot.
+ */ + inline const Aws::String& GetKmsKeyId() const{ return m_kmsKeyId; } + + /** + *The ID of the KMS key used to encrypt the snapshot.
+ */ + inline bool KmsKeyIdHasBeenSet() const { return m_kmsKeyIdHasBeenSet; } + + /** + *The ID of the KMS key used to encrypt the snapshot.
+ */ + inline void SetKmsKeyId(const Aws::String& value) { m_kmsKeyIdHasBeenSet = true; m_kmsKeyId = value; } + + /** + *The ID of the KMS key used to encrypt the snapshot.
+ */ + inline void SetKmsKeyId(Aws::String&& value) { m_kmsKeyIdHasBeenSet = true; m_kmsKeyId = std::move(value); } + + /** + *The ID of the KMS key used to encrypt the snapshot.
+ */ + inline void SetKmsKeyId(const char* value) { m_kmsKeyIdHasBeenSet = true; m_kmsKeyId.assign(value); } + + /** + *The ID of the KMS key used to encrypt the snapshot.
+ */ + inline Snapshot& WithKmsKeyId(const Aws::String& value) { SetKmsKeyId(value); return *this;} + + /** + *The ID of the KMS key used to encrypt the snapshot.
+ */ + inline Snapshot& WithKmsKeyId(Aws::String&& value) { SetKmsKeyId(std::move(value)); return *this;} + + /** + *The ID of the KMS key used to encrypt the snapshot.
+ */ + inline Snapshot& WithKmsKeyId(const char* value) { SetKmsKeyId(value); return *this;} + private: Aws::String m_snapshotName; @@ -1510,6 +1551,9 @@ namespace Model Aws::VectorThe position in a stream from which to start reading. Required for Amazon
* Kinesis and Amazon DynamoDB Streams sources. AT_TIMESTAMP
is only
@@ -377,6 +390,9 @@ namespace Model
int m_batchSize;
bool m_batchSizeHasBeenSet;
+ int m_maximumBatchingWindowInSeconds;
+ bool m_maximumBatchingWindowInSecondsHasBeenSet;
+
EventSourcePosition m_startingPosition;
bool m_startingPositionHasBeenSet;
diff --git a/aws-cpp-sdk-lambda/include/aws/lambda/model/CreateEventSourceMappingResult.h b/aws-cpp-sdk-lambda/include/aws/lambda/model/CreateEventSourceMappingResult.h
index 4928f440a74..9eb2b1b8ec8 100644
--- a/aws-cpp-sdk-lambda/include/aws/lambda/model/CreateEventSourceMappingResult.h
+++ b/aws-cpp-sdk-lambda/include/aws/lambda/model/CreateEventSourceMappingResult.h
@@ -101,6 +101,16 @@ namespace Model
inline CreateEventSourceMappingResult& WithBatchSize(int value) { SetBatchSize(value); return *this;}
+
+ inline int GetMaximumBatchingWindowInSeconds() const{ return m_maximumBatchingWindowInSeconds; }
+
+
+ inline void SetMaximumBatchingWindowInSeconds(int value) { m_maximumBatchingWindowInSeconds = value; }
+
+
+ inline CreateEventSourceMappingResult& WithMaximumBatchingWindowInSeconds(int value) { SetMaximumBatchingWindowInSeconds(value); return *this;}
+
+
/**
*
The Amazon Resource Name (ARN) of the event source.
*/ @@ -340,6 +350,8 @@ namespace Model int m_batchSize; + int m_maximumBatchingWindowInSeconds; + Aws::String m_eventSourceArn; Aws::String m_functionArn; diff --git a/aws-cpp-sdk-lambda/include/aws/lambda/model/DeleteEventSourceMappingResult.h b/aws-cpp-sdk-lambda/include/aws/lambda/model/DeleteEventSourceMappingResult.h index add6a9c31ca..bb2b504c162 100644 --- a/aws-cpp-sdk-lambda/include/aws/lambda/model/DeleteEventSourceMappingResult.h +++ b/aws-cpp-sdk-lambda/include/aws/lambda/model/DeleteEventSourceMappingResult.h @@ -101,6 +101,16 @@ namespace Model inline DeleteEventSourceMappingResult& WithBatchSize(int value) { SetBatchSize(value); return *this;} + + inline int GetMaximumBatchingWindowInSeconds() const{ return m_maximumBatchingWindowInSeconds; } + + + inline void SetMaximumBatchingWindowInSeconds(int value) { m_maximumBatchingWindowInSeconds = value; } + + + inline DeleteEventSourceMappingResult& WithMaximumBatchingWindowInSeconds(int value) { SetMaximumBatchingWindowInSeconds(value); return *this;} + + /** *The Amazon Resource Name (ARN) of the event source.
*/ @@ -340,6 +350,8 @@ namespace Model int m_batchSize; + int m_maximumBatchingWindowInSeconds; + Aws::String m_eventSourceArn; Aws::String m_functionArn; diff --git a/aws-cpp-sdk-lambda/include/aws/lambda/model/EventSourceMappingConfiguration.h b/aws-cpp-sdk-lambda/include/aws/lambda/model/EventSourceMappingConfiguration.h index 80eb8da40c7..0b616e4e3d0 100644 --- a/aws-cpp-sdk-lambda/include/aws/lambda/model/EventSourceMappingConfiguration.h +++ b/aws-cpp-sdk-lambda/include/aws/lambda/model/EventSourceMappingConfiguration.h @@ -111,6 +111,19 @@ namespace Model inline EventSourceMappingConfiguration& WithBatchSize(int value) { SetBatchSize(value); return *this;} + + inline int GetMaximumBatchingWindowInSeconds() const{ return m_maximumBatchingWindowInSeconds; } + + + inline bool MaximumBatchingWindowInSecondsHasBeenSet() const { return m_maximumBatchingWindowInSecondsHasBeenSet; } + + + inline void SetMaximumBatchingWindowInSeconds(int value) { m_maximumBatchingWindowInSecondsHasBeenSet = true; m_maximumBatchingWindowInSeconds = value; } + + + inline EventSourceMappingConfiguration& WithMaximumBatchingWindowInSeconds(int value) { SetMaximumBatchingWindowInSeconds(value); return *this;} + + /** *The Amazon Resource Name (ARN) of the event source.
*/ @@ -386,6 +399,9 @@ namespace Model int m_batchSize; bool m_batchSizeHasBeenSet; + int m_maximumBatchingWindowInSeconds; + bool m_maximumBatchingWindowInSecondsHasBeenSet; + Aws::String m_eventSourceArn; bool m_eventSourceArnHasBeenSet; diff --git a/aws-cpp-sdk-lambda/include/aws/lambda/model/GetEventSourceMappingResult.h b/aws-cpp-sdk-lambda/include/aws/lambda/model/GetEventSourceMappingResult.h index bd2279a316f..58dadaae9fc 100644 --- a/aws-cpp-sdk-lambda/include/aws/lambda/model/GetEventSourceMappingResult.h +++ b/aws-cpp-sdk-lambda/include/aws/lambda/model/GetEventSourceMappingResult.h @@ -101,6 +101,16 @@ namespace Model inline GetEventSourceMappingResult& WithBatchSize(int value) { SetBatchSize(value); return *this;} + + inline int GetMaximumBatchingWindowInSeconds() const{ return m_maximumBatchingWindowInSeconds; } + + + inline void SetMaximumBatchingWindowInSeconds(int value) { m_maximumBatchingWindowInSeconds = value; } + + + inline GetEventSourceMappingResult& WithMaximumBatchingWindowInSeconds(int value) { SetMaximumBatchingWindowInSeconds(value); return *this;} + + /** *The Amazon Resource Name (ARN) of the event source.
*/ @@ -340,6 +350,8 @@ namespace Model int m_batchSize; + int m_maximumBatchingWindowInSeconds; + Aws::String m_eventSourceArn; Aws::String m_functionArn; diff --git a/aws-cpp-sdk-lambda/include/aws/lambda/model/UpdateEventSourceMappingRequest.h b/aws-cpp-sdk-lambda/include/aws/lambda/model/UpdateEventSourceMappingRequest.h index 76391bd04bd..60ef5de095c 100644 --- a/aws-cpp-sdk-lambda/include/aws/lambda/model/UpdateEventSourceMappingRequest.h +++ b/aws-cpp-sdk-lambda/include/aws/lambda/model/UpdateEventSourceMappingRequest.h @@ -249,6 +249,19 @@ namespace Model */ inline UpdateEventSourceMappingRequest& WithBatchSize(int value) { SetBatchSize(value); return *this;} + + + inline int GetMaximumBatchingWindowInSeconds() const{ return m_maximumBatchingWindowInSeconds; } + + + inline bool MaximumBatchingWindowInSecondsHasBeenSet() const { return m_maximumBatchingWindowInSecondsHasBeenSet; } + + + inline void SetMaximumBatchingWindowInSeconds(int value) { m_maximumBatchingWindowInSecondsHasBeenSet = true; m_maximumBatchingWindowInSeconds = value; } + + + inline UpdateEventSourceMappingRequest& WithMaximumBatchingWindowInSeconds(int value) { SetMaximumBatchingWindowInSeconds(value); return *this;} + private: Aws::String m_uUID; @@ -262,6 +275,9 @@ namespace Model int m_batchSize; bool m_batchSizeHasBeenSet; + + int m_maximumBatchingWindowInSeconds; + bool m_maximumBatchingWindowInSecondsHasBeenSet; }; } // namespace Model diff --git a/aws-cpp-sdk-lambda/include/aws/lambda/model/UpdateEventSourceMappingResult.h b/aws-cpp-sdk-lambda/include/aws/lambda/model/UpdateEventSourceMappingResult.h index ac688ab83eb..8482ebd30b0 100644 --- a/aws-cpp-sdk-lambda/include/aws/lambda/model/UpdateEventSourceMappingResult.h +++ b/aws-cpp-sdk-lambda/include/aws/lambda/model/UpdateEventSourceMappingResult.h @@ -101,6 +101,16 @@ namespace Model inline UpdateEventSourceMappingResult& WithBatchSize(int value) { SetBatchSize(value); return *this;} + + inline int 
GetMaximumBatchingWindowInSeconds() const{ return m_maximumBatchingWindowInSeconds; } + + + inline void SetMaximumBatchingWindowInSeconds(int value) { m_maximumBatchingWindowInSeconds = value; } + + + inline UpdateEventSourceMappingResult& WithMaximumBatchingWindowInSeconds(int value) { SetMaximumBatchingWindowInSeconds(value); return *this;} + + /** *The Amazon Resource Name (ARN) of the event source.
*/ @@ -340,6 +350,8 @@ namespace Model int m_batchSize; + int m_maximumBatchingWindowInSeconds; + Aws::String m_eventSourceArn; Aws::String m_functionArn; diff --git a/aws-cpp-sdk-lambda/source/model/CreateEventSourceMappingRequest.cpp b/aws-cpp-sdk-lambda/source/model/CreateEventSourceMappingRequest.cpp index baa0193b871..be5fb36d8a8 100644 --- a/aws-cpp-sdk-lambda/source/model/CreateEventSourceMappingRequest.cpp +++ b/aws-cpp-sdk-lambda/source/model/CreateEventSourceMappingRequest.cpp @@ -29,6 +29,8 @@ CreateEventSourceMappingRequest::CreateEventSourceMappingRequest() : m_enabledHasBeenSet(false), m_batchSize(0), m_batchSizeHasBeenSet(false), + m_maximumBatchingWindowInSeconds(0), + m_maximumBatchingWindowInSecondsHasBeenSet(false), m_startingPosition(EventSourcePosition::NOT_SET), m_startingPositionHasBeenSet(false), m_startingPositionTimestampHasBeenSet(false) @@ -63,6 +65,12 @@ Aws::String CreateEventSourceMappingRequest::SerializePayload() const } + if(m_maximumBatchingWindowInSecondsHasBeenSet) + { + payload.WithInteger("MaximumBatchingWindowInSeconds", m_maximumBatchingWindowInSeconds); + + } + if(m_startingPositionHasBeenSet) { payload.WithString("StartingPosition", EventSourcePositionMapper::GetNameForEventSourcePosition(m_startingPosition)); diff --git a/aws-cpp-sdk-lambda/source/model/CreateEventSourceMappingResult.cpp b/aws-cpp-sdk-lambda/source/model/CreateEventSourceMappingResult.cpp index 20f88484b38..c3bf49bdb6d 100644 --- a/aws-cpp-sdk-lambda/source/model/CreateEventSourceMappingResult.cpp +++ b/aws-cpp-sdk-lambda/source/model/CreateEventSourceMappingResult.cpp @@ -27,12 +27,14 @@ using namespace Aws::Utils; using namespace Aws; CreateEventSourceMappingResult::CreateEventSourceMappingResult() : - m_batchSize(0) + m_batchSize(0), + m_maximumBatchingWindowInSeconds(0) { } CreateEventSourceMappingResult::CreateEventSourceMappingResult(const Aws::AmazonWebServiceResultApplication Auto Scaling creates a service-linked role that grants it permissions to 
modify the scalable target on your behalf. For more information, see Service-Linked Roles for Application Auto Scaling.
For resources that are not supported using a service-linked role, this parameter is required, and it must specify the ARN of an IAM role that allows Application Auto Scaling to modify the scalable target on your behalf.
" + }, + "SuspendedState":{ + "shape":"SuspendedState", + "documentation":"An embedded object that contains attributes and attribute values that are used to suspend and resume automatic scaling. Setting the value of an attribute to true
suspends the specified scaling activities. Setting it to false
(default) resumes the specified scaling activities.
Suspension Outcomes
For DynamicScalingInSuspended
, while a suspension is in effect, all scale-in activities that are triggered by a scaling policy are suspended.
For DynamicScalingOutSuspended
, while a suspension is in effect, all scale-out activities that are triggered by a scaling policy are suspended.
For ScheduledScalingSuspended
, while a suspension is in effect, all scaling activities that involve scheduled actions are suspended.
For more information, see Suspend and Resume Application Auto Scaling in the Application Auto Scaling User Guide.
" } } }, @@ -853,7 +857,8 @@ "CreationTime":{ "shape":"TimestampType", "documentation":"The Unix timestamp for when the scalable target was created.
" - } + }, + "SuspendedState":{"shape":"SuspendedState"} }, "documentation":"Represents a scalable target.
" }, @@ -1010,6 +1015,7 @@ }, "documentation":"Represents a scaling policy to use with Application Auto Scaling.
" }, + "ScalingSuspended":{"type":"boolean"}, "ScheduledAction":{ "type":"structure", "required":[ @@ -1136,6 +1142,24 @@ }, "documentation":"Represents a step scaling policy configuration to use with Application Auto Scaling.
" }, + "SuspendedState":{ + "type":"structure", + "members":{ + "DynamicScalingInSuspended":{ + "shape":"ScalingSuspended", + "documentation":"Whether scale in by a target tracking scaling policy or a step scaling policy is suspended. Set the value to true
if you don't want Application Auto Scaling to remove capacity when a scaling policy is triggered. The default is false
.
Whether scale out by a target tracking scaling policy or a step scaling policy is suspended. Set the value to true
if you don't want Application Auto Scaling to add capacity when a scaling policy is triggered. The default is false
.
Whether scheduled scaling is suspended. Set the value to true
if you don't want Application Auto Scaling to add or remove capacity by initiating scheduled actions. The default is false
.
Specifies whether the scaling activities for a scalable target are in a suspended state.
" + }, "TargetTrackingScalingPolicyConfiguration":{ "type":"structure", "required":["TargetValue"], @@ -1181,5 +1205,5 @@ "pattern":"[\\u0020-\\uD7FF\\uE000-\\uFFFD\\uD800\\uDC00-\\uDBFF\\uDFFF\\r\\n\\t]*" } }, - "documentation":"With Application Auto Scaling, you can configure automatic scaling for the following resources:
Amazon ECS services
Amazon EC2 Spot Fleet requests
Amazon EMR clusters
Amazon AppStream 2.0 fleets
Amazon DynamoDB tables and global secondary indexes throughput capacity
Amazon Aurora Replicas
Amazon SageMaker endpoint variants
Custom resources provided by your own applications or services
API Summary
The Application Auto Scaling service API includes two key sets of actions:
Register and manage scalable targets - Register AWS or custom resources as scalable targets (a resource that Application Auto Scaling can scale), set minimum and maximum capacity limits, and retrieve information on existing scalable targets.
Configure and manage automatic scaling - Define scaling policies to dynamically scale your resources in response to CloudWatch alarms, schedule one-time or recurring scaling actions, and retrieve your recent scaling activity history.
To learn more about Application Auto Scaling, including information about granting IAM users required permissions for Application Auto Scaling actions, see the Application Auto Scaling User Guide.
" + "documentation":"With Application Auto Scaling, you can configure automatic scaling for the following resources:
Amazon ECS services
Amazon EC2 Spot Fleet requests
Amazon EMR clusters
Amazon AppStream 2.0 fleets
Amazon DynamoDB tables and global secondary indexes throughput capacity
Amazon Aurora Replicas
Amazon SageMaker endpoint variants
Custom resources provided by your own applications or services
API Summary
The Application Auto Scaling service API includes three key sets of actions:
Register and manage scalable targets - Register AWS or custom resources as scalable targets (a resource that Application Auto Scaling can scale), set minimum and maximum capacity limits, and retrieve information on existing scalable targets.
Configure and manage automatic scaling - Define scaling policies to dynamically scale your resources in response to CloudWatch alarms, schedule one-time or recurring scaling actions, and retrieve your recent scaling activity history.
Suspend and resume scaling - Temporarily suspend and later resume automatic scaling by calling the RegisterScalableTarget action for any Application Auto Scaling scalable target. You can suspend and resume, individually or in combination, scale-out activities triggered by a scaling policy, scale-in activities triggered by a scaling policy, and scheduled scaling.
To learn more about Application Auto Scaling, including information about granting IAM users required permissions for Application Auto Scaling actions, see the Application Auto Scaling User Guide.
" } diff --git a/code-generation/api-descriptions/codepipeline-2015-07-09.normal.json b/code-generation/api-descriptions/codepipeline-2015-07-09.normal.json index d3b04bbde29..306b7359cec 100644 --- a/code-generation/api-descriptions/codepipeline-2015-07-09.normal.json +++ b/code-generation/api-descriptions/codepipeline-2015-07-09.normal.json @@ -81,7 +81,7 @@ {"shape":"InvalidTagsException"}, {"shape":"ConcurrentModificationException"} ], - "documentation":"Creates a pipeline.
" + "documentation":"Creates a pipeline.
In the pipeline structure, you must include either artifactStore
or artifactStores
in your pipeline, but you cannot use both. If you create a cross-region action in your pipeline, you must use artifactStores
.
The configuration information for the action type.
" + "documentation":"Specifies the action type and the provider of the action.
" }, "runOrder":{ "shape":"ActionRunOrder", @@ -793,7 +793,7 @@ }, "configuration":{ "shape":"ActionConfigurationMap", - "documentation":"The action declaration's configuration.
" + "documentation":"The action's configuration. These are key-value pairs that specify input values for an action. For more information, see Action Structure Requirements in CodePipeline. For the list of configuration properties for the AWS CloudFormation action type in CodePipeline, see Configuration Properties Reference in the AWS CloudFormation User Guide. For template snippets with examples, see Using Parameter Override Functions with CodePipeline Pipelines in the AWS CloudFormation User Guide.
The values can be represented in either JSON or YAML format. For example, the JSON configuration item format is as follows:
JSON:
\"Configuration\" : { Key : Value },
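As a hypothetical illustration of the configuration map and the JSON item format described above, a CloudFormation action's configuration might look like the following; the stack name and template path are placeholders.

```python
import json

# Illustrative configuration map for an AWS CloudFormation action.
# Each entry is a simple key-value pair, serialized in the
# "Configuration" : { Key : Value } form shown in the documentation.
configuration = {
    "ActionMode": "CREATE_UPDATE",                  # placeholder values
    "StackName": "my-stack",
    "TemplatePath": "SourceOutput::template.yaml",
}

print(json.dumps({"Configuration": configuration}, indent=2))
```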
The encryption key used to encrypt the data in the artifact store, such as an AWS Key Management Service (AWS KMS) key. If this is undefined, the default key for Amazon S3 is used.
" } }, - "documentation":"The Amazon S3 bucket where artifacts are stored for the pipeline.
" + "documentation":"The Amazon S3 bucket where artifacts are stored for the pipeline.
You must include either artifactStore
or artifactStores
in your pipeline, but you cannot use both. If you create a cross-region action in your pipeline, you must use artifactStores
.
The ID used to identify the key. For an AWS KMS key, this is the key ID or key ARN.
" + "documentation":"The ID used to identify the key. For an AWS KMS key, you can use the key ID, the key ARN, or the alias ARN.
Aliases are recognized only in the account that created the customer master key (CMK). For cross-account actions, you can only use the key ID or key ARN to identify the key.
The type of change-detection method, command, or user interaction that started a pipeline execution.
" + }, + "triggerDetail":{ + "shape":"TriggerDetail", + "documentation":"Detail related to the event that started a pipeline execution, such as the webhook ARN of the webhook that triggered the pipeline execution or the user ARN for a user-initiated start-pipeline-execution
CLI command.
The interaction or event that started a pipeline execution.
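A sketch of how a consumer might read the new trigger field from a pipeline-execution description; the response fragment and webhook ARN are illustrative, while the enum values and the TriggerDetail length bound come from the shapes added in this change.

```python
# The TriggerType enum values introduced by this change.
TRIGGER_TYPES = {
    "CreatePipeline",
    "StartPipelineExecution",
    "PollForSourceChanges",
    "Webhook",
    "CloudWatchEvent",
    "PutActionRevision",
}

# Illustrative fragment of a pipeline-execution description; the
# triggerDetail value (a webhook ARN here) is a placeholder.
execution = {
    "trigger": {
        "triggerType": "Webhook",
        "triggerDetail": "arn:aws:codepipeline:us-east-1:123456789012:webhook:my-webhook",
    }
}

trigger = execution["trigger"]
assert trigger["triggerType"] in TRIGGER_TYPES
assert 0 <= len(trigger["triggerDetail"]) <= 1024  # TriggerDetail min/max length
```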
" + }, "ExternalExecutionId":{"type":"string"}, "ExternalExecutionSummary":{"type":"string"}, "FailureDetails":{ @@ -2462,11 +2476,11 @@ }, "artifactStore":{ "shape":"ArtifactStore", - "documentation":"Represents information about the Amazon S3 bucket where artifacts are stored for the pipeline.
" + "documentation":"Represents information about the Amazon S3 bucket where artifacts are stored for the pipeline.
You must include either artifactStore
or artifactStores
in your pipeline, but you cannot use both. If you create a cross-region action in your pipeline, you must use artifactStores
.
A mapping of artifactStore
objects and their corresponding regions. There must be an artifact store for the pipeline region and for each cross-region action within the pipeline. You can only use either artifactStore
or artifactStores
, not both.
If you create a cross-region action in your pipeline, you must use artifactStores
.
A mapping of artifactStore
objects and their corresponding regions. There must be an artifact store for the pipeline region and for each cross-region action within the pipeline.
You must include either artifactStore
or artifactStores
in your pipeline, but you cannot use both. If you create a cross-region action in your pipeline, you must use artifactStores
.
A list of the source artifact revisions that initiated a pipeline execution.
" + }, + "trigger":{ + "shape":"ExecutionTrigger", + "documentation":"The interaction or event that started a pipeline execution, such as automated change detection or a StartPipelineExecution
API call.
Summary information about a pipeline execution.
" @@ -3371,6 +3389,22 @@ }, "documentation":"Represents information about the state of transitions between one stage and another stage.
" }, + "TriggerDetail":{ + "type":"string", + "max":1024, + "min":0 + }, + "TriggerType":{ + "type":"string", + "enum":[ + "CreatePipeline", + "StartPipelineExecution", + "PollForSourceChanges", + "Webhook", + "CloudWatchEvent", + "PutActionRevision" + ] + }, "UntagResourceInput":{ "type":"structure", "required":[ diff --git a/code-generation/api-descriptions/ecs-2014-11-13.normal.json b/code-generation/api-descriptions/ecs-2014-11-13.normal.json index aea0f057deb..f3253afc650 100644 --- a/code-generation/api-descriptions/ecs-2014-11-13.normal.json +++ b/code-generation/api-descriptions/ecs-2014-11-13.normal.json @@ -46,7 +46,7 @@ {"shape":"PlatformTaskDefinitionIncompatibilityException"}, {"shape":"AccessDeniedException"} ], - "documentation":"Runs and maintains a desired number of tasks from a specified task definition. If the number of tasks running in a service drops below the desiredCount
, Amazon ECS spawns another copy of the task in the specified cluster. To update an existing service, see UpdateService.
In addition to maintaining the desired count of tasks in your service, you can optionally run your service behind a load balancer. The load balancer distributes traffic across the tasks that are associated with the service. For more information, see Service Load Balancing in the Amazon Elastic Container Service Developer Guide.
Tasks for services that do not use a load balancer are considered healthy if they're in the RUNNING
state. Tasks for services that do use a load balancer are considered healthy if they're in the RUNNING
state and the container instance that they're hosted on is reported as healthy by the load balancer.
There are two service scheduler strategies available:
REPLICA
- The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. For more information, see Service Scheduler Concepts in the Amazon Elastic Container Service Developer Guide.
DAEMON
- The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. When using this strategy, you don't need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies. For more information, see Service Scheduler Concepts in the Amazon Elastic Container Service Developer Guide.
You can optionally specify a deployment configuration for your service. The deployment is triggered by changing properties, such as the task definition or the desired count of a service, with an UpdateService operation. The default value for a replica service for minimumHealthyPercent
is 100%. The default value for a daemon service for minimumHealthyPercent
is 0%.
If a service is using the ECS
deployment controller, the minimum healthy percent represents a lower limit on the number of tasks in a service that must remain in the RUNNING
state during a deployment, as a percentage of the desired number of tasks (rounded up to the nearest integer), and while any container instances are in the DRAINING
state if the service contains tasks using the EC2 launch type. This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desired number of four tasks and a minimum healthy percent of 50%, the scheduler might stop two existing tasks to free up cluster capacity before starting two new tasks. Tasks for services that do not use a load balancer are considered healthy if they're in the RUNNING
state. Tasks for services that do use a load balancer are considered healthy if they're in the RUNNING
state and they're reported as healthy by the load balancer. The default value for minimum healthy percent is 100%.
If a service is using the ECS
deployment controller, the maximum percent parameter represents an upper limit on the number of tasks in a service that are allowed in the RUNNING
or PENDING
state during a deployment, as a percentage of the desired number of tasks (rounded down to the nearest integer), and while any container instances are in the DRAINING
state if the service contains tasks using the EC2 launch type. This parameter enables you to define the deployment batch size. For example, if your service has a desired number of four tasks and a maximum percent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default value for maximum percent is 200%.
If a service is using either the CODE_DEPLOY
or EXTERNAL
deployment controller types and tasks that use the EC2 launch type, the minimum healthy percent and maximum percent values are used only to define the lower and upper limit on the number of the tasks in the service that remain in the RUNNING
state while the container instances are in the DRAINING
state. If the tasks in the service use the Fargate launch type, the minimum healthy percent and maximum percent values aren't used, although they're currently visible when describing your service.
When creating a service that uses the EXTERNAL
deployment controller, you can specify only parameters that aren't controlled at the task set level. The only required parameter is the service name. You control your services using the CreateTaskSet operation. For more information, see Amazon ECS Deployment Types in the Amazon Elastic Container Service Developer Guide.
When the service scheduler launches new tasks, it determines task placement in your cluster using the following logic:
Determine which of the container instances in your cluster can support your service's task definition (for example, they have the required CPU, memory, ports, and container instance attributes).
By default, the service scheduler attempts to balance tasks across Availability Zones in this manner (although you can choose a different placement strategy with the placementStrategy
parameter):
Sort the valid container instances, giving priority to instances that have the fewest number of running tasks for this service in their respective Availability Zone. For example, if zone A has one running service task and zones B and C each have zero, valid container instances in either zone B or C are considered optimal for placement.
Place the new service task on a valid container instance in an optimal Availability Zone (based on the previous steps), favoring container instances with the fewest number of running tasks for this service.
Runs and maintains a desired number of tasks from a specified task definition. If the number of tasks running in a service drops below the desiredCount
, Amazon ECS runs another copy of the task in the specified cluster. To update an existing service, see UpdateService.
In addition to maintaining the desired count of tasks in your service, you can optionally run your service behind one or more load balancers. The load balancers distribute traffic across the tasks that are associated with the service. For more information, see Service Load Balancing in the Amazon Elastic Container Service Developer Guide.
Tasks for services that do not use a load balancer are considered healthy if they're in the RUNNING
state. Tasks for services that do use a load balancer are considered healthy if they're in the RUNNING
state and the container instance that they're hosted on is reported as healthy by the load balancer.
There are two service scheduler strategies available:
REPLICA
- The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. For more information, see Service Scheduler Concepts in the Amazon Elastic Container Service Developer Guide.
DAEMON
- The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. When using this strategy, you don't need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies. For more information, see Service Scheduler Concepts in the Amazon Elastic Container Service Developer Guide.
You can optionally specify a deployment configuration for your service. The deployment is triggered by changing properties, such as the task definition or the desired count of a service, with an UpdateService operation. The default value for a replica service for minimumHealthyPercent
is 100%. The default value for a daemon service for minimumHealthyPercent
is 0%.
If a service is using the ECS
deployment controller, the minimum healthy percent represents a lower limit on the number of tasks in a service that must remain in the RUNNING
state during a deployment, as a percentage of the desired number of tasks (rounded up to the nearest integer), and while any container instances are in the DRAINING
state if the service contains tasks using the EC2 launch type. This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desired number of four tasks and a minimum healthy percent of 50%, the scheduler might stop two existing tasks to free up cluster capacity before starting two new tasks. Tasks for services that do not use a load balancer are considered healthy if they're in the RUNNING
state. Tasks for services that do use a load balancer are considered healthy if they're in the RUNNING
state and they're reported as healthy by the load balancer. The default value for minimum healthy percent is 100%.
If a service is using the ECS
deployment controller, the maximum percent parameter represents an upper limit on the number of tasks in a service that are allowed in the RUNNING
or PENDING
state during a deployment, as a percentage of the desired number of tasks (rounded down to the nearest integer), and while any container instances are in the DRAINING
state if the service contains tasks using the EC2 launch type. This parameter enables you to define the deployment batch size. For example, if your service has a desired number of four tasks and a maximum percent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default value for maximum percent is 200%.
If a service is using either the CODE_DEPLOY
or EXTERNAL
deployment controller types and tasks that use the EC2 launch type, the minimum healthy percent and maximum percent values are used only to define the lower and upper limit on the number of tasks in the service that remain in the RUNNING
state while the container instances are in the DRAINING
state. If the tasks in the service use the Fargate launch type, the minimum healthy percent and maximum percent values aren't used, although they're currently visible when describing your service.
When creating a service that uses the EXTERNAL
deployment controller, you can specify only parameters that aren't controlled at the task set level. The only required parameter is the service name. You control your services using the CreateTaskSet operation. For more information, see Amazon ECS Deployment Types in the Amazon Elastic Container Service Developer Guide.
When the service scheduler launches new tasks, it determines task placement in your cluster using the following logic:
Determine which of the container instances in your cluster can support your service's task definition (for example, they have the required CPU, memory, ports, and container instance attributes).
By default, the service scheduler attempts to balance tasks across Availability Zones in this manner (although you can choose a different placement strategy with the placementStrategy
parameter):
Sort the valid container instances, giving priority to instances that have the fewest number of running tasks for this service in their respective Availability Zone. For example, if zone A has one running service task and zones B and C each have zero, valid container instances in either zone B or C are considered optimal for placement.
Place the new service task on a valid container instance in an optimal Availability Zone (based on the previous steps), favoring container instances with the fewest number of running tasks for this service.
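The minimum-healthy-percent and maximum-percent arithmetic described above can be sketched as a small helper; this is an illustrative function, not the actual ECS scheduler.

```python
import math

# Sketch of the rounding rules described above: minimum healthy percent
# rounds up, maximum percent rounds down, both as a percentage of the
# desired task count.
def deployment_task_bounds(desired_count, minimum_healthy_percent, maximum_percent):
    lower = math.ceil(desired_count * minimum_healthy_percent / 100)   # must stay RUNNING
    upper = math.floor(desired_count * maximum_percent / 100)          # RUNNING or PENDING cap
    return lower, upper

# The worked example from the text: 4 desired tasks, 50% minimum healthy
# (2 tasks may be stopped first), 200% maximum (up to 8 tasks may run,
# i.e. 4 new tasks may start before the 4 old ones stop).
assert deployment_task_bounds(4, 50, 200) == (2, 8)
```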
The name of the container.
" }, + "runtimeId":{ + "shape":"String", + "documentation":"The ID of the Docker container.
" + }, "lastStatus":{ "shape":"String", "documentation":"The last known status of the container.
" @@ -1172,11 +1176,11 @@ }, "startTimeout":{ "shape":"BoxedInteger", - "documentation":"Time duration to wait before giving up on resolving dependencies for a container. For example, you specify two containers in a task definition with containerA having a dependency on containerB reaching a COMPLETE
, SUCCESS
, or HEALTHY
status. If a startTimeout
value is specified for containerB and it does not reach the desired status within that time then containerA will give up and not start. This results in the task transitioning to a STOPPED
state.
For tasks using the EC2 launch type, the container instances require at least version 1.26.0 of the container agent to enable a container start timeout value. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you are using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init
package. If your container instances are launched from version 20190301
or later, then they contain the required versions of the container agent and ecs-init
. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
This parameter is available for tasks using the Fargate launch type in the Ohio (us-east-2) region only and the task or service requires platform version 1.3.0 or later.
" + "documentation":"Time duration (in seconds) to wait before giving up on resolving dependencies for a container. For example, you specify two containers in a task definition with containerA having a dependency on containerB reaching a COMPLETE
, SUCCESS
, or HEALTHY
status. If a startTimeout
value is specified for containerB and it does not reach the desired status within that time then containerA will give up and not start. This results in the task transitioning to a STOPPED
state.
For tasks using the EC2 launch type, the container instances require at least version 1.26.0 of the container agent to enable a container start timeout value. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you are using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init
package. If your container instances are launched from version 20190301
or later, then they contain the required versions of the container agent and ecs-init
. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
This parameter is available for tasks using the Fargate launch type in the Ohio (us-east-2) region only and the task or service requires platform version 1.3.0 or later.
" }, "stopTimeout":{ "shape":"BoxedInteger", - "documentation":"Time duration to wait before the container is forcefully killed if it doesn't exit normally on its own. For tasks using the Fargate launch type, the max stopTimeout
value is 2 minutes. This parameter is available for tasks using the Fargate launch type in the Ohio (us-east-2) region only and the task or service requires platform version 1.3.0 or later.
For tasks using the EC2 launch type, the stop timeout value for the container takes precedence over the ECS_CONTAINER_STOP_TIMEOUT
container agent configuration parameter, if used. Container instances require at least version 1.26.0 of the container agent to enable a container stop timeout value. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you are using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init
package. If your container instances are launched from version 20190301
or later, then they contain the required versions of the container agent and ecs-init
. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
Time duration (in seconds) to wait before the container is forcefully killed if it doesn't exit normally on its own. For tasks using the Fargate launch type, the max stopTimeout
value is 2 minutes. This parameter is available for tasks using the Fargate launch type in the Ohio (us-east-2) region only and the task or service requires platform version 1.3.0 or later.
For tasks using the EC2 launch type, the stop timeout value for the container takes precedence over the ECS_CONTAINER_STOP_TIMEOUT
container agent configuration parameter, if used. Container instances require at least version 1.26.0 of the container agent to enable a container stop timeout value. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you are using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init
package. If your container instances are launched from version 20190301
or later, then they contain the required versions of the container agent and ecs-init
. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
The name of the container.
" }, + "runtimeId":{ + "shape":"String", + "documentation":"The ID of the Docker container.
" + }, "exitCode":{ "shape":"BoxedInteger", "documentation":"The exit code for the container, if the state change is a result of the container exiting.
" @@ -1487,7 +1495,7 @@ }, "loadBalancers":{ "shape":"LoadBalancers", - "documentation":"A load balancer object representing the load balancer to use with your service.
If the service is using the ECS
deployment controller, you are limited to one load balancer or target group.
If the service is using the CODE_DEPLOY
deployment controller, the service is required to use either an Application Load Balancer or Network Load Balancer. When creating an AWS CodeDeploy deployment group, you specify two target groups (referred to as a targetGroupPair
). During a deployment, AWS CodeDeploy determines which task set in your service has the status PRIMARY
and associates one target group with it, and then associates the other target group with the replacement task set. The load balancer can also have up to two listeners: a required listener for production traffic and an optional listener that allows you to perform validation tests with Lambda functions before routing production traffic to it.
After you create a service using the ECS
deployment controller, the load balancer name or target group ARN, container name, and container port specified in the service definition are immutable. If you are using the CODE_DEPLOY
deployment controller, these values can be changed when updating the service.
For Classic Load Balancers, this object must contain the load balancer name, the container name (as it appears in a container definition), and the container port to access from the load balancer. When a task from this service is placed on a container instance, the container instance is registered with the load balancer specified here.
For Application Load Balancers and Network Load Balancers, this object must contain the load balancer target group ARN, the container name (as it appears in a container definition), and the container port to access from the load balancer. When a task from this service is placed on a container instance, the container instance and port combination is registered as a target in the target group specified here.
Services with tasks that use the awsvpc
network mode (for example, those with the Fargate launch type) only support Application Load Balancers and Network Load Balancers. Classic Load Balancers are not supported. Also, when you create any target groups for these services, you must choose ip
as the target type, not instance
, because tasks that use the awsvpc
network mode are associated with an elastic network interface, not an Amazon EC2 instance.
A load balancer object representing the load balancers to use with your service. For more information, see Service Load Balancing in the Amazon Elastic Container Service Developer Guide.
If the service is using the rolling update (ECS
) deployment controller and using either an Application Load Balancer or Network Load Balancer, you can specify multiple target groups to attach to the service.
If the service is using the CODE_DEPLOY
deployment controller, the service is required to use either an Application Load Balancer or Network Load Balancer. When creating an AWS CodeDeploy deployment group, you specify two target groups (referred to as a targetGroupPair
). During a deployment, AWS CodeDeploy determines which task set in your service has the status PRIMARY
and associates one target group with it, and then associates the other target group with the replacement task set. The load balancer can also have up to two listeners: a required listener for production traffic and an optional listener that allows you to perform validation tests with Lambda functions before routing production traffic to it.
After you create a service using the ECS
deployment controller, the load balancer name or target group ARN, container name, and container port specified in the service definition are immutable. If you are using the CODE_DEPLOY
deployment controller, these values can be changed when updating the service.
For Application Load Balancers and Network Load Balancers, this object must contain the load balancer target group ARN, the container name (as it appears in a container definition), and the container port to access from the load balancer. When a task from this service is placed on a container instance, the container instance and port combination is registered as a target in the target group specified here.
For Classic Load Balancers, this object must contain the load balancer name, the container name (as it appears in a container definition), and the container port to access from the load balancer. When a task from this service is placed on a container instance, the container instance is registered with the load balancer specified here.
Services with tasks that use the awsvpc
network mode (for example, those with the Fargate launch type) only support Application Load Balancers and Network Load Balancers. Classic Load Balancers are not supported. Also, when you create any target groups for these services, you must choose ip
as the target type, not instance
, because tasks that use the awsvpc
network mode are associated with an elastic network interface, not an Amazon EC2 instance.
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an application load balancer or a network load balancer. If you are using a classic load balancer this should be omitted.
For services using the ECS
deployment controller, you are limited to one target group. For services using the CODE_DEPLOY
deployment controller, you are required to define two target groups for the load balancer.
If your service's task definition uses the awsvpc
network mode (which is required for the Fargate launch type), you must choose ip
as the target type, not instance
, because tasks that use the awsvpc
network mode are associated with an elastic network interface, not an Amazon EC2 instance.
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer. If you are using a Classic Load Balancer, this should be omitted.
For services using the ECS
deployment controller, you can specify one or multiple target groups. For more information, see Registering Multiple Target Groups with a Service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY
deployment controller, you are required to define two target groups for the load balancer. For more information, see Blue/Green Deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
If your service's task definition uses the awsvpc
network mode (which is required for the Fargate launch type), you must choose ip
as the target type, not instance
, when creating your target groups because tasks that use the awsvpc
network mode are associated with an elastic network interface, not an Amazon EC2 instance.
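Attaching multiple target groups (rolling-update ECS controller with an Application or Network Load Balancer) might look like the following loadBalancers list; all ARNs, names, and ports are placeholders.

```python
# Sketch of a CreateService loadBalancers list that registers the same
# container behind two target groups. ARNs, names, and ports are placeholders.
load_balancers = [
    {
        "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web-a/0123456789abcdef",
        "containerName": "web",
        "containerPort": 80,
    },
    {
        "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web-b/fedcba9876543210",
        "containerName": "web",
        "containerPort": 80,
    },
]

# With ALB/NLB, target groups are specified rather than a load balancer name.
assert all("targetGroupArn" in lb and "loadBalancerName" not in lb
           for lb in load_balancers)
```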
The name of the load balancer to associate with the Amazon ECS service or task set.
A load balancer name is only specified when using a classic load balancer. If you are using an application load balancer or a network load balancer this should be omitted.
" + "documentation":"The name of the load balancer to associate with the Amazon ECS service or task set.
A load balancer name is only specified when using a Classic Load Balancer. If you are using an Application Load Balancer or a Network Load Balancer this should be omitted.
" }, "containerName":{ "shape":"String", @@ -2683,10 +2691,10 @@ }, "containerPort":{ "shape":"BoxedInteger", - "documentation":"The port on the container to associate with the load balancer. This port must correspond to a containerPort
in the service's task definition. Your container instances must allow ingress traffic on the hostPort
of the port mapping.
The port on the container to associate with the load balancer. This port must correspond to a containerPort
in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they are launched on must allow ingress traffic on the hostPort
of the port mapping.
Details on a load balancer to be used with a service or task set.
If the service is using the ECS
deployment controller, you are limited to one load balancer or target group.
If the service is using the CODE_DEPLOY
deployment controller, the service is required to use either an Application Load Balancer or Network Load Balancer. When you are creating an AWS CodeDeploy deployment group, you specify two target groups (referred to as a targetGroupPair
). Each target group binds to a separate task set in the deployment. The load balancer can also have up to two listeners, a required listener for production traffic and an optional listener that allows you to test new revisions of the service before routing production traffic to it.
Services with tasks that use the awsvpc
network mode (for example, those with the Fargate launch type) only support Application Load Balancers and Network Load Balancers. Classic Load Balancers are not supported. Also, when you create any target groups for these services, you must choose ip
as the target type, not instance
. Tasks that use the awsvpc
network mode are associated with an elastic network interface, not an Amazon EC2 instance.
Details on the load balancer or load balancers to use with a service or task set.
" }, "LoadBalancers":{ "type":"list", @@ -3424,7 +3432,7 @@ }, "loadBalancers":{ "shape":"LoadBalancers", - "documentation":"A list of Elastic Load Balancing load balancer objects, containing the load balancer name, the container name (as it appears in a container definition), and the container port to access from the load balancer.
Services with tasks that use the awsvpc network mode (for example, those with the Fargate launch type) only support Application Load Balancers and Network Load Balancers. Classic Load Balancers are not supported. Also, when you create any target groups for these services, you must choose ip as the target type, not instance. Tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance.
A list of Elastic Load Balancing load balancer objects, containing the load balancer name, the container name (as it appears in a container definition), and the container port to access from the load balancer.
" }, "serviceRegistries":{ "shape":"ServiceRegistries", @@ -3786,6 +3794,10 @@ "shape":"String", "documentation":"The name of the container.
" }, + "runtimeId":{ + "shape":"String", + "documentation":"The ID of the Docker container.
" + }, "status":{ "shape":"String", "documentation":"The status of the state change request.
" diff --git a/code-generation/api-descriptions/elasticache-2015-02-02.normal.json b/code-generation/api-descriptions/elasticache-2015-02-02.normal.json index 404f6719e58..ad54af4340a 100644 --- a/code-generation/api-descriptions/elasticache-2015-02-02.normal.json +++ b/code-generation/api-descriptions/elasticache-2015-02-02.normal.json @@ -643,6 +643,7 @@ {"shape":"NodeGroupsPerReplicationGroupQuotaExceededFault"}, {"shape":"NodeQuotaForCustomerExceededFault"}, {"shape":"NoOperationFault"}, + {"shape":"InvalidKMSKeyFault"}, {"shape":"InvalidParameterValueException"}, {"shape":"InvalidParameterCombinationException"} ], @@ -665,7 +666,7 @@ {"shape":"InvalidParameterCombinationException"}, {"shape":"InvalidParameterValueException"} ], - "documentation":"Lists all available node types that you can scale your Redis cluster's or replication group's current node type up to.
When you use the ModifyCacheCluster or ModifyReplicationGroup operations to scale up your cluster or replication group, the value of the CacheNodeType parameter must be one of the node types returned by this operation.
Lists all available node types to which you can scale your Redis cluster's or replication group's current node type. When you use the ModifyCacheCluster or ModifyReplicationGroup operations to scale your cluster or replication group, the value of the CacheNodeType parameter must be one of the node types returned by this operation.
The Amazon S3 bucket to which the snapshot is exported. This parameter is used only when exporting a snapshot for external access.
When using this parameter to export a snapshot, be sure Amazon ElastiCache has the needed permissions to this S3 bucket. For more information, see Step 2: Grant ElastiCache Access to Your Amazon S3 Bucket in the Amazon ElastiCache User Guide.
For more information, see Exporting a Snapshot in the Amazon ElastiCache User Guide.
" + }, + "KmsKeyId":{ + "shape":"String", + "documentation":"The ID of the KMS key used to encrypt the target snapshot.
" } }, "documentation":"Represents the input of a CopySnapshotMessage
operation.
The node group (shard) identifier. This parameter is stored as a lowercase string.
Constraints:
A name must contain from 1 to 20 alphanumeric characters or hyphens.
The first character must be a letter.
A name cannot end with a hyphen or contain two consecutive hyphens.
The node group (shard) identifier. This parameter is stored as a lowercase string.
Constraints:
A name must contain from 1 to 50 alphanumeric characters or hyphens.
The first character must be a letter.
A name cannot end with a hyphen or contain two consecutive hyphens.
The replication group identifier. This parameter is stored as a lowercase string.
Constraints:
A name must contain from 1 to 20 alphanumeric characters or hyphens.
The first character must be a letter.
A name cannot end with a hyphen or contain two consecutive hyphens.
The replication group identifier. This parameter is stored as a lowercase string.
Constraints:
A name must contain from 1 to 40 alphanumeric characters or hyphens.
The first character must be a letter.
A name cannot end with a hyphen or contain two consecutive hyphens.
A flag that enables in-transit encryption when set to true
.
You cannot modify the value of TransitEncryptionEnabled
after the cluster is created. To enable in-transit encryption on a cluster you must set TransitEncryptionEnabled
to true
when you create a cluster.
This parameter is valid only if the Engine
parameter is redis
, the EngineVersion
parameter is 3.2.6
or 4.x
, and the cluster is being created in an Amazon VPC.
If you enable in-transit encryption, you must also specify a value for CacheSubnetGroup
.
Required: Only available when creating a replication group in an Amazon VPC using redis version 3.2.6
, 4.x
or later.
Default: false
For HIPAA compliance, you must specify TransitEncryptionEnabled
as true
, an AuthToken
, and a CacheSubnetGroup
.
A flag that enables in-transit encryption when set to true
.
You cannot modify the value of TransitEncryptionEnabled
after the cluster is created. To enable in-transit encryption on a cluster you must set TransitEncryptionEnabled
to true
when you create a cluster.
This parameter is valid only if the Engine
parameter is redis
, the EngineVersion
parameter is 3.2.6
, 4.x
or later, and the cluster is being created in an Amazon VPC.
If you enable in-transit encryption, you must also specify a value for CacheSubnetGroup
.
Required: Only available when creating a replication group in an Amazon VPC using redis version 3.2.6
, 4.x
or later.
Default: false
For HIPAA compliance, you must specify TransitEncryptionEnabled
as true
, an AuthToken
, and a CacheSubnetGroup
.
A flag that enables encryption at rest when set to true
.
You cannot modify the value of AtRestEncryptionEnabled
after the replication group is created. To enable encryption at rest on a replication group you must set AtRestEncryptionEnabled
to true
when you create the replication group.
Required: Only available when creating a replication group in an Amazon VPC using redis version 3.2.6
, 4.x
or later.
Default: false
The ID of the KMS key used to encrypt the disk on the cluster.
" } }, "documentation":"Represents the input of a CreateReplicationGroup
operation.
A name for the snapshot being created.
" + }, + "KmsKeyId":{ + "shape":"String", + "documentation":"The ID of the KMS key used to encrypt the snapshot.
" } }, "documentation":"Represents the input of a CreateSnapshot
operation.
The KMS key supplied is not valid.
", + "error":{ + "code":"InvalidKMSKeyFault", + "httpStatusCode":400, + "senderFault":true + }, + "exception":true + }, "InvalidParameterCombinationException":{ "type":"structure", "members":{ @@ -3861,6 +3889,10 @@ "AtRestEncryptionEnabled":{ "shape":"BooleanOptional", "documentation":"A flag that enables encryption at-rest when set to true
.
You cannot modify the value of AtRestEncryptionEnabled
after the cluster is created. To enable encryption at-rest on a cluster you must set AtRestEncryptionEnabled
to true
when you create a cluster.
Required: Only available when creating a replication group in an Amazon VPC using redis version 3.2.6
, 4.x
or later.
Default: false
The ID of the KMS key used to encrypt the disk in the cluster.
" } }, "documentation":"Contains all of the attributes of a specific Redis replication group.
", @@ -4466,6 +4498,10 @@ "NodeSnapshots":{ "shape":"NodeSnapshotList", "documentation":"A list of the cache nodes in the source cluster.
" + }, + "KmsKeyId":{ + "shape":"String", + "documentation":"The ID of the KMS key used to encrypt the snapshot.
" } }, "documentation":"Represents a copy of an entire Redis cluster as of the time when the snapshot was taken.
", diff --git a/code-generation/api-descriptions/lambda-2015-03-31.normal.json b/code-generation/api-descriptions/lambda-2015-03-31.normal.json index e5289b83ff5..8301e2d9d28 100644 --- a/code-generation/api-descriptions/lambda-2015-03-31.normal.json +++ b/code-generation/api-descriptions/lambda-2015-03-31.normal.json @@ -1023,6 +1023,7 @@ "shape":"BatchSize", "documentation":"The maximum number of items to retrieve in a single batch.
Amazon Kinesis - Default 100. Max 10,000.
Amazon DynamoDB Streams - Default 100. Max 1,000.
Amazon Simple Queue Service - Default 10. Max 10.
The position in a stream from which to start reading. Required for Amazon Kinesis and Amazon DynamoDB Streams sources. AT_TIMESTAMP
is only supported for Amazon Kinesis streams.
The maximum number of items to retrieve in a single batch.
" }, + "MaximumBatchingWindowInSeconds":{"shape":"MaximumBatchingWindowInSeconds"}, "EventSourceArn":{ "shape":"Arn", "documentation":"The Amazon Resource Name (ARN) of the event source.
" @@ -2417,6 +2419,11 @@ "max":10000, "min":1 }, + "MaximumBatchingWindowInSeconds":{ + "type":"integer", + "max":300, + "min":0 + }, "MemorySize":{ "type":"integer", "max":3008, @@ -2980,7 +2987,8 @@ "BatchSize":{ "shape":"BatchSize", "documentation":"The maximum number of items to retrieve in a single batch.
Amazon Kinesis - Default 100. Max 10,000.
Amazon DynamoDB Streams - Default 100. Max 1,000.
Amazon Simple Queue Service - Default 10. Max 10.
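The per-source batch-size defaults and maximums quoted above, together with the 0-300 second range that the model defines for MaximumBatchingWindowInSeconds, can be captured in a small lookup helper. The table and function names below are illustrative, not part of the Lambda API:

```python
# Illustrative table of the per-source BatchSize rules documented above.
BATCH_LIMITS = {
    "kinesis":  {"default": 100, "max": 10000},
    "dynamodb": {"default": 100, "max": 1000},
    "sqs":      {"default": 10,  "max": 10},
}

def resolve_batch_size(source, batch_size=None):
    """Apply the per-source default and reject out-of-range values."""
    limits = BATCH_LIMITS[source]
    if batch_size is None:
        return limits["default"]
    if not (1 <= batch_size <= limits["max"]):
        raise ValueError(f"batch size for {source} must be 1-{limits['max']}")
    return batch_size

def validate_batching_window(seconds):
    # The model defines MaximumBatchingWindowInSeconds as an int in [0, 300].
    if not (0 <= seconds <= 300):
        raise ValueError("MaximumBatchingWindowInSeconds must be 0-300")
    return seconds
```

Such a helper only mirrors the limits in the service model; the service itself enforces them at CreateEventSourceMapping time.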