chore(release): 2.127.0 #29060
Merged
Conversation
Ran npm-check-updates and yarn upgrade to keep the `yarn.lock` file up-to-date.
> # Issue
>
> The issue is that LogFormat is a String, so it doesn't allow the enum LogFormat.
>
> # Solution
>
> Created a new enum for the LoggingFormat and added testing. The solution sets these values as potential environment variables; the main difference is that LoggingFormat is assigned an enum instead of a string.
>
> # Important Design Decisions
>
> This is so that an enum can be used for LoggingFormat without breaking JSII target languages. Some background information is in PR #28127, where this approach was the recommended solution.

Closes #28114.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
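As a rough illustration (not taken from the PR diff), the change lets a Lambda function's logging format be expressed with the enum rather than a raw string; the property and member names below are assumptions based on the current `aws-lambda` surface:

```ts
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

declare const scope: Construct;

// Hypothetical usage sketch: `loggingFormat` accepts the LoggingFormat enum,
// whereas the older `logFormat` property took a plain string.
new lambda.Function(scope, 'Fn', {
  runtime: lambda.Runtime.NODEJS_18_X,
  handler: 'index.handler',
  code: lambda.Code.fromInline('exports.handler = async () => {};'),
  loggingFormat: lambda.LoggingFormat.JSON, // enum instead of the string 'JSON'
});
```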
Automated changes by [create-pull-request](https://github.com/peter-evans/create-pull-request) GitHub action
…is always overwritten (#28793) After #28422 was merged, a regression that overwrites the Retry field defined in the stateJson was introduced: the `this.renderRetryCatch()` method overwrites the Retry field in the stateJson. https://github.com/aws/aws-cdk/blob/45b8398bec9ba9c03f195c14f3b92188c9058a7b/packages/aws-cdk-lib/aws-stepfunctions/lib/states/custom-state.ts#L74

This PR fixes this regression and clarifies the current behavior for configuring the Retry and Catch fields. Previously, in #28598, I added the `addRetry` method to add the Retry field and did not render the Retry field from the stateJson, but that was itself a regression and should have been fixed.

Closes #28769 Relates to #28586

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
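For context, a minimal sketch of the two ways a Retry can end up on a custom state; the state JSON shape is illustrative, while `CustomState` and `addRetry` are the existing `aws-stepfunctions` APIs the PR body refers to:

```ts
import * as sfn from 'aws-cdk-lib/aws-stepfunctions';
import { Duration } from 'aws-cdk-lib';
import { Construct } from 'constructs';

declare const scope: Construct;

// Retry defined directly in the stateJson -- with the regression,
// renderRetryCatch() would overwrite this field at synth time.
const custom = new sfn.CustomState(scope, 'Custom', {
  stateJson: {
    Type: 'Task',
    Resource: 'arn:aws:states:::dynamodb:putItem',
    Retry: [{ ErrorEquals: ['States.TaskFailed'], MaxAttempts: 3 }],
  },
});

// Retry added via the construct API (introduced in #28598).
custom.addRetry({
  errors: ['States.Timeout'],
  interval: Duration.seconds(10),
  maxAttempts: 2,
});
```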
…est (#28799) Currently, the Elastic Beanstalk solution stack specified in the integ test is out of date, causing the test to fail.

```
"No Solution Stack named '64bit Amazon Linux 2023 v6.0.2 running Node.js 18' found. (Service: ElasticBeanstalk, Status Code: 400, Request ID: 625a591c-9bb2-4c7c-9404-cf192fdae9bb)"
```

In the future, it may be better for the integ test to always deploy using the latest version via a custom resource. (It may be difficult to use AwsCustomResource because the size of the `SolutionStacks` (array) property alone is 4KB.) https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/Package/-aws-sdk-client-elastic-beanstalk/Interface/ListAvailableSolutionStacksResultMessage/

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
This change adds a new context key to the `cdk.json` file when an app is generated by the `cdk migrate` cli command. If the context key `"cdk-migrate"` is `true`, then that information is added to the end of the analytics string in the AWS::CDK::Metadata resource. ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
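A hedged sketch of what the generated `cdk.json` might contain; only the `"cdk-migrate"` context key is described by this change, the surrounding keys are illustrative:

```json
{
  "app": "npx ts-node --prefer-ts-exts bin/my-app.ts",
  "context": {
    "cdk-migrate": true
  }
}
```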
### Issue # (if applicable) Closes #<issue number here>. ### Reason for this change ### Description of changes ### Description of how you validated changes ### Checklist - [ ] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md) ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…8961) How the readme looks below:

| [`cdk migrate`](#cdk-migrate) | Migrate AWS resources, CloudFormation stacks, and CloudFormation templates to CDK |

### `cdk migrate`

⚠️ **CAUTION** ⚠️: CDK Migrate is currently experimental and may have breaking changes in the future.

CDK Migrate generates a CDK app from deployed AWS resources using `--from-scan`, deployed AWS CloudFormation stacks using `--from-stack`, and local AWS CloudFormation templates using `--from-path`.

To learn more about the CDK Migrate feature, see [Migrate to AWS CDK](https://docs.aws.amazon.com/cdk/v2/guide/migrate.html). For more information on `cdk migrate` command options, see [cdk migrate command reference](https://docs.aws.amazon.com/cdk/v2/guide/ref-cli-cdk-migrate.html).

The new CDK app will be initialized in the current working directory and will include a single stack that is named with the value you provide using `--stack-name`. The new stack, app, and directory will all use this name. To specify a different output directory, use `--output-path`. You can create the new CDK app in any CDK supported programming language using `--language`.

#### Migrate from an AWS CloudFormation stack

Migrate from a deployed AWS CloudFormation stack in a specific AWS account and AWS Region using `--from-stack`. Provide `--stack-name` to identify the name of your stack. Account and Region information are retrieved from default CDK CLI sources. Use `--account` and `--region` options to provide other values. The following is an example that migrates **myCloudFormationStack** to a new CDK app using TypeScript:

```console
$ cdk migrate --language typescript --from-stack --stack-name 'myCloudFormationStack'
```

#### Migrate from a local AWS CloudFormation template

Migrate from a local `YAML` or `JSON` AWS CloudFormation template using `--from-path`. Provide a name for the stack that will be created in your new CDK app using `--stack-name`. Account and Region information are retrieved from default CDK CLI sources. Use `--account` and `--region` options to provide other values. The following is an example that creates a new CDK app using TypeScript that includes a **myCloudFormationStack** stack from a local `template.json` file:

```console
$ cdk migrate --language typescript --from-path "./template.json" --stack-name "myCloudFormationStack"
```

#### Migrate from deployed AWS resources

Migrate from deployed AWS resources in a specific AWS account and Region that are not associated with an AWS CloudFormation stack using `--from-scan`. These would be resources that were provisioned outside of an IaC tool. CDK Migrate utilizes the IaC generator service to scan for resources and generate a template. Then, the CDK CLI references the template to create a new CDK app. To learn more about IaC generator, see [Generating templates for existing resources](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/generate-IaC.html).

Account and Region information are retrieved from default CDK CLI sources. Use `--account` and `--region` options to provide other values. The following is an example that creates a new CDK app using TypeScript that includes a new **myCloudFormationStack** stack from deployed resources:

```console
$ cdk migrate --language typescript --from-scan --stack-name "myCloudFormationStack"
```

Since CDK Migrate relies on the IaC generator service, any limitations of IaC generator will apply to CDK Migrate. For general limitations, see [Considerations](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/generate-IaC.html#generate-template-considerations).

IaC generator limitations with discovering resource and property values will also apply here. As a result, CDK Migrate will only migrate resources supported by IaC generator. Some of your resources may not be supported and some property values may not be accessible. For more information, see [IaC generator and write-only properties](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/generate-IaC-write-only-properties.html) and [Supported resource types](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/generate-IaC-supported-resources.html).

You can use `--filter` to specify which resources to migrate. This is a good option to use if you are over the IaC generator total resource limit.

After migration, you must resolve any write-only properties that were detected by IaC generator from your deployed resources. To learn more, see [Resolve write-only properties](https://docs.aws.amazon.com/cdk/v2/guide/migrate.html#migrate-resources-writeonly).

#### Examples

##### Generate a TypeScript CDK app from a local AWS CloudFormation template.json file

```console
$ # template.json is a valid cloudformation template in the local directory
$ cdk migrate --stack-name MyAwesomeApplication --language typescript --from-path MyTemplate.json
```

This command generates a new directory named `MyAwesomeApplication` within your current working directory, and then initializes a new CDK application within that directory. The CDK app contains a `MyAwesomeApplication` stack with resources configured to match those in your local CloudFormation template.

This results in a CDK application with the following structure, where the lib directory contains a stack definition with the same resource configuration as the provided template.json.

```console
├── README.md
├── bin
│   └── my_awesome_application.ts
├── cdk.json
├── jest.config.js
├── lib
│   └── my_awesome_application-stack.ts
├── package.json
├── tsconfig.json
```

##### Generate a Python CDK app from a deployed stack

If you already have a CloudFormation stack deployed in your account and would like to manage it with CDK, you can migrate the deployed stack to a new CDK app. The value provided with `--stack-name` must match the name of the deployed stack.

```console
$ # generate a Python application from MyDeployedStack in your account
$ cdk migrate --stack-name MyDeployedStack --language python --from-stack
```

This will generate a Python CDK app which will synthesize the same configuration of resources as the deployed stack.

##### Generate a TypeScript CDK app from deployed AWS resources that are not associated with a stack

If you have resources in your account that were provisioned outside AWS IaC tools and would like to manage them with the CDK, you can use the `--from-scan` option to generate the application.

In this example, we use the `--filter` option to specify which resources to migrate. You can filter resources to limit the number of resources migrated to only those specified by the `--filter` option, including any resources they depend on, or resources that depend on them (for example, a filter that specifies a single Lambda function will find that specific function and any alarms that may monitor it).

The `--filter` argument offers both AND and OR filtering. OR filtering can be specified by passing multiple `--filter` options, and AND filtering can be specified by passing a single `--filter` option with multiple comma-separated key/value pairs (see below for examples). It is recommended to use the `--filter` option to limit the number of resources returned, as some resource types provide sample resources by default in all accounts, which can add to the resource limits.

`--from-scan` takes 3 potential arguments: `--new`, `--most-recent`, and undefined. If `--new` is passed, CDK Migrate will initiate a new scan of the account and use that new scan to discover resources. If `--most-recent` is passed, CDK Migrate will use the most recent scan of the account to discover resources. If neither `--new` nor `--most-recent` is passed, CDK Migrate will use the most recent scan of the account to discover resources, unless there is no recent scan, in which case it will initiate a new scan.

```
# Filtering options
identifier|id|resource-identifier=<resource-specific-resource-identifier-value>
type|resource-type-prefix=<resource-type-prefix>
tag-key=<tag-key>
tag-value=<tag-value>
```

##### Additional examples of migrating from deployed resources

```console
$ # Generate a typescript application from all un-managed resources in your account
$ cdk migrate --stack-name MyAwesomeApplication --language typescript --from-scan

$ # Generate a typescript application from all un-managed resources in your account with the tag key "Environment" AND the tag value "Production"
$ cdk migrate --stack-name MyAwesomeApplication --language typescript --from-scan --filter tag-key=Environment,tag-value=Production

$ # Generate a python application from any dynamoDB resources with the tag-key "dev" AND the tag-value "true" OR any SQS::Queue
$ cdk migrate --stack-name MyAwesomeApplication --language python --from-scan --filter type=AWS::DynamoDb::,tag-key=dev,tag-value=true --filter type=SQS::Queue

$ # Generate a typescript application from a specific lambda function by providing its specific resource identifier
$ cdk migrate --stack-name MyAwesomeApplication --language typescript --from-scan --filter identifier=myAwesomeLambdaFunction
```

#### **CDK Migrate Limitations**

- CDK Migrate does not currently support nested stacks, custom resources, or the `Fn::ForEach` intrinsic function.
- CDK Migrate will only generate L1 constructs and does not currently support any higher level abstractions.
- CDK Migrate successfully generating an application does *not* guarantee the application is immediately deployable. It simply generates a CDK application which will synthesize a template that has identical resource configurations to the provided template.
- CDK Migrate does not interact with the CloudFormation service to verify the template provided can deploy on its own. This means CDK Migrate will not verify that any resources in the provided template are already managed in other CloudFormation templates, nor will it verify that the resources in the provided template are available in the desired regions, which may impact ADC or Opt-In regions.
- If the provided template has parameters without default values, those will need to be provided before deploying the generated application.

In practice this is how CDK Migrate generated applications will operate in the following scenarios:

| Situation | Result |
| --- | --- |
| Provided template + stack-name is from a deployed stack in the account/region | The CDK application will deploy as a changeset to the existing stack |
| Provided template has no overlap with resources already in the account/region | The CDK application will deploy a new stack successfully |
| Provided template has overlap with CloudFormation managed resources already in the account/region | The CDK application will not be deployable unless those resources are removed |
| Provided template has overlap with un-managed resources already in the account/region | The CDK application will not be deployable until those resources are adopted with [`cdk import`](#cdk-import) |
| No template has been provided and resources exist in the region the scan is done | The CDK application will be immediately deployable and will import those resources into a new CloudFormation stack upon deploy |

##### **The provided template is already deployed to CloudFormation in the account/region**

If the provided template came directly from a deployed CloudFormation stack, and that stack has not experienced any drift, then the generated application will be immediately deployable, and will not cause any changes to the deployed resources. Drift might occur if a resource in your template was modified outside of CloudFormation, namely via the AWS Console or AWS CLI.

##### **The provided template is not deployed to CloudFormation in the account/region, and there *is not* overlap with existing resources in the account/region**

If the provided template represents a set of resources that have no overlap with resources already deployed in the account/region, then the generated application will be immediately deployable. This could be because the stack has never been deployed, or the application was generated from a stack deployed in another account/region.

In practice this means for any resource in the provided template, for example,

```json
"S3Bucket": {
  "Type": "AWS::S3::Bucket",
  "Properties": {
    "BucketName": "MyBucket",
    "AccessControl": "PublicRead"
  },
  "DeletionPolicy": "Retain"
}
```

there must not exist a resource of that type with the same identifier in the desired region. In this example that identifier would be "MyBucket".

##### **The provided template is not deployed to CloudFormation in the account/region, and there *is* overlap with existing resources in the account/region**

If the provided template represents a set of resources that overlap with resources already deployed in the account/region, then the generated application will not be immediately deployable. If those overlapped resources are already managed by another CloudFormation stack in that account/region, then those resources will need to be manually removed from the provided template. Otherwise, if the overlapped resources are not managed by another CloudFormation stack, then first remove those resources from your CDK Application Stack, deploy the cdk application successfully, then re-add them and run `cdk import` to import them into your deployed stack.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Ran npm-check-updates and yarn upgrade to keep the `yarn.lock` file up-to-date.
…e (under feature flag) (#28556) [The documentation](https://github.com/aws/aws-cdk/blob/f4c1d1253ee34c2837a57a93faa47c9da97ef6d8/packages/aws-cdk-lib/aws-codepipeline/lib/pipeline.ts#L380-L381) mentions updating the default for CDK v2. Sounds like we should add it behind a feature flag. Closes #28247. ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
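As a sketch of what the change means in practice: with the feature flag enabled, new pipelines default to the V2 pipeline type, and you can always set the type explicitly. The flag key in the comment is an assumption, not quoted from the PR body.

```ts
import * as codepipeline from 'aws-cdk-lib/aws-codepipeline';
import { Construct } from 'constructs';

declare const scope: Construct;

// Explicitly opting in to V2 regardless of the feature flag
// (flag key assumed to be '@aws-cdk/aws-codepipeline:defaultPipelineTypeToV2').
// Stages omitted for brevity; a real pipeline needs at least source and build stages.
new codepipeline.Pipeline(scope, 'Pipeline', {
  pipelineType: codepipeline.PipelineType.V2,
});
```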
### Reason for this change

Include an explanation of the options in the documentation to make it easier for users to understand.

### Description of changes

This PR adds explanations of the missing options to the README.

### Description of how you validated changes

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Add new Lambda compute images for both `aarch64` and `x86_64` architectures:

- `aws/codebuild/amazonlinux-aarch64-lambda-standard:corretto21`
- `aws/codebuild/amazonlinux-aarch64-lambda-standard:nodejs20`
- `aws/codebuild/amazonlinux-aarch64-lambda-standard:python3.12`

Reference: https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-available.html#lambda-compute-images

aws/aws-codebuild-docker-images#687

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
This is a follow up to #28658, #28772, and #28760. We had to fix multiple places where a file path extended beyond the package itself into other areas of the local repository (that would not be available after packaging). This caused myriad issues at synth time with `file not found` errors.

This PR introduces a linter rule with the following specifications:

- no inefficient paths, i.e. no going backwards multiple times. Ex. `path.join(__dirname, '..', 'folder', '..', 'another-folder')`. This should and can be easily simplified
- no paths that go backwards past a `package.json` file. This should catch such instances next time.

The `yarn lint` command on `aws-cdk-lib` took 51.47 seconds without this new rule and 53.32 seconds with the rule enabled. The difference of ~2 seconds shouldn't be a hindrance in this case, but I am happy to look for additional efficiencies in the rule I've written.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
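A small sketch of the kind of path construction the rule is meant to flag versus allow; the rule's name and configuration aren't given in the PR body, so this only illustrates the two specifications listed above:

```ts
import * as path from 'path';

// Flagged: inefficient, goes backwards and forwards and can be simplified.
const inefficient = path.join(__dirname, '..', 'folder', '..', 'another-folder');

// Preferred: the simplified equivalent that stays within the package.
const simplified = path.join(__dirname, '..', 'another-folder');

// Both resolve to <parent-of-__dirname>/another-folder.
console.log(inefficient === simplified); // true
```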
…igger() (#28899) I have added a `lambdaVersion` to the `UserPool.addTrigger()`. This is in response to the [support for V2.0 trigger event in preTokenGeneration](https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-lambda-pre-token-generation.html). ```ts declare const userpool: cognito.UserPool; declare const preTokenGenerationFn: lambda.Function; userpool.addTrigger(cognito.UserPoolOperation.PRE_TOKEN_GENERATION_CONFIG, preTokenGenerationFn, LambdaVersion.V2_0); ``` In #28683, apart from the current implementation approach, there was also a proposal to add `lambdaVersion` to `UserPoolProps.lambdaTrigger`. However, it was not adopted as it would result in a breaking change. Closes #28683 ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…28877) This PR supports the timeout configuration for Service Connect.

Release: https://aws.amazon.com/about-aws/whats-new/2024/01/amazon-ecs-configurable-timeout-service-connect/

Developer guide: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-connect-concepts.html#service-connect-concepts-proxy

CloudFormation:
- https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ecs-service-serviceconnectservice.html#cfn-ecs-service-serviceconnectservice-timeout
- https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ecs-service-timeoutconfiguration.html

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
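A minimal usage sketch, assuming the new timeout settings are exposed as `idleTimeout` and `perRequestTimeout` on the Service Connect service configuration (the property names are an assumption, not quoted from the PR):

```ts
import { Duration } from 'aws-cdk-lib';
import * as ecs from 'aws-cdk-lib/aws-ecs';

declare const service: ecs.FargateService;

// Assumed property names for the Service Connect timeout configuration.
service.enableServiceConnect({
  services: [{
    portMappingName: 'api',
    idleTimeout: Duration.minutes(5),
    perRequestTimeout: Duration.seconds(30),
  }],
});
```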
When using the `awsApiCall` of integ-tests, several possible ways exist to specify the `service` and `api`. This was made possible by the following PR: https://github.com/aws/aws-cdk/pull/27313/files#diff-3ab65cbf843775673ff370c9c90deceba5f0ead8a3e016e0c2f243d27bf84609

However, currently, when specifying the SDK v3 package name or client name, the resource type in the custom resource or the logical ID in the CloudFormation output contains non-alphanumeric characters (`@`, `/`, `-`), which results in an error.

For custom resources, the resource type can include alphanumeric characters and the following characters: `_@-` https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cloudformation-customresource.html#aws-resource-cloudformation-customresource--remarks

For `CfnOutput`, the logical ID can include only alphanumeric characters. https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/outputs-section-structure.html#outputs-section-syntax

This PR fixes this by removing the characters that cannot be included, and allows users to specify the SDK v3 package name and client name when using `awsApiCall`.

Closes #28844

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
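Roughly, the fix means calls like the following (mixing SDK v2-style and v3-style names) should now synthesize without producing invalid logical IDs; the exact service strings are assumptions used for illustration:

```ts
import { IntegTest } from '@aws-cdk/integ-tests-alpha';

declare const integ: IntegTest;

// SDK v2-style service name (already worked).
integ.assertions.awsApiCall('S3', 'listBuckets');

// SDK v3 package name -- previously the '@' and '/' leaked into the
// custom resource type and output logical ID, causing an error.
integ.assertions.awsApiCall('@aws-sdk/client-s3', 'ListBucketsCommand');
```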
### Issue # (if applicable)

Fixes bug in the LoggingFormat `@default`.

### Reason for this change

The `@default` was incorrect. Related issue: #28114.

### Description of changes

Not much to describe here.

### Description of how you validated changes

Shouldn't need any, as there are integ tests from PR #28942.

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

BREAKING CHANGE: changes the default type of `lambda.loggingFormat`. Previously it was a string `"Text format"`; now it is an enum `LoggingFormat.TEXT`.

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…nCreation` is set to true (#28902) This PR resolves the issue where deploying an isolated subnet with `ipv6AssignAddressOnCreation` enabled fails.

### example

```ts
new Vpc(stack, 'TheVPC', {
  ipProtocol: IpProtocol.DUAL_STACK,
  subnetConfiguration: [
    {
      subnetType: testData.subnetType,
      name: 'subnetName',
      ipv6AssignAddressOnCreation: true,
    },
  ],
});
```

### error

```sh
6:39:48 PM | CREATE_FAILED | AWS::EC2::Subnet | vpcisolatedSubnet1Subnet06BBE51F
Template error: Fn::Select cannot select nonexistent value at index 0
```

### solution

A dependency on the CidrBlock has been added [as discussed in issue](#28843 (comment)).

```ts
(this.isolatedSubnets as PrivateSubnet[]).forEach((isolatedSubnet) => {
  if (this.ipv6CidrBlock !== undefined) {
    isolatedSubnet.node.addDependency(this.ipv6CidrBlock);
  }
});
```

## Question

This modification results in the failure of existing integration tests. I don't consider this change to be a breaking one, so I went ahead and updated the snapshot. Is that okay?

```sh
CHANGED    aws-ec2/test/integ.vpc-dual-stack-ec2 0.776s
Resources
[~] AWS::EC2::Subnet Ip6VpcDualStackPrivateSubnet1Subnet842B7F4C
 └─ [+] DependsOn
     └─ ["Ip6VpcDualStackipv6cidr40BE830A"]
[~] AWS::EC2::RouteTable Ip6VpcDualStackPrivateSubnet1RouteTable5326D239
 └─ [+] DependsOn
     └─ ["Ip6VpcDualStackipv6cidr40BE830A"]
[~] AWS::EC2::SubnetRouteTableAssociation Ip6VpcDualStackPrivateSubnet1RouteTableAssociationF1C10B6A
 └─ [+] DependsOn
     └─ ["Ip6VpcDualStackipv6cidr40BE830A"]
[~] AWS::EC2::Subnet Ip6VpcDualStackPrivateSubnet2SubnetEB493489
 └─ [+] DependsOn
     └─ ["Ip6VpcDualStackipv6cidr40BE830A"]
[~] AWS::EC2::RouteTable Ip6VpcDualStackPrivateSubnet2RouteTable56BF517C
 └─ [+] DependsOn
     └─ ["Ip6VpcDualStackipv6cidr40BE830A"]
[~] AWS::EC2::SubnetRouteTableAssociation Ip6VpcDualStackPrivateSubnet2RouteTableAssociationD37A3D3D
 └─ [+] DependsOn
     └─ ["Ip6VpcDualStackipv6cidr40BE830A"]
```

Closes #28843

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 5 to 6. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/peter-evans/create-pull-request/releases">peter-evans/create-pull-request's releases</a>.</em></p> <blockquote> <h2>Create Pull Request v6.0.0</h2> <h2>Behaviour changes</h2> <ul> <li>The default values for <code>author</code> and <code>committer</code> have changed. See "What's new" below for details. If you are overriding the default values you will not be affected by this change.</li> <li>On completion, the action now removes the temporary git remote configuration it adds when using <code>push-to-fork</code>. This should not affect you unless you were using the temporary configuration for some other purpose after the action completes.</li> </ul> <h2>What's new</h2> <ul> <li>Updated runtime to Node.js 20 <ul> <li>The action now requires a minimum version of <a href="https://github.com/actions/runner/releases/tag/v2.308.0">v2.308.0</a> for the Actions runner. Update self-hosted runners to v2.308.0 or later to ensure compatibility.</li> </ul> </li> <li>The default value for <code>author</code> has been changed to <code>${{ github.actor }} <${{ github.actor_id }}+${{ github.actor }}@users.noreply.github.com></code>. The change adds the <code>${{ github.actor_id }}+</code> prefix to the email address to align with GitHub's standard format for the author email address.</li> <li>The default value for <code>committer</code> has been changed to <code>github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com></code>. This is to align with the default GitHub Actions bot user account.</li> <li>Adds input <code>git-token</code>, the <a href="https://docs.github.com/en/github/authenticating-to-github/creating-a-personal-access-token">Personal Access Token (PAT)</a> that the action will use for git operations. This input defaults to the value of <code>token</code>. Use this input if you would like the action to use a different token for git operations than the one used for the GitHub API.</li> <li><code>push-to-fork</code> now supports pushing to sibling repositories in the same network.</li> <li>Previously, when using <code>push-to-fork</code>, the action did not remove temporary git remote configuration it adds during execution. This has been fixed and the configuration is now removed when the action completes.</li> <li>If the pull request body is truncated due to exceeding the maximum length, the action will now suffix the body with the message "...<em>[Pull request body truncated]</em>" to indicate that the body has been truncated.</li> <li>The action now uses <code>--unshallow</code> only when necessary, rather than as a default argument of <code>git fetch</code>. This should improve performance, particularly for large git repositories with extensive commit history.</li> <li>The action can now be executed on one GitHub server and create pull requests on a <em>different</em> GitHub server. Server products include GitHub hosted (github.com), GitHub Enterprise Server (GHES), and GitHub Enterprise Cloud (GHEC). 
For example, the action can be executed on GitHub hosted and create pull requests on a GHES or GHEC instance.</li> </ul> <h2>What's Changed</h2> <ul> <li>Update distribution by <a href="https://github.com/actions-bot"><code>@actions-bot</code></a> in <a href="https://redirect.github.com/peter-evans/create-pull-request/pull/2086">peter-evans/create-pull-request#2086</a></li> <li>fix crazy-max/ghaction-import-gp parameters by <a href="https://github.com/fharper"><code>@fharper</code></a> in <a href="https://redirect.github.com/peter-evans/create-pull-request/pull/2177">peter-evans/create-pull-request#2177</a></li> <li>Update distribution by <a href="https://github.com/actions-bot"><code>@actions-bot</code></a> in <a href="https://redirect.github.com/peter-evans/create-pull-request/pull/2364">peter-evans/create-pull-request#2364</a></li> <li>Use checkout v4 by <a href="https://github.com/okuramasafumi"><code>@okuramasafumi</code></a> in <a href="https://redirect.github.com/peter-evans/create-pull-request/pull/2521">peter-evans/create-pull-request#2521</a></li> <li>Note about <code>delete-branch</code> by <a href="https://github.com/dezren39"><code>@dezren39</code></a> in <a href="https://redirect.github.com/peter-evans/create-pull-request/pull/2631">peter-evans/create-pull-request#2631</a></li> <li>98 dependency updates by <a href="https://github.com/dependabot"><code>@dependabot</code></a></li> </ul> <h2>New Contributors</h2> <ul> <li><a href="https://github.com/fharper"><code>@fharper</code></a> made their first contribution in <a href="https://redirect.github.com/peter-evans/create-pull-request/pull/2177">peter-evans/create-pull-request#2177</a></li> <li><a href="https://github.com/okuramasafumi"><code>@okuramasafumi</code></a> made their first contribution in <a href="https://redirect.github.com/peter-evans/create-pull-request/pull/2521">peter-evans/create-pull-request#2521</a></li> <li><a href="https://github.com/dezren39"><code>@dezren39</code></a> made their first contribution in <a href="https://redirect.github.com/peter-evans/create-pull-request/pull/2631">peter-evans/create-pull-request#2631</a></li> </ul> <p><strong>Full Changelog</strong>: <a href="https://github.com/peter-evans/create-pull-request/compare/v5.0.2...v6.0.0">https://github.com/peter-evans/create-pull-request/compare/v5.0.2...v6.0.0</a></p> <h2>Create Pull Request v5.0.2</h2> <p>⚙️ Fixes an issue that occurs when using <code>push-to-fork</code> and both base and head repositories are in the same org/user account.</p> <h2>What's Changed</h2> <ul> <li>fix: specify head repo by <a href="https://github.com/peter-evans"><code>@peter-evans</code></a> in <a href="https://redirect.github.com/peter-evans/create-pull-request/pull/2044">peter-evans/create-pull-request#2044</a></li> <li>20 dependency updates by <a href="https://github.com/dependabot"><code>@dependabot</code></a></li> </ul> <p><strong>Full Changelog</strong>: <a href="https://github.com/peter-evans/create-pull-request/compare/v5.0.1...v5.0.2">https://github.com/peter-evans/create-pull-request/compare/v5.0.1...v5.0.2</a></p> <h2>Create Pull Request v5.0.1</h2> <h2>What's Changed</h2> <ul> <li>fix: truncate body if exceeds max length by <a href="https://github.com/peter-evans"><code>@peter-evans</code></a> in <a href="https://redirect.github.com/peter-evans/create-pull-request/pull/1915">peter-evans/create-pull-request#1915</a></li> <li>12 dependency updates by <a href="https://github.com/dependabot"><code>@dependabot</code></a></li> </ul> <p><strong>Full 
Changelog</strong>: <a href="https://github.com/peter-evans/create-pull-request/compare/v5.0.0...v5.0.1">https://github.com/peter-evans/create-pull-request/compare/v5.0.0...v5.0.1</a></p> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/peter-evans/create-pull-request/commit/b1ddad2c994a25fbc81a28b3ec0e368bb2021c50"><code>b1ddad2</code></a> feat: v6 (<a href="https://redirect.github.com/peter-evans/create-pull-request/issues/2717">#2717</a>)</li> <li><a href="https://github.com/peter-evans/create-pull-request/commit/bb809027fda03cc267431a7d36a88148eb9f3846"><code>bb80902</code></a> build(deps-dev): bump <code>@types/node</code> from 18.19.8 to 18.19.10 (<a href="https://redirect.github.com/peter-evans/create-pull-request/issues/2712">#2712</a>)</li> <li><a href="https://github.com/peter-evans/create-pull-request/commit/e0037d470cdeb1c8133acfba89af08639bb69eb3"><code>e0037d4</code></a> build(deps): bump peter-evans/create-or-update-comment from 3 to 4 (<a href="https://redirect.github.com/peter-evans/create-pull-request/issues/2702">#2702</a>)</li> <li><a href="https://github.com/peter-evans/create-pull-request/commit/94b1f99e3a73880074d0e669c3b69d376cc8ceae"><code>94b1f99</code></a> build(deps): bump peter-evans/find-comment from 2 to 3 (<a href="https://redirect.github.com/peter-evans/create-pull-request/issues/2703">#2703</a>)</li> <li><a href="https://github.com/peter-evans/create-pull-request/commit/69c27eaf4a14a67b5362a51e681f83d3d5e0f96b"><code>69c27ea</code></a> build(deps-dev): bump ts-jest from 29.1.1 to 29.1.2 (<a href="https://redirect.github.com/peter-evans/create-pull-request/issues/2685">#2685</a>)</li> <li><a href="https://github.com/peter-evans/create-pull-request/commit/7ea722a0f6286a45eb3005280d83575a74bc8fef"><code>7ea722a</code></a> build(deps-dev): bump prettier from 3.2.2 to 3.2.4 (<a href="https://redirect.github.com/peter-evans/create-pull-request/issues/2684">#2684</a>)</li> <li><a href="https://github.com/peter-evans/create-pull-request/commit/5ee839affd4c87811108724370a2819a40e2e5d3"><code>5ee839a</code></a> build(deps-dev): bump <code>@types/node</code> from 18.19.7 to 18.19.8 (<a href="https://redirect.github.com/peter-evans/create-pull-request/issues/2683">#2683</a>)</li> <li><a href="https://github.com/peter-evans/create-pull-request/commit/60fc256c678e6ed78d0d42e09675c9beba09cb94"><code>60fc256</code></a> build(deps-dev): bump eslint-plugin-prettier from 5.1.2 to 5.1.3 (<a href="https://redirect.github.com/peter-evans/create-pull-request/issues/2660">#2660</a>)</li> <li><a href="https://github.com/peter-evans/create-pull-request/commit/0c677233614c017442253060c74fd2cb7ff349fc"><code>0c67723</code></a> build(deps-dev): bump <code>@types/node</code> from 18.19.5 to 18.19.7 (<a href="https://redirect.github.com/peter-evans/create-pull-request/issues/2661">#2661</a>)</li> <li><a href="https://github.com/peter-evans/create-pull-request/commit/4e288e851b95bd1362e281a255094fcc47ada675"><code>4e288e8</code></a> build(deps-dev): bump prettier from 3.1.1 to 3.2.2 (<a href="https://redirect.github.com/peter-evans/create-pull-request/issues/2659">#2659</a>)</li> <li>Additional commits viewable in <a href="https://github.com/peter-evans/create-pull-request/compare/v5...v6">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility 
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=peter-evans/create-pull-request&package-manager=github_actions&previous-version=5&new-version=6)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
Bumps [hmarr/auto-approve-action](https://github.com/hmarr/auto-approve-action) from 3.2.1 to 4.0.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/hmarr/auto-approve-action/releases">hmarr/auto-approve-action's releases</a>.</em></p> <blockquote> <h2>v4.0.0</h2> <h2>What's Changed</h2> <ul> <li>Upgrade from node 16 to node 20</li> <li>Upgrade dependencies and switch from nock to msw for API mocking</li> </ul> <p><strong>Full Changelog</strong>: <a href="https://github.com/hmarr/auto-approve-action/compare/v3.2.1...v4.0.0">https://github.com/hmarr/auto-approve-action/compare/v3.2.1...v4.0.0</a></p> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/hmarr/auto-approve-action/commit/f0939ea97e9205ef24d872e76833fa908a770363"><code>f0939ea</code></a> rebuild</li> <li><a href="https://github.com/hmarr/auto-approve-action/commit/37b1c4c6a8eb2a5a86e9b07d286e59f1a29a8f36"><code>37b1c4c</code></a> prettier</li> <li><a href="https://github.com/hmarr/auto-approve-action/commit/5c6d6d8923903fbaa1270779f043d339501e10a0"><code>5c6d6d8</code></a> bump version in readme</li> <li><a href="https://github.com/hmarr/auto-approve-action/commit/6eba12cfa8a263db29ee1dd3b80c3e5d47f58a4b"><code>6eba12c</code></a> bump to node20</li> <li><a href="https://github.com/hmarr/auto-approve-action/commit/b1026103765d2bea1fc21d4b96390ec953a8a745"><code>b102610</code></a> upgrade deps, switch from nock to msw</li> <li><a href="https://github.com/hmarr/auto-approve-action/commit/7d0ab8fdbb906da8a6297d373561d5ccb137d98f"><code>7d0ab8f</code></a> Bump <code>@babel/traverse</code> from 7.17.3 to 7.23.2 (<a href="https://redirect.github.com/hmarr/auto-approve-action/issues/222">#222</a>)</li> <li><a href="https://github.com/hmarr/auto-approve-action/commit/134e1011866211ab5b452c8ab06ea0447e888610"><code>134e101</code></a> Bump tough-cookie from 4.0.0 to 4.1.3 (<a href="https://redirect.github.com/hmarr/auto-approve-action/issues/220">#220</a>)</li> <li><a href="https://github.com/hmarr/auto-approve-action/commit/05117a7ec2883d5c9a48de532442e96d57ae12f6"><code>05117a7</code></a> Bump word-wrap from 1.2.3 to 1.2.4 (<a href="https://redirect.github.com/hmarr/auto-approve-action/issues/221">#221</a>)</li> <li><a href="https://github.com/hmarr/auto-approve-action/commit/93c80b3919aae15c0da0d3ca49c70f57e3c4a58f"><code>93c80b3</code></a> Fix a wrong example (<a href="https://redirect.github.com/hmarr/auto-approve-action/issues/218">#218</a>)</li> <li>See full diff in <a href="https://github.com/hmarr/auto-approve-action/compare/v3.2.1...v4.0.0">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=hmarr/auto-approve-action&package-manager=github_actions&previous-version=3.2.1&new-version=4.0.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. 
### Reason for this change

The current introduction does not properly explain how it works.

### Description of changes

Updated the README only.

### Description of how you validated changes

I have asked for feedback from peers.

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Enforcing `@typescript-eslint/comma-dangle` instead of the regular `eslint/comma-dangle`. This gives us additional linting on enums, generics, and tuples. Mostly, I care about the enums. https://github.com/typescript-eslint/typescript-eslint/blob/main/packages/eslint-plugin/docs/rules/comma-dangle.md https://medium.com/@nikgraf/why-you-should-enforce-dangling-commas-for-multiline-statements-d034c98e36f8 ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
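As a quick illustration of what the TypeScript-aware rule adds (a minimal sketch; the repository's actual lint configuration and the enum shown here are assumptions, not code from this PR), a multiline enum now requires a dangling comma after its last member:

```ts
// With '@typescript-eslint/comma-dangle' configured for multiline constructs,
// the comma after the last enum member is enforced; the base ESLint
// 'comma-dangle' rule does not check enum members at all.
export enum LogLevel {
  INFO = 'INFO',
  ERROR = 'ERROR', // <-- dangling comma enforced here
}
```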
Enforces that a `;` separates properties of an interface. ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
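For illustration, the enforced style looks like this on a multiline interface (the interface and property names below are only examples, and the exact lint rule used is not named in this PR description):

```ts
// Interface members are separated by semicolons rather than commas.
export interface ExampleProps {
  readonly bucketName: string;
  readonly retentionDays?: number;
}
```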
#28099 added this file erroneously. ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…IC (#28870) The CloudWatch `MathExpression` class warns about identifiers missing from `usingMetrics` when `INSIGHT_RULE_METRIC` is used in the expression. It incorrectly parses the arguments to `INSIGHT_RULE_METRIC` as identifiers. When using `INSIGHT_RULE_METRIC`, I don't believe there is anything that needs to be added to `usingMetrics`. This implementation follows a similar fix done for some other expressions here: #24313 ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
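A minimal sketch of the case this fixes (the rule and metric names are hypothetical): before this change, constructing the expression below warned that `'MyRule'` and `'UniqueContributors'` were missing from `usingMetrics`, even though `INSIGHT_RULE_METRIC` takes a Contributor Insights rule name and metric name rather than metric identifiers.

```ts
import * as cloudwatch from 'aws-cdk-lib/aws-cloudwatch';

// No entries in `usingMetrics` are needed: the arguments name a
// Contributor Insights rule and one of its metrics, not math identifiers.
const uniqueContributors = new cloudwatch.MathExpression({
  expression: "INSIGHT_RULE_METRIC('MyRule', 'UniqueContributors')",
});
```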
…28943)

### Issue # (if applicable)

Closes #4323

### Reason for this change

S3EventSource should accept `IBucket` instead of `Bucket`. `aws_s3.Bucket.from_bucket_name(...)`, `aws_s3.Bucket.from_bucket_arn(...)`, etc. return an `aws_s3.IBucket`.

### Description of changes

Based on @otaviomacedo 's comment in #25782 , a new class `S3EventSourceV2` is implemented that accepts `IBucket` instead of `Bucket`, avoiding breaking changes.

### Description of how you validated changes

- unit test
- integration test

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
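A rough sketch of the intended usage (the construct IDs, bucket name, and scope are placeholders, and the exact props may differ from the final API):

```ts
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as s3 from 'aws-cdk-lib/aws-s3';
import { S3EventSourceV2 } from 'aws-cdk-lib/aws-lambda-event-sources';

declare const scope: Construct;    // hypothetical scope
declare const fn: lambda.Function; // hypothetical function

// fromBucketName returns an IBucket, which S3EventSourceV2 accepts.
const bucket = s3.Bucket.fromBucketName(scope, 'ImportedBucket', 'my-existing-bucket');
fn.addEventSource(new S3EventSourceV2(bucket, {
  events: [s3.EventType.OBJECT_CREATED],
}));
```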
### Issue

Closes #29007.

### Reason for this change

[In October 2023, it became possible to set a message archive policy for FIFO topics](https://aws.amazon.com/jp/blogs/compute/archiving-and-replaying-messages-with-amazon-sns-fifo/). While this could be configured via CloudFormation (Cfn), it was not possible to do so from the L2 construct.

### Description of changes

In this pull request, the `messageRetentionPeriodInDays` parameter has been added to the `Topic` class, enabling the configuration of the message archive policy.

```ts
new Topic(this, 'MyTopic', {
  fifo: true, // only fifo topic
  messageRetentionPeriodInDays: 12, // added
});
```

### Description of how you validated changes

I've added unit and integ tests.

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
### Issue # (if applicable)

Closes #<issue number here>.

### Reason for this change

### Description of changes

### Description of how you validated changes

### Checklist
- [ ] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…29035) ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
### Issue # (if applicable)

n/a

### Reason for this change

If the comment is accidentally left in the PR description, the PR could get incorrectly flagged as having breaking changes.

### Description of changes

Remove the breaking-changes comment from the PR description.

### Description of how you validated changes

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
### Reason for this change

Want to recommend against creating AppConfig resources across multiple stacks.

### Description of changes

### Description of how you validated changes

### Checklist
- [ ] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Adds support for the Step Functions Map state in Distributed mode. Currently, to create a Distributed Map in CDK, users have to define a Custom State containing their Amazon States Language definition. This solution adds a new L2 construct, `DistributedMap`. A separate construct was chosen because some fields, such as `ItemReader`, are exclusive to Distributed Maps; adding support through the existing `Map` L2 construct would leave some fields only conditionally available.

Some design decisions that were made (see the sketch after this description):

- An abstract class `MapBase` encapsulates all fields currently supported by both `inline` and `distributed` maps. This includes all currently supported fields in the CDK except for `iterator` and `parameters` (deprecated fields), which are now part of the `Map` subclass that extends `MapBase`. All new Distributed Map fields are part of the new `DistributedMap` construct (also a subclass of `MapBase`).
- Permissions specific to Distributed Maps are added as part of this new construct.

Thanks to @beck3905 and their PR #24331 for inspiration. A lot of the ideas here are re-used from the PR cited.

Closes #23216

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
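A minimal sketch of defining a Distributed Map with the new construct (the construct IDs and scope are placeholders, and the prop and method names follow the description above, so they may differ from the final API):

```ts
import { Construct } from 'constructs';
import * as sfn from 'aws-cdk-lib/aws-stepfunctions';

declare const scope: Construct; // hypothetical scope

// Distributed Map with an item processor; fields exclusive to Distributed
// Maps (e.g. an item reader) live on DistributedMap rather than Map.
const distributedMap = new sfn.DistributedMap(scope, 'DistributedMap', {
  maxConcurrency: 100,
});
distributedMap.itemProcessor(new sfn.Pass(scope, 'ProcessItem'));

new sfn.StateMachine(scope, 'StateMachine', {
  definitionBody: sfn.DefinitionBody.fromChainable(distributedMap),
});
```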
…een submitted (#29000)

### Issue # (if applicable)

Closes #28803

### Reason for this change

If a contributor requests an exemption or clarification for a PR, they will get a second message (due to the comment event) telling them again that they can request an exemption/clarification. This may confuse contributors, especially beginners.

### Description of changes

Detect whether an exemption request comment already exists; if so, the next comment states that a request has already been submitted and is awaiting a maintainer's review.

### Description of how you validated changes

unit tests

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
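A hypothetical sketch of the detection logic described above (the actual workflow code and comment wording in this repository may differ):

```ts
// Decide which follow-up comment to post, based on whether an exemption
// request was already made on the PR.
function alreadyRequestedExemption(commentBodies: string[]): boolean {
  return commentBodies.some((body) => body.toLowerCase().includes('exemption request'));
}

function followUpComment(commentBodies: string[]): string {
  return alreadyRequestedExemption(commentBodies)
    ? 'An exemption request has already been submitted and is awaiting review by a maintainer.'
    : 'If you believe this PR qualifies for an exemption, you may request one by commenting on it.';
}
```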
### Issue # (if applicable)

Closes #<issue number here>.

### Reason for this change

### Description of changes

### Description of how you validated changes

### Checklist
- [ ] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
### Reason for this change

Making changes after API review.

### Description of changes

Refactor README and integ tests.

### Description of how you validated changes

### Checklist
- [ ] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Updates the L1 CloudFormation resource definitions with the latest changes from `@aws-cdk/aws-service-spec` **L1 CloudFormation resource definition changes:** ``` ├[~] service aws-acmpca │ └ resources │ └[~] resource AWS::ACMPCA::CertificateAuthority │ └ types │ ├[~] type CrlConfiguration │ │ ├ - documentation: Contains configuration information for a certificate revocation list (CRL). Your private certificate authority (CA) creates base CRLs. Delta CRLs are not supported. You can enable CRLs for your new or an existing private CA by setting the *Enabled* parameter to `true` . Your private CA writes CRLs to an S3 bucket that you specify in the *S3BucketName* parameter. You can hide the name of your bucket by specifying a value for the *CustomCname* parameter. Your private CA copies the CNAME or the S3 bucket name to the *CRL Distribution Points* extension of each certificate it issues. Your S3 bucket policy must give write permission to AWS Private CA. │ │ │ AWS Private CA assets that are stored in Amazon S3 can be protected with encryption. For more information, see [Encrypting Your CRLs](https://docs.aws.amazon.com/privateca/latest/userguide/PcaCreateCa.html#crl-encryption) . │ │ │ Your private CA uses the value in the *ExpirationInDays* parameter to calculate the *nextUpdate* field in the CRL. The CRL is refreshed prior to a certificate's expiration date or when a certificate is revoked. When a certificate is revoked, it appears in the CRL until the certificate expires, and then in one additional CRL after expiration, and it always appears in the audit report. │ │ │ A CRL is typically updated approximately 30 minutes after a certificate is revoked. If for any reason a CRL update fails, AWS Private CA makes further attempts every 15 minutes. │ │ │ CRLs contain the following fields: │ │ │ - *Version* : The current version number defined in RFC 5280 is V2. The integer value is 0x1. │ │ │ - *Signature Algorithm* : The name of the algorithm used to sign the CRL. │ │ │ - *Issuer* : The X.500 distinguished name of your private CA that issued the CRL. │ │ │ - *Last Update* : The issue date and time of this CRL. │ │ │ - *Next Update* : The day and time by which the next CRL will be issued. │ │ │ - *Revoked Certificates* : List of revoked certificates. Each list item contains the following information. │ │ │ - *Serial Number* : The serial number, in hexadecimal format, of the revoked certificate. │ │ │ - *Revocation Date* : Date and time the certificate was revoked. │ │ │ - *CRL Entry Extensions* : Optional extensions for the CRL entry. │ │ │ - *X509v3 CRL Reason Code* : Reason the certificate was revoked. │ │ │ - *CRL Extensions* : Optional extensions for the CRL. │ │ │ - *X509v3 Authority Key Identifier* : Identifies the public key associated with the private key used to sign the certificate. │ │ │ - *X509v3 CRL Number:* : Decimal sequence number for the CRL. │ │ │ - *Signature Algorithm* : Algorithm used by your private CA to sign the CRL. │ │ │ - *Signature Value* : Signature computed over the CRL. │ │ │ Certificate revocation lists created by AWS Private CA are DER-encoded. You can use the following OpenSSL command to list a CRL. │ │ │ `openssl crl -inform DER -text -in *crl_path* -noout` │ │ │ For more information, see [Planning a certificate revocation list (CRL)](https://docs.aws.amazon.com/privateca/latest/userguide/crl-planning.html) in the *AWS Private Certificate Authority User Guide* │ │ │ + documentation: Contains configuration information for a certificate revocation list (CRL). 
Your private certificate authority (CA) creates base CRLs. Delta CRLs are not supported. You can enable CRLs for your new or an existing private CA by setting the *Enabled* parameter to `true` . Your private CA writes CRLs to an S3 bucket that you specify in the *S3BucketName* parameter. You can hide the name of your bucket by specifying a value for the *CustomCname* parameter. Your private CA by default copies the CNAME or the S3 bucket name to the *CRL Distribution Points* extension of each certificate it issues. If you want to configure this default behavior to be something different, you can set the *CrlDistributionPointExtensionConfiguration* parameter. Your S3 bucket policy must give write permission to AWS Private CA. │ │ │ AWS Private CA assets that are stored in Amazon S3 can be protected with encryption. For more information, see [Encrypting Your CRLs](https://docs.aws.amazon.com/privateca/latest/userguide/PcaCreateCa.html#crl-encryption) . │ │ │ Your private CA uses the value in the *ExpirationInDays* parameter to calculate the *nextUpdate* field in the CRL. The CRL is refreshed prior to a certificate's expiration date or when a certificate is revoked. When a certificate is revoked, it appears in the CRL until the certificate expires, and then in one additional CRL after expiration, and it always appears in the audit report. │ │ │ A CRL is typically updated approximately 30 minutes after a certificate is revoked. If for any reason a CRL update fails, AWS Private CA makes further attempts every 15 minutes. │ │ │ CRLs contain the following fields: │ │ │ - *Version* : The current version number defined in RFC 5280 is V2. The integer value is 0x1. │ │ │ - *Signature Algorithm* : The name of the algorithm used to sign the CRL. │ │ │ - *Issuer* : The X.500 distinguished name of your private CA that issued the CRL. │ │ │ - *Last Update* : The issue date and time of this CRL. │ │ │ - *Next Update* : The day and time by which the next CRL will be issued. │ │ │ - *Revoked Certificates* : List of revoked certificates. Each list item contains the following information. │ │ │ - *Serial Number* : The serial number, in hexadecimal format, of the revoked certificate. │ │ │ - *Revocation Date* : Date and time the certificate was revoked. │ │ │ - *CRL Entry Extensions* : Optional extensions for the CRL entry. │ │ │ - *X509v3 CRL Reason Code* : Reason the certificate was revoked. │ │ │ - *CRL Extensions* : Optional extensions for the CRL. │ │ │ - *X509v3 Authority Key Identifier* : Identifies the public key associated with the private key used to sign the certificate. │ │ │ - *X509v3 CRL Number:* : Decimal sequence number for the CRL. │ │ │ - *Signature Algorithm* : Algorithm used by your private CA to sign the CRL. │ │ │ - *Signature Value* : Signature computed over the CRL. │ │ │ Certificate revocation lists created by AWS Private CA are DER-encoded. You can use the following OpenSSL command to list a CRL. 
│ │ │ `openssl crl -inform DER -text -in *crl_path* -noout` │ │ │ For more information, see [Planning a certificate revocation list (CRL)](https://docs.aws.amazon.com/privateca/latest/userguide/crl-planning.html) in the *AWS Private Certificate Authority User Guide* │ │ └ properties │ │ └ CrlDistributionPointExtensionConfiguration: (documentation changed) │ └[~] type CrlDistributionPointExtensionConfiguration │ ├ - documentation: Configures the default behavior of the CRL Distribution Point extension for certificates issued by your certificate authority │ │ + documentation: Contains configuration information for the default behavior of the CRL Distribution Point (CDP) extension in certificates issued by your CA. This extension contains a link to download the CRL, so you can check whether a certificate has been revoked. To choose whether you want this extension omitted or not in certificates issued by your CA, you can set the *OmitExtension* parameter. │ └ properties │ └ OmitExtension: (documentation changed) ├[~] service aws-amazonmq │ └ resources │ └[~] resource AWS::AmazonMQ::Broker │ └ types │ └[~] type User │ └ properties │ └ ReplicationUser: (documentation changed) ├[~] service aws-amplifyuibuilder │ └ resources │ ├[~] resource AWS::AmplifyUIBuilder::Component │ │ ├ properties │ │ │ ├ AppId: - string │ │ │ │ + string (immutable) │ │ │ ├ BindingProperties: - Map<string, ComponentBindingPropertiesValue> (required) │ │ │ │ + Map<string, ComponentBindingPropertiesValue> │ │ │ ├ ComponentType: - string (required) │ │ │ │ + string │ │ │ ├ EnvironmentName: - string │ │ │ │ + string (immutable) │ │ │ ├ Name: - string (required) │ │ │ │ + string │ │ │ ├ Overrides: - Map<string, Map<string, string>> ⇐ json (required) │ │ │ │ + Map<string, Map<string, string>> ⇐ json │ │ │ ├ Properties: - Map<string, ComponentProperty> (required) │ │ │ │ + Map<string, ComponentProperty> │ │ │ └ Variants: - Array<ComponentVariant> (required) │ │ │ + Array<ComponentVariant> │ │ ├ attributes │ │ │ ├[+] CreatedAt: string │ │ │ └[+] ModifiedAt: string │ │ └ types │ │ ├[~] type ComponentBindingPropertiesValueProperties │ │ │ └ properties │ │ │ └[+] SlotName: string │ │ ├[~] type ComponentChild │ │ │ └ properties │ │ │ └[+] SourceId: string │ │ ├[~] type ComponentEvent │ │ │ └ properties │ │ │ └[+] BindingEvent: string │ │ └[~] type Predicate │ │ └ properties │ │ └[+] OperandType: string │ ├[~] resource AWS::AmplifyUIBuilder::Form │ │ ├ properties │ │ │ ├ AppId: - string │ │ │ │ + string (immutable) │ │ │ ├ DataType: - FormDataTypeConfig (required) │ │ │ │ + FormDataTypeConfig │ │ │ ├ EnvironmentName: - string │ │ │ │ + string (immutable) │ │ │ ├ Fields: - Map<string, FieldConfig> (required) │ │ │ │ + Map<string, FieldConfig> │ │ │ ├ FormActionType: - string (required) │ │ │ │ + string │ │ │ ├ Name: - string (required) │ │ │ │ + string │ │ │ ├ SchemaVersion: - string (required) │ │ │ │ + string │ │ │ ├ SectionalElements: - Map<string, SectionalElement> (required) │ │ │ │ + Map<string, SectionalElement> │ │ │ └ Style: - FormStyle (required) │ │ │ + FormStyle │ │ └ types │ │ ├[+] type FormInputBindingPropertiesValue │ │ │ ├ documentation: Represents the data binding configuration for a form's input fields at runtime.You can use `FormInputBindingPropertiesValue` to add exposed properties to a form to allow different values to be entered when a form is reused in different places in an app. 
│ │ │ │ name: FormInputBindingPropertiesValue │ │ │ └ properties │ │ │ ├Type: string │ │ │ └BindingProperties: FormInputBindingPropertiesValueProperties │ │ ├[+] type FormInputBindingPropertiesValueProperties │ │ │ ├ documentation: Represents the data binding configuration for a specific property using data stored in AWS . For AWS connected properties, you can bind a property to data stored in an Amplify DataStore model. │ │ │ │ name: FormInputBindingPropertiesValueProperties │ │ │ └ properties │ │ │ └Model: string │ │ ├[~] type FormInputValueProperty │ │ │ └ properties │ │ │ ├[+] BindingProperties: FormInputValuePropertyBindingProperties │ │ │ └[+] Concat: Array<FormInputValueProperty> │ │ ├[+] type FormInputValuePropertyBindingProperties │ │ │ ├ documentation: Associates a form property to a binding property. This enables exposed properties on the top level form to propagate data to the form's property values. │ │ │ │ name: FormInputValuePropertyBindingProperties │ │ │ └ properties │ │ │ ├Property: string (required) │ │ │ └Field: string │ │ └[~] type ValueMappings │ │ └ properties │ │ └[+] BindingProperties: Map<string, FormInputBindingPropertiesValue> │ └[~] resource AWS::AmplifyUIBuilder::Theme │ ├ properties │ │ ├ AppId: - string │ │ │ + string (immutable) │ │ ├ EnvironmentName: - string │ │ │ + string (immutable) │ │ ├ Name: - string (required) │ │ │ + string │ │ └ Values: - Array<ThemeValues> (required) │ │ + Array<ThemeValues> │ └ attributes │ ├[+] CreatedAt: string │ └[+] ModifiedAt: string ├[~] service aws-apigateway │ └ resources │ ├[~] resource AWS::ApiGateway::Deployment │ │ └ types │ │ └[~] type StageDescription │ │ └ properties │ │ └ CacheClusterEnabled: (documentation changed) │ └[~] resource AWS::ApiGateway::Stage │ └ properties │ └ CacheClusterEnabled: (documentation changed) ├[~] service aws-appconfig │ └ resources │ ├[~] resource AWS::AppConfig::Environment │ │ ├ properties │ │ │ └ Monitors: - Array<Monitors> │ │ │ + Array<Monitor> ⇐ Array<Monitors> │ │ ├ attributes │ │ │ └[+] EnvironmentId: string │ │ └ types │ │ ├[+] type Monitor │ │ │ ├ documentation: Amazon CloudWatch alarms to monitor during the deployment process. │ │ │ │ name: Monitor │ │ │ └ properties │ │ │ ├AlarmArn: string (required) │ │ │ └AlarmRoleArn: string │ │ ├[~] type Monitors │ │ │ ├ - documentation: Amazon CloudWatch alarms to monitor during the deployment process. │ │ │ │ + documentation: undefined │ │ │ └ properties │ │ │ ├ AlarmArn: (documentation changed) │ │ │ └ AlarmRoleArn: (documentation changed) │ │ └[~] type Tags │ │ ├ - documentation: Metadata to assign to the environment. Tags help organize and categorize your AWS AppConfig resources. Each tag consists of a key and an optional value, both of which you define. 
│ │ │ + documentation: undefined │ │ └ properties │ │ ├ Key: (documentation changed) │ │ └ Value: (documentation changed) │ └[~] resource AWS::AppConfig::HostedConfigurationVersion │ ├ properties │ │ └ LatestVersionNumber: - number (immutable) │ │ + integer ⇐ number (immutable) │ └ attributes │ └[+] VersionNumber: string ├[~] service aws-appsync │ └ resources │ └[~] resource AWS::AppSync::GraphQLApi │ └ properties │ └[+] EnvironmentVariables: json ├[~] service aws-autoscaling │ └ resources │ └[~] resource AWS::AutoScaling::AutoScalingGroup │ └ types │ ├[~] type InstanceMaintenancePolicy │ │ └ properties │ │ ├ MaxHealthyPercentage: (documentation changed) │ │ └ MinHealthyPercentage: (documentation changed) │ └[~] type InstanceRequirements │ └ properties │ ├ MaxSpotPriceAsPercentageOfOptimalOnDemandPrice: (documentation changed) │ ├ OnDemandMaxPricePercentageOverLowestPrice: (documentation changed) │ └ SpotMaxPricePercentageOverLowestPrice: (documentation changed) ├[~] service aws-cassandra │ └ resources │ ├[~] resource AWS::Cassandra::Keyspace │ │ └ types │ │ └[~] type ReplicationSpecification │ │ └ - documentation: You can use `ReplicationSpecification` to configure the `ReplicationStrategy` of a keyspace in Amazon Keyspaces. │ │ The `ReplicationSpecification` property is `CreateOnly` and cannot be changed after the keyspace has been created. This property applies automatically to all tables in the keyspace. │ │ For more information, see [Multi-Region Replication](https://docs.aws.amazon.com/keyspaces/latest/devguide/multiRegion-replication.html) in the *Amazon Keyspaces Developer Guide* . │ │ + documentation: You can use `ReplicationSpecification` to configure the `ReplicationStrategy` of a keyspace in Amazon Keyspaces . │ │ The `ReplicationSpecification` property is `CreateOnly` and cannot be changed after the keyspace has been created. This property applies automatically to all tables in the keyspace. │ │ For more information, see [Multi-Region Replication](https://docs.aws.amazon.com/keyspaces/latest/devguide/multiRegion-replication.html) in the *Amazon Keyspaces Developer Guide* . │ └[~] resource AWS::Cassandra::Table │ ├ properties │ │ ├[+] AutoScalingSpecifications: AutoScalingSpecification │ │ ├ EncryptionSpecification: (documentation changed) │ │ └[+] ReplicaSpecifications: Array<ReplicaSpecification> │ └ types │ ├[+] type AutoScalingSetting │ │ ├ documentation: The optional auto scaling settings for a table with provisioned throughput capacity. │ │ │ To turn on auto scaling for a table in `throughputMode:PROVISIONED` , you must specify the following parameters. │ │ │ Configure the minimum and maximum capacity units. The auto scaling policy ensures that capacity never goes below the minimum or above the maximum range. │ │ │ - `minimumUnits` : The minimum level of throughput the table should always be ready to support. The value must be between 1 and the max throughput per second quota for your account (40,000 by default). │ │ │ - `maximumUnits` : The maximum level of throughput the table should always be ready to support. The value must be between 1 and the max throughput per second quota for your account (40,000 by default). │ │ │ - `scalingPolicy` : Amazon Keyspaces supports the `target tracking` scaling policy. The auto scaling target is a percentage of the provisioned capacity of the table. 
│ │ │ For more information, see [Managing throughput capacity automatically with Amazon Keyspaces auto scaling](https://docs.aws.amazon.com/keyspaces/latest/devguide/autoscaling.html) in the *Amazon Keyspaces Developer Guide* . │ │ │ name: AutoScalingSetting │ │ └ properties │ │ ├AutoScalingDisabled: boolean (default=false) │ │ ├MinimumUnits: integer │ │ ├MaximumUnits: integer │ │ └ScalingPolicy: ScalingPolicy │ ├[+] type AutoScalingSpecification │ │ ├ documentation: The optional auto scaling capacity settings for a table in provisioned capacity mode. │ │ │ name: AutoScalingSpecification │ │ └ properties │ │ ├WriteCapacityAutoScaling: AutoScalingSetting │ │ └ReadCapacityAutoScaling: AutoScalingSetting │ ├[~] type Column │ │ └ - documentation: The name and data type of an individual column in a table. │ │ + documentation: The name and data type of an individual column in a table. In addition to the data type, you can also use the following two keywords: │ │ - `STATIC` if the table has a clustering column. Static columns store values that are shared by all rows in the same partition. │ │ - `FROZEN` for collection data types. In frozen collections the values of the collection are serialized into a single immutable value, and Amazon Keyspaces treats them like a `BLOB` . │ ├[+] type ReplicaSpecification │ │ ├ documentation: The AWS Region specific settings of a multi-Region table. │ │ │ For a multi-Region table, you can configure the table's read capacity differently per AWS Region. You can do this by configuring the following parameters. │ │ │ - `region` : The Region where these settings are applied. (Required) │ │ │ - `readCapacityUnits` : The provisioned read capacity units. (Optional) │ │ │ - `readCapacityAutoScaling` : The read capacity auto scaling settings for the table. (Optional) │ │ │ name: ReplicaSpecification │ │ └ properties │ │ ├Region: string (required) │ │ ├ReadCapacityUnits: integer │ │ └ReadCapacityAutoScaling: AutoScalingSetting │ ├[+] type ScalingPolicy │ │ ├ documentation: Amazon Keyspaces supports the `target tracking` auto scaling policy. With this policy, Amazon Keyspaces auto scaling ensures that the table's ratio of consumed to provisioned capacity stays at or near the target value that you specify. You define the target value as a percentage between 20 and 90. │ │ │ name: ScalingPolicy │ │ └ properties │ │ └TargetTrackingScalingPolicyConfiguration: TargetTrackingScalingPolicyConfiguration │ └[+] type TargetTrackingScalingPolicyConfiguration │ ├ documentation: Amazon Keyspaces supports the `target tracking` auto scaling policy for a provisioned table. This policy scales a table based on the ratio of consumed to provisioned capacity. The auto scaling target is a percentage of the provisioned capacity of the table. │ │ - `targetTrackingScalingPolicyConfiguration` : To define the target tracking policy, you must define the target value. │ │ - `targetValue` : The target utilization rate of the table. Amazon Keyspaces auto scaling ensures that the ratio of consumed capacity to provisioned capacity stays at or near this value. You define `targetValue` as a percentage. A `double` between 20 and 90. (Required) │ │ - `disableScaleIn` : A `boolean` that specifies if `scale-in` is disabled or enabled for the table. This parameter is disabled by default. To turn on `scale-in` , set the `boolean` value to `FALSE` . This means that capacity for a table can be automatically scaled down on your behalf. 
(Optional) │ │ - `scaleInCooldown` : A cooldown period in seconds between scaling activities that lets the table stabilize before another scale in activity starts. If no value is provided, the default is 0. (Optional) │ │ - `scaleOutCooldown` : A cooldown period in seconds between scaling activities that lets the table stabilize before another scale out activity starts. If no value is provided, the default is 0. (Optional) │ │ name: TargetTrackingScalingPolicyConfiguration │ └ properties │ ├DisableScaleIn: boolean │ ├ScaleInCooldown: integer (default=0) │ ├ScaleOutCooldown: integer (default=0) │ └TargetValue: integer (required) ├[~] service aws-cloudfront │ └ resources │ ├[~] resource AWS::CloudFront::Distribution │ │ └ types │ │ └[~] type DefaultCacheBehavior │ │ └ properties │ │ └ FunctionAssociations: (documentation changed) │ ├[~] resource AWS::CloudFront::Function │ │ └ types │ │ ├[~] type FunctionConfig │ │ │ └ properties │ │ │ └ KeyValueStoreAssociations: (documentation changed) │ │ └[~] type KeyValueStoreAssociation │ │ ├ - documentation: The Key Value Store association. │ │ │ + documentation: The key value store association. │ │ └ properties │ │ └ KeyValueStoreARN: (documentation changed) │ ├[~] resource AWS::CloudFront::KeyValueStore │ │ ├ - documentation: The Key Value Store. Use this to separate data from function code, allowing you to update data without having to publish a new version of a function. The Key Value Store holds keys and their corresponding values. │ │ │ + documentation: The key value store. Use this to separate data from function code, allowing you to update data without having to publish a new version of a function. The key value store holds keys and their corresponding values. │ │ ├ properties │ │ │ ├ Comment: (documentation changed) │ │ │ ├ ImportSource: (documentation changed) │ │ │ └ Name: (documentation changed) │ │ ├ attributes │ │ │ ├ Arn: (documentation changed) │ │ │ ├ Id: (documentation changed) │ │ │ └ Status: (documentation changed) │ │ └ types │ │ └[~] type ImportSource │ │ ├ - documentation: The import source for the Key Value Store. │ │ │ + documentation: The import source for the key value store. │ │ └ properties │ │ ├ SourceArn: (documentation changed) │ │ └ SourceType: (documentation changed) │ ├[~] resource AWS::CloudFront::OriginAccessControl │ │ └ types │ │ └[~] type OriginAccessControlConfig │ │ └ properties │ │ └ Name: (documentation changed) │ ├[~] resource AWS::CloudFront::ResponseHeadersPolicy │ │ └ types │ │ └[~] type SecurityHeadersConfig │ │ └ properties │ │ └ StrictTransportSecurity: (documentation changed) │ └[~] resource AWS::CloudFront::StreamingDistribution │ └ attributes │ └ Id: (documentation changed) ├[~] service aws-codebuild │ └ resources │ └[~] resource AWS::CodeBuild::Project │ └ types │ └[~] type ProjectFleet │ ├ - documentation: undefined │ │ + documentation: Information about the compute fleet of the build project. For more information, see [Working with reserved capacity in AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/fleets.html) . │ └ properties │ └ FleetArn: (documentation changed) ├[~] service aws-codestarnotifications │ └ resources │ └[~] resource AWS::CodeStarNotifications::NotificationRule │ ├ - documentation: Creates a notification rule for a resource. The rule specifies the events you want notifications about and the targets (such as AWS Chatbot topics or AWS Chatbot clients configured for Slack) where you want to receive them. 
│ │ + documentation: Creates a notification rule for a resource. The rule specifies the events you want notifications about and the targets (such as Amazon Simple Notification Service topics or AWS Chatbot clients configured for Slack) where you want to receive them. │ ├ properties │ │ ├ CreatedBy: (documentation changed) │ │ ├ EventTypeId: (documentation changed) │ │ ├ TargetAddress: (documentation changed) │ │ └ Targets: (documentation changed) │ └ types │ └[~] type Target │ └ properties │ └ TargetType: (documentation changed) ├[~] service aws-cognito │ └ resources │ ├[~] resource AWS::Cognito::IdentityPool │ │ └ attributes │ │ └ Id: (documentation changed) │ ├[~] resource AWS::Cognito::IdentityPoolRoleAttachment │ │ └ types │ │ └[~] type RoleMapping │ │ ├ - documentation: `RoleMapping` is a property of the [AWS::Cognito::IdentityPoolRoleAttachment](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cognito-identitypoolroleattachment.html) resource that defines the role-mapping attributes of an Amazon Cognito identity pool. │ │ │ + documentation: One of a set of `RoleMappings` , a property of the [AWS::Cognito::IdentityPoolRoleAttachment](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cognito-identitypoolroleattachment.html) resource that defines the role-mapping attributes of an Amazon Cognito identity pool. │ │ └ properties │ │ ├ AmbiguousRoleResolution: (documentation changed) │ │ └ Type: (documentation changed) │ ├[~] resource AWS::Cognito::UserPool │ │ ├ properties │ │ │ └ DeletionProtection: (documentation changed) │ │ └ types │ │ ├[~] type LambdaConfig │ │ │ └ properties │ │ │ └ PreTokenGenerationConfig: (documentation changed) │ │ └[~] type PreTokenGenerationConfig │ │ ├ - documentation: undefined │ │ │ + documentation: The properties of a pre token generation Lambda trigger. 
│ │ └ properties │ │ ├ LambdaArn: (documentation changed) │ │ └ LambdaVersion: (documentation changed) │ ├[~] resource AWS::Cognito::UserPoolClient │ │ └ attributes │ │ └ ClientId: (documentation changed) │ ├[~] resource AWS::Cognito::UserPoolDomain │ │ └ attributes │ │ └ Id: (documentation changed) │ ├[~] resource AWS::Cognito::UserPoolIdentityProvider │ │ ├ properties │ │ │ ├ AttributeMapping: - Map<string, string> ⇐ json │ │ │ │ + json │ │ │ └ ProviderDetails: - Map<string, string> ⇐ json (required) │ │ │ + json │ │ │ (documentation changed) │ │ └ attributes │ │ └ Id: (documentation changed) │ ├[~] resource AWS::Cognito::UserPoolResourceServer │ │ └ attributes │ │ └ Id: (documentation changed) │ ├[~] resource AWS::Cognito::UserPoolRiskConfigurationAttachment │ │ └ attributes │ │ └ Id: (documentation changed) │ ├[~] resource AWS::Cognito::UserPoolUICustomizationAttachment │ │ └ attributes │ │ └ Id: (documentation changed) │ └[~] resource AWS::Cognito::UserPoolUser │ └ properties │ └ ClientMetadata: (documentation changed) ├[~] service aws-datasync │ └ resources │ └[~] resource AWS::DataSync::Task │ └ properties │ └ TaskReportConfig: (documentation changed) ├[~] service aws-dynamodb │ └ resources │ ├[~] resource AWS::DynamoDB::GlobalTable │ │ └ types │ │ └[~] type KinesisStreamSpecification │ │ └ properties │ │ └[+] ApproximateCreationDateTimePrecision: string │ └[~] resource AWS::DynamoDB::Table │ └ types │ └[~] type KinesisStreamSpecification │ └ properties │ └[+] ApproximateCreationDateTimePrecision: string ├[~] service aws-ec2 │ └ resources │ ├[~] resource AWS::EC2::ClientVpnEndpoint │ │ ├ properties │ │ │ └[+] ClientRouteMonitoringOptions: ClientRouteMonitoringOptions │ │ └ types │ │ └[+] type ClientRouteMonitoringOptions │ │ ├ name: ClientRouteMonitoringOptions │ │ └ properties │ │ └Enabled: boolean │ ├[~] resource AWS::EC2::EC2Fleet │ │ └ types │ │ └[~] type InstanceRequirementsRequest │ │ └ properties │ │ ├ OnDemandMaxPricePercentageOverLowestPrice: (documentation changed) │ │ └ SpotMaxPricePercentageOverLowestPrice: (documentation changed) │ ├[~] resource AWS::EC2::Instance │ │ └ types │ │ ├[~] type ElasticGpuSpecification │ │ │ └ - documentation: Specifies the type of Elastic GPU. An Elastic GPU is a GPU resource that you can attach to your Amazon EC2 instance to accelerate the graphics performance of your applications. For more information, see [Amazon EC2 Elastic GPUs](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/elastic-graphics.html) in the *Amazon EC2 User Guide for Windows Instances* . │ │ │ `ElasticGpuSpecification` is a property of the [AWS::EC2::Instance](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-instance.html) resource. │ │ │ + documentation: > Amazon Elastic Graphics reached end of life on January 8, 2024. For workloads that require graphics acceleration, we recommend that you use Amazon EC2 G4ad, G4dn, or G5 instances. │ │ │ Specifies the type of Elastic GPU. An Elastic GPU is a GPU resource that you can attach to your Amazon EC2 instance to accelerate the graphics performance of your applications. For more information, see [Amazon EC2 Elastic GPUs](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/elastic-graphics.html) in the *Amazon EC2 User Guide for Windows Instances* . │ │ │ `ElasticGpuSpecification` is a property of the [AWS::EC2::Instance](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-instance.html) resource. 
│ │ └[~] type NetworkInterface │ │ └ properties │ │ └ AssociatePublicIpAddress: (documentation changed) │ ├[~] resource AWS::EC2::LaunchTemplate │ │ └ types │ │ ├[~] type ElasticGpuSpecification │ │ │ └ - documentation: Specifies a specification for an Elastic GPU for an Amazon EC2 launch template. │ │ │ `ElasticGpuSpecification` is a property of [AWS::EC2::LaunchTemplate LaunchTemplateData](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-launchtemplate-launchtemplatedata.html) . │ │ │ + documentation: > Amazon Elastic Graphics reached end of life on January 8, 2024. For workloads that require graphics acceleration, we recommend that you use Amazon EC2 G4ad, G4dn, or G5 instances. │ │ │ Specifies a specification for an Elastic GPU for an Amazon EC2 launch template. │ │ │ `ElasticGpuSpecification` is a property of [AWS::EC2::LaunchTemplate LaunchTemplateData](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-launchtemplate-launchtemplatedata.html) . │ │ ├[~] type InstanceRequirements │ │ │ └ properties │ │ │ ├[+] MaxSpotPriceAsPercentageOfOptimalOnDemandPrice: integer │ │ │ ├ OnDemandMaxPricePercentageOverLowestPrice: (documentation changed) │ │ │ └ SpotMaxPricePercentageOverLowestPrice: (documentation changed) │ │ └[~] type NetworkInterface │ │ └ properties │ │ └ AssociatePublicIpAddress: (documentation changed) │ ├[~] resource AWS::EC2::SecurityGroupIngress │ │ └ attributes │ │ └ Id: (documentation changed) │ ├[~] resource AWS::EC2::SpotFleet │ │ └ types │ │ ├[~] type InstanceNetworkInterfaceSpecification │ │ │ └ properties │ │ │ └ AssociatePublicIpAddress: (documentation changed) │ │ └[~] type InstanceRequirementsRequest │ │ └ properties │ │ ├ OnDemandMaxPricePercentageOverLowestPrice: (documentation changed) │ │ └ SpotMaxPricePercentageOverLowestPrice: (documentation changed) │ ├[~] resource AWS::EC2::Subnet │ │ └ properties │ │ └ MapPublicIpOnLaunch: (documentation changed) │ ├[~] resource AWS::EC2::VPC │ │ └ - documentation: Specifies a virtual private cloud (VPC). │ │ You can optionally request an IPv6 CIDR block for the VPC. You can request an Amazon-provided IPv6 CIDR block from Amazon's pool of IPv6 addresses, or an IPv6 CIDR block from an IPv6 address pool that you provisioned through bring your own IP addresses (BYOIP). │ │ For more information, see [Virtual private clouds (VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/configure-your-vpc.html) in the *Amazon VPC User Guide* . │ │ + documentation: Specifies a virtual private cloud (VPC). │ │ To add an IPv6 CIDR block to the VPC, see [AWS::EC2::VPCCidrBlock](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-vpccidrblock.html) . │ │ For more information, see [Virtual private clouds (VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/configure-your-vpc.html) in the *Amazon VPC User Guide* . │ └[~] resource AWS::EC2::VPCCidrBlock │ └ - documentation: Associates a CIDR block with your VPC. You can only associate a single IPv6 CIDR block with your VPC. │ For more information about associating CIDR blocks with your VPC and applicable restrictions, see [VPC and Subnet Sizing](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html#VPC_Sizing) in the *Amazon VPC User Guide* . │ + documentation: Associates a CIDR block with your VPC. │ You can optionally request an IPv6 CIDR block for the VPC. 
You can request an Amazon-provided IPv6 CIDR block from Amazon's pool of IPv6 addresses, or an IPv6 CIDR block from an IPv6 address pool that you provisioned through bring your own IP addresses (BYOIP). │ For more information, see [VPC CIDR blocks](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-cidr-blocks.html) in the *Amazon VPC User Guide* . ├[~] service aws-ecs │ └ resources │ ├[~] resource AWS::ECS::Service │ │ └ types │ │ └[~] type LoadBalancer │ │ └ properties │ │ └ ContainerName: (documentation changed) │ ├[~] resource AWS::ECS::TaskDefinition │ │ └ types │ │ ├[~] type ContainerDefinition │ │ │ └ properties │ │ │ ├[+] CredentialSpecs: Array<string> │ │ │ └ SystemControls: (documentation changed) │ │ ├[~] type EphemeralStorage │ │ │ └ - documentation: The amount of ephemeral storage to allocate for the task. This parameter is used to expand the total amount of ephemeral storage available, beyond the default amount, for tasks hosted on AWS Fargate . For more information, see [Fargate task storage](https://docs.aws.amazon.com/AmazonECS/latest/userguide/using_data_volumes.html) in the *Amazon ECS User Guide for AWS Fargate* . │ │ │ > For tasks using the Fargate launch type, the task requires the following platforms: │ │ │ > │ │ │ > - Linux platform version `1.4.0` or later. │ │ │ > - Windows platform version `1.0.0` or later. │ │ │ + documentation: The amount of ephemeral storage to allocate for the task. This parameter is used to expand the total amount of ephemeral storage available, beyond the default amount, for tasks hosted on AWS Fargate . For more information, see [Using data volumes in tasks](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_data_volumes.html) in the *Amazon ECS Developer Guide;* . │ │ │ > For tasks using the Fargate launch type, the task requires the following platforms: │ │ │ > │ │ │ > - Linux platform version `1.4.0` or later. │ │ │ > - Windows platform version `1.0.0` or later. │ │ └[~] type SystemControl │ │ └ - documentation: A list of namespaced kernel parameters to set in the container. This parameter maps to `Sysctls` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/) and the `--sysctl` option to [docker run](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/#security-configuration) . │ │ We don't recommend that you specify network-related `systemControls` parameters for multiple containers in a single task. This task also uses either the `awsvpc` or `host` network mode. It does it for the following reasons. │ │ - For tasks that use the `awsvpc` network mode, if you set `systemControls` for any container, it applies to all containers in the task. If you set different `systemControls` for multiple containers in a single task, the container that's started last determines which `systemControls` take effect. │ │ - For tasks that use the `host` network mode, the `systemControls` parameter applies to the container instance's kernel parameter and that of all containers of any tasks running on that container instance. │ │ + documentation: A list of namespaced kernel parameters to set in the container. 
This parameter maps to `Sysctls` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.35/) and the `--sysctl` option to [docker run](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/#security-configuration) . For example, you can configure `net.ipv4.tcp_keepalive_time` setting to maintain longer lived connections. │ │ We don't recommend that you specify network-related `systemControls` parameters for multiple containers in a single task that also uses either the `awsvpc` or `host` network mode. Doing this has the following disadvantages: │ │ - For tasks that use the `awsvpc` network mode including Fargate, if you set `systemControls` for any container, it applies to all containers in the task. If you set different `systemControls` for multiple containers in a single task, the container that's started last determines which `systemControls` take effect. │ │ - For tasks that use the `host` network mode, the network namespace `systemControls` aren't supported. │ │ If you're setting an IPC resource namespace to use for the containers in the task, the following conditions apply to your system controls. For more information, see [IPC mode](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#task_definition_ipcmode) . │ │ - For tasks that use the `host` IPC mode, IPC namespace `systemControls` aren't supported. │ │ - For tasks that use the `task` IPC mode, IPC namespace `systemControls` values apply to all containers within a task. │ │ > This parameter is not supported for Windows containers. > This parameter is only supported for tasks that are hosted on AWS Fargate if the tasks are using platform version `1.4.0` or later (Linux). This isn't supported for Windows containers on Fargate. │ └[~] resource AWS::ECS::TaskSet │ └ types │ └[~] type LoadBalancer │ └ properties │ └ ContainerName: (documentation changed) ├[~] service aws-efs │ └ resources │ └[~] resource AWS::EFS::FileSystem │ └ properties │ └ PerformanceMode: (documentation changed) ├[~] service aws-elasticloadbalancingv2 │ └ resources │ ├[~] resource AWS::ElasticLoadBalancingV2::LoadBalancer │ │ └ properties │ │ ├ SubnetMappings: (documentation changed) │ │ └ Subnets: (documentation changed) │ └[~] resource AWS::ElasticLoadBalancingV2::TargetGroup │ └ types │ └[~] type TargetGroupAttribute │ └ properties │ └ Key: (documentation changed) ├[~] service aws-fis │ └ resources │ └[~] resource AWS::FIS::ExperimentTemplate │ ├ - documentation: Describes an experiment template. │ │ + documentation: Specifies an experiment template. │ │ An experiment template includes the following components: │ │ - *Targets* : A target can be a specific resource in your AWS environment, or one or more resources that match criteria that you specify, for example, resources that have specific tags. │ │ - *Actions* : The actions to carry out on the target. You can specify multiple actions, the duration of each action, and when to start each action during an experiment. │ │ - *Stop conditions* : If a stop condition is triggered while an experiment is running, the experiment is automatically stopped. You can define a stop condition as a CloudWatch alarm. │ │ For more information, see [Experiment templates](https://docs.aws.amazon.com/fis/latest/userguide/experiment-templates.html) in the *AWS Fault Injection Service User Guide* . 
│ └ types │ ├[~] type ExperimentTemplateAction │ │ └ - documentation: Describes an action for an experiment template. │ │ + documentation: Specifies an action for an experiment template. │ │ For more information, see [Actions](https://docs.aws.amazon.com/fis/latest/userguide/actions.html) in the *AWS Fault Injection Service User Guide* . │ ├[~] type ExperimentTemplateLogConfiguration │ │ ├ - documentation: Describes the configuration for experiment logging. │ │ │ + documentation: Specifies the configuration for experiment logging. │ │ │ For more information, see [Experiment logging](https://docs.aws.amazon.com/fis/latest/userguide/monitoring-logging.html) in the *AWS Fault Injection Service User Guide* . │ │ └ properties │ │ ├ CloudWatchLogsConfiguration: (documentation changed) │ │ └ S3Configuration: (documentation changed) │ ├[~] type ExperimentTemplateStopCondition │ │ └ - documentation: Describes a stop condition for an experiment template. │ │ + documentation: Specifies a stop condition for an experiment template. │ │ For more information, see [Stop conditions](https://docs.aws.amazon.com/fis/latest/userguide/stop-conditions.html) in the *AWS Fault Injection Service User Guide* . │ ├[~] type ExperimentTemplateTarget │ │ ├ - documentation: Describes a target for an experiment template. │ │ │ + documentation: Specifies a target for an experiment. You must specify at least one Amazon Resource Name (ARN) or at least one resource tag. You cannot specify both ARNs and tags. │ │ │ For more information, see [Targets](https://docs.aws.amazon.com/fis/latest/userguide/targets.html) in the *AWS Fault Injection Service User Guide* . │ │ └ properties │ │ └ Parameters: (documentation changed) │ └[~] type ExperimentTemplateTargetFilter │ └ - documentation: Describes a filter used for the target resources in an experiment template. │ + documentation: Specifies a filter used for the target resource input in an experiment template. │ For more information, see [Resource filters](https://docs.aws.amazon.com/fis/latest/userguide/targets.html#target-filters) in the *AWS Fault Injection Service User Guide* . ├[~] service aws-fsx │ └ resources │ ├[~] resource AWS::FSx::DataRepositoryAssociation │ │ └ properties │ │ └ Tags: (documentation changed) │ ├[~] resource AWS::FSx::FileSystem │ │ ├ properties │ │ │ ├ StorageCapacity: (documentation changed) │ │ │ └ Tags: (documentation changed) │ │ └ types │ │ ├[~] type LustreConfiguration │ │ │ └ properties │ │ │ └ CopyTagsToBackups: (documentation changed) │ │ └[~] type OntapConfiguration │ │ └ properties │ │ ├ HAPairs: (documentation changed) │ │ └ ThroughputCapacityPerHAPair: (documentation changed) │ ├[~] resource AWS::FSx::Snapshot │ │ └ properties │ │ └ Tags: (documentation changed) │ └[~] resource AWS::FSx::StorageVirtualMachine │ ├ properties │ │ ├ RootVolumeSecurityStyle: (documentation changed) │ │ └ Tags: (documentation changed) │ └ types │ ├[~] type ActiveDirectoryConfiguration │ │ ├ - documentation: Describes the self-managed Microsoft Active Directory to which you want to join the SVM. Joining an Active Directory provides user authentication and access control for SMB clients, including Microsoft Windows and macOS client accessing the file system. │ │ │ + documentation: Describes the self-managed Microsoft Active Directory to which you want to join the SVM. Joining an Active Directory provides user authentication and access control for SMB clients, including Microsoft Windows and macOS clients accessing the file system. 
│ │ └ properties │ │ └ SelfManagedActiveDirectoryConfiguration: (documentation changed) │ └[~] type SelfManagedActiveDirectoryConfiguration │ └ - documentation: The configuration that Amazon FSx uses to join a FSx for Windows File Server file system or an FSx for ONTAP storage virtual machine (SVM) to a self-managed (including on-premises) Microsoft Active Directory (AD) directory. For more information, see [Using Amazon FSx for Windows with your self-managed Microsoft Active Directory](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/self-managed-AD.html) or [Managing FSx for ONTAP SVMs](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/managing-svms.html) . │ + documentation: The configuration that Amazon FSx uses to join the ONTAP storage virtual machine (SVM) to your self-managed (including on-premises) Microsoft Active Directory directory. ├[~] service aws-glue │ └ resources │ └[+] resource AWS::Glue::TableOptimizer │ ├ name: TableOptimizer │ │ cloudFormationType: AWS::Glue::TableOptimizer │ │ documentation: Resource Type definition for AWS::Glue::TableOptimizer │ ├ properties │ │ ├DatabaseName: string (required, immutable) │ │ ├TableName: string (required, immutable) │ │ ├Type: string (required, immutable) │ │ ├TableOptimizerConfiguration: TableOptimizerConfiguration (required) │ │ └CatalogId: string (required, immutable) │ ├ attributes │ │ └Id: string │ └ types │ └type TableOptimizerConfiguration │ ├ name: TableOptimizerConfiguration │ └ properties │ ├Enabled: boolean │ └RoleArn: string ├[~] service aws-guardduty │ └ resources │ └[~] resource AWS::GuardDuty::Filter │ └ attributes │ └[-] Id: string ├[~] service aws-inspectorv2 │ └ resources │ └[+] resource AWS::InspectorV2::CisScanConfiguration │ ├ name: CisScanConfiguration │ │ cloudFormationType: AWS::InspectorV2::CisScanConfiguration │ │ documentation: The CIS scan configuration. │ │ tagInformation: {"tagPropertyName":"Tags","variant":"map"} │ ├ properties │ │ ├ScanName: string │ │ ├SecurityLevel: string │ │ ├Schedule: Schedule │ │ ├Targets: CisTargets │ │ └Tags: Map<string, string> │ ├ attributes │ │ └Arn: string │ └ types │ ├type Schedule │ │├ documentation: The schedule the CIS scan configuration runs on. Each CIS scan configuration has exactly one type of schedule. │ ││ name: Schedule │ │└ properties │ │ ├OneTime: json │ │ ├Daily: DailySchedule │ │ ├Weekly: WeeklySchedule │ │ └Monthly: MonthlySchedule │ ├type DailySchedule │ │├ documentation: A daily schedule. │ ││ name: DailySchedule │ │└ properties │ │ └StartTime: Time (required) │ ├type Time │ │├ documentation: The time. │ ││ name: Time │ │└ properties │ │ ├TimeOfDay: string (required) │ │ └TimeZone: string (required) │ ├type WeeklySchedule │ │├ documentation: A weekly schedule. │ ││ name: WeeklySchedule │ │└ properties │ │ ├StartTime: Time (required) │ │ └Days: Array<string> (required) │ ├type MonthlySchedule │ │├ documentation: A monthly schedule. │ ││ name: MonthlySchedule │ │└ properties │ │ ├StartTime: Time (required) │ │ └Day: string (required) │ └type CisTargets │ ├ documentation: The CIS targets. │ │ name: CisTargets │ └ properties │ ├AccountIds: Array<string> (required) │ └TargetResourceTags: Map<string, Array<string>> ├[~] service aws-internetmonitor │ └ resources │ └[~] resource AWS::InternetMonitor::Monitor │ └ types │ ├[~] type InternetMeasurementsLogDelivery │ │ └ properties │ │ └ S3Config: (documentation changed) │ └[~] type S3Config │ ├ - documentation: The configuration for publishing Amazon CloudWatch Internet Monitor internet measurements to Amazon S3. 
The configuration includes the bucket name and (optionally) bucket prefix for the S3 bucket to store the measurements, and the delivery status. The delivery status is `ENABLED` if you choose to deliver internet measurements to S3 logs, and `DISABLED` otherwise. │ │ The measurements are also published to Amazon CloudWatch Logs. │ │ + documentation: The configuration for publishing Amazon CloudWatch Internet Monitor internet measurements to Amazon S3. The configuration includes the bucket name and (optionally) prefix for the S3 bucket to store the measurements, and the delivery status. The delivery status is `ENABLED` or `DISABLED` , depending on whether you choose to deliver internet measurements to S3 logs. │ └ properties │ ├ BucketName: (documentation changed) │ ├ BucketPrefix: (documentation changed) │ └ LogDeliveryStatus: (documentation changed) ├[~] service aws-iot │ └ resources │ └[~] resource AWS::IoT::DomainConfiguration │ ├ properties │ │ └[+] ServerCertificateConfig: ServerCertificateConfig │ └ types │ └[+] type ServerCertificateConfig │ ├ name: ServerCertificateConfig │ └ properties │ └EnableOCSPCheck: boolean ├[~] service aws-iotwireless │ └ resources │ ├[~] resource AWS::IoTWireless::PartnerAccount │ │ └ properties │ │ └ SidewalkResponse: (documentation changed) │ └[~] resource AWS::IoTWireless::WirelessDevice │ └ types │ ├[~] type AbpV10x │ │ ├ - documentation: undefined │ │ │ + documentation: ABP device object for LoRaWAN specification v1.0.x │ │ └ properties │ │ ├ DevAddr: (documentation changed) │ │ └ SessionKeys: (documentation changed) │ ├[~] type LoRaWANDevice │ │ └ properties │ │ └ AbpV10x: (documentation changed) │ ├[~] type OtaaV10x │ │ └ properties │ │ ├ AppEui: (documentation changed) │ │ └ AppKey: (documentation changed) │ └[~] type SessionKeysAbpV10x │ ├ - documentation: undefined │ │ + documentation: Session keys for ABP v1.0.x. │ └ properties │ ├ AppSKey: (documentation changed) │ └ NwkSKey: (documentation changed) ├[~] service aws-lambda │ └ resources │ ├[~] resource AWS::Lambda::EventInvokeConfig │ │ └ types │ │ └[~] type OnFailure │ │ └ properties │ │ └ Destination: (documentation changed) │ └[~] resource AWS::Lambda::EventSourceMapping │ ├ properties │ │ └ DestinationConfig: (documentation changed) │ └ types │ └[~] type OnFailure │ └ properties │ └ Destination: (documentation changed) ├[~] service aws-location │ └ resources │ └[~] resource AWS::Location::Map │ └ types │ └[~] type MapConfiguration │ └ properties │ └ CustomLayers: (documentation changed) ├[~] service aws-logs │ └ resources │ ├[~] resource AWS::Logs::AccountPolicy │ │ └ - documentation: Creates or updates an aaccount-level data protection policy or subscription filter policy that applies to all log groups or a subset of log groups in the account. │ │ *Data protection policy* │ │ A data protection policy can help safeguard sensitive data that's ingested by your log groups by auditing and masking the sensitive log data. Each account can have only one account-level data protection policy. │ │ > Sensitive data is detected and masked when it is ingested into a log group. When you set a data protection policy, log events ingested into the log groups before that time are not masked. │ │ If you create a data protection policy for your whole account, it applies to both existing log groups and all log groups that are created later in this account. The account policy is applied to existing log groups with eventual consistency. 
It might take up to 5 minutes before sensitive data in existing log groups begins to be masked. │ │ By default, when a user views a log event that includes masked data, the sensitive data is replaced by asterisks. A user who has the `logs:Unmask` permission can use a [GetLogEvents](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_GetLogEvents.html) or [FilterLogEvents](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_FilterLogEvents.html) operation with the `unmask` parameter set to `true` to view the unmasked log events. Users with the `logs:Unmask` can also view unmasked data in the CloudWatch Logs console by running a CloudWatch Logs Insights query with the `unmask` query command. │ │ For more information, including a list of types of data that can be audited and masked, see [Protect sensitive log data with masking](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/mask-sensitive-log-data.html) . │ │ To create an account-level policy, you must be signed on with the `logs:PutDataProtectionPolicy` and `logs:PutAccountPolicy` permissions. │ │ An account-level policy applies to all log groups in the account. You can also create a data protection policy that applies to just one log group. If a log group has its own data protection policy and the account also has an account-level data protection policy, then the two policies are cumulative. Any sensitive term specified in either policy is masked. │ │ *Subscription filter policy* │ │ A subscription filter policy sets up a real-time feed of log events from CloudWatch Logs to other AWS services. Account-level subscription filter policies apply to both existing log groups and log groups that are created later in this account. Supported destinations are Kinesis Data Streams , Kinesis Data Firehose , and Lambda . When log events are sent to the receiving service, they are Base64 encoded and compressed with the GZIP format. │ │ The following destinations are supported for subscription filters: │ │ - An Kinesis Data Streams data stream in the same account as the subscription policy, for same-account delivery. │ │ - An Kinesis Data Firehose data stream in the same account as the subscription policy, for same-account delivery. │ │ - A Lambda function in the same account as the subscription policy, for same-account delivery. │ │ - A logical destination in a different account created with [PutDestination](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDestination.html) , for cross-account delivery. Kinesis Data Streams and Kinesis Data Firehose are supported as logical destinations. │ │ Each account can have one account-level subscription filter policy. If you are updating an existing filter, you must specify the correct name in `PolicyName` . To perform a `PutAccountPolicy` subscription filter operation for any destination except a Lambda function, you must also have the `iam:PassRole` permission. │ │ + documentation: Creates or updates an account-level data protection policy or subscription filter policy that applies to all log groups or a subset of log groups in the account. │ │ *Data protection policy* │ │ A data protection policy can help safeguard sensitive data that's ingested by your log groups by auditing and masking the sensitive log data. Each account can have only one account-level data protection policy. │ │ > Sensitive data is detected and masked when it is ingested into a log group. 
When you set a data protection policy, log events ingested into the log groups before that time are not masked. │ │ If you create a data protection policy for your whole account, it applies to both existing log groups and all log groups that are created later in this account. The account policy is applied to existing log groups with eventual consistency. It might take up to 5 minutes before sensitive data in existing log groups begins to be masked. │ │ By default, when a user views a log event that includes masked data, the sensitive data is replaced by asterisks. A user who has the `logs:Unmask` permission can use a [GetLogEvents](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_GetLogEvents.html) or [FilterLogEvents](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_FilterLogEvents.html) operation with the `unmask` parameter set to `true` to view the unmasked log events. Users with the `logs:Unmask` can also view unmasked data in the CloudWatch Logs console by running a CloudWatch Logs Insights query with the `unmask` query command. │ │ For more information, including a list of types of data that can be audited and masked, see [Protect sensitive log data with masking](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/mask-sensitive-log-data.html) . │ │ To create an account-level policy, you must be signed on with the `logs:PutDataProtectionPolicy` and `logs:PutAccountPolicy` permissions. │ │ An account-level policy applies to all log groups in the account. You can also create a data protection policy that applies to just one log group. If a log group has its own data protection policy and the account also has an account-level data protection policy, then the two policies are cumulative. Any sensitive term specified in either policy is masked. │ │ *Subscription filter policy* │ │ A subscription filter policy sets up a real-time feed of log events from CloudWatch Logs to other AWS services. Account-level subscription filter policies apply to both existing log groups and log groups that are created later in this account. Supported destinations are Kinesis Data Streams , Kinesis Data Firehose , and Lambda . When log events are sent to the receiving service, they are Base64 encoded and compressed with the GZIP format. │ │ The following destinations are supported for subscription filters: │ │ - An Kinesis Data Streams data stream in the same account as the subscription policy, for same-account delivery. │ │ - An Kinesis Data Firehose data stream in the same account as the subscription policy, for same-account delivery. │ │ - A Lambda function in the same account as the subscription policy, for same-account delivery. │ │ - A logical destination in a different account created with [PutDestination](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDestination.html) , for cross-account delivery. Kinesis Data Streams and Kinesis Data Firehose are supported as logical destinations. │ │ Each account can have one account-level subscription filter policy. If you are updating an existing filter, you must specify the correct name in `PolicyName` . To perform a `PutAccountPolicy` subscription filter operation for any destination except a Lambda function, you must also have the `iam:PassRole` permission. 
│ └[~] resource AWS::Logs::QueryDefinition │ └ properties │ └ Name: (documentation changed) ├[~] service aws-networkmanager │ └ resources │ └[~] resource AWS::NetworkManager::Device │ └ attributes │ └ CreatedAt: (documentation changed) ├[~] service aws-opensearchserverless │ └ resources │ └[~] resource AWS::OpenSearchServerless::Collection │ └ properties │ └ StandbyReplicas: (documentation changed) ├[~] service aws-osis │ └ resources │ └[~] resource AWS::OSIS::Pipeline │ ├ properties │ │ ├ BufferOptions: (documentation changed) │ │ └ EncryptionAtRestOptions: (documentation changed) │ └ types │ ├[~] type BufferOptions │ │ └ - documentation: Options that specify the configuration of a persistent buffer. To configure how OpenSearch Ingestion encrypts this data, set the EncryptionAtRestOptions. │ │ + documentation: Options that specify the configuration of a persistent buffer. To configure how OpenSearch Ingestion encrypts this data, set the `EncryptionAtRestOptions` . For more information, see [Persistent buffering](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/osis-features-overview.html#persistent-buffering) . │ ├[~] type CloudWatchLogDestination │ │ └ properties │ │ └ LogGroup: (documentation changed) │ └[~] type EncryptionAtRestOptions │ ├ - documentation: Options to control how OpenSearch encrypts all data-at-rest. │ │ + documentation: Options to control how OpenSearch encrypts buffer data. │ └ properties │ └ KmsKeyArn: (documentation changed) ├[~] service aws-personalize │ └ resources │ └[~] resource AWS::Personalize::Solution │ └ - documentation: An object that provides information about a solution. A solution is a trained model that can be deployed as a campaign. │ + documentation: An object that provides information about a solution. A solution includes the custom recipe, customized parameters, and trained models (Solution Versions) that Amazon Personalize uses to generate recommendations. ├[~] service aws-pinpoint │ └ resources │ └[~] resource AWS::Pinpoint::EventStream │ └ properties │ └ DestinationStreamArn: (documentation changed) ├[~] service aws-rds │ └ resources │ ├[~] resource AWS::RDS::DBCluster │ │ ├ properties │ │ │ ├ ScalingConfiguration: (documentation changed) │ │ │ └ ServerlessV2ScalingConfiguration: (documentation changed) │ │ └ types │ │ ├[~] type ScalingConfiguration │ │ │ └ - documentation: The `ScalingConfiguration` property type specifies the scaling configuration of an Aurora Serverless DB cluster. │ │ │ For more information, see [Using Amazon Aurora Serverless](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless.html) in the *Amazon Aurora User Guide* . │ │ │ This property is only supported for Aurora Serverless v1. For Aurora Serverless v2, use `ServerlessV2ScalingConfiguration` property. │ │ │ Valid for: Aurora DB clusters only │ │ │ + documentation: The `ScalingConfiguration` property type specifies the scaling configuration of an Aurora Serverless DB cluster. │ │ │ For more information, see [Using Amazon Aurora Serverless](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless.html) in the *Amazon Aurora User Guide* . │ │ │ This property is only supported for Aurora Serverless v1. For Aurora Serverless v2, Use the `ServerlessV2ScalingConfiguration` property. │ │ │ Valid for: A…
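Among other things, the spec diff above adds the new `AWS::Glue::TableOptimizer` resource. A minimal sketch of how the corresponding auto-generated L1 construct might be used is shown below; the `CfnTableOptimizer` class name and camel-cased property names follow the usual L1 codegen conventions and are assumptions here, as are the database, table, and role names.

```ts
import { Stack, StackProps } from 'aws-cdk-lib';
import * as glue from 'aws-cdk-lib/aws-glue';
import { Construct } from 'constructs';

class TableOptimizerStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Enable compaction for an existing table in the Glue Data Catalog.
    // Property names mirror the CloudFormation schema shown in the diff above.
    new glue.CfnTableOptimizer(this, 'Optimizer', {
      catalogId: this.account,         // required, immutable
      databaseName: 'my_database',     // required, immutable (hypothetical name)
      tableName: 'my_iceberg_table',   // required, immutable (hypothetical name)
      type: 'compaction',              // required, immutable
      tableOptimizerConfiguration: {
        enabled: true,
        roleArn: 'arn:aws:iam::123456789012:role/GlueOptimizerRole', // hypothetical role
      },
    });
  }
}
```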
Similar to #27930, this PR adds EKS support for Kubernetes 1.29. Addresses the #28872 thread. Closes #28983. ### **!! Depends on cdklabs/awscdk-asset-kubectl#546 being merged in first. !!** /cc @kaizencc @pahud ### Reason for this change Kubernetes 1.29 on EKS was released on 1/23/2024. See: https://aws.amazon.com/blogs/containers/amazon-eks-now-supports-kubernetes-version-1-29/ ### Description of changes Added support for EKS with Kubernetes 1.29. ### Description of how you validated changes Deployed an EKS cluster with Kubernetes 1.29. ![image](https://github.com/aws/aws-cdk/assets/31543/ba770020-2087-498a-a1eb-3e890df05062) ### Checklist - [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md) *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
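As a quick illustration of the version value this change introduces, here is a hedged sketch of creating a cluster on Kubernetes 1.29. The kubectl layer package (`@aws-cdk/lambda-layer-kubectl-v29`, built from the awscdk-asset-kubectl dependency mentioned above) and the construct IDs are assumptions, not part of this PR's diff.

```ts
import { Stack, StackProps } from 'aws-cdk-lib';
import * as eks from 'aws-cdk-lib/aws-eks';
import { KubectlV29Layer } from '@aws-cdk/lambda-layer-kubectl-v29';
import { Construct } from 'constructs';

class Eks129Stack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // V1_29 is the enum member added by this change; the kubectl layer should
    // match the cluster version, hence the v29 layer package.
    new eks.Cluster(this, 'Cluster', {
      version: eks.KubernetesVersion.V1_29,
      kubectlLayer: new KubectlV29Layer(this, 'KubectlLayer'),
      defaultCapacity: 1,
    });
  }
}
```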
### Issue # (if applicable) Closes #. ### Reason for this change This PR adds a new alpha module for EventBridge Pipes sources. This is the base setup for future work and additional sources. ### Description of changes The initial source is the SQS source. ### Description of how you validated changes - [x] Unit tests - [x] Integration test ### Checklist - [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md) ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
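For context on what the new alpha module looks like in practice, here is a hedged sketch of wiring the initial SQS source into a pipe. The package names (`@aws-cdk/aws-pipes-alpha`, `@aws-cdk/aws-pipes-sources-alpha`, `@aws-cdk/aws-pipes-targets-alpha`) and the `SqsSource`/`SqsTarget` class names reflect the alpha module layout and should be treated as assumptions rather than the exact API shipped by this PR.

```ts
import { Stack, StackProps } from 'aws-cdk-lib';
import * as sqs from 'aws-cdk-lib/aws-sqs';
import { Pipe } from '@aws-cdk/aws-pipes-alpha';
import { SqsSource } from '@aws-cdk/aws-pipes-sources-alpha';
import { SqsTarget } from '@aws-cdk/aws-pipes-targets-alpha';
import { Construct } from 'constructs';

class PipesSqsStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const sourceQueue = new sqs.Queue(this, 'SourceQueue');
    const targetQueue = new sqs.Queue(this, 'TargetQueue');

    // The pipe polls the source queue and forwards messages to the target queue.
    new Pipe(this, 'Pipe', {
      source: new SqsSource(sourceQueue),
      target: new SqsTarget(targetQueue),
    });
  }
}
```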
---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
aws-cdk-automation added the auto-approve and pr/no-squash (This PR should be merged instead of squash-merging it) labels on Feb 9, 2024
aws-cdk-automation had a problem deploying to test-pipeline with GitHub Actions on February 9, 2024 at 23:16: Failure
TheRealAmazonKendra had a problem deploying to test-pipeline with GitHub Actions on February 9, 2024 at 23:20: Failure
AWS CodeBuild CI Report
Powered by github-codebuild-logs, available on the AWS Serverless Application Repository
Thank you for contributing! Your pull request will be automatically updated and merged without squashing (do not update manually, and be sure to allow changes to be pushed to your fork).
See CHANGELOG