chore(release): 2.84.0 #25963

Merged: 31 commits merged into v2-release from bump/2.84.0 on Jun 13, 2023

Conversation

@aws-cdk-automation commented on Jun 13, 2023

See CHANGELOG

mergify bot and others added 30 commits June 7, 2023 15:57
## Description

This change enables IPv6 for EKS clusters

## Reasoning

* IPv6-based EKS clusters will enable service owners to minimize or even eliminate the perils of IPv4 CIDR micromanagement
* IPv6 will enable very-large-scale EKS clusters
* My working group (Amazon SDO/ECST) recently attempted to enable IPv6 using the L1 Cfn EKS constructs, but failed after discovering a CDKv2 issue that results in a master-less EKS cluster. Rather than investing in fixing that interaction, we agreed to contribute IPv6 support to aws-eks (this PR).

## Design

* This change treats IPv4 as the default networking configuration
* A new enum `IpFamily` is introduced so users can specify `IP_V4` or `IP_V6`
* ~~This change adds a new SAM layer dependency~~ Dependency removed after validating it was no longer necessary

## Testing

I consulted some team members about how best to test this change and concluded that I should duplicate the eks-cluster test definition. I decided this was a better approach than redefining the existing cluster test to use IPv6, for a couple of reasons:

1. EKS still requires IPv4 under the hood
2. IPv6 CIDR and subnet association isn't exactly straightforward.  My example in eks-cluster-ipv6 is the simplest one I could come up with
3. There are additional permissions and routing configuration necessary to get the cluster tests to succeed.  The differences were sufficient to motivate splitting out the test, in my opinion.

I ran into several issues running the test suite, primarily out-of-memory conditions that no amount of RAM appeared to help.  `NODE_OPTIONS=--max-old-space-size=8192` did not improve this issue, nor did increasing it to 12GB.  Edit: This ended up being a simple fix, but annoying to dig out: `export NODE_OPTIONS=--max-old-space-size=8192`.  Setting it in my .rc file did not stick, either.  macOS Ventura, for those keeping score at home.

The bulk of my testing was performed using the sample stack definition below, but I was unable to run the manual testing described in `aws-eks/test/MANUAL_TEST.md` because I had no access to the underlying node instances.  Edit: I can run the MANUAL_TESTS now if that's deemed necessary.

Updated: This sample stack creates an IPv6-enabled cluster with an example nginx service running.

Sample:

```ts
import {
  App, Duration, Fn, Stack,
  aws_ec2 as ec2,
  aws_eks as eks,
  aws_iam as iam,
} from 'aws-cdk-lib';
import { getClusterVersionConfig } from './integ-tests-kubernetes-version';

const app = new App();
const env = { region: 'us-east-1', account: '' };
const stack = new Stack(app, 'my-v6-test-stack-1', { env });

const vpc = new ec2.Vpc(stack, 'Vpc', { maxAzs: 3, natGateways: 1, restrictDefaultSecurityGroup: false });
const ipv6cidr = new ec2.CfnVPCCidrBlock(stack, 'CIDR6', {
  vpcId: vpc.vpcId,
  amazonProvidedIpv6CidrBlock: true,
});

let subnetcount = 0;
const subnets = [...vpc.publicSubnets, ...vpc.privateSubnets];
for (const subnet of subnets) {
  // Wait for the IPv6 CIDR block to be attached to the VPC first
  subnet.node.addDependency(ipv6cidr);
  _associate_subnet_with_v6_cidr(subnetcount, subnet);
  subnetcount++;
}

const roles = _create_roles();

const cluster = new eks.Cluster(stack, 'Cluster', {
  ...getClusterVersionConfig(stack),
  vpc: vpc,
  clusterName: 'some-eks-cluster',
  defaultCapacity: 0,
  endpointAccess: eks.EndpointAccess.PUBLIC_AND_PRIVATE,
  ipFamily: eks.IpFamily.IP_V6,
  mastersRole: roles.masters,
  securityGroup: _create_eks_security_group(),
  vpcSubnets: [{ subnets: subnets }],
});

// add an extra nodegroup
cluster.addNodegroupCapacity('some-node-group', {
  instanceTypes: [new ec2.InstanceType('m5.large')],
  minSize: 1,
  nodeRole: roles.nodes,
});

cluster.kubectlSecurityGroup?.addEgressRule(
  ec2.Peer.anyIpv6(), ec2.Port.allTraffic(),
);

// deploy an nginx ingress in a namespace
const nginxNamespace = cluster.addManifest('nginx-namespace', {
  apiVersion: 'v1',
  kind: 'Namespace',
  metadata: {
    name: 'nginx',
  },
});

const nginxIngress = cluster.addHelmChart('nginx-ingress', {
  chart: 'nginx-ingress',
  repository: 'https://helm.nginx.com/stable',
  namespace: 'nginx',
  wait: true,
  createNamespace: false,
  timeout: Duration.minutes(5),
});

// make sure namespace is deployed before the chart
nginxIngress.node.addDependency(nginxNamespace);

function _associate_subnet_with_v6_cidr(count: number, subnet: ec2.ISubnet) {
  const cfnSubnet = subnet.node.defaultChild as ec2.CfnSubnet;
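  // Carve the VPC's IPv6 block into 256 /64 subnets (64 host bits each)
  // and pick the count-th slice for this subnet.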
  cfnSubnet.ipv6CidrBlock = Fn.select(count, Fn.cidr(Fn.select(0, vpc.vpcIpv6CidrBlocks), 256, (128 - 64).toString()));
  cfnSubnet.assignIpv6AddressOnCreation = true;
}

export function _create_eks_security_group(): ec2.SecurityGroup {
  let sg = new ec2.SecurityGroup(stack, 'eks-sg', {
    allowAllIpv6Outbound: true,
    allowAllOutbound: true,
    vpc,
  });
  sg.addIngressRule(
    ec2.Peer.ipv4('10.0.0.0/8'), ec2.Port.allTraffic(),
  );
  sg.addIngressRule(
    ec2.Peer.ipv6(Fn.select(0, vpc.vpcIpv6CidrBlocks)), ec2.Port.allTraffic(),
  );
  return sg;
}

export namespace Kubernetes {
  export interface RoleDescriptors {
    masters: iam.Role,
    nodes: iam.Role,
  }
}

function _create_roles(): Kubernetes.RoleDescriptors {
  const clusterAdminStatement = new iam.PolicyDocument({
    statements: [new iam.PolicyStatement({
      actions: [
        'eks:*',
        'iam:ListRoles',
      ],
      resources: ['*'],
    })],
  });

  const eksClusterAdminRole = new iam.Role(stack, 'AdminRole', {
    roleName: 'some-eks-master-admin',
    assumedBy: new iam.AccountRootPrincipal(),
    inlinePolicies: { clusterAdminStatement },
  });

  const assumeAnyRolePolicy = new iam.PolicyDocument({
    statements: [new iam.PolicyStatement({
      actions: [
        'sts:AssumeRole',
      ],
      resources: ['*'],
    })],
  });

  const ipv6Management = new iam.PolicyDocument({
    statements: [new iam.PolicyStatement({
      resources: ['arn:aws:ec2:*:*:network-interface/*'],
      actions: [
        'ec2:AssignIpv6Addresses',
        'ec2:UnassignIpv6Addresses',
      ],
    })],
  });

  const eksClusterNodeGroupRole = new iam.Role(stack, 'NodeGroupRole', {
    roleName: 'some-node-group-role',
    assumedBy: new iam.ServicePrincipal('ec2.amazonaws.com'),
    managedPolicies: [
      iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonEKSWorkerNodePolicy'),
      iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonEC2ContainerRegistryReadOnly'),
      iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonEKS_CNI_Policy'),
      iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonSSMManagedInstanceCore'),
      iam.ManagedPolicy.fromAwsManagedPolicyName('CloudWatchAgentServerPolicy'),
    ],
    inlinePolicies: {
      assumeAnyRolePolicy,
      ipv6Management,
    },
  });

  return { masters: eksClusterAdminRole, nodes: eksClusterNodeGroupRole };
}
```

## Issues

Edit: Fixed

Integration tests, specifically the new one I contributed, failed with an issue in describing a Fargate profile:

```
2023-06-01T16:24:30.127Z    6f9b8583-8440-4f13-a48f-28e09a261d40    INFO    {
    "describeFargateProfile": {
        "clusterName": "Cluster9EE0221C-f458e6dc5f544e9b9db928f6686c14d5",
        "fargateProfileName": "ClusterfargateprofiledefaultEF-1628f1c3e6ea41ebb3b0c224de5698b4"
    }
}
---------------------------
2023-06-01T16:24:30.138Z    6f9b8583-8440-4f13-a48f-28e09a261d40    INFO    {
    "describeFargateProfileError": {}
}
---------------------------
2023-06-01T16:24:30.139Z    6f9b8583-8440-4f13-a48f-28e09a261d40    ERROR    Invoke Error     {
    "errorType": "TypeError",
    "errorMessage": "getEksClient(...).describeFargateProfile is not a function",
    "stack": [
        "TypeError: getEksClient(...).describeFargateProfile is not a function",
        "    at Object.describeFargateProfile (/var/task/index.js:27:51)",
        "    at FargateProfileResourceHandler.queryStatus (/var/task/fargate.js:83:67)",
        "    at FargateProfileResourceHandler.isUpdateComplete (/var/task/fargate.js:49:35)",
        "    at FargateProfileResourceHandler.isCreateComplete (/var/task/fargate.js:46:21)",
        "    at FargateProfileResourceHandler.isComplete (/var/task/common.js:31:40)",
        "    at Runtime.isComplete [as handler] (/var/task/index.js:50:21)",
        "    at Runtime.handleOnceNonStreaming (/var/runtime/Runtime.js:74:25)"
    ]
}
```

I am uncertain whether this is an existing issue, one introduced by this change, or something related to my local build.  Again, I had abundant issues building aws-cdk and the test suites, depending on Jupiter's position in the sky.

## Collaborators 
Most of the work in this change was performed by @wlami and @jagu-sayan (thank you!)

Fixes #18423

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
This PR adds a `RecoveryPointTags` property to `BackupPlanRule`.
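
A minimal sketch of the new property; `recoveryPointTags` as a string map is an assumption based on the CloudFormation `RecoveryPointTags` field:

```ts
import { Duration, aws_backup as backup } from 'aws-cdk-lib';

// Hypothetical rule that tags every recovery point it creates:
const rule = new backup.BackupPlanRule({
  deleteAfter: Duration.days(30),
  recoveryPointTags: { project: 'my-project' }, // property name assumed from this PR
});
```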

Closes #25671

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…reference (#25529)

## Issue
When creating a role, the following warning message appeared:
```
Policy large: 11 exceeds 10 managed policies attached to a Role, this requires a quota increase
```

This was caused by the same managed policy being added multiple times.

Although there was only one managed policy in the synthesized template, the `managedPolicies` field of the `Role` class ended up containing multiple instances of the same managed policy.
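
A minimal reproduction of the duplicate-add scenario (the role and policy shown are illustrative):

```ts
import { aws_iam as iam } from 'aws-cdk-lib';

declare const role: iam.Role;
const policy = iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonS3ReadOnlyAccess');

// Before this fix, adding the same managed policy twice produced two entries
// in `managedPolicies` and inflated the quota warning, even though the
// synthesized template contained the policy only once.
role.addManagedPolicy(policy);
role.addManagedPolicy(policy);
```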

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…n for Lambda (#25725)

This PR provides support for the AWS Parameters and Secrets Extension for Lambda functions. This extension will allow users to retrieve and cache AWS Secrets Manager secrets and AWS Parameter Store parameters in Lambda functions without using an SDK.

Closes #23187 

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
The problem would manifest as this error message:

```
❌ Deployment failed: Error: Duplicate use of node id: 07a6878c7a2ec9b49ef3c0ece94cef1c2dd20fba34ca9650dfa6e7e00f2b9961:current_account-current_region-build
```

The problem was that we were using the full asset "destination identifier" for both the build and publish steps, but then were trying to use the `source` object to deduplicate build steps.

A more robust solution is to only use the asset identifier (excluding the destination identifier) for the build step, which includes all data necessary to deduplicate the asset. No need to look at the source at all anymore.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…ecrets Extension for Lambda" (#25919)

Reverts #25725

This breaks the go build

```


Error: Command (go build -modfile local.go.mod ./...) failed with status 1:
--
3592 | #STDERR> package github.com/aws/aws-cdk-go/awscdk/v2/awsapigateway
3593 | #STDERR>    imports github.com/aws/aws-cdk-go/awscdk/v2/awscognito
3594 | #STDERR>    imports github.com/aws/aws-cdk-go/awscdk/v2/awslambda
3595 | #STDERR>    imports github.com/aws/aws-cdk-go/awscdk/v2/awssecretsmanager
3596 | #STDERR>    imports github.com/aws/aws-cdk-go/awscdk/v2/awslambda: import cycle not allowed
3597 | #STDERR> package github.com/aws/aws-cdk-go/awscdk/v2/awsapigateway
3598 | #STDERR>    imports github.com/aws/aws-cdk-go/awscdk/v2/awscognito
3599 | #STDERR>    imports github.com/aws/aws-cdk-go/awscdk/v2/awslambda
3600 | #STDERR>    imports github.com/aws/aws-cdk-go/awscdk/v2/awssecretsmanager
3601 | #STDERR>    imports github.com/aws/aws-cdk-go/awscdk/v2/awslambda: import cycle not allowed
```


----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…n for Lambda (#25928)

This PR provides support for the AWS Parameters and Secrets Extension for Lambda functions. This extension will allow users to retrieve and cache AWS Secrets Manager secrets and AWS Parameter Store parameters in Lambda functions without using an SDK.
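
A minimal sketch of the resulting usage (the option shape shown here is an assumption based on this feature's description):

```ts
import { App, Stack, aws_lambda as lambda } from 'aws-cdk-lib';

const app = new App();
const stack = new Stack(app, 'ParamsAndSecretsStack');

// Attach the Parameters and Secrets extension layer to a function.
new lambda.Function(stack, 'Fn', {
  runtime: lambda.Runtime.NODEJS_18_X,
  handler: 'index.handler',
  code: lambda.Code.fromInline('exports.handler = async () => {};'),
  paramsAndSecrets: lambda.ParamsAndSecretsLayerVersion.fromVersion(
    lambda.ParamsAndSecretsVersions.V1_0_103,
    { cacheSize: 500 }, // cached entries; option name assumed
  ),
});
```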

Note: the previous PR broke the go build. This PR removes the circular dependency that caused the break:

```
[jsii-pacmak] [INFO] Found 1 modules to package
[jsii-pacmak] [INFO] Packaging NPM bundles
[jsii-pacmak] [INFO] Loading jsii assemblies and translations
[jsii-pacmak] [INFO] Packaging 'go' for aws-cdk-lib
[jsii-pacmak] [INFO] go finished
[jsii-pacmak] [INFO] Packaged. go (54.9s) | npm pack (5.4s) | load jsii (0.5s) | cleanup (0.0s)
```

Closes #23187

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Imported EKS clusters had an invalid service token and couldn't deploy new k8s manifests or Helm charts.  This PR fixes that issue (see the sketch after the checklist below).

- [x] Update README and doc string. `functionArn` should be the custom resource provider's service token rather than the kubectl provider lambda arn. No breaking change in this PR.
- [x] Add a new integration test to ensure the imported cluster can always create manifests and helm charts.
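
A sketch of importing a cluster with a working service token; all ARNs are hypothetical, and the attribute shape follows the README update described above:

```ts
import { App, Stack, aws_eks as eks, aws_iam as iam } from 'aws-cdk-lib';

const app = new App();
const stack = new Stack(app, 'ImportedClusterStack');

// Per the doc fix, functionArn is the custom resource provider's service
// token, not the kubectl lambda's own ARN.
const kubectlProvider = eks.KubectlProvider.fromKubectlProviderAttributes(stack, 'KubectlProvider', {
  functionArn: 'arn:aws:lambda:us-east-1:111111111111:function:provider-framework-onEvent',
  kubectlRoleArn: 'arn:aws:iam::111111111111:role/kubectl-role',
  handlerRole: iam.Role.fromRoleArn(stack, 'HandlerRole', 'arn:aws:iam::111111111111:role/kubectl-handler-role'),
});

const cluster = eks.Cluster.fromClusterAttributes(stack, 'Imported', {
  clusterName: 'my-cluster',
  kubectlProvider,
});

// With a valid service token, manifests deploy against the imported cluster.
cluster.addManifest('nginx-namespace', {
  apiVersion: 'v1',
  kind: 'Namespace',
  metadata: { name: 'nginx' },
});
```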

Closes #25835

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
This change removes the expression `export PATH=$(npm bin):$PATH`, which had formerly been used in scripts to ensure `node_modules` is in `PATH`.

`npm bin` was [removed in npm 9](npm/cli#5459). `npm exec` or `npx` should be used instead. `build.sh` already uses `npx`. This change revises `scripts/gen.sh` to use `npx` as well.

Prior to this change, within shells executing `build.sh` or `scripts/gen.sh`, `PATH` actually contains error text if npm 9+ is used.

```
~/repos/aws-cdk $ docker run --rm -v $PWD:$PWD -w $PWD node:hydrogen-alpine sh -c 'node --version && npm --version && export PATH=$(npm bin):$PATH && echo $PATH' # output when npm bin is unavailable
v18.16.0
9.5.1
Unknown command: "bin" To see a list of supported npm commands, run: npm help:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

~/repos/aws-cdk $ docker run --rm -v $PWD:$PWD -w $PWD node:gallium-alpine sh -c 'node --version && npm --version && export PATH=$(npm bin):$PATH && echo $PATH' # output when npm bin is available
v16.20.0
8.19.4
/Users/douglasnaphas/repos/aws-cdk/node_modules/.bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
```

It didn't make `build.sh` fail, because `lerna` has been run via `npx` since #24217, and `build.sh` doesn't need anything from `node_modules` to be added to `PATH`. `export PATH=$(npm bin):$PATH` succeeds even though `npm bin` fails, per `export`'s normal behavior.

Prior to this change, `scripts/gen.sh` failed with

```
./scripts/gen.sh: line 18: lerna: command not found
```

when I ran it. After this change, `scripts/gen.sh` ran successfully.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
This PR adds the same validation for App Runner's CPU and Memory values as [CloudFormation's input patterns](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apprunner-service-instanceconfiguration.html#cfn-apprunner-service-instanceconfiguration-cpu).
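
A sketch of what the validation catches, using the `@aws-cdk/aws-apprunner-alpha` API (the exact failure point is an assumption):

```ts
import * as apprunner from '@aws-cdk/aws-apprunner-alpha';

// Values matching CloudFormation's input patterns pass:
const cpu = apprunner.Cpu.of('4 vCPU');
const memory = apprunner.Memory.of('12 GB');

// A value outside the allowed set, such as apprunner.Cpu.of('3 vCPU'),
// is now rejected by this validation instead of failing at deploy time.
```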

Closes #25872

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…pute type with Linux GPU build image (#25880)

This fix allows specifying the `BUILD_GENERAL1_SMALL` compute type when using the Linux GPU build image, as defined in the [docs](https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-compute-types.html).

Closes #25857.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
This feature adds `assignPublicIp` support to `EcsTask`.

It specifies whether the task's elastic network interface receives a public IP address.
You can enable it only when `LaunchType` is `FARGATE`.
In this commit, the `LaunchType` selection logic keeps backwards compatibility.
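
A minimal sketch of the new prop on the `aws-events-targets` `EcsTask` target; the surrounding resources are assumed to exist:

```ts
import {
  aws_ec2 as ec2,
  aws_ecs as ecs,
  aws_events as events,
  aws_events_targets as targets,
} from 'aws-cdk-lib';

declare const rule: events.Rule;
declare const cluster: ecs.ICluster;
declare const taskDefinition: ecs.FargateTaskDefinition;

// assignPublicIp implies the FARGATE launch type and needs a public subnet.
rule.addTarget(new targets.EcsTask({
  cluster,
  taskDefinition,
  assignPublicIp: true,
  subnetSelection: { subnetType: ec2.SubnetType.PUBLIC },
}));
```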

Closes #9233 

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…queryexecution task (#25911)

The policy was originally adjusted in #22314 to use `formatArn()` to account for other partitions. However, there were no tests for these policies before this fix, so it went unnoticed that the policy was being generated incorrectly, without the ability to act on the resources inside a bucket.

Closes #25875 

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…replacement function (#25762)

Closes #25748.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…25802)

now that `aws-cdk-lib` exports the `core` sub-package, the alpha packages should import from there instead of from `aws-cdk-lib` directly

otherwise any usage of these alpha packages will cause node to traverse every single file in `aws-cdk-lib`

There is an issue in the stack name generation process where the prefix generated from the assembly's stage name is not taken into account when shortening a stack name to meet the requirement of being at most 128 characters long.

This can lead to generating an invalid stack name longer than 128 characters: the stack name is shortened to 128 characters first, so once the prefix is added, the limit is exceeded.

Current solution:
- Add a feature flag
  - With the feature flag on, the prefix is processed within the `generateUniqueName` function.
  - With the feature flag off, stack name generation is unchanged.

Fixes #23628


NOTE: This PR was previously opened, but it was merged before I was able to add a feature flag, which ended up introducing breaking changes, and the PR's changes were rolled back. Old PR: #24443
…nment (#25944)

The `computeEnvironmentName` property was missing in `FargateComputeEnvironment`, so the `ComputeEnvironmentName` property was not set on the resulting `AWS::Batch::ComputeEnvironment` resource in the output CloudFormation.
Updated `managed-compute-environment` to reflect the fix made in `FargateComputeEnvironment`.

Closes #25794.
As mentioned in #25855, the doc for `s3EncryptionEnabled` in the `ExecuteCommandLogConfiguration` interface is wrong.

Closes #25855

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
For example, I realized that the role granted to perform a push from GitHub Actions to ECR receives excessive permissions when using `grantPullPush`. The README was temporarily updated to fulfill the conditions of a 'feat' commit.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Co-authored-by: Rico Hermans <rix0rrr@gmail.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
Co-authored-by: Romain Marcadier <rmuller@amazon.fr>
…ervice connect (#25891)

This PR should fix #25616, where service connect accidentally creates a duplicate HTTP namespace when a customer sets a service connect default namespace on the cluster. 

Closes #25616 

However, I think that a broader fix for this issue should include deprecation of the `namespace` parameter in `ServiceConnectProps` in favor of a `cloudmapNamespace: INamespace` parameter; that way, we can force resolution by ARN under the hood of the construct and never trigger the namespace duplication behavior. 

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…y when using Token (#25922)

Related to #25920.

In `AutoScalingGroup`, `maxCapacity` defaults to `Math.max(minCapacity, 1)` even when `minCapacity` is a Token.
Because a Token-encoded number is always negative, `maxCapacity` ends up as `1` whenever `maxCapacity` is `undefined` and `minCapacity` is a Token.
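
A small illustration of the pitfall, using `Lazy.number` to produce a token:

```ts
import { Lazy } from 'aws-cdk-lib';

const minCapacity = Lazy.number({ produce: () => 5 });

// Token numbers are encoded as negative sentinels at synth time, so this
// always evaluates to 1, regardless of what the token resolves to later.
const maxCapacity = Math.max(minCapacity, 1);
```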

see also #25795 (comment)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…25961)

Adds support for the new encryption mode DSSE (`aws:kms:dsse`), a mode that performs double encryption with the KMS-generated data key.
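
A minimal sketch (the `BucketEncryption` member name is an assumption based on the feature description):

```ts
import { App, Stack, aws_s3 as s3 } from 'aws-cdk-lib';

const app = new App();
const stack = new Stack(app, 'DsseStack');

// Dual-layer server-side encryption (aws:kms:dsse) with an AWS-managed key;
// enum member name assumed.
new s3.Bucket(stack, 'DsseBucket', {
  encryption: s3.BucketEncryption.DSSE_MANAGED,
});
```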

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
@aws-cdk-automation added the auto-approve and pr/no-squash (This PR should be merged instead of squash-merging it) labels on Jun 13, 2023
@github-actions bot added the p2 label on Jun 13, 2023
@aws-cdk-automation requested a review from a team on Jun 13, 2023 at 22:03
@aws-cdk-automation (Collaborator, Author)

AWS CodeBuild CI Report

  • CodeBuild project: AutoBuildv2Project1C6BFA3F-wQm2hXv2jqQv
  • Commit ID: 37460ae
  • Result: SUCCEEDED
  • Build Logs (available for 30 days)

Powered by github-codebuild-logs, available on the AWS Serverless Application Repository


mergify bot commented Jun 13, 2023

Thank you for contributing! Your pull request will be automatically updated and merged without squashing (do not update manually, and be sure to allow changes to be pushed to your fork).

@mergify bot merged commit f7c792f into v2-release on Jun 13, 2023
@mergify bot deleted the bump/2.84.0 branch on June 13, 2023 at 22:47