
ecs: aws-ecs:enableImdsBlockingDeprecatedFeature warning showing up even when not in use #33684

Closed
rantoniuk opened this issue Mar 4, 2025 · 3 comments

rantoniuk commented Mar 4, 2025

Describe the bug

As a result of #32609 and #32763, I started to see:

[Warning at /Ecs-Stack/EcsCluster] Blocking container access to instance role will be deprecated. Use the @aws-cdk/aws-ecs:enableImdsBlockingDeprecatedFeature feature flagto keep this feature temporarily. See #32609 [ack: @aws-cdk/aws-ecs:deprecatedImdsBlocking]

even though I don't use the canContainersAccessInstanceRole flag anywhere in my CDK stacks. This should not appear here, similarly to #33505.

Interestingly, when I enable the "@aws-cdk/aws-ecs:disableEcsImdsBlocking": true flag, I get the following diff:

Stack Ecs-Stack
Resources
[~] AWS::EC2::LaunchTemplate EcsClusterLt EcsClusterLtAFAB3146
 └─ [~] LaunchTemplateData
     └─ [~] .UserData:
         └─ [~] .Fn::Base64:
             └─ [~] .Fn::Join:
                 └─ @@ -5,6 +5,6 @@
                    [ ]     {
                    [ ]       "Ref": "EcsCluster97242B84"
                    [ ]     },
                    [-]     " >> /etc/ecs/ecs.config\nsudo iptables --insert FORWARD 1 --in-interface docker+ --destination 169.254.169.254/32 --jump DROP\nsudo service iptables save\necho ECS_AWSVPC_BLOCK_IMDS=true >> /etc/ecs/ecs.config"
                    [+]     " >> /etc/ecs/ecs.config"
                    [ ]   ]
                    [ ] ]


which means that even though I never used the canContainersAccessInstanceRole flag anywhere, CDK was applying its default behaviour under the hood.

Regression Issue

  • Select this option if this issue appears to be a regression.

Last Known Working CDK Version

No response

Expected Behavior

No warning is shown if the property is not used.

Current Behavior

The warning shows up anyway.

Reproduction Steps

import { Construct } from 'constructs';
import { Stack, StackProps } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as iam from 'aws-cdk-lib/aws-iam';
import { AutoScalingGroup, IAutoScalingGroup } from 'aws-cdk-lib/aws-autoscaling';
import { AsgCapacityProvider, Cluster, ContainerInsights } from 'aws-cdk-lib/aws-ecs';

// Assumed shape of the stack props; only the VPC is referenced below.
export interface EcsStackProps extends StackProps {
  readonly vpc: ec2.IVpc;
}

export class EcsStack extends Stack {
  readonly cluster: Cluster;
  // Note: the following two fields are never assigned in this trimmed repro.
  readonly execRole: iam.IRole;
  readonly gpuAutoScalingGroup: IAutoScalingGroup;

  constructor(scope: Construct, id: string, props: EcsStackProps) {
    super(scope, id, props);

    this.cluster = new Cluster(this, 'EcsCluster', {
      clusterName: 'EcsCluster',
      vpc: props.vpc,
      containerInsightsV2: ContainerInsights.ENABLED,
    });

    const launchTemplate = new ec2.LaunchTemplate(this, 'EcsClusterLt', {
      launchTemplateName: 'ecs-gpu-lt',
      // machineImage: EcsOptimizedImage.amazonLinux(),
      machineImage: ec2.MachineImage.genericLinux({
        // ECS-optimised image with GPU support
        'us-west-2': 'ami-027492973b111510a',
      }),
      instanceType: new ec2.InstanceType('g4dn.xlarge'),
      requireImdsv2: true,
    });

    // Add GPU autoscaling capacity provider to the cluster
    const gpuAutoScalingGroup = new AutoScalingGroup(this, 'EcsGpuASG', {
      autoScalingGroupName: 'EcsGpuASG',
      vpc: props.vpc,
      launchTemplate,
    });

    const gpuCapacityProvider = new AsgCapacityProvider(this, 'EcsGpuCapacityProvider', {
      autoScalingGroup: gpuAutoScalingGroup,
      capacityProviderName: 'gpuCapacityProvider',
    });

    this.cluster.addAsgCapacityProvider(gpuCapacityProvider);
  }
}

Possible Solution

No response

Additional Information/Context

No response

CDK CLI Version

2.1002.0 (build 09ef5a0)

Framework Version

No response

Node.js Version

v22.14.0

OS

macos

Language

TypeScript

Language Version

No response

Other information

cdk@0.1.0 /Users/warden/Work/bh2/infra
├── @biomejs/biome@1.9.4
├── @types/babel__traverse@7.20.6
├── @types/js-yaml@4.0.9
├── @types/node@22.13.9
├── @typescript-eslint/eslint-plugin@8.26.0
├── @typescript-eslint/parser@8.26.0
├── aws-cdk-lib@2.181.1
├── aws-cdk@2.1002.0
├── cdk-nag@2.35.36
├── cloudwatch-retention-setter@0.0.15
├── constructs@10.4.2
├── js-yaml@4.1.0
├── lefthook@1.11.2
├── source-map-support@0.5.21
└── typescript@5.8.2

@rantoniuk rantoniuk added bug This issue is a bug. needs-triage This issue or PR still needs to be triaged. labels Mar 4, 2025
@github-actions github-actions bot added the @aws-cdk/aws-ecs Related to Amazon Elastic Container label Mar 4, 2025
pahud (Contributor) commented Mar 4, 2025

Yes, this is a known issue and here's what's happening:

  1. By default, CDK applies IMDS blocking on ECS container instances for security (principle of least privilege). This happens when:

    • canContainersAccessInstanceRole is undefined (your case) or false,
    • AND the @aws-cdk/aws-ecs:disableEcsImdsBlocking feature flag is not enabled.
  2. In that case the code automatically adds iptables rules to block container access to IMDS (169.254.169.254) and emits the deprecation warning. The relevant source:

if (options.canContainersAccessInstanceRole === false ||
    options.canContainersAccessInstanceRole === undefined) {
  if (!FeatureFlags.of(this).isEnabled(Disable_ECS_IMDS_Blocking) &&
      FeatureFlags.of(this).isEnabled(Enable_IMDS_Blocking_Deprecated_Feature)) {
    // new commands from https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html#task-iam-role-considerations
    autoScalingGroup.addUserData('sudo yum install -y iptables-services; sudo iptables --insert DOCKER-USER 1 --in-interface docker+ --destination 169.254.169.254/32 --jump DROP');
    autoScalingGroup.addUserData('sudo iptables-save | sudo tee /etc/sysconfig/iptables && sudo systemctl enable --now iptables');
  } else if (!FeatureFlags.of(this).isEnabled(Disable_ECS_IMDS_Blocking) &&
             !FeatureFlags.of(this).isEnabled(Enable_IMDS_Blocking_Deprecated_Feature)) {
    // old commands
    autoScalingGroup.addUserData('sudo iptables --insert FORWARD 1 --in-interface docker+ --destination 169.254.169.254/32 --jump DROP');
    autoScalingGroup.addUserData('sudo service iptables save');
    Annotations.of(this).addWarningV2('@aws-cdk/aws-ecs:deprecatedImdsBlocking',
      'Blocking container access to instance role will be deprecated. Use the @aws-cdk/aws-ecs:enableImdsBlockingDeprecatedFeature feature flag' +
      'to keep this feature temporarily. See https://github.com/aws/aws-cdk/discussions/32609');
  }
  // The following is only for AwsVpc networking mode, but doesn't hurt for the other modes.
  autoScalingGroup.addUserData('echo ECS_AWSVPC_BLOCK_IMDS=true >> /etc/ecs/ecs.config');
}

  3. This explains the diff you saw when enabling @aws-cdk/aws-ecs:disableEcsImdsBlocking:
    • Before: the user data contained the iptables rules that block IMDS
    • After: those rules are removed
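
Based on the condition quoted above, one more option (a hedged sketch, not taken from this thread) is to opt in explicitly on the capacity provider: when canContainersAccessInstanceRole is true, the whole block is skipped, so neither the iptables user data nor the warning is added. The snippet reuses the names from your repro; verify the prop against your aws-cdk-lib version and note the security trade-off:

// Sketch only: an explicit opt-in means canContainersAccessInstanceRole is neither
// false nor undefined, so the IMDS-blocking user data and the deprecation warning
// shown above are not emitted for this capacity provider.
// Trade-off: containers on this ASG can then reach the instance role credentials.
const gpuCapacityProvider = new AsgCapacityProvider(this, 'EcsGpuCapacityProvider', {
  autoScalingGroup: gpuAutoScalingGroup,
  capacityProviderName: 'gpuCapacityProvider',
  canContainersAccessInstanceRole: true,
});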

This is part of AWS's plan to deprecate blocking container access to the instance role; see #32609.

To resolve (both are CDK context keys; a sketch follows this list):

  • To keep blocking via the new mechanism: set @aws-cdk/aws-ecs:enableImdsBlockingDeprecatedFeature to true
  • To disable blocking entirely: set @aws-cdk/aws-ecs:disableEcsImdsBlocking to true
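
For reference, a minimal sketch of where those keys go. They are ordinary CDK context keys, so they normally live in cdk.json under "context"; setting them on the App, as shown below, has the same effect for feature-flag lookups (illustrative only, and pick exactly one of the two keys):

import { App } from 'aws-cdk-lib';

const app = new App({
  context: {
    // keep the (deprecated) IMDS blocking, using the newer iptables commands:
    '@aws-cdk/aws-ecs:enableImdsBlockingDeprecatedFeature': true,
    // OR stop blocking IMDS altogether (do not set both):
    // '@aws-cdk/aws-ecs:disableEcsImdsBlocking': true,
  },
});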

Let me know if it works for you.

@pahud pahud added response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. p3 and removed needs-triage This issue or PR still needs to be triaged. labels Mar 4, 2025
rantoniuk (Author) commented Mar 4, 2025

Got it - it's a result of the updated set of recommended CDK flags.

@rantoniuk rantoniuk closed this as not planned Mar 4, 2025

github-actions bot commented Mar 4, 2025

Comments on closed issues and PRs are hard for our team to see.
If you need help, please open a new issue that references this one.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Mar 4, 2025