Detach hanging ENIs created by DMS Instance #7600

Open
ebower12 opened this issue Feb 19, 2019 · 8 comments
Labels
enhancement (Requests to existing resources that expand the functionality or scope), service/dms (Issues and PRs that pertain to the dms service)

Comments

@ebower12

ebower12 commented Feb 19, 2019

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments; they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version

  • Terraform v0.11.10
  • provider.aws: version = "~> 1.59"
  • provider.external: version = "~> 1.0"
  • provider.null: version = "~> 2.0"

Affected Resource(s)

aws_subnet
aws_dms_replication_instance
aws_dms_replication_subnet_group

Terraform Config File(s)

resource "aws_dms_replication_instance" "gateway-replication" {
  replication_instance_id      = "${var.env}-gateway-replication"
  replication_instance_class   = "dms.r4.large"
  availability_zone            = "us-west-2c"
  allocated_storage            = 500
  engine_version               = "3.1.2"
  auto_minor_version_upgrade   = true
  publicly_accessible          = true
  preferred_maintenance_window = "fri:04:44-fri:05:14"
  replication_subnet_group_id  = "${aws_dms_replication_subnet_group.gateway-replication-subnet-group.id}"
}

resource "aws_dms_replication_subnet_group" "gateway-replication-subnet-group" {
  replication_subnet_group_id          = "${var.env}-gateway-replication-subnet-group"
  replication_subnet_group_description = "Default group created for VPC ${var.vpc-id}"
  subnet_ids                           = [
    "${var.public-subnet-a}",
    "${var.public-subnet-b}",
    "${var.public-subnet-c}",
    "${var.private-subnet-a}",
    "${var.private-subnet-b}",
    "${var.private-subnet-c}",
    "${var.partner-subnet-a}",
    "${var.partner-subnet-b}",
    "${var.partner-subnet-c}",
  ]
}

#--------------------------------------------------------------------------------
# VPC
#--------------------------------------------------------------------------------

resource "aws_vpc" "datalake-vpc" {
  cidr_block           = "100.0.0.0/22"
  enable_dns_hostnames = true
}

resource "aws_internet_gateway" "default" {
  vpc_id = "${aws_vpc.datalake-vpc.id}"
}

#--------------------------------------------------------------------------------
# Public Subnets
#--------------------------------------------------------------------------------

resource "aws_subnet" "public-subnet-a" {
  vpc_id            = "${aws_vpc.datalake-vpc.id}"
  cidr_block        = "100.0.0.0/27"
  availability_zone = "us-west-2a"
}

resource "aws_subnet" "public-subnet-b" {
  vpc_id            = "${aws_vpc.datalake-vpc.id}"
  cidr_block        = "100.0.1.32/27"
  availability_zone = "us-west-2b"
}

resource "aws_subnet" "public-subnet-c" {
  vpc_id            = "${aws_vpc.datalake-vpc.id}"
  cidr_block        = "100.0.2.64/27"
  availability_zone = "us-west-2c"
}

#--------------------------------------------------------------------------------
# Private Subnets
#--------------------------------------------------------------------------------

resource "aws_subnet" "private-subnet-a" {
  vpc_id            = "${aws_vpc.datalake-vpc.id}"
  cidr_block        = "100.0.0.96/27"
  availability_zone = "us-west-2a"
}

resource "aws_subnet" "private-subnet-b" {
  vpc_id                  = "${aws_vpc.datalake-vpc.id}"
  cidr_block              = "100.0.1.128/27"
  availability_zone       = "us-west-2b"
  map_public_ip_on_launch = true
}

resource "aws_subnet" "private-subnet-c" {
  vpc_id            = "${aws_vpc.datalake-vpc.id}"
  cidr_block        = "100.0.2.160/27"
  availability_zone = "us-west-2c"
}

#--------------------------------------------------------------------------------
# Partner Subnets
#--------------------------------------------------------------------------------

resource "aws_subnet" "partner-subnet-a" {
  vpc_id            = "${aws_vpc.datalake-vpc.id}"
  cidr_block        = "100.0.0.192/27"
  availability_zone = "us-west-2a"
}

resource "aws_subnet" "partner-subnet-b" {
  vpc_id            = "${aws_vpc.datalake-vpc.id}"
  cidr_block        = "100.0.1.224/27"
  availability_zone = "us-west-2b"
}

resource "aws_subnet" "partner-subnet-c" {
  vpc_id            = "${aws_vpc.datalake-vpc.id}"
  cidr_block        = "100.0.3.0/27"
  availability_zone = "us-west-2c"
}

#--------------------------------------------------------------------------------
# Security Groups
#--------------------------------------------------------------------------------

resource "aws_security_group" "ec2-security-group" {
  name          = "${var.env}-ec2-security-group"
  description   = "Security group for EC2 VPC Endpoint"
  vpc_id        = "${aws_vpc.datalake-vpc.id}"

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "lambda-security-group" {
  name          = "${var.env}-lambda-security-group"
  description   = "Security group for Lambda VPC Endpoint"
  vpc_id        = "${aws_vpc.datalake-vpc.id}"

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "dms-security-group" {
  name          = "${var.env}-dms-security-group"
  description   = "Security group for DMS VPC Endpoint"
  vpc_id        = "${aws_vpc.datalake-vpc.id}"

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "rds-security-group" {
  name          = "${var.env}-rds-security-group"
  description   = "Security group for RDS VPC Endpoint"
  vpc_id        = "${aws_vpc.datalake-vpc.id}"

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Debug Output

None

Panic Output

None

Expected Behavior

All subnets are destroyed with no issue

Actual Behavior

This is essentially a duplicate of #829: I'm hitting the same issue, but with a different service. The DMS instance stands up its own ENI the same way Lambdas do, so on destroy Terraform hangs waiting for the subnet that contains the ENI to delete, until it eventually times out. If I go into the console while the destroy is running and manually delete the ENI, Terraform successfully destroys the subnet and continues on with no issue.

I've tried giving the DMS instance the necessary EC2 permissions as per the comments in #829, and doing the destroy in multiple steps (destroying the DMS instance first, then destroying the rest of the infrastructure separately), but the ENI is still not destroyed. If I don't build the subnet group along with the DMS instance, the ENI is still created but apparently not attached to the subnet, because the destroy has no issues in that case. I'm hoping the solution for this will be about the same as it was for the Lambdas.
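In the meantime the only workaround I can see is scripting that manual ENI cleanup into the destroy. The sketch below is untested and purely illustrative (the null_resource name, the shell loop, and the choice to sweep every detached ENI in the VPC are my own; it also assumes the AWS CLI is configured on the machine running Terraform), written in the same 0.11 syntax as the config above:

resource "null_resource" "dms-eni-cleanup" {
  # Illustrative destroy-time cleanup: delete any detached ("available") ENIs
  # left in the VPC so the subnets and security groups can be destroyed.
  # Sweeping every available ENI in the VPC is only reasonable during a full
  # teardown like this one.
  triggers {
    vpc_id = "${aws_vpc.datalake-vpc.id}"
  }

  # Ensures this is destroyed before the subnet group (and therefore before the subnets).
  depends_on = ["aws_dms_replication_subnet_group.gateway-replication-subnet-group"]

  provisioner "local-exec" {
    when    = "destroy"
    command = <<EOF
for eni in $(aws ec2 describe-network-interfaces \
    --filters "Name=vpc-id,Values=${self.triggers.vpc_id}" "Name=status,Values=available" \
    --query "NetworkInterfaces[].NetworkInterfaceId" --output text); do
  aws ec2 delete-network-interface --network-interface-id "$eni"
done
EOF
  }
}

Adding depends_on = ["null_resource.dms-eni-cleanup"] to aws_dms_replication_instance.gateway-replication would make the cleanup run only after the replication instance itself has been destroyed, which is the ordering the provider can't currently guarantee on its own.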

Steps to Reproduce

  1. stand up a DMS instance inside a VPC with a subnet group attached
  2. terraform destroy

Important Factoids

None

References

ebower12 added the enhancement label Feb 19, 2019
@lukeamsm

lukeamsm commented Dec 9, 2019

Having the exact same issue, but seeing it with security groups attached to DMS replication instances that cannot be deleted, because the disassociated ENIs are not removed by Terraform.

@jeremyyeo

Just encountering this... almost 2 years later.

@Julianzes

Also encountering the same issue: the security group cannot be deleted because an ENI that Terraform does not delete is still attached.

@crose-varde

Why is this labeled an "enhancement"? It seems like a bug to me. You can't terraform destroy security groups that have been attached to DMS replication instances.

@jjmontgo

Also encountering this issue. Had to remove the ENI manually in the console.

ewbankkit added the service/dms label Jun 2, 2022
@Engrave-zz

This is still an issue; encountering it as well.

@petewilcock

Bump @ewbankkit - can you reclassify this as a bug and push for a fix, please? It looks like this was reported over 4 years ago and still exists. This is one of those issues that has fallen through the cracks because it was originally misreported as a feature request.

To summarise: DMS creates and associates a network interface, but destroying the replication instance leaves the ENI behind. If you create and associate a security group at the same time, Terraform can't delete the security group and throws a "dependent object still exists" error, precisely because the ENI is never deleted. The dependency graph and ordering must be broken.
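To make the security group variant concrete, the association is nothing exotic - a sketch like the following (illustrative only, reusing the 0.11-style resource references from the original config above) is enough to hit it:

resource "aws_dms_replication_instance" "gateway-replication-sg-repro" {
  # Illustrative repro: attach the security group via vpc_security_group_ids,
  # apply, then terraform destroy. The instance is removed but its ENI is left
  # behind, so the security group delete fails ("has a dependent object")
  # until the ENI is deleted by hand.
  replication_instance_id     = "gateway-replication-sg-repro"
  replication_instance_class  = "dms.t2.micro"
  replication_subnet_group_id = "${aws_dms_replication_subnet_group.gateway-replication-subnet-group.id}"
  vpc_security_group_ids      = ["${aws_security_group.dms-security-group.id}"]
}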

Super annoying, but it must be a relatively trivial fix? 🙏

@BigMountainTiger

BigMountainTiger commented Jul 28, 2023

This should be a bug
