
Problems converging a base environment in us-west-1 #195

Closed
ghost opened this issue Oct 19, 2017 · 6 comments


ghost commented Oct 19, 2017

Defined a bare-bones environment in mu.yml and tried to run "mu env up acceptance" in us-west-1:

ElbSubnetAZ2 (AWS::EC2::Subnet) CREATE_FAILED Value (us-west-1b) for parameter availabilityZone is invalid. Subnets can currently only be created in the following availability zones: us-west-1a, us-west-1c.

Perhaps the call to Fn::GetAZs isn't in sync with the zones that are actually usable in the account?
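For context, the vpc.yml template picks each subnet's AZ by index from Fn::GetAZs, roughly like the sketch below (the resource name is taken from the error above; VpcId and CidrBlock are illustrative, not mu's actual values):

```yaml
# Sketch of the failing pattern: Fn::GetAZs lists every AZ in the
# region, including zones where this account cannot create subnets,
# so Fn::Select index 1 can land on the unusable us-west-1b.
ElbSubnetAZ2:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref VPC            # illustrative
    CidrBlock: 10.0.1.0/24     # illustrative
    AvailabilityZone:
      Fn::Select:
      - 1
      - Fn::GetAZs: ""
```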


ghost commented Oct 19, 2017

---
namespace: mu
service:
  name: mu-example
  port: 8080
  pathPatterns:
  - /*
  pipeline:
    source:
      provider: GitHub
      repo: xxxxxx

environments:

  # The unique name of the environment  (required)
  - name: acceptance
    provider: ecs                   # The type of environment to use, ec2 or ecs (default: ecs)


    ### Attributes for the ECS container instances
    cluster:
      #imageId: ami-xxxxxx           # The AMI to use for the ECS container instances (default: latest ECS optimized AMI)
      instanceType: t2.micro        # The instance type to use for the ECS container instances (default: t2.micro)
      instanceTenancy: default      # Whether to use default or dedicated tenancy (default: default)
      desiredCapacity: 1            # Desired number of ECS container instances (default 1)
      maxSize: 2                    # Max size to scale the ECS ASG to (default: 2)
      sshAllow: 0.0.0.0/0           # CIDR block to allow SSH access from (default: 0.0.0.0/0)
      scaleOutThreshold: 80         # Threshold for % memory utilization to scale out ECS container instances (default: 80)
      scaleInThreshold: 30          # Threshold for % memory utilization to scale in ECS container instances (default: 30)

    loadbalancer:
      internal: false # Whether to create an internal ELB or not (default: false)

    discovery:
      provider: consul              # Which provider to use for service discovery.  Currently, only consul is supported (default: none)


cplee commented Oct 20, 2017

@erickascic Looks like us-west-1 has an AZ that is at capacity for VPCs. I'd like to implement something similar to this post but I need to confirm something first. Can you share the output from the following command in the account you are seeing this issue?

aws ec2 describe-availability-zones --region us-west-1


ghost commented Oct 20, 2017

aws ec2 describe-availability-zones --region us-west-1
{
    "AvailabilityZones": [
        {
            "State": "available",
            "ZoneName": "us-west-1a",
            "Messages": [],
            "RegionName": "us-west-1"
        },
        {
            "State": "available",
            "ZoneName": "us-west-1b",
            "Messages": [],
            "RegionName": "us-west-1"
        },
        {
            "State": "available",
            "ZoneName": "us-west-1c",
            "Messages": [],
            "RegionName": "us-west-1"
        }
    ]
}


cplee commented Oct 20, 2017

SOLUTION:

  • allow passing the list of AZs to vpc.yml template from mu.yml
  • allow specifying same AZ multiple times, but require 3 AZs
  • default to current behavior of Fn::GetAZs

Additionally, allow configuring more than 1 NAT for the VPC
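A hypothetical mu.yml sketch of that solution (the vpc section and key names are illustrative, not the final schema):

```yaml
environments:
  - name: acceptance
    vpc:                        # hypothetical section
      availabilityZones:        # explicit AZ list passed to vpc.yml;
      - us-west-1a              # the same AZ may repeat, but three
      - us-west-1a              # entries are required. Omitting the
      - us-west-1c              # key keeps the Fn::GetAZs default.
      natGateways: 2            # more than 1 NAT for the VPC
```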


timbaileyjones commented Nov 24, 2017

I discovered a problem when deploying to regions with fewer than three AZs (ca-central-1, for example, has only 2 AZs; UPDATE: confirmed in eu-west-2, which also has just 2 AZs).

AwaitFinalStatus ERROR  InstanceSubnetAZ3 (AWS::EC2::Subnet) CREATE_FAILED Template error: Fn::Select cannot select nonexistent value at index 2

I'm posting it here since it looks like @cplee's solution might solve this case also.

The Fn::Select referenced is in templates/assets/vpc.yml. Fn::GetAZs only returns elements 0 and 1 in these regions, making index 2 out of bounds.

  328	      AvailabilityZone:
  329	        Fn::Select:
  330	        - 2
  331	        - Fn::GetAZs: ""

Regions with fewer than three AZs are as follows:
ap-northeast-2, ap-south-1, ap-southeast-1, ca-central-1, eu-west-2

I even ran into this assumption of at least three AZs when running mu env term acceptance:

AwaitFinalStatus ▶ ERROR    InstanceSubnetAZ3 (AWS::EC2::Subnet) CREATE_FAILED Template error: Fn::Select  cannot select nonexistent value at index 2
AwaitFinalStatus ▶ ERROR    ElbSubnetAZ3 (AWS::EC2::Subnet) CREATE_FAILED Template error: Fn::Select  cannot select nonexistent value at index 2
AwaitFinalStatus ▶ ERROR    VPCInternetGateway (AWS::EC2::VPCGatewayAttachment) CREATE_FAILED Resource creation cancelled
AwaitFinalStatus ▶ ERROR    InstanceSubnetAZ2 (AWS::EC2::Subnet) CREATE_FAILED Resource creation cancelled
AwaitFinalStatus ▶ ERROR    ElbNetworkAcl (AWS::EC2::NetworkAcl) CREATE_FAILED Resource creation cancelled
AwaitFinalStatus ▶ ERROR    BastionSG (AWS::EC2::SecurityGroup) CREATE_FAILED Resource creation cancelled
AwaitFinalStatus ▶ ERROR    NatNetworkAcl (AWS::EC2::NetworkAcl) CREATE_FAILED Resource creation cancelled
AwaitFinalStatus ▶ ERROR    ElbRouteTable (AWS::EC2::RouteTable) CREATE_FAILED Resource creation cancelled
AwaitFinalStatus ▶ ERROR    ElbSubnetAZ1 (AWS::EC2::Subnet) CREATE_FAILED Resource creation cancelled
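A generic CloudFormation guard for two-AZ regions looks like the sketch below (a common pattern, not necessarily the fix mu shipped): gate the third subnet behind a condition so the out-of-bounds Fn::Select index 2 is never evaluated.

```yaml
Parameters:
  AZCount:
    Type: Number
    Default: 3
    AllowedValues: [2, 3]
Conditions:
  HasThirdAZ: !Equals [!Ref AZCount, "3"]
Resources:
  InstanceSubnetAZ3:
    Type: AWS::EC2::Subnet
    Condition: HasThirdAZ       # resource is skipped in 2-AZ regions
    Properties:
      VpcId: !Ref VPC           # illustrative
      CidrBlock: 10.0.2.0/24    # illustrative
      AvailabilityZone: !Select [2, !GetAZs ""]
```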

@cplee cplee added this to To do in 1.4.1 Dec 7, 2017
@cplee cplee added the bug label Dec 7, 2017
@cplee cplee moved this from To do to In progress in 1.4.1 Jan 24, 2018
@cplee cplee moved this from In progress to To do in 1.4.1 Jan 24, 2018
@cplee cplee added this to To do in 1.5.1 via automation Jan 24, 2018
@cplee cplee removed this from To do in 1.4.1 Jan 24, 2018

hobbs commented Apr 23, 2018

Is there a workaround for this?

@cplee cplee moved this from To do to Under Review in 1.5.1 Jul 31, 2018
@cplee cplee closed this as completed in 67eab52 Aug 1, 2018
1.5.1 automation moved this from Under Review to Done Aug 1, 2018