Multiple NAT Gateways created for single AZ cluster #1411

Closed
jewzaam opened this issue Mar 14, 2019 · 6 comments

jewzaam commented Mar 14, 2019

Version

$ docker run -it --rm quay.io/twiest/installer:20190227 version
/bin/openshift-install v0.13.0

Platform (aws|libvirt|openstack):

AWS

What happened?

Provisioned a cluster with 1 master and 1 worker via Hive in us-west-2. Specified a single zone on each: us-west-2a.

The cluster is provisioned and works, but I see NAT gateways in zones us-west-2{a,b,c,d}.
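To double-check what was actually created I listed the NAT gateways with the AWS CLI. This is a rough sketch of the query; the kubernetes.io/cluster/<cluster-name> tag key is the convention the installer uses for cluster-owned resources, so substitute your own cluster name:

$ aws ec2 describe-nat-gateways \
    --region us-west-2 \
    --filter "Name=tag-key,Values=kubernetes.io/cluster/<cluster-name>" \
    --query 'NatGateways[].[NatGatewayId,SubnetId,State]' \
    --output table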

What you expected to happen?

Single NAT gateway in us-west-2a.

How to reproduce it (as minimally and precisely as possible)?

I have not boiled this down to using the installer directly, but I talked to @dgoodwin and he said it's something the installer should look at.

Steps for hive:

  1. create pull secret
  2. create aws secret
  3. create ssh secret
  4. create ClusterDeployment

Relevant bits of ClusterDeployment:

apiVersion: hive.openshift.io/v1alpha1
kind: ClusterDeployment
spec:
  compute:
  - name: worker
    platform:
      aws:
        rootVolume:
          iops: 100
          size: 32
          type: gp2
        type: m5.xlarge
        zones:
        - us-west-2a
    replicas: 1
  controlPlane:
    name: master
    platform:
      aws:
        rootVolume:
          iops: 100
          size: 32
          type: gp2
        type: m5.xlarge
        zones:
        - us-west-2a
    replicas: 1

Anything else we need to know?

The observed behavior results in more NAT gateways than zones used by the infrastructure. There may be some reliability gain, but since loss of the single zone already means loss of the cluster, I see the additional NAT gateways as an unnecessary expense for customers: almost $100 / month in this scenario.

aditi10 commented Mar 19, 2019

@jewzaam I have observed that when we install OpenShift with the installer directly:

bin/openshift-install --dir=cluster-0 create cluster

it creates 3 NAT gateways in different availability zones. If that is the case, the same issue will most likely also be observed when the cluster is installed via Hive.

jewzaam commented Mar 19, 2019

Will this always be the case? Is there an option I can pass to the installer to control the creation of NAT gateways?

wking commented Mar 28, 2019

Fix in flight with #1481.

wking commented Apr 1, 2019

#1481 has landed and will hopefully go out with the next release. It doesn't change the default installer behavior, but it does allow you to reduce per-zone resource consumption by explicitly specifying zones for your machine pools. For example, see openshift/release#3285 doing that for our CI jobs. #1487 is open about potentially changing the default behavior to reduce per-zone resource consumption as well, but #1481 is already enough for folks who are willing to supply their own install-config.yaml.
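For a single-zone cluster like the one above, the relevant install-config.yaml stanzas would look roughly like this (a sketch based on the ClusterDeployment spec earlier in this issue; exact fields may vary by installer version):

controlPlane:
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
  replicas: 1
compute:
- name: worker
  platform:
    aws:
      zones:
      - us-west-2a
  replicas: 1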

@abhinavdahiya

#1481 was merged and shipped in the latest release. Restricting control plane and compute to specific zones allows you to restrict the number of network resources in AWS.

/close

@openshift-ci-robot

@abhinavdahiya: Closing this issue.

In response to this:

#1481 was merged and shipped in the latest release. Restricting control plane and compute to specific zones allows you to restrict the number of network resources in AWS.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
