hh: enforce hard antiaffinity on web nodes #17

Closed · cirocosta opened this issue on Mar 4, 2019 · 1 comment
Labels: enhancement (New feature or request)

@cirocosta (Member) commented:

Hey,

With jobs that stream a lot of data in and out, we might run into problems if the web pods end up scheduled on the same machine: together they can consume that single machine's entire network bandwidth, while if they were spread out, each could serve the full bandwidth of its own VM.

For instance, consider the following case:

[screenshot (2019-03-03): network throughput of two web pods running on the same VM]

There we have two web nodes on the same VM consuming the whole 1 Gbit/s of network bandwidth that the instance has (2 × 250 Mbit/s TX + 2 × 250 Mbit/s RX).

If those workloads were split across two instances, we could go even higher.

@cirocosta added the enhancement label on Mar 4, 2019

@cirocosta (Member Author) commented:

It's there!

```yaml
web:
  replicas: 2
  nodeSelector: { cloud.google.com/gke-nodepool: generic-1 }
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            topologyKey: kubernetes.io/hostname
            labelSelector:
              matchLabels:
                app: hush-house-web
                release: hush-house
```
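
Note that this is the soft (preferred) form: the scheduler tries to spread the web pods but will still co-locate them if no other node fits. A sketch of the hard variant the issue title asks for, assuming the chart passes the affinity block through to the pod spec verbatim, would use requiredDuringSchedulingIgnoredDuringExecution instead:

```yaml
web:
  replicas: 2
  affinity:
    podAntiAffinity:
      # Hard rule: never schedule two web pods onto the same node.
      requiredDuringSchedulingIgnoredDuringExecution:
        - topologyKey: kubernetes.io/hostname
          labelSelector:
            matchLabels:
              app: hush-house-web
              release: hush-house
```

The trade-off with the hard rule is that when there are fewer eligible nodes than replicas, the surplus pods stay Pending instead of doubling up on a node.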

Funny thing: the bottleneck there was actually the receiver's disk, which slowed down the whole streaming of the contents.
