This repository has been archived by the owner on Jul 11, 2023. It is now read-only.

Set memory and cpu requests and limit values for all containers #173

Merged
merged 1 commit into cloudfoundry:develop on Sep 14, 2020

Conversation

@paulcwarren (Member) commented Aug 28, 2020

Cf-for-K8s is currently working through a set of scaling and Quality of Service (QoS) stories.

We are targeting 1.0 to be configured out of the box as a "developer" edition aimed at users who want to kick the tires. As part of this, we would like to set memory/CPU limits.

Since a "developer" edition may not be preferred by everyone, we want each component to be configurable to scale both horizontally (replicas) and vertically (mem/cpu). This will also allow users to deliver a Guaranteed QoS when required (although we are recommending that all of our pods and containers use the Burstable QoS) As part of this we would like to ask you to do several things:

  1. consider which of your pods/containers you would like to expose
    scaling properties for, both horizontal (deployment replicas) and vertical (mem/cpu requests/limits).

  2. expose said configuration properties.

  3. set mem and cpu values for all containers, to give the Kubernetes scheduler as much metadata as possible so it can do as good a job as possible (see the sketch after this list). This PR is an initial attempt at setting these values, although you are much more likely to have insight into your component's mem/cpu requirements than our guesses provide.
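As a concrete illustration (the component name, image, and values below are hypothetical, not the ones proposed in this PR), a Burstable container spec sets resource requests below its limits, while setting them equal for every container would give the pod Guaranteed QoS:

```yaml
# Hypothetical pod spec illustrating Burstable QoS:
# requests < limits, so the scheduler gets placement metadata while the
# container can still burst above its request up to the limit.
apiVersion: v1
kind: Pod
metadata:
  name: example-component        # placeholder, not an actual cf-for-k8s component
spec:
  containers:
  - name: example-container
    image: example/image:latest
    resources:
      requests:
        memory: "64Mi"
        cpu: "50m"
      limits:
        memory: "128Mi"
        cpu: "100m"
# Setting requests equal to limits for every container in the pod would
# instead place it in the Guaranteed QoS class.
```

Exposing these request/limit values as configuration (for example, through the component's values file) would then let operators tune them per deployment, which is what item 2 above is asking for.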

If you have any questions or concerns, please let us know! Thanks!

#174462927

Co-Authored-By: Angela Chin <achin@pivotal.io>

@cf-gitbot

We have created an issue in Pivotal Tracker to manage this:

https://www.pivotaltracker.com/story/show/174559728

The labels on this GitHub issue will be updated when the story is started.

@kieron-dev merged commit 3c6231a into cloudfoundry:develop on Sep 14, 2020
@kieron-dev (Contributor)

Thanks!
