
Better support of resources settings #131

Closed
sleshchenko opened this issue Sep 17, 2020 · 14 comments
@sleshchenko
Member

sleshchenko commented Sep 17, 2020

Is your feature request related to a problem? Please describe.
Currently, the container component only has memoryLimit:

schemaVersion: 2.0.0
metadata:
  name: spring-boot-http-booster
components:
  - name: maven-tooling
    container:
      image: registry.redhat.io/codeready-workspaces/stacks-java-rhel8:2.1
      memoryLimit: 768Mi

But in Che's experience, a memory limit alone is not enough. It also makes sense to have memoryRequest, cpuLimit, and cpuRequest.
I suppose the same issue applies to the chePlugin component as well.

Describe the solution you'd like
Support request and limit for cpu and memory (selected proposal):

schemaVersion: 2.0.0
metadata:
  name: spring-boot-http-booster
components:
  - name: maven-tooling
    container:
      image: registry.redhat.io/codeready-workspaces/stacks-java-rhel8:2.1
      memoryLimit: 768Mi
      memoryRequest: 256Mi
      cpuLimit: 0.4
      cpuRequest: 0.2

Describe alternatives you've considered
It may make sense to use the k8s-style format:

schemaVersion: 2.0.0
metadata:
  name: spring-boot-http-booster
components:
  - name: maven-tooling
    container:
      image: registry.redhat.io/codeready-workspaces/stacks-java-rhel8:2.1
      resources:
        request:
          memory: 256Mi
          cpu: 0.2
        limit:
          memory: 768Mi
          cpu: 0.4


@johnmcollier
Member

I think the k8s style is the cleaner of the two options, but would it be considered a breaking change, since we'd be moving memoryLimit under limit.memory?

@elsony
Contributor

elsony commented Sep 17, 2020

I think the k8s style is the cleaner of the two options, but would it be considered a breaking change, since we'd be moving memoryLimit under limit.memory?

Given that odo v2 is going to GA with the current setting, we need to avoid introducing breaking changes at this point. I'd advise not to change that unless it is really necessary.

@johnmcollier
Member

johnmcollier commented Sep 17, 2020

Yeah, that's what I'm thinking too. Even though I like the format of option #2, option #1 allows us to introduce this to the spec without breaking anything for existing consumers of devfile 2.0. We're just adding three new fields (memoryRequest, cpuLimit, and cpuRequest).

@elsony
Contributor

elsony commented Sep 22, 2020

Given that odo v2 is already using memoryLimit from option #1, it has been agreed to use option #1: it avoids breaking the schema and is easier to read.

@GeekArthur
Contributor

I am wondering whether this is the final design, or a temporary design only to avoid breaking changes for the odo v2 GA. From a scalability perspective, the k8s style is better, as it groups all related functionality properly (e.g. request, limit). In the future, if we add more fields to the resources settings, the schema may become redundant (e.g. many separate request and limit fields) if we don't use the k8s style.

@elsony
Contributor

elsony commented Oct 1, 2020

We only plan to expose the most common settings here, and we have no intention of duplicating everything that users can configure on k8s. Therefore, we are not expecting to add a lot of new fields in the future. If we find ourselves in a position where we want to support many other k8s settings, we should probably consider a different approach at that time, e.g. including a link to a k8s yaml.

@gorkem

gorkem commented Nov 27, 2020

I do not think devfile needs cpu limits. If no cpu limits are set, the limits are determined by the LimitRange set by the administrator of the cluster; if no LimitRanges are set either, the container is limited by the cpu available on the node the pod is scheduled to (or by quota limits). When we set limits in devfiles, we are assuming that those values do not conflict with the cluster configuration, which assumes that the devfile author knows the configuration of the cluster, which in turn makes devfiles cluster specific. For instance, if we are running in a namespace with a LimitRange object that sets the maximum container cpu to 800m, setting the limit in the devfile to 1000m will cause the pod to fail.
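
The LimitRange described above could look like this (a minimal sketch; the name is illustrative, not from the thread):

```yaml
# Hypothetical namespace LimitRange capping container cpu at 800m.
# A devfile that sets cpuLimit: 1000m would be rejected in this
# namespace, even though the devfile itself is valid.
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-cap          # illustrative name
spec:
  limits:
    - type: Container
      max:
        cpu: 800m
```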

Requests are a different story, because they indicate the absolute minimum the devfile needs to function. If limits, quotas, or node sizes cannot satisfy them, they should cause a failure.

@NissesSenap

In short, let's agree to disagree, @gorkem.
I want to set my default LimitRange as low as possible, I want my cluster usage to be as slim as possible, and I want my developers to learn to always define request and limit in their YAML.

apiVersion: v1
kind: LimitRange
metadata:
  name: core-resource-limits
spec:
  limits:
    - type: Container
      default:
        cpu: 200m
        memory: 200Mi
      defaultRequest:
        cpu: 100m
        memory: 100Mi

Forcing me as a sysadmin to maintain multiple LimitRanges for different teams depending on whether or not they use odo is a pain.
Because if I don't, I know the cost of my cluster will increase, since it will automatically get bigger.

And even where developers are using odo, they probably won't use it every time. Most likely a developer will run odo for the application they are currently developing, and have another microservice in the same namespace that they are developing against.

If I instead have something like the following (I currently work a lot with Java developers, hence the high numbers):

apiVersion: v1
kind: LimitRange
metadata:
  name: core-resource-limits
spec:
  limits:
    - type: Container
      default:
        cpu: 800m
        memory: 1Gi
      defaultRequest:
        cpu: 400m
        memory: 500Mi

That would make the random microservice that my odo developer is developing against increase its cpu/memory usage by more than 100%, and that isn't okay.
If I have this kind of config for 100 or 1000 developers, it makes a big difference in the size of my developer clusters.

If I go and bring those numbers to my bosses and tell them that I need 20-30 more worker nodes just because I can't set reasonable defaults due to odo, they will most likely ask me to get rid of odo...

@gorkem

gorkem commented Nov 29, 2020

We are in agreement... I think. I do think odo or Che needs to provide a way to set values for cpu limits. However, this issue is specifically about putting them in the devfile. A devfile is related to the application, and it should not contain cluster-related items. For instance, setting the limit to 2000m may be OK for one cluster (or one user on the cluster) but may not work for the next cluster, so we lose the mobility of the application between clusters. That is why I suggested, on the issue reported to the odo repo, to use odo's config mechanism.

Most developers do not need to know about limits and quotas, though. I argue that odo and Che should do more: calculate a sensible cpu limit (for instance 75% of what the user is allowed) and use that by default.

@NissesSenap

Personally I don't care whether it's in the devfile or in some other file, as long as developers can change the value in an easy way.
But since the devfile contains memoryLimit in the 2.0 release, it makes sense to add cpuLimit to it as well, since I'm guessing you want the devfile to stay backwards compatible and consistent.

I don't think automatic calculation and configuration is a good idea, due to what you write in redhat-developer/odo#4266 (comment): we don't know what access the developers have, or whether limits are configured in their namespaces.
I guess you could print a suggestion on how to configure it to stdout, to make it as user friendly as possible.

@kadel
Member

kadel commented Nov 30, 2020

@gorkem

However this issue is specifically for putting them in devfile. A devfile is related to the application and it should not have cluster related items

I don't view CPU or memory limits or requests as cluster-specific things. For me, both are application-specific. If I'm writing an application, I should know the minimal requirements for the application to start, as well as the maximum resource consumption the application is allowed (when the application reaches it, I know something went wrong).

LimitRange complicates all this. That is a cluster-specific thing. If my application specifies a limit or request that is outside of the namespace's LimitRange, it is completely ok that it gets rejected, because the cluster might not be able to handle my application.

@gorkem

gorkem commented Nov 30, 2020

@kadel That is an interesting perspective on those values; I can buy into it.
How about we take those values as guidance for the tools, and odo and Che can adjust them according to the LimitRanges so that the deployment does not fail?

@l0rd l0rd added this to the 2.1 milestone Dec 2, 2020
@kadel
Member

kadel commented Dec 4, 2020

How about we take those values as guidance for the tools, and odo and Che can adjust them according to the LimitRanges so that the deployment does not fail?

Adjusting them only if the memory or cpu limits or requests are not set in the devfile?
That is a good idea, and it should work.

But if the devfile defines memory or CPU requests, we need to follow it.
For example, I know that my application needs at least 6GB of memory, so I set memoryRequest in the devfile to 6GB.
If I try to deploy it to a namespace with a LimitRange whose maximum memory is 4GB, it needs to fail.
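
This failure scenario can be sketched as two fragments (the names and image are illustrative, not from the thread):

```yaml
# Devfile fragment: the application declares it needs at least 6Gi.
schemaVersion: 2.0.0
metadata:
  name: big-memory-app            # illustrative name
components:
  - name: runtime
    container:
      image: example/app:latest   # illustrative image
      memoryRequest: 6Gi
```

```yaml
# Namespace LimitRange (a separate cluster resource) capping container
# memory at 4Gi; deploying the devfile above into this namespace
# should fail rather than be silently adjusted down.
apiVersion: v1
kind: LimitRange
metadata:
  name: memory-cap                # illustrative name
spec:
  limits:
    - type: Container
      max:
        memory: 4Gi
```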

@elsony
Contributor

elsony commented Dec 9, 2020

We'll use the first proposed style.

@elsony elsony added the area/api Enhancement or issue related to the api/devfile specification label Dec 14, 2020
@elsony elsony added this to Under consideration in Sprint 195 via automation Jan 5, 2021
@yangcao77 yangcao77 self-assigned this Jan 20, 2021
@yangcao77 yangcao77 moved this from Under consideration to In progress in Sprint 195 Jan 20, 2021
@yangcao77 yangcao77 moved this from In progress to Done in Sprint 195 Jan 21, 2021