adding resources for samples #15
Conversation
operator/config/manager/manager.yaml (Outdated)

```diff
@@ -59,8 +59,8 @@ spec:
           periodSeconds: 10
         resources:
           requests:
-            cpu: 100m
-            memory: 100Mi
+            cpu: 500m
```
CPU is probably the more constrained resource if someone is trying to run this on Kind. If this can work with 100m as a starting point, perhaps we should set it to that level for request/limit.
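For context, a minimal sketch of what that suggestion would look like in the manager container's resources block; the limits stanza and the memory values are illustrative assumptions, not from this PR:

```yaml
# Hypothetical sketch: pin the CPU request and limit to 100m so the
# manager can schedule on a small local cluster such as Kind.
# Memory values are assumptions for illustration.
resources:
  requests:
    cpu: 100m
    memory: 100Mi
  limits:
    cpu: 100m
    memory: 100Mi
```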
Thanks for the feedback! Do you recommend similar changes to the samples as well?
I would. Generally, for samples and "getting started" content, you want the resource reservations to be as low as possible so people can run them on their local machines. If there are resource recommendations for running this in a deployed kube cluster, those can go in a commented line in the same file or in a separate markdown doc.
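To illustrate that pattern, a sample manifest could keep low defaults and carry the deployed-cluster recommendation as inline comments in the same file. The sketch below is hypothetical; the specific values are assumptions, not taken from this repository:

```yaml
# Hypothetical sketch: low defaults so the sample runs on a laptop;
# the recommendation for a real cluster lives in an inline comment.
resources:
  requests:
    cpu: 100m      # in a deployed cluster, consider raising (e.g. 500m)
    memory: 100Mi  # in a deployed cluster, consider raising (e.g. 500Mi)
  limits:
    cpu: 100m
    memory: 100Mi
```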
Updated, thanks again!
LGTM, thank you for making the updates.