
feat: add resource information to pod annotations #223

Closed · lorenzo-cavazzi opened this issue Nov 22, 2019 · 3 comments · Fixed by #261

@lorenzo-cavazzi (Member)

It should be easy to add a couple of extra annotations to the pods recording the environment resources/options selected during the start phase. These could be exposed through the /servers endpoint and used by the UI to give extra information (e.g. the number of GPUs available).
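A minimal sketch of what the original proposal could look like, assuming the kubernetes Python client; the `renku.io/*` annotation keys and the function name are hypothetical, not taken from this issue:

```python
from kubernetes import client, config

# Hypothetical annotation keys, for illustration only.
RESOURCE_ANNOTATIONS = [
    "renku.io/cpu-request",
    "renku.io/memory-request",
    "renku.io/gpu-request",
]

def server_resource_annotations(pod_name: str, namespace: str) -> dict:
    """Collect resource annotations from a user's pod so the /servers
    endpoint could return them alongside the other server information."""
    config.load_incluster_config()  # assumes the service runs in-cluster
    v1 = client.CoreV1Api()
    pod = v1.read_namespaced_pod(name=pod_name, namespace=namespace)
    annotations = pod.metadata.annotations or {}
    return {key: annotations.get(key) for key in RESOURCE_ANNOTATIONS}
```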

@rokroskar (Member)

You could get the same information from the k8s API, no? I don't think going through pod annotations for determining resource availability is the way to go. Or am I misunderstanding the intent?

@lorenzo-cavazzi (Member, Author)

This is to easily recover the options chosen by the user at start time (e.g. 2 CPUs/1 GPU) and use them in the UI, not to determine overall resource availability (that is addressed by #222).

Anyway, I guess you are totally right: I don't need to add any extra annotations, since all the information already seems to be available when invoking kubectl describe. Then it's just a matter of adding the missing information to the /servers API 🙂
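To illustrate the conclusion above, a sketch (again using the kubernetes Python client; the function name is made up) of reading the same requests/limits that `kubectl describe pod` prints directly from the pod spec, with no extra annotations involved:

```python
from kubernetes import client, config

def pod_resources(pod_name: str, namespace: str) -> dict:
    """Read resource requests/limits straight from the pod spec -- the
    same data `kubectl describe pod` shows -- for the /servers response."""
    config.load_incluster_config()
    v1 = client.CoreV1Api()
    pod = v1.read_namespaced_pod(name=pod_name, namespace=namespace)
    container = pod.spec.containers[0]  # assuming a single user container
    return {
        "requests": container.resources.requests or {},  # e.g. {"cpu": "2"}
        "limits": container.resources.limits or {},      # e.g. {"nvidia.com/gpu": "1"}
    }
```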

@rokroskar (Member)

Yes, when you retrieve the pod you can (should) get all of those options, AFAIK. I wouldn't add them to annotations unless there is something very Renku-specific and not related to the characteristics of the pod.
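To make the suggested split concrete: pod characteristics come from the spec, while annotations stay reserved for Renku-specific metadata. A sketch, where the annotation keys are illustrative and not confirmed by this thread:

```python
def server_info(pod) -> dict:
    """Assemble a /servers entry from a V1Pod: resources come from the
    spec, Renku-specific metadata (hypothetical keys) from annotations."""
    container = pod.spec.containers[0]
    annotations = pod.metadata.annotations or {}
    return {
        "resources": {
            "requests": container.resources.requests or {},
            "limits": container.resources.limits or {},
        },
        # Renku-specific fields that are not pod characteristics:
        "commit_sha": annotations.get("renku.io/commit-sha"),
        "branch": annotations.get("renku.io/branch"),
    }
```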
