Multi-GPU/Session Runtimes! #92
Comments
@lorazabora Thank you for your feature idea! Could you please rewrite your idea following the ISSUE_TEMPLATE and make the title more explicit? That helps not only us but also other community participants understand your proposal more concretely.
Done, let me know if it looks better now. Thanks!
@lorazabora Thank you for your effort in adjusting the issue. I understand your request is not only about selecting from multiple GPUs but also about choosing a trade-off between GPU spec and the session time limit. If so, we would prefer that you adjust the title accordingly.
May I kindly ask whether the team is working on adding new features and that is why GPU runtimes are unavailable? What exactly is the cause?
Hi, and thank you for trying our SageMaker Studio Lab. As I have said on other threads, we are experiencing quite a bit of demand for GPU. We appreciate your patience while we work on it.
I understand, thanks. Can you give us an estimate of how long it should take?
@lorazabora We cannot guarantee a concrete schedule for a solution right now. To manage GPU capacity more sustainably, we are listening to and gathering feedback from the community. We are still in preview, so we appreciate your patience and your responsible usage.
Whenever it gets fixed, will there be an announcement here on GitHub? Just let me know personally if you don't mind. Thanks again for all the team's efforts.
@lorazabora We will post an update on GitHub issue #88 when the GPU instance problem is solved, because that was the first issue reporting the problem. If you are waiting for a fix, please add a 👍 to that issue. I have also created issue #98, inspired by your request. If you need this feature, please 👍 it as well, since reaction counts serve as a metric of community demand and affect our priorities. I wrote that feature proposal from a student's perspective; if you have another perspective or use case, please share it in the comments on #98. Understanding the variety of use cases is important to us. To keep the metrics accurate, please close this issue and add your 👍 to the related issues if you do not mind.
It has been quite a long time since this issue started. Why can't we get at least an estimated timeline?
Is your feature request related to a problem? Please describe. No, my feature request is not related to a problem.
Describe the solution you'd like. Since AWS offers a variety of different GPUs, why not add them to Studio Lab in exchange for shorter runtimes, since it's free?
For example, for high loads you could choose a Tesla V100 notebook that runs for 2 hours, while the default GPU remains at 4 hours.
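To make the proposed trade-off concrete, here is a minimal sketch of how a runtime-selection rule could look. This is purely illustrative: Studio Lab has no such API, and the instance names and hour values below are hypothetical examples, not actual tiers.

```python
# Hypothetical sketch of the proposed trade-off: faster GPUs get
# shorter session limits. Names and values are illustrative only.
SESSION_LIMITS_HOURS = {
    "default-gpu": 4,  # current behavior: standard GPU, 4-hour session
    "tesla-v100": 2,   # proposed: faster GPU, shorter 2-hour session
}

def session_limit(gpu_type: str) -> int:
    """Return the session time limit (in hours) for a chosen GPU runtime."""
    try:
        return SESSION_LIMITS_HOURS[gpu_type]
    except KeyError:
        raise ValueError(f"Unknown GPU runtime: {gpu_type!r}")
```

Under a rule like this, total GPU capacity stays roughly constant while users choose between more compute per hour and more hours per session.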
That’s my feedback, thanks