This repository has been archived by the owner on Jun 22, 2022. It is now read-only.
Hi,

How do we add a command-line parameter to select the GPU at run time, especially on a multi-GPU machine? I tried adding a similar code block for train_evaluate_predict_pipeline and passing the gpu parameter, but I keep getting an "invalid option" error at runtime. I know it isn't a package issue, but the documentation did not help either.

I am working on multi-GPU machines and I usually handle this along these lines:

CUDA_VISIBLE_DEVICES=1 neptune run -- train_evaluate_predict_pipeline -p fasttext_gru

and for multiple GPUs you just do:

CUDA_VISIBLE_DEVICES=1,2 neptune run -- train_evaluate_predict_pipeline -p fasttext_gru

I haven't tried multi-GPU for this project yet.

Apart from the documentation, we also have quite a vibrant community at https://community.neptune.ml/. Drop your issue there and I am sure someone will help.
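Since the thread doesn't show the original code block, here is a minimal Python sketch of the same idea done programmatically: parse a GPU argument and export CUDA_VISIBLE_DEVICES before any deep-learning framework initializes CUDA. The `--gpu` flag and the `select_gpus` helper are hypothetical illustrations for this sketch, not part of the neptune CLI; the underlying mechanism is the same environment variable used in the shell commands above.

```python
import argparse
import os


def select_gpus(gpu_ids: str) -> None:
    # Restrict which GPUs CUDA exposes to the process.
    # This must happen BEFORE the framework (TensorFlow, PyTorch, ...)
    # initializes CUDA, or it will have no effect.
    os.environ["CUDA_VISIBLE_DEVICES"] = gpu_ids


def parse_args(argv=None):
    # Hypothetical wrapper CLI; '--gpu' is not an actual neptune option.
    parser = argparse.ArgumentParser(
        description="Select GPUs for a training run (illustrative sketch)")
    parser.add_argument(
        "--gpu", default="0",
        help="comma-separated GPU ids, e.g. '1' or '1,2'")
    return parser.parse_args(argv)


# Equivalent to: CUDA_VISIBLE_DEVICES=1,2 <command> on the command line
args = parse_args(["--gpu", "1,2"])
select_gpus(args.gpu)
```

Passing the ids through the environment rather than a framework API keeps the selection framework-agnostic, which is why the `CUDA_VISIBLE_DEVICES=... neptune run ...` form above works without any code changes.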