🐛 Bug
Running the upstreamed benchmarking scripts with the following command results in an unexpected error.
```
python xla/benchmarks/experiment_runner.py \
    --suite-name torchbench \
    --accelerator cuda \
    --xla PJRT --xla None \
    --dynamo openxla --dynamo None \
    --test eval --test train \
    --repeat 30 --iterations-per-run 5 \
    --print-subprocess \
    --no-resume -k nvidia_deeprecommender
```
Environment
Additional context
The issue here is, most likely, that we aren't initializing the model on CPU or CUDA, but with `xm.xla_device()`. `nvidia_deeprecommender` assigns an instance of a different class depending on the device, but there is no case for XLA devices. A better solution might be to change that part of the benchmark code so it handles XLA devices.
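As a rough sketch of the proposed fix (the function and the `"xla"` branch are illustrative assumptions, not the actual `nvidia_deeprecommender` benchmark code), the device dispatch could recognize the device strings produced by `xm.xla_device()` instead of falling through:

```python
def select_backend(device: str) -> str:
    """Map a device string to a backend key.

    Hypothetical helper: the real benchmark branches on the device when
    choosing which benchmark class to instantiate; the point is only that
    an XLA case needs to exist alongside "cpu" and "cuda".
    """
    if device == "cpu":
        return "cpu"
    if device.startswith("cuda"):
        return "cuda"
    if device.startswith("xla"):
        # xm.xla_device() yields devices whose string form starts with
        # "xla" (e.g. "xla:0"); route them to their own path instead of
        # raising an unexpected error.
        return "xla"
    raise ValueError(f"unsupported device: {device}")
```

The same `startswith("xla")` check could guard whichever class the benchmark should use for XLA devices (for example, reusing the CPU path, since the tensors are moved to the XLA device afterwards).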