Just a little code, as the issue is very easy to recreate:
```python
device_gpu = torch.device("mps")
model = bundle.get_model(with_star=False).to(device_gpu, non_blocking=False)
with torch.inference_mode():
    emission, _ = model(waveform.to(device_gpu))
```
This produces completely wrong emissions; the same code gives very precise results when run on the CPU device (at roughly half the speed, unfortunately).
I'm going to report the same issue on the PyTorch GitHub, as I'm not able to work out where the actual problem is located: loading the model onto CPU or GPU yields the same result (in terms of the model object's data), but the inferences return two substantially different results.
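To make "substantially different" concrete, one way to compare the two runs is to dump both emission tensors with `tensor.tolist()` and measure their largest elementwise gap. This is a minimal sketch; `max_abs_diff` is a hypothetical helper written here for illustration, not part of torch or torchaudio:

```python
def max_abs_diff(a, b):
    """Largest elementwise absolute difference between two
    equally shaped nested lists (e.g. from tensor.tolist())."""
    if isinstance(a, list):
        return max(max_abs_diff(x, y) for x, y in zip(a, b))
    return abs(a - b)

# Tiny made-up emission frames, just to show the usage:
cpu_em = [[0.1, -2.3], [0.0, 1.5]]
mps_em = [[0.1, -2.3], [0.9, 1.5]]
print(max_abs_diff(cpu_em, mps_em))  # 0.9
```

With tensors still on-device, `torch.allclose(emission_cpu, emission_mps.cpu())` or `(emission_cpu - emission_mps.cpu()).abs().max()` does the same job directly; if the gap is far above float tolerance, the divergence is a backend bug rather than normal numeric noise.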
Hey, I have news: the issue is in the `components.py` auxiliary script of the wav2vec2 model, specifically in the `forward` function. I'll link you to a similar issue raised on the PyTorch GitHub, which also led to a workaround.