
Incorrect SDR (inconsistent with mir_eval package) while running on GPU #5

Closed
vskadandale opened this issue Jun 6, 2021 · 2 comments

vskadandale commented Jun 6, 2021

Hi,

I noticed that the bss_eval_sources function from the torch_mir_eval package outputs an incorrect SDR for certain inputs, but ONLY when running on GPU: it doesn't match the mir_eval package's SDR output for the same inputs. When I run it on CPU instead, the torch_mir_eval SDR output matches mir_eval's. The torch_mir_eval GPU output also varies depending on which GPU is used.

Steps to reproduce the issue:

  1. Download the attached zip file and extract its .npy files.
    inputs.zip

  2. Run the following code snippets.

import numpy as np
import torch
import mir_eval
from torch_mir_eval import bss_eval_sources

# Run torch_mir_eval's bss_eval_sources on GPU
src = torch.from_numpy(np.load('gt.npy', allow_pickle=True)).cuda(0)
est = torch.from_numpy(np.load('est.npy', allow_pickle=True)).cuda(0)
sdr, sir, sar, perm = bss_eval_sources(src, est, compute_permutation=True)
print('SDR obtained using torch_mir_eval: ' + str(sdr))

Output: SDR obtained using torch_mir_eval: tensor([inf, 0.])
NOTE: You might get a different output depending on your GPU.

# Run the reference mir_eval implementation on the same inputs (CPU, NumPy)
src_npy = np.load('gt.npy', allow_pickle=True)
est_npy = np.load('est.npy', allow_pickle=True)
sdr, sir, sar, perm = mir_eval.separation.bss_eval_sources(src_npy, est_npy, compute_permutation=True)
print('SDR obtained using mir_eval: ' + str(sdr))

Output: SDR obtained using mir_eval: [-1.89931866 3.0518311 ]
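Since the divergence shows up as element-wise disagreement (including a spurious `inf`), a small helper (a hypothetical sketch, not part of either package) can flag which sources differ beyond a tolerance; here it is applied to the values reported above:

```python
import numpy as np

def compare_sdr(sdr_a, sdr_b, atol=1e-3):
    """Return indices where two SDR arrays disagree or are non-finite."""
    a = np.asarray(sdr_a, dtype=np.float64)
    b = np.asarray(sdr_b, dtype=np.float64)
    bad = ~np.isfinite(a) | ~np.isfinite(b) | ~np.isclose(a, b, atol=atol)
    return np.flatnonzero(bad)

# Values from the report: torch_mir_eval on GPU vs mir_eval.
gpu_sdr = [np.inf, 0.0]
ref_sdr = [-1.89931866, 3.0518311]
print(compare_sdr(gpu_sdr, ref_sdr))  # -> [0 1], both sources disagree
```

With matching inputs the helper returns an empty index array, so it can double as a quick regression check across devices.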

Thanks!

@vskadandale vskadandale changed the title Infinite SDR (inconsistent with mir_eval package) while running on GPU Incorrect SDR (inconsistent with mir_eval package) while running on GPU Jun 6, 2021
@JuanFMontesinos JuanFMontesinos self-assigned this Jun 16, 2021
JuanFMontesinos (Owner) commented:
Hi,
Indeed, there seem to be differences between the GPU and CPU versions of the library. The CPU version matches the original mir_eval implementation.

As both the CPU and GPU versions run the same underlying code, this appears to be a PyTorch numerical issue that cannot be fixed on this library's side.
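For context on why identical code can diverge across devices: bss_eval_sources projects the estimate onto the reference by solving a least-squares system, and SDR is a log-ratio of signal to residual energy, so small differences in the solver's output can be amplified. The toy example below (pure NumPy, an analogy rather than torch_mir_eval's actual code) shows how precision alone changes the residual of an ill-conditioned least-squares solve:

```python
import numpy as np

# Solve the same consistent least-squares problem in float32 and
# float64; the float32 residual is limited by its machine epsilon,
# so the two "noise" energies (the SDR denominator) differ widely.
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0, 1, 50), 12)   # ill-conditioned design matrix
x_true = rng.standard_normal(12)
b = A @ x_true                             # exact solution exists

x32, *_ = np.linalg.lstsq(A.astype(np.float32), b.astype(np.float32), rcond=None)
x64, *_ = np.linalg.lstsq(A, b, rcond=None)

res32 = np.linalg.norm(b - A @ x32)
res64 = np.linalg.norm(b - A @ x64)
print(res32, res64)  # float32 residual is orders of magnitude larger
```

Different CUDA and CPU linear-algebra backends make similarly small but non-identical rounding choices, which is consistent with the device-dependent SDR values observed here.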

JuanFMontesinos (Owner) commented:
From version 0.4 onwards (PyTorch released a new linalg package), the GPU results are correct for this example, but the CPU results are now wrong: the opposite of what we observed with previous versions of PyTorch.

SDR obtained using torch_mir_eval cpu: tensor([-1.8993,  0.8896])
SDR obtained using torch_mir_eval gpu: tensor([-1.8993,  3.0512])
SDR obtained using mir_eval: [-1.89931866  3.0518311 ]
