
Parameterize librosa compatibility test #1350

Merged
merged 3 commits into from Mar 4, 2021

Conversation


@jcaw jcaw commented Mar 4, 2021

Closes #1346

I've parameterized the test_create_fb test, as mentioned in #1346:

We'd like to use parameterized for the functional librosa compatibility test here. This will make it easier to know which option is failing.

cc comment

Parameterize `test_create_fb` so each set of values is tested
independently.

jcaw commented Mar 4, 2021

I don't think the CI checks librosa < 0.7.2, but I installed librosa==0.7.1 here locally[1] and it works fine. 19 tests are generated rather than 25, and they all pass.

[1] With this fix for numba, so numba==0.48 rather than the default numba==0.52


@vincentqb vincentqb left a comment


Thanks for looking into this!

Comment on lines 49 to 73
@parameterized.expand(
[
param(),
param(n_mels=128, sample_rate=44100),
param(n_mels=128, fmin=2000.0, fmax=5000.0),
param(n_mels=56, fmin=100.0, fmax=9000.0),
param(n_mels=56, fmin=800.0, fmax=900.0),
param(n_mels=56, fmin=1900.0, fmax=900.0),
param(n_mels=10, fmin=1900.0, fmax=900.0),
param(mel_scale="slaney"),
param(n_mels=128, sample_rate=44100, mel_scale="slaney"),
param(n_mels=128, fmin=2000.0, fmax=5000.0, mel_scale="slaney"),
param(n_mels=56, fmin=100.0, fmax=9000.0, mel_scale="slaney"),
param(n_mels=56, fmin=800.0, fmax=900.0, mel_scale="slaney"),
param(n_mels=56, fmin=1900.0, fmax=900.0, mel_scale="slaney"),
param(n_mels=10, fmin=1900.0, fmax=900.0, mel_scale="slaney"),
] + ([
param(n_mels=128, sample_rate=44100, norm="slaney"),
param(n_mels=128, fmin=2000.0, fmax=5000.0, norm="slaney"),
param(n_mels=56, fmin=100.0, fmax=9000.0, norm="slaney"),
param(n_mels=56, fmin=800.0, fmax=900.0, norm="slaney"),
param(n_mels=56, fmin=1900.0, fmax=900.0, norm="slaney"),
param(n_mels=10, fmin=1900.0, fmax=900.0, norm="slaney"),
] if StrictVersion(librosa.__version__) >= StrictVersion("0.7.2") else [])
)

Can you also simplify this a little following this example? e.g.

[
        param(norm=norm, mel_scale=mel_scale, **p.kwargs)
        for p in [
            param(),
            param(n_mels=128, sample_rate=44100),
            param(n_mels=128, fmin=2000.0, fmax=5000.0),
            ...
            param(n_mels=10, fmin=1900.0, fmax=900.0),
        ]
        for norm in [None, 'slaney']
        for mel_scale in ['htk', 'slaney']
]

This will also add the tests where norm and mel_scale are both slaney :) If they happen to fail, we'll mark them as known to fail too for now :)
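As a stdlib-only sketch of the suggested comprehension (plain dicts standing in for `parameterized.param`, using the seven base cases from the decorator above), each base case is crossed with both norm values and both mel scales:

```python
# Base cases from the decorator, as plain keyword-argument dicts.
base_cases = [
    {},
    {"n_mels": 128, "sample_rate": 44100},
    {"n_mels": 128, "fmin": 2000.0, "fmax": 5000.0},
    {"n_mels": 56, "fmin": 100.0, "fmax": 9000.0},
    {"n_mels": 56, "fmin": 800.0, "fmax": 900.0},
    {"n_mels": 56, "fmin": 1900.0, "fmax": 900.0},
    {"n_mels": 10, "fmin": 1900.0, "fmax": 900.0},
]

# Cross every base case with every norm and mel_scale, mirroring the
# nested comprehension in the review suggestion.
cases = [
    {"norm": norm, "mel_scale": mel_scale, **kwargs}
    for kwargs in base_cases
    for norm in [None, "slaney"]
    for mel_scale in ["htk", "slaney"]
]

print(len(cases))  # 7 base cases x 2 norms x 2 mel scales = 28
```

Each generated dict becomes one independent test case, so a failure report points at the exact option combination.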


@jcaw jcaw Mar 4, 2021


I thought something similar when I dug into that MelSpectrogram test. I'll switch it over.

param(n_mels=56, fmin=800.0, fmax=900.0, norm="slaney"),
param(n_mels=56, fmin=1900.0, fmax=900.0, norm="slaney"),
param(n_mels=10, fmin=1900.0, fmax=900.0, norm="slaney"),
] if StrictVersion(librosa.__version__) >= StrictVersion("0.7.2") else [])


It's simpler to skip directly at the beginning of the function:

if norm == "slaney" and StrictVersion(librosa.__version__) < StrictVersion("0.7.2"):
        self.skipTest('Test is known to fail with older versions of librosa.')

Given your observation that the tests pass on an older version of librosa, this message may not be correct. I don't have a better message to suggest, though, that would be "conservative" enough without doing more investigation :) Maybe "Test is not known to pass with librosa 0.7.1 and older."? :)


@jcaw jcaw Mar 4, 2021


Ah, to be clear I just meant that when these 6 test cases are excluded everything passes. I haven't tried including them with older versions. I'll swap this over too.



Thanks for clarifying :)

Also explicitly skip on older versions of librosa (< 0.7.2) when
`norm="slaney"`.

mthrok commented Mar 4, 2021

I don't think the CI checks librosa < 0.7.2, but I installed librosa==0.7.1 here locally[1] and it works fine. 19 tests are generated rather than 25, and they all pass.

[1] With this fix for numba, so numba==0.48 rather than the default numba==0.52

@jcaw Could you try and see what happens with the skipped tests when librosa==0.7.1? The skip is supposed to be related to facebook's internal system. However, I was doing a similar test because I thought I needed to apply the same guard for the recent changes, but it does not fail. If it does not fail on librosa==0.7.1, we should remove the skip logic.


jcaw commented Mar 4, 2021

@mthrok just tested with librosa==0.7.1 - the tests fail with this error:

        if norm is not None and norm != 1 and norm != np.inf:
>           raise ParameterError('Unsupported norm: {}'.format(repr(norm)))
E           librosa.util.exceptions.ParameterError: Unsupported norm: 'slaney'


@vincentqb vincentqb left a comment


Thanks, LGTM!

@vincentqb

thanks for checking with 0.7.1! it looks like the tests indeed fail for librosa <= 0.7.1 as you found out, or we need more investigation before removing the skip logic, which we'll leave to a potential follow-up. in either case, the goal of the PR is achieved, so I'll go ahead and merge the pull request :) thanks again!

@vincentqb vincentqb merged commit 7de5f98 into pytorch:master Mar 4, 2021

jcaw commented Mar 4, 2021

Thank you!

Successfully merging this pull request may close these issues.

parameterized for functional librosa compatibility test