
[REVIEW]: Spleeter: a fast and efficient music source separation tool with pre-trained models #2154

Closed · 38 tasks done
whedon opened this issue Mar 9, 2020 · 45 comments
Labels: accepted · published (Papers published in JOSS) · recommend-accept (Papers recommended for acceptance in JOSS) · review

Comments

@whedon

whedon commented Mar 9, 2020

Submitting author: @romi1502 (Romain Hennequin)
Repository: https://github.com/deezer/spleeter
Version: v1.5.3
Editor: @terrytangyuan
Reviewer: @bmcfee, @faroit
Archive: 10.5281/zenodo.3906389

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/259e5efe669945a343bad6eccb89018b"><img src="https://joss.theoj.org/papers/259e5efe669945a343bad6eccb89018b/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/259e5efe669945a343bad6eccb89018b/status.svg)](https://joss.theoj.org/papers/259e5efe669945a343bad6eccb89018b)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@bmcfee & @faroit , please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @terrytangyuan know.

Please try and complete your review in the next two weeks

Review checklist for @bmcfee

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@romi1502) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

Review checklist for @faroit

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@romi1502) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?
@whedon
Author

whedon commented Mar 9, 2020

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @bmcfee, @faroit, it looks like you're currently assigned to review this paper 🎉.

⭐ Important ⭐

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that with GitHub's default behaviour you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' at https://github.com/openjournals/joss-reviews

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications

For a list of things I can do to help you, just type:

@whedon commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@whedon generate pdf

@whedon
Author

whedon commented Mar 9, 2020

Reference check summary:

OK DOIs

- 10.1109/ICASSP.2019.8683555 is OK
- 10.5281/zenodo.3269749 is OK
- 10.1109/TSA.2005.858005 is OK
- 10.5281/zenodo.1117372 is OK
- 10.21105/joss.01667 is OK

MISSING DOIs

- https://doi.org/10.1007/978-3-319-93764-9_28 may be missing for title: The 2018 Signal Separation Evaluation Campaign

INVALID DOIs

- None

@whedon
Author

whedon commented Mar 9, 2020

@arfon
Member

arfon commented Mar 14, 2020

Dear authors and reviewers

We wanted to notify you that in light of the current COVID-19 pandemic, JOSS has decided to suspend submission of new manuscripts and to handle existing manuscripts (such as this one) on a "best efforts basis". We understand that you may need to attend to more pressing issues than completing a review or updating a repository in response to a review. If this is the case, a quick note indicating that you need to put a "pause" on your involvement with a review would be appreciated but is not required.

Thanks in advance for your understanding.

Arfon Smith, Editor in Chief, on behalf of the JOSS editorial team.

@faroit

faroit commented Mar 19, 2020

Sorry for the delay. Things are a bit rough over here in 🇫🇷. I will hopefully be able to provide a review by next week.

@faroit

faroit commented Apr 16, 2020

I am now back on the review. Thanks for your patience

@mmoussallam

Hi @faroit

Did you have time to advance on this one?

Best,
Manuel

@faroit

faroit commented May 20, 2020

Review/Comments

Spleeter is a very valuable addition to the music separation ecosystem. The software is already hugely popular. The majority of its users are end-users without a scientific background, so it is fair to say that spleeter has made our research domain significantly more popular, which is a great achievement for a software package. A paper about it here therefore naturally deserves publication.

I see two reasons for the success of spleeter:
1. The code is thoughtfully designed and sufficiently fast to run inference on ordinary desktop machines.
2. Separation performance with the provided pre-trained models is very good, which lets users separate audio simply and with pleasing results (see the one-line example below).
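(For context, separating a track with one of the provided pre-trained models is roughly a one-liner; the file and folder names below are only illustrative:)

spleeter separate -i some_track.mp3 -p spleeter:2stems -o separated/

This writes vocals and accompaniment stems under separated/some_track/.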

Nonetheless, I want to bring up two issues that I see with respect to the performance as stated in the paper.

1. Reproducibility

Although it may definitely be helpful as a pre-processing step in some domains, I think it is fair to say that Spleeter does not, per se, contribute significantly to advancing source separation research. This is mainly because its good performance comes from being trained on a private dataset that has not been made public, and it does not perform as well when trained on the widely used MUSDB18, at least with the provided configuration files.

This prevents other researchers from reproducing the results. While the authors are very clear about this in their paper, it would be easy to at least report the results obtained on MUSDB18 and to tune a training configuration optimized for this case, as many other source separation researchers do. This would allow other researchers to decide on comparable grounds whether they want to use spleeter or another system for their research.

2. "State-of-the-art" performance

Given that other recent architectures, such as the ones listed here, have surpassed the performance of spleeter, I think it is slightly misleading to keep "state-of-the-art" in the title of this paper.

All in all, I see these two points as a starting point for discussion with the other reviewer (@bmcfee), @terrytangyuan, and the authors. In the meantime, maybe issues #381 and #384 can be addressed.

@bmcfee

bmcfee commented May 23, 2020

Thanks @faroit for laying this out carefully. I agree with point 2.

Point 1 is a bit trickier, and I can see both sides of the issue here. (This is why I've left the "performance" box unchecked for now.)

  • On the one hand, the authors are making an existential claim that their pre-trained model achieves the reported scores on musdb. I haven't verified this exactly, but in working with this package over the past few months, I don't see it as controversial.
  • On the other hand, one could argue that the contribution here is the entire implementation, including training algorithm, and the pre-trained model parameters are just one (important) example produced by their system on proprietary data. As @faroit correctly states, this does not guarantee that retraining on open data will produce the same results (and nobody expects that), but there is no reported benchmark for that condition to compare against anyway.

So to me, the question is: are we evaluating the pre-trained model (the application), or the entire framework which produced it? Put more succinctly: is training within scope for the review or not? If so, then we should have some benchmarks on open data to verify the results (even if they're below what's reported with the private training set). If not, I'm fine to approve it as is. But I think we need some editorial guidance here -- @terrytangyuan ?

@faroit

faroit commented May 23, 2020

@bmcfee Just adding to your comments that point 1 was addressed yesterday as documented here.

That means there are now settings available for reproducible training on publicly available data. Even though the scores are significantly below the state of the art, I still see them as very valuable for other researchers. I therefore think that point 1 would be fully addressed if these scores were stated in any of the following forms:

  • in the paper, or
  • in the spleeter docs/wiki, or
  • directly commented in the json config file

@bmcfee

bmcfee commented May 23, 2020

Oh great, I hadn't seen that yet. I would vote for paper + docs/wiki.

@romi1502

romi1502 commented May 26, 2020

Thanks @faroit and @bmcfee for your comments.

Issues #381 and #384 have been addressed and closed.

Regarding the 2 aspects mentioned by @faroit:

  1. Reproducibility

We made the following modifications:

  • We've modified the config file that makes it possible to train spleeter on the musdb dataset, and it now trains properly.
  • We've added this page to the wiki describing the performance of spleeter, with a paragraph on a model trained on the musdb train set that reports a performance table (we also included the command to reproduce the training of the model). We also added a link to it in the README of the repo.

So it should now be quite easy for anyone to reproduce these results.
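For reference, the reproduction boils down to a single training command along these lines (the config file path here is illustrative; the wiki documents the exact command):

spleeter train -p configs/musdb_config.json -d /path/to/musdb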

However, we think that putting an extra table for a model trained on a different dataset in the paper may cause confusion. As already stated, the added value of spleeter mainly comes from the provided pre-trained models, which perform quite well because they were trained on a private dataset. Since the paper is not targeted at the source separation community but rather at MIR researchers who need a simple and efficient tool to perform separation as a pre-processing step, highlighting models trained on musdb in the JOSS paper (which is supposed to be concise) could be confusing.
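(As an illustration of that pre-processing use case, the Python API can be called directly from an analysis pipeline; the snippet below is only a minimal sketch, with illustrative file names:)

from spleeter.separator import Separator

# load the pre-trained 2-stems model (vocals / accompaniment)
separator = Separator('spleeter:2stems')

# write the separated stems to output/some_track/ before any further MIR processing
separator.separate_to_file('some_track.mp3', 'output/')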

  1. "State-of-the-art" performance

We agree that the use of "state-of-the-art" may be misleading (even if it is good advertising ;) ).
We propose changing the title of the paper to "Spleeter: a fast and efficient music source separation tool with pre-trained models" to keep a similar title without the state-of-the-art claim.

What do you think about these modifications/suggestions?

@faroit

faroit commented May 26, 2020

@romi1502 @mmoussallam Thanks a lot for your edits. I am more than pleased with how both issues were handled - from my side this paper can be accepted for publication 👍

@bmcfee

bmcfee commented May 27, 2020

Agreed, all looks good on my end too. Thanks to the authors for their patience and putting in all the work for this! 👍

@romi1502

Ok,
Thanks @faroit and @bmcfee for your reviews.
@terrytangyuan, I think we're good to move forward.
We will just need to change the title of the paper: I already updated the paper.md in the project repo, but we'll need to update the submission title too. Do you know how we can do it?

danielskatz changed the title from "[REVIEW]: Spleeter: a fast and state-of-the art music source separation tool with pre-trained models" to "[REVIEW]: Spleeter: a fast and efficient music source separation tool with pre-trained models" May 28, 2020
@danielskatz

I've updated the title here.
👋 @arfon - does anything else need to be done to update the title?

@arfon
Member

arfon commented May 28, 2020

👋 @arfon - does anything else need to be done to update the title?

This should be good to go.

@romi1502

Hi all,
everything seems to be checked for the review phase. What are the next steps?

@arfon
Member

arfon commented Jun 11, 2020

@whedon generate pdf

@arfon
Member

arfon commented Jun 11, 2020

@romi1502 - At this point, could you make a new release of this software that includes the changes that have resulted from this review? Then, please make an archive of the software in Zenodo/figshare/another service and update this thread with the DOI of the archive. For the Zenodo/figshare archive, please make sure that:

  • The title of the archive is the same as the JOSS paper title
  • That the authors of the archive are the same as the JOSS paper authors

I can then move forward with accepting the submission.

@romi1502

Thank you @arfon
The latest spleeter version (1.5.3) we've just released includes all changes discussed in the review.
I've just uploaded an archive with this version to Zenodo with the same title and the same authors as the paper.

It has DOI 10.5281/zenodo.3906389.

Let us know if we need to do anything else.

@arfon
Member

arfon commented Jun 24, 2020

@whedon set 10.5281/zenodo.3906389 as archive

@whedon
Author

whedon commented Jun 24, 2020

OK. 10.5281/zenodo.3906389 is the archive.

@arfon
Member

arfon commented Jun 24, 2020

@whedon set v1.5.3 as version

@whedon
Author

whedon commented Jun 24, 2020

OK. v1.5.3 is the version.

@arfon
Member

arfon commented Jun 24, 2020

@whedon accept

@whedon
Author

whedon commented Jun 24, 2020

Attempting dry run of processing paper acceptance...

whedon added the recommend-accept (Papers recommended for acceptance in JOSS) label Jun 24, 2020
@whedon
Author

whedon commented Jun 24, 2020

Reference check summary:

OK DOIs

- 10.1109/ICASSP.2019.8683555 is OK
- 10.5281/zenodo.3269749 is OK
- 10.1109/TSA.2005.858005 is OK
- 10.5281/zenodo.1117372 is OK
- 10.21105/joss.01667 is OK

MISSING DOIs

- https://doi.org/10.1007/978-3-319-93764-9_28 may be missing for title: The 2018 Signal Separation Evaluation Campaign

INVALID DOIs

- None

@whedon
Author

whedon commented Jun 24, 2020

👋 @openjournals/joss-eics, this paper is ready to be accepted and published.

Check final proof 👉 openjournals/joss-papers#1513

If the paper PDF and Crossref deposit XML look good in openjournals/joss-papers#1513, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.

@whedon accept deposit=true

@arfon
Member

arfon commented Jun 24, 2020

@romi1502 - can you check if this ☝️ DOI is correct? If it is, please add it to your BibTeX file.

@romi1502

We were actually citing the arXiv version of the paper, which is not ideal. I replaced it with the published version and added the DOI to the BibTeX file.

@arfon
Member

arfon commented Jun 24, 2020

@whedon check references

@whedon
Author

whedon commented Jun 24, 2020

Reference check summary:

OK DOIs

- 10.1007/978-3-319-93764-9_28 is OK
- 10.1109/ICASSP.2019.8683555 is OK
- 10.5281/zenodo.3269749 is OK
- 10.1109/TSA.2005.858005 is OK
- 10.5281/zenodo.1117372 is OK
- 10.21105/joss.01667 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@arfon
Member

arfon commented Jun 24, 2020

@whedon accept

@whedon
Author

whedon commented Jun 24, 2020

Attempting dry run of processing paper acceptance...

@whedon
Author

whedon commented Jun 24, 2020

Reference check summary:

OK DOIs

- 10.1007/978-3-319-93764-9_28 is OK
- 10.1109/ICASSP.2019.8683555 is OK
- 10.5281/zenodo.3269749 is OK
- 10.1109/TSA.2005.858005 is OK
- 10.5281/zenodo.1117372 is OK
- 10.21105/joss.01667 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@whedon
Author

whedon commented Jun 24, 2020

👋 @openjournals/joss-eics, this paper is ready to be accepted and published.

Check final proof 👉 openjournals/joss-papers#1514

If the paper PDF and Crossref deposit XML look good in openjournals/joss-papers#1514, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.

@whedon accept deposit=true

@arfon
Member

arfon commented Jun 24, 2020

@whedon accept deposit=true

@whedon
Author

whedon commented Jun 24, 2020

Doing it live! Attempting automated processing of paper acceptance...

whedon added the accepted and published (Papers published in JOSS) labels Jun 24, 2020
@whedon
Author

whedon commented Jun 24, 2020

🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦

@whedon
Author

whedon commented Jun 24, 2020

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.02154 joss-papers#1515
  2. Wait a couple of minutes to verify that the paper DOI resolves https://doi.org/10.21105/joss.02154
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@arfon
Member

arfon commented Jun 24, 2020

@bmcfee, @faroit - many thanks for your reviews here and to @terrytangyuan for editing this submission ✨

@romi1502 - your paper is now accepted into JOSS ⚡🚀💥

arfon closed this as completed Jun 24, 2020
@whedon
Author

whedon commented Jun 24, 2020

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.02154/status.svg)](https://doi.org/10.21105/joss.02154)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.02154">
  <img src="https://joss.theoj.org/papers/10.21105/joss.02154/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.02154/status.svg
   :target: https://doi.org/10.21105/joss.02154

This is how it will look in your documentation: (the rendered DOI badge)

We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing one (or both) of the following:

@faroit

faroit commented Jun 24, 2020

@romi1502 @arfon My first experience with JOSS when I submitted a paper was excellent, and I really tried to help so that other submissions like this one could get the same treatment, but unfortunately that was really difficult during COVID.
So thanks again for your patience!
