[REVIEW]: Spleeter: a fast and efficient music source separation tool with pre-trained models #2154
Comments
Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @bmcfee, @faroit it looks like you're currently assigned to review this paper 🎉. ⭐ Important ⭐ If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this repository (https://github.com/openjournals/joss-reviews). As a reviewer, you're probably currently watching this repository, which means that, due to GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿 To fix this, do the following two things:
For a list of things I can do to help you, just type:
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
Dear authors and reviewers, We wanted to notify you that, in light of the current COVID-19 pandemic, JOSS has decided to suspend submission of new manuscripts and to handle existing manuscripts (such as this one) on a "best efforts" basis. We understand that you may need to attend to more pressing issues than completing a review or updating a repository in response to a review. If this is the case, a quick note indicating that you need to put a "pause" on your involvement with a review would be appreciated, but is not required. Thanks in advance for your understanding. Arfon Smith, Editor-in-Chief, on behalf of the JOSS editorial team.
Sorry for the delay. Things are a bit rough over here in 🇫🇷. I will hopefully be able to provide a review by next week.
I am now back on the review. Thanks for your patience.
Hi @faroit, did you have time to make progress on this one? Best,
Review/Comments

Spleeter is a very valuable addition to the music separation ecosystem. The software is already hugely popular. The majority of its users are end-users without a scientific background, so it can be said that Spleeter has made our research domain significantly more popular, which is a great achievement for a software package. A paper about it here hence naturally deserves publication. I see two reasons for the success of Spleeter. Nonetheless, I want to bring up two issues that I see with respect to the performance as stated in the paper.

1. Reproducibility

Although it may definitely be helpful as a pre-processing step in some domains, I think it is fair to say that Spleeter does not contribute significantly to the advance of source separation research per se. This is mainly because its good performance comes from the fact that it was trained on a private dataset that was not made public, and that it actually does not perform as well when trained on the widely used MUSDB18, at least with the provided configuration files. This prevents other researchers from reproducing the results. While the authors are very clear about this fact in their paper, it would be very easy to at least report the results obtained on MUSDB18 and tune a training configuration optimized for this case, as many other source separation researchers do. This would allow other researchers to decide on comparable grounds whether they want to use Spleeter or another system for research.

2. "State-of-the-art" performance

Given that other recent architectures, such as the ones listed here, have surpassed the performance of Spleeter, I think it is slightly misleading to keep describing its performance as state-of-the-art.

All in all, I would see these two points as a starting point for discussion with the other reviewer (@bmcfee), @terrytangyuan, and the authors. In the meantime, maybe issues #381 and #384 can be addressed.
Thanks @faroit for laying this out carefully. I agree with point 2. Point 1 is a bit trickier, and I can see both sides of the issue here. (This is why I've left the "performance" box unchecked for now.)
So to me, the question is: are we evaluating the pre-trained model (the application), or the entire framework which produced it? Put more succinctly: is training within scope for the review or not? If so, then we should have some benchmarks on open data to verify the results (even if they're below what's reported with the private training set). If not, I'm fine to approve it as is. But I think we need some editorial guidance here -- @terrytangyuan ? |
@bmcfee Just adding to your comments that point 1 was addressed yesterday, as documented here. That means settings are now available for reproducible training on publicly available data. Even though the scores are significantly below SOTA, I still see them as very valuable for other researchers. I therefore think that point 1 would be fully addressed if these scores were stated in either of the following forms:
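For readers outside the separation community, the scores under discussion are SDR (signal-to-distortion ratio) values computed on MUSDB18. A minimal pure-Python sketch of the basic SDR definition follows; note that actual benchmark numbers come from the museval/BSS Eval toolkit, which uses a more elaborate (framewise, projection-based) computation than this illustration:

```python
# Sketch of the basic SDR definition:
#   SDR = 10 * log10(||s||^2 / ||s - s_hat||^2)
# where s is the reference source and s_hat the estimate.
import math

def sdr(reference, estimate):
    """Signal-to-distortion ratio in dB for two equal-length signals."""
    signal_power = sum(s * s for s in reference)
    error_power = sum((s - e) ** 2 for s, e in zip(reference, estimate))
    if error_power == 0:
        return float("inf")  # perfect reconstruction
    return 10.0 * math.log10(signal_power / error_power)

# A closer estimate scores higher; a perfect one is +inf.
clean = [0.5, -0.3, 0.8, -0.1]
noisy = [0.45, -0.35, 0.75, -0.15]
print(round(sdr(clean, noisy), 2))  # prints 19.96
```

Higher is better; differences of a fraction of a dB are what separate the systems being compared in this thread.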
Oh great, I hadn't seen that yet. I would vote for paper + docs/wiki. |
Thanks @faroit and @bmcfee for your comments. Issues #381 and #384 have been addressed and closed. Regarding the two aspects mentioned by @faroit:
We made the following modifications:
So it should now be quite easy for anyone to reproduce these results. However, we think that putting an extra table in the paper for a model trained on a different dataset may cause confusion. As already stated, the added value of Spleeter mainly comes from the provided pretrained models, which perform quite well (because they were trained on a private dataset): as the paper is not targeted at the source separation community but rather at MIR researchers needing a simple and efficient tool to perform separation as a pre-processing step, highlighting models trained on MUSDB in the JOSS paper (which is supposed to be concise) might be confusing.
We agree that the use of "state-of-the-art" may be misleading (but it is good advertising ;) ). What do you think about these modifications/suggestions?
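For context, the pre-processing use case the authors describe looks roughly like this with Spleeter's command-line interface. This is a sketch based on the project's documented v1.x commands; the audio file name, output directory, config path, and MUSDB location are all placeholders:

```shell
# Separate a track into vocals and accompaniment using the
# pretrained 2-stems model (downloaded on first run)
spleeter separate -i audio_example.mp3 -p spleeter:2stems -o output/

# Retrain from scratch on a local copy of MUSDB18, using a MUSDB
# training configuration (placeholder path for the config the
# authors added during this review)
spleeter train -p configs/musdb_config.json -d /path/to/musdb
```

The pretrained models (`spleeter:2stems`, `spleeter:4stems`, `spleeter:5stems`) are what most end-users rely on; the `train` entry point is what the reproducibility discussion above concerns.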
@romi1502 @mmoussallam Thanks a lot for your edits. I am more than pleased regarding both issues - from my side this paper can be accepted for publication 👍 |
Agreed, all looks good on my end too. Thanks to the authors for their patience and putting in all the work for this! 👍 |
Ok, |
I've updated the title here. |
This should be good to go. |
Hi all, |
@whedon generate pdf |
@romi1502 - At this point, could you make a new release of this software that includes the changes that have resulted from this review? Then, please make an archive of the software on Zenodo/figshare/another service and update this thread with the DOI of the archive. For the Zenodo/figshare archive, please make sure that:
I can then move forward with accepting the submission. |
@whedon set 10.5281/zenodo.3906389 as archive |
OK. 10.5281/zenodo.3906389 is the archive. |
@whedon set v1.5.3 as version |
OK. v1.5.3 is the version. |
@whedon accept |
👋 @openjournals/joss-eics, this paper is ready to be accepted and published. Check final proof 👉 openjournals/joss-papers#1513 If the paper PDF and Crossref deposit XML look good in openjournals/joss-papers#1513, then you can now move forward with accepting the submission by compiling again with the flag deposit=true
@romi1502 - can you check if this ☝️ DOI is correct? If it is, please add it to your BibTeX file. |
We were actually citing the arXiv version of the paper, which is not ideal. I replaced it with the published version and added the DOI to the BibTeX file.
@whedon check references |
@whedon accept |
👋 @openjournals/joss-eics, this paper is ready to be accepted and published. Check final proof 👉 openjournals/joss-papers#1514 If the paper PDF and Crossref deposit XML look good in openjournals/joss-papers#1514, then you can now move forward with accepting the submission by compiling again with the flag deposit=true
@whedon accept deposit=true |
🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦 |
🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨 Here's what you must now do:
Any issues? Notify your editorial technical team... |
@bmcfee, @faroit - many thanks for your reviews here and to @terrytangyuan for editing this submission ✨ @romi1502 - your paper is now accepted into JOSS ⚡🚀💥 |
🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉 If you would like to include a link to your paper from your README use the following code snippets:
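JOSS README badge snippets follow this standard pattern. The DOI below is inferred from this review's issue number (#2154 → 10.21105/joss.02154, per JOSS's numbering convention), so verify it against the published paper page before using:

```markdown
[![DOI](https://joss.theoj.org/papers/10.21105/joss.02154/status.svg)](https://doi.org/10.21105/joss.02154)
```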
This is how it will look in your documentation:

We need your help! The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:
Submitting author: @romi1502 (Romain Hennequin)
Repository: https://github.com/deezer/spleeter
Version: v1.5.3
Editor: @terrytangyuan
Reviewer: @bmcfee, @faroit
Archive: 10.5281/zenodo.3906389
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@bmcfee & @faroit , please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @terrytangyuan know.
✨ Please try and complete your review in the next two weeks ✨
Review checklist for @bmcfee
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
Review checklist for @faroit
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper