
Questions about the paper results #30

Closed
Wangbin1125 opened this issue May 25, 2019 · 3 comments
Labels
question (Further information is requested)

Comments

@Wangbin1125

Hi, I'm trying to train the M6 model on the musdb dataset and have two questions.
1) How much GPU memory is needed to train this model? I have to set the batch size to 1, otherwise the GPU reports an out-of-memory error at the beginning of training.

2) When Training.py finishes, the folder where the evaluation results are saved contains 151 JSON files, one per song, plus a test-test.json file, and there are also four separated source audio files for each song. How do I produce Table 3 (test performance metrics for the multi-instrument model) from the paper?
I think the compute_mean_metrics(json_folder, compute_averages=True, metric="SDR") function computes all the numbers shown in the paper (Mean, SD, Median, MAD), but I don't know how to use it. I only found that the plotting module plot.py calls this function, so how do I use it during evaluation?
I am a deep learning beginner and hope to get your answer.

@f90 added the question label on May 27, 2019

f90 commented May 27, 2019

Hey!
For 1), the batch size was set to 16 in all our experiments and this ran fine with 8 GB of GPU memory, so I am a bit surprised that you are running into such severe memory issues. Do you see the same problem with the singing voice separation models?
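
If you want to double-check how much GPU memory is actually free on your card before training, something like the following quick sketch (assuming nvidia-smi is on your PATH) will print it:

import subprocess

# Print total / used / free GPU memory as reported by the driver.
# This only checks availability; it does not change how much the model allocates.
print(subprocess.run(
    ["nvidia-smi", "--query-gpu=memory.total,memory.used,memory.free", "--format=csv"],
    capture_output=True, text=True).stdout)

If another process is already holding most of the card's memory, that would explain out-of-memory errors even at very small batch sizes.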

For 2), you are right that the compute_mean_metrics function can be used to compute the results. It is meant to be used as a standalone function, so you should be able to run

import Evaluate
Evaluate.compute_mean_metrics(PATH_TO_JSONS)

from a Python console, where PATH_TO_JSONS is simply the path to the folder containing all the JSON files you want to evaluate; that should be 50 files (or 51 including test.json), one for each song. For details on the other parameters, refer to the documentation of the compute_mean_metrics function.
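
For example, something along these lines should give you the aggregate numbers for the table (a rough sketch; check the function's docstring for the exact return format):

import Evaluate

# Folder containing the per-song JSON files written during evaluation (adjust the path)
PATH_TO_JSONS = "/path/to/evaluation/output"

# With compute_averages=True the per-song values for the chosen metric (here SDR)
# are aggregated into the summary statistics reported in the paper
# (Mean, SD, Median, MAD).
stats = Evaluate.compute_mean_metrics(PATH_TO_JSONS, compute_averages=True, metric="SDR")
print(stats)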

draw_violin_sdr can be used with the same JSON folder path to directly plot the distribution of SDR values; it builds on top of compute_mean_metrics.
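
If you also want the violin plot of the SDR distribution, a minimal sketch, assuming draw_violin_sdr lives in the plotting module (plot.py) you mentioned and only needs the JSON folder path:

import plot  # the plotting module mentioned above; adjust the import if the file is capitalised differently

PATH_TO_JSONS = "/path/to/evaluation/output"  # same folder of per-song JSON files as above
plot.draw_violin_sdr(PATH_TO_JSONS)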

@Wangbin1125
Author

@f90 Following your instructions, I have successfully solved my second problem.
I haven't trained the singing voice separation models yet; I'll look into the GPU memory problem again and come back to you if I still have questions.
Thank you very much again for your detailed reply.


f90 commented Jun 3, 2019

No problem! Closing this for now due to inactivity.

@f90 closed this as completed on Jun 3, 2019