How to generate REMI data from the models? #9
Hi,
In the paper, I notice that the model used for emotion (4Q/Valence/Arousal) classification in the objective metrics is LSTM-Att + REMI.
But in the repo, the CP Transformer model generates a .mid file and a .npy file for each music clip.
Can I use these outputs to get REMI data? And how do I generate output in the REMI format?
Thanks.

Comments
REMI is the type of symbolic representation we used in this task. You can follow the steps here for data preprocessing.
Just to be clear, extracting the REMI representation happens only during data preprocessing, so we haven't touched the model yet. And yes, the code for the "Corpus to Representation" step is exactly what converts MIDI into the representation.
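For anyone following along, here is a minimal sketch of what such a MIDI-to-REMI conversion looks like. It is not the repo's actual "Corpus to Representation" code: the 16-positions-per-bar grid, the 4/4 time assumption, and the coarse velocity bins are all simplifying assumptions, and real REMI also carries Tempo and Chord events that are omitted here.

```python
from miditoolkit.midi import parser as midi_parser

POSITIONS_PER_BAR = 16  # assumed quantization grid, not the repo's setting

def midi_to_remi_events(midi_path):
    """Flatten one MIDI file into a list of REMI-style event strings (4/4 assumed)."""
    midi = midi_parser.MidiFile(midi_path)
    ticks_per_pos = midi.ticks_per_beat * 4 // POSITIONS_PER_BAR  # 4 beats per bar

    notes = sorted(
        (n for inst in midi.instruments for n in inst.notes),
        key=lambda n: (n.start, n.pitch),
    )

    events, current_bar = [], -1
    for note in notes:
        bar, pos_in_bar = divmod(note.start // ticks_per_pos, POSITIONS_PER_BAR)
        events.extend("Bar" for _ in range(bar - current_bar))  # advance bar marker
        current_bar = max(current_bar, bar)
        duration = max(1, (note.end - note.start) // ticks_per_pos)
        events += [
            f"Position_{pos_in_bar}",
            f"Velocity_{note.velocity // 32}",  # 4 coarse bins, an assumption
            f"Note-On_{note.pitch}",
            f"Duration_{duration}",
        ]
    return events
```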
Thanks for your answer. In the paper, you used LSTM-Att+REMI to evaluate the generation model's outputs. If I want to reproduce the results in the paper, should I run "Corpus to Representation" on the MIDI files generated by the generation model to get the REMI representation, and then feed those REMI files into LSTM-Att+REMI to get the classification accuracy? Thanks.
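If that reading is right, the evaluation loop would look roughly like the sketch below. Everything here is hypothetical glue code, not the repo's API: `load_classifier` and `clf.predict` stand in for however the pretrained LSTM-Att+REMI model is actually loaded and called, and `midi_to_remi_events` is the toy converter sketched above.

```python
from pathlib import Path

def emotion_accuracy(generated_dir, target_labels, load_classifier):
    """Hypothetical loop: generated MIDI -> REMI events -> classification accuracy."""
    clf = load_classifier()                      # placeholder for the real loader
    midi_paths = sorted(Path(generated_dir).glob("*.mid"))
    correct = 0
    for path, target in zip(midi_paths, target_labels):
        events = midi_to_remi_events(str(path))  # "Corpus to Representation" stand-in
        correct += int(clf.predict(events) == target)
    return correct / len(midi_paths)
```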
Reply to point 2: the code I'm referring to is in this repo's workspace/transformer/main_cp.py; I thought it would generate .npy files at the end of model generation.
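For context, the usual pattern in such generation scripts is to dump both the raw token array and the rendered MIDI for each sampled clip. A minimal sketch of that pattern follows; it is not the actual main_cp.py code, and `tokens_to_midi` is a hypothetical decoder standing in for the repo's own token-to-MIDI routine.

```python
import numpy as np

def save_generation(tokens, out_stem, tokens_to_midi):
    """Write both artifacts for one generated clip: raw tokens and rendered MIDI."""
    np.save(f"{out_stem}.npy", np.asarray(tokens))  # token ids, the corpus-like dump
    midi = tokens_to_midi(tokens)   # hypothetical decoder to a miditoolkit MidiFile
    midi.dump(f"{out_stem}.mid")    # MidiFile.dump writes a standard MIDI file
```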
To your earlier question ("Models will generate .npy files, and are they the same as 'corpus' in preprocessing?"): if you go check the function