[getmusic] chord error during track_generation.py #133
Comments
Hahaha, it works now; the error was because there was no lead in my MIDI file. I changed the vocal into chords (clavichord, which is what I could find in the MuseScore software), then I ran the generation again and a fresh MIDI file was produced. It sounds interesting.
I'm glad to hear that you were able to run our code successfully. However, I'd like to point out a few things.

Firstly, the instrument name for "lead" should be "square wave synthesizer", which can be found in the electronic-music category. Secondly, when generating, you don't need to specify a track as content again if it is already included in the conditions; just 'cp->l' is fine. Lastly, in our code the "chord" track is played by a piano with the program set to 1, so converting the vocal into a clavichord won't help your generation; in fact, the clavichord will be filtered out. The code runs successfully because you already have the piano, and our code automatically infers the chord from the piano. I suggest you change your sax to a legal 'lead' instead (one way to remap the instrument program is sketched at the end of this comment).

Additionally, as we mentioned in Section 3.6 of the README, it's important to consider that a saxophone melody can differ significantly from the lead melodies played by the square wave synthesizer in our training data. Directly assigning the saxophone to play the lead melody may introduce a substantial domain gap and compromise the quality of the generated output.
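The code snippet referred to in this reply is not reproduced in the thread. As a stand-in, here is a minimal sketch, assuming miditoolkit is installed and assuming that GETMusic maps the lead to General MIDI program 80 ("Lead 1 (square)"); the file paths are placeholders:

```python
# Illustrative sketch (not the original snippet): remap any saxophone track
# to GM program 80, "Lead 1 (square)", assumed here to be GETMusic's "lead".
# Requires: pip install miditoolkit
from miditoolkit.midi.parser import MidiFile

LEAD_SQUARE = 80                   # GM program 80 (0-indexed): Lead 1 (square) -- assumption
SAX_PROGRAMS = set(range(64, 68))  # GM programs 64-67: soprano/alto/tenor/baritone sax

midi = MidiFile("example_data/inference/your_song.mid")    # placeholder input path
for inst in midi.instruments:
    if not inst.is_drum and inst.program in SAX_PROGRAMS:
        inst.program = LEAD_SQUARE  # notes are unchanged; only the instrument program changes
midi.dump("example_data/inference/your_song_lead.mid")     # placeholder output path
```

After the remapping, the new file can be placed in the folder passed to --file_path and used as before.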
First, it is true that it is unnecessary to specify a track as a content track if it has already served as a condition track. I am a little confused, though: since this MIDI only has the vocal, how are you evaluating the similarity of the generated 'dgp' part? Do you mean it sounds almost the same as the original accompaniment of this song? The lead should be the same because we do not modify the condition tracks. As for a similar 'dgp' part: if this song lacked any accompaniment, we would certainly have filtered it out during our preprocessing.
If I put the lead in the condition, the generation sounds almost the same as the original accompaniment of this song.

Hahaha, here is my opinion: humans had dance before they had songs. At first, humans celebrated the hunt by dancing around bonfires, jumping and shouting freely; later, beautiful singing developed. Some rhythmic songs can almost be associated with dance movements just by listening to them, or in other words, during a concert the audience will unconsciously dance along. So what I mean is that we could broaden the training ideas and not only consider using songs to train on songs. The true source of songs is dance movement. We could use dance movements as training data, that is, use image sequences with time frames to generate music. This approach might be more likely to produce a good concert.
I think using dance pictures frame by frame would be much easier for training an AI model than other approaches.
Thank you for your engaging and interesting advice! Perhaps we can draw inspiration from it. By the way, you can try this and see whether the quality improves: input the MIDI file and set the condition track to 'c' rather than leaving the condition tracks empty. The code will then infer and condition on the chord progression derived from the lead melody in your input, rather than conditioning directly on the lead melody. Empirically, we find that chord guidance gives the generation a more regular pattern and better melodic quality.
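For reference, combining the command and the interactive prompts quoted elsewhere in this thread, a chord-conditioned run would look roughly like this (the prompt wording is paraphrased from this thread, and 'dgp' is just an example content selection):

```
python track_generation.py --load_path /path-of-checkpoint --file_path example_data/inference
Select condition tracks: c
Select content tracks: dgp
```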
The input MIDI file only has the lead (replaced from the vocal). I mean there is only one track in my MIDI, and I now configure it as the lead.
Yes, even if the MIDI has only a lead track, you can set the condition to 'c', and the chord will be inferred automatically. GETMusic can then generate the tracks you want following the chord progression inferred from the lead melody, rather than conditioning directly on the lead melody. Details can be found here: muzic/getmusic/track_generation.py, line 325 (commit fa414fa).
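For intuition only, one common rule-based way to infer a chord per bar from melody notes is to match the bar's pitch-class histogram against triad templates. The following sketch illustrates that general idea; it is not the code at the referenced line, which may work quite differently:

```python
# Illustrative template-matching chord inference from melody pitches
# (a sketch of the general idea, not the repository's implementation).
from collections import Counter

PITCH_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
TRIADS = {"": (0, 4, 7), "m": (0, 3, 7)}  # major and minor triad templates

def infer_chord(pitches):
    """Guess a triad for one bar, given the MIDI pitch numbers sounding in it."""
    if not pitches:
        return "N.C."  # no chord for an empty bar
    counts = Counter(p % 12 for p in pitches)
    best_chord, best_score = "N.C.", -1
    for root in range(12):
        for quality, template in TRIADS.items():
            score = sum(counts.get((root + step) % 12, 0) for step in template)
            if score > best_score:
                best_chord, best_score = PITCH_NAMES[root] + quality, score
    return best_chord

print(infer_chord([60, 64, 67, 72]))  # a bar containing C-E-G-C -> "C"
```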
Great, it sounds smooth, hahaha.
I really appreciate this project. It has a very positive impact on people's lives.
I downloaded a MIDI file from musescore.com.
Then I ran `python track_generation.py --load_path /path-of-checkpoint --file_path example_data/inference`.
At "Select condition tracks" I entered 'lp', because the MIDI doesn't have a guitar, only a piano.
At "Select content tracks" I entered 'dgp'.
The log reports a chord error.
I guess track_generation.py cannot find a lead in my MIDI file, and that is indeed the case.
I used the MuseScore software to edit the MIDI file and found only a sax and a piano. I changed the sax into a vocal, and the log still reports a chord error. When I click the button on the left to change the instrument, I cannot find any instrument named "lead".
Sorry for my poor musical knowledge. Could you tell me how I should prepare the MIDI file before inference? It seems harder than the Python script.
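One quick way to check which instruments a MIDI file actually contains (and therefore what track_generation.py will see) is to print each track's name and General MIDI program number, for example with miditoolkit; this helper is illustrative and not part of the repository:

```python
# List every track in a MIDI file with its name, GM program number, and drum flag.
# Requires: pip install miditoolkit
from miditoolkit.midi.parser import MidiFile

midi = MidiFile("example_data/inference/your_song.mid")  # placeholder path
for inst in midi.instruments:
    print(f"name={inst.name!r:<20} program={inst.program:3d} is_drum={inst.is_drum}")
```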