Datasets in dataset_cache #63
Comments
Hi, yes, you're right. It seems that I put the wrong ChoraleDataset in tensor_datasets... sorry for that. So you'll have to recreate it if you want to train a new model (this takes some time because of all the transpositions together with the key analyzer of music21). Maybe you can find the correct ChoraleDataset in the Docker image, but I'm not sure. Best,
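Recreating the dataset is slow because each chorale is transposed into every admissible key before the music21 key analyzer runs on each transposition. As a rough illustration of why that multiplies the work, the admissible semitone shifts for a voice can be computed from its range (the function and the ranges below are hypothetical, not taken from the repository):

```python
def admissible_transpositions(used_range, allowed_range):
    """Return the semitone shifts that keep a voice part inside its
    allowed singing range. Hypothetical helper for illustration only;
    not the actual DeepBach preprocessing code."""
    used_lo, used_hi = used_range
    allowed_lo, allowed_hi = allowed_range
    # Most negative and most positive shifts that keep every note in range.
    return range(allowed_lo - used_lo, allowed_hi - used_hi + 1)

# A soprano line spanning C4..G4 (MIDI 60..67), allowed G3..G5 (55..79),
# can be transposed anywhere from 5 semitones down to 12 semitones up,
# so this one chorale yields 18 transposed copies for this voice alone.
shifts = list(admissible_transpositions((60, 67), (55, 79)))
```

Every one of those transposed copies then goes through key analysis, which is what dominates the preprocessing time.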
No worries! I recreated the dataset, which was very straightforward to do with the provided code. :) By the way, there were a bunch of …

Oh, the reason for saving two different datasets in …
Yes, I have exactly the same error with one of the chorales because of the key analyzer. That particular chorale is simply skipped and won't appear in the dataset, so no worries!
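The skip-on-failure behavior described here can be sketched as follows (hypothetical function names, assuming the dataset builder wraps music21's key analysis in a try/except; this is not the repository's actual code):

```python
def build_tensor_dataset(chorales, analyze_key):
    """Sketch of the skip-on-failure behavior: chorales that the key
    analyzer cannot handle are silently dropped from the dataset.
    Hypothetical helper; not DeepBach's actual builder."""
    kept = []
    for chorale in chorales:
        try:
            key = analyze_key(chorale)   # e.g. music21's chorale.analyze('key')
        except Exception:
            continue                     # this chorale is skipped entirely
        kept.append((chorale, key))
    return kept

# With a stand-in analyzer that fails on one input, that chorale is dropped:
def fake_analyze(chorale):
    if chorale == "bad":
        raise ValueError("key analysis failed")
    return "C major"

result = build_tensor_dataset(["a", "bad", "b"], fake_analyze)
```

This is why the recreated dataset can contain one chorale fewer than the corpus without the training pipeline noticing.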
We noticed that only the dataset in `dataset_cache/tensor_datasets/` is required to train the model and generate new chorales. However, the provided dataset in `tensor_datasets/` in the zip file is named `ChoraleDataset([0],bach_chorales,['fermata', 'tick', 'key'],8,4)`, indicating that it only contains the soprano voice. If this dataset is used for training, should it not contain all four voices? Otherwise, if it is used to fix the soprano part at generation time, it seems from our manual observation of the generated chorales that all notes are being sampled and the soprano parts are not real Lutheran melodies.
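The quoted file name looks like the dataset object's repr, which would mean the cache is keyed on the constructor arguments (here `[0]` being the voice indices, hence soprano only). A minimal sketch of that caching pattern, assumed rather than taken from the repository:

```python
from pathlib import Path


class ChoraleDataset:
    """Minimal sketch (not the real DeepBach class) showing how a
    dataset's repr can double as its cache filename, which would
    explain the file name quoted above."""

    def __init__(self, voice_ids, corpus_name, metadatas,
                 sequences_size, subdivision):
        self.voice_ids = voice_ids            # [0] = soprano only
        self.corpus_name = corpus_name
        self.metadatas = metadatas
        self.sequences_size = sequences_size
        self.subdivision = subdivision

    def __repr__(self):
        return (f"ChoraleDataset({self.voice_ids},{self.corpus_name},"
                f"{self.metadatas},{self.sequences_size},{self.subdivision})")


def cache_path(dataset, cache_dir="dataset_cache/tensor_datasets"):
    # The cached tensor dataset would be looked up by the dataset's repr.
    return Path(cache_dir) / repr(dataset)


name = repr(ChoraleDataset([0], "bach_chorales",
                           ["fermata", "tick", "key"], 8, 4))
```

Under this reading, a four-voice dataset would instead be cached under a name beginning `ChoraleDataset([0, 1, 2, 3],...)`.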
Also, what is the difference in purpose between the datasets in the `datasets/` and `tensor_datasets/` folders? Thank you so much!