Training Model from Scratch #17
See line #99 in model.py, which restores the pre-trained checkpoint. You can remove that line if you want to train from scratch.
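The idea behind removing that restore line can be sketched in isolation. This is a minimal, hypothetical illustration (not the project's actual TensorFlow code; the function name and NumPy-based "checkpoint" are assumptions for the sketch): restoring a checkpoint overwrites freshly initialized weights, so training from scratch simply means keeping the random initialization.

```python
import os
import numpy as np

def load_or_init_weights(shape, ckpt_path=None, seed=0):
    """Minimal sketch of 'finetune vs. train from scratch'.

    If a checkpoint file exists, restore pre-trained weights from it
    (the role of the restore line in model.py); otherwise fall back
    to a fresh random initialization, i.e. training from scratch.
    """
    if ckpt_path is not None and os.path.exists(ckpt_path):
        return np.load(ckpt_path)  # finetune: start from pre-trained weights
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, 0.02, size=shape)  # scratch: fresh random init
```

In the actual TensorFlow code the equivalent is running the global variable initializer and skipping the saver's restore call, rather than loading an array from disk.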
Thank you for your help. In your dictionary.pkl file, are there any vocabulary semantics around MIDI events and bars? For example, if I add more events to the dictionary based on my training data, do I need to keep the current indices of the existing events in mind, or can I just add them to the end of the dictionary as new keys? Also, do I need to adjust the data preparation in any way if my training pieces are each only around 16 bars? I'm assuming I need to adjust the section that creates the segments based on group size. Apologies for the long-winded comment, but I'm trying to cover all the issues I've run into while training from scratch.
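To make the two questions above concrete, here is a hedged sketch. It assumes dictionary.pkl holds a pair of `event2word` / `word2event` mappings (common in REMI-style projects; check your own pickle's structure), and all names here are illustrative, not the project's actual API. New events can safely be appended at the end with the next free indices: existing indices must stay fixed only if you are reusing a pre-trained checkpoint, while a from-scratch model just needs the mapping to be consistent between data preparation and training. The second function shows the kind of adjustment short pieces need when fixed-size segments are built: a piece shorter than one segment is right-padded rather than discarded.

```python
def extend_dictionary(event2word, word2event, new_events):
    """Append new event types to an event vocabulary.

    Existing indices are untouched (required when finetuning from a
    checkpoint); new events take the next free indices at the end.
    """
    for ev in new_events:
        if ev not in event2word:
            idx = len(event2word)
            event2word[ev] = idx
            word2event[idx] = ev
    return event2word, word2event

def make_segments(words, seg_len, pad_word):
    """Cut one encoded piece into fixed-length training segments.

    Short pieces (e.g. ~16 bars) may yield fewer tokens than seg_len;
    here the short tail is padded so nothing is thrown away.
    """
    segments = []
    for start in range(0, len(words), seg_len):
        seg = words[start:start + seg_len]
        if len(seg) < seg_len:
            seg = seg + [pad_word] * (seg_len - len(seg))
        segments.append(seg)
    return segments
```

Whether padding, dropping, or concatenating short pieces is the right choice depends on how the project's own segment-building loop consumes the data, so treat this only as a starting point.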
So I've revised the code in model.py as above so that it does not load the checkpoint, and I have verified that model.prepare_data() generates an actual ndarray. However, when I run model.finetune I get the error below.
Is there some additional initialization I need to set manually when training from scratch?
Can you train from scratch after removing the restore line? I removed this line, but I can't train; it fails with: "self._traceback = tf_stack.extract_stack_for_node(self._c_op)"
Hello, thank you for making the source code for this project available. Is it possible to train the model from scratch on our own MIDI dataset, without using one of the shared checkpoints? I've tried the finetune.py script you included, but the resulting model is still too biased toward the original training set for what I'm trying to do. Thanks again for sharing your work in this space.