Training on top of an existing model #129
Comments
Please paste here the full commands you used to:
Which branch/tag are you using? The option to load a model is only supported in the master branch.
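Since the `load` option only exists on master, a quick way to check is to ask git which branch the local checkout is on. This is a hedged sketch, not from the thread: the `./clstm` path is an assumption, so adjust it to wherever you cloned the repository.

```shell
#!/bin/sh
# Check which branch the local clstm checkout is on; the ./clstm path is
# an assumed location for the clone.
if [ -d clstm/.git ]; then
  git -C clstm rev-parse --abbrev-ref HEAD
else
  echo "no clstm checkout found here"
fi
```

If this prints anything other than `master`, switch branches and rebuild before expecting `load`/`start` to work.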
@amitdo The train model
@Christophered Your "load"/"train" steps are the same script? Also, you can enclose multi-line code in triple backticks:

```bash
#!/bin/bash
set -x -a
sort -R manifest.txt > /tmp/manifest2.txt
sed 1,100d /tmp/manifest2.txt > train.txt
sed 100q /tmp/manifest2.txt > test.txt
report_every=1000
save_every=1000
maxtrain=50000
target_height=48
dewarp=center
display_every=1000
test_every=1000
hidden=100
lrate=1e-4
save_name=arabic
load=arabic-8000.clstm
start=8000
clstmocrtrain train.txt test.txt
```
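A note on why the plain assignments in the script above work at all: `set -a` marks every subsequent assignment for export, so a child process like clstmocrtrain can read `report_every`, `lrate`, `load`, `start`, etc. from its environment without explicit `export` lines (`-x` just traces each command). A minimal demonstration of that shell behavior:

```shell
#!/bin/sh
# With -a set, a plain assignment becomes an exported variable,
# visible to child processes.
set -a
lrate=1e-4
# the child sees the variable even though it was never `export`ed
sh -c 'echo "child sees lrate=$lrate"'
# prints: child sees lrate=1e-4
```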
@kba The loading script is similar to the training script except for the last 3 lines.
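For readers following along, a hedged guess at what the tail of that loading variant looks like: the same exported hyperparameters, plus the checkpoint to resume from and the starting iteration (both values taken from this thread). The `echo` stands in for the real invocation, since a master-branch clstmocrtrain build may not be on your PATH.

```shell
#!/bin/sh
# Sketch of the resume variant's final lines; values are from this thread.
set -a
load=arabic-8000.clstm
start=8000
# placeholder for the real call: clstmocrtrain train.txt test.txt
echo "would run: clstmocrtrain train.txt test.txt (load=$load, start=$start)"
```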
How can I train on top of an existing model, or stop and continue training later?
I was using a separate, derived "legacy" clstm version; it doesn't have save/load options.
Hi there,
I am trying to train a new clstm model on more than 1000 lines, and the training process would take days. My plan is to train a couple of hours a day and continue training the next day, and so on. I created an arabic-8000.clstm model for testing and added to the script:

```
load=arabic-8000.clstm
start=8000
```

But the problem is that clstmocrtrain starts from 0 all over again.
Waiting for your reply.