Trying a run straight out of the box on swbd; data prep went (mostly) fine (I have one change I'll suggest at some point). Ran into this with run_ctc_phn.sh.
Sure enough, the train_ctc_tf.sh script does not define a default for batch_size, nor for learn_rate (maybe it's lr_rate?), window, or norm. Is there some sort of version skew between the scripts?
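If the parsing works the way Kaldi-style wrapper scripts usually do, an option such as --batch_size is only accepted when a variable with the same name has been given a default value earlier in the script; a missing default is exactly what produces an "invalid option" error. A minimal, hypothetical sketch of that pattern (not the actual eesen code):

```shell
#!/bin/sh
# Kaldi-style option parsing, sketched: each --flag is legal only if a
# default variable of the same name was declared before parsing.
batch_size=16      # default; makes --batch_size a recognized option
learn_rate=0.005   # default; makes --learn_rate a recognized option

parse_options() {
  while [ $# -gt 0 ]; do
    case "$1" in
      --*)
        name=$(echo "$1" | sed 's/^--//; s/-/_/g')
        # the option is only valid if a default variable exists for it
        eval '[ -n "${'"$name"'+set}" ]' \
          || { echo "invalid option $1"; return 1; }
        eval "$name=\$2"
        shift 2 ;;
      *) break ;;
    esac
  done
}

parse_options --batch_size 32 --learn_rate 0.01
echo "batch_size=$batch_size learn_rate=$learn_rate"
```

Under this scheme, running the wrapper with --window 3 while the script declares no window default would fail in just the way shown in the log below.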
Hi,
Thanks for pointing this out. I think it should be solved now (52d8974).
We had a version disagreement with train_ctc_tf.sh. A few parameters are defined in the .sh script and some in the python script (which we pass through as arguments straight away). We should come up with some sort of agreement. For now it is clean, but it would be good to have them all either on the python side or on the bash side.
Any thoughts/suggestions, @fmetze @efosler?
Also, please remember to change path.sh accordingly (i.e. add your hostname to path.sh with $PYTHONPATH, and source your virtualenv).
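A hypothetical path.sh along those lines; the hostname, paths, and virtualenv location below are placeholders for your own setup, not values from the repo:

```shell
# Per-host environment setup (sketch). Add a case for each machine you
# run on; the default branch reminds you when a host is missing.
case "$(hostname -f 2>/dev/null || hostname)" in
  mymachine.example.edu)
    export PYTHONPATH=/path/to/eesen/tf:$PYTHONPATH
    # activate the virtualenv that has tensorflow installed
    [ -f "$HOME/venvs/eesen/bin/activate" ] && . "$HOME/venvs/eesen/bin/activate"
    ;;
  *)
    echo "path.sh: add a case for $(hostname) with your PYTHONPATH" >&2
    ;;
esac
```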
Please keep letting us know about your experience with the tf_clean branch. I am very happy to keep on solving possible bugs.
Thanks!
2018-06-21 13:26 GMT-04:00 Eric Fosler-Lussier <notifications@github.com>:
...
=====================================================================
generating train labels...
generating cv labels...
steps/train_ctc_tf.sh --nlayer 4 --nhidden 320 --batch_size 16 --learn_rate 0.005 --half_after 6 --model deepbilstm --window 3 --ninitproj 80 --nproj 60 --nfinalproj 100 --norm false data/train_nodup data/train_dev exp/train_phn_l4_c320_mdeepbilstm_w3_nfalse_p60_ip80_fp80
steps/train_ctc_tf.sh: invalid option --batch_size