
tf_clean swbd v1-tf: illegal argument batch_size passed from run_ctc_phn.sh #189

Closed

efosler opened this issue Jun 21, 2018 · 2 comments

efosler commented Jun 21, 2018

Trying a run straight out of the box on swbd: data prep went (mostly) fine (I have one change I'll suggest at some point), but I ran into this with run_ctc_phn.sh:

...

            Training AM with the Full Set

=====================================================================
generating train labels...
generating cv labels...
steps/train_ctc_tf.sh --nlayer 4 --nhidden 320 --batch_size 16 --learn_rate 0.005 --half_after 6 --model deepbilstm --window 3 --ninitproj 80 --nproj 60 --nfinalproj 100 --norm false data/train_nodup data/train_dev exp/train_phn_l4_c320_mdeepbilstm_w3_nfalse_p60_ip80_fp80
steps/train_ctc_tf.sh: invalid option --batch_size

Sure enough, the train_ctc_tf.sh script does not define a default for batch_size, nor for learn_rate (maybe it should be lr_rate?), window, or norm. Is there some sort of version skew between the scripts?
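
If I understand the Kaldi-style option parsing in utils/parse_options.sh correctly, an option like --batch_size is only accepted when a batch_size=... default has already been set in the script before parse_options.sh is sourced, which would explain the "invalid option --batch_size" error. A minimal sketch of what I mean (the variable names and defaults here are illustrative, not the actual train_ctc_tf.sh contents):

```bash
#!/bin/bash
# Sketch of the convention assumed by utils/parse_options.sh:
# --foo_bar is only accepted if a variable foo_bar already has a default.
batch_size=16      # without this line, "--batch_size 16" => "invalid option --batch_size"
lr_rate=0.005      # note: lr_rate, not learn_rate
window=3
norm=false

. utils/parse_options.sh || exit 1   # overrides the defaults above from the --flags

echo "batch_size=$batch_size lr_rate=$lr_rate window=$window norm=$norm"
```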


efosler commented Jun 21, 2018

FWIW, it does look like run_ctc_char.sh uses lr_rate rather than learn_rate, but it also declares batch_size.
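
For what it's worth, if the intent is for the phone recipe to use the same option names as the char recipe, I'd guess the failing call in run_ctc_phn.sh should end up looking something like the following (just a guess at the names: --lr_rate is taken from run_ctc_char.sh, and --batch_size would still need a matching default added in train_ctc_tf.sh):

```bash
# Hypothetical rewrite of the run_ctc_phn.sh call, assuming train_ctc_tf.sh
# expects --lr_rate (as run_ctc_char.sh uses) rather than --learn_rate.
steps/train_ctc_tf.sh --nlayer 4 --nhidden 320 --batch_size 16 --lr_rate 0.005 \
  --half_after 6 --model deepbilstm --window 3 --ninitproj 80 --nproj 60 \
  --nfinalproj 100 --norm false \
  data/train_nodup data/train_dev \
  exp/train_phn_l4_c320_mdeepbilstm_w3_nfalse_p60_ip80_fp80
```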


ramonsanabria commented Jun 21, 2018 via email
