cnn-tdnn-f for switchboard #50

Open · wants to merge 43 commits into base: cnn_tdnnf
Conversation

GaofengCheng

No description provided.

eginhard and others added 21 commits July 25, 2018 13:59
… (kaldi-asr#2573)

If a single phoneme is aligned to the whole utterance, it is counted both as
`begin` and `end`, but is added to the total only once. This caused
`assert count >= 0` in analyze_phone_length_stats.py to fail. Now only the
`begin` is counted in that case.
… (kaldi-asr#2596)

OpenFst 1.6.7 does not build with GCC 4.8.1, and GCC 4.8.2 has an STL bug that is fatal for Kaldi.
@GaofengCheng (Author)

Hi Dan,
The results did not change. In the first commit I had used the wrong xconfig; this update corrects it.
Gaofeng

@danpovey (Owner) commented Aug 10, 2018 via email

```
conv-relu-batchnorm-layer name=cnn3 $cnn_opts height-in=40 height-out=20 height-subsample-out=2 time-offsets=-1,0,1 height-offsets=-1,0,1 num-filters-out=128
conv-relu-batchnorm-layer name=cnn4 $cnn_opts height-in=20 height-out=20 time-offsets=-1,0,1 height-offsets=-1,0,1 num-filters-out=128
conv-relu-batchnorm-layer name=cnn5 $cnn_opts height-in=20 height-out=20 time-offsets=-1,0,1 height-offsets=-1,0,1 num-filters-out=128
conv-relu-batchnorm-layer name=cnn6 $cnn_opts height-in=20 height-out=20 time-offsets=-1,0,1 height-offsets=-1,0,1 num-filters-out=128
```
@danpovey (Owner)

Sorry, I didn't look at this before. Can you try a version where the height-out of cnn5 and cnn6 is 10, not 20, and their num-filters-out is 256? This will leave the compute time about the same (while increasing the number of parameters), and will allow those layers to see a wider range of frequency. So reducing the height (and increasing the num-filters) actually increases the modeling power.

@GaofengCheng (Author)

OK, will try.

@GaofengCheng (Author)

```
                          tdnn7q_sp   cnn_tdnn1a_sp   cnn_tdnn1a_more_filters_sp
WER on train_dev(tg)      12.08       12.13           11.97
WER on train_dev(fg)      11.15       11.16           11.12
WER on eval2000(tg)       14.1        14.1            13.9
WER on eval2000(fg)       12.8        12.6            12.5
WER on rt03(tg)           17.5        17.3            17.1
WER on rt03(fg)           15.3        14.9            14.9
Final train prob          -0.055      -0.057          -0.056
Final valid prob          -0.072      -0.075          -0.075
Final train prob (xent)   -0.875      -0.877          -0.871
Final valid prob (xent)   -0.9064     -0.9134         -0.9110
Num-parameters            18725244    14597020        15187100
```

@danpovey (Owner)

Great! So that's the setup with more filters and height-out=10 on the last two layers, then?
In that case I think you should just change your 1a to be that configuration, and we could merge that.
