update (#32)
* [build] Allow configure script to handle package-based OpenBLAS (kaldi-asr#2618)

* [egs] updating local/make_voxceleb1.pl so that it works with newer versions of VoxCeleb1 (kaldi-asr#2684)

* [egs,scripts] Remove unused --nj option from some scripts (kaldi-asr#2679)

* [egs] Fix to tedlium v3 run.sh (rnnlm rescoring) (kaldi-asr#2686)

* [scripts,egs] Tamil OCR with training data from yomdle and testing data from slam (kaldi-asr#2621)

note: this data may not be publicly available at the moment; we'll work on that.

* [egs] mini_librispeech: allow relative pathnames in download_and_untar.sh (kaldi-asr#2689)

* [egs] Updating SITW recipe to account for changes to VoxCeleb1  (kaldi-asr#2690)

* [src] Fix nnet1 proj-lstm bug where gradient clipping not used; thx:@cbtpkzm (kaldi-asr#2696)

* [egs] Update aishell2 recipe to allow online decoding (no pitch for ivector) (kaldi-asr#2698)

* [src] Make cublas and cusparse use per-thread streams. (kaldi-asr#2692)

This will drastically reduce synchronization overhead when we use multiple
CUDA devices in one process, since we no longer synchronize on the legacy
default stream.

More details here: https://docs.nvidia.com/cuda/cuda-runtime-api/stream-sync-behavior.html
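
A minimal sketch of the mechanism (illustrative, not Kaldi's actual code): point a cuBLAS handle at the per-thread default stream, which does not synchronize with the legacy default stream.

// Sketch, assuming CUDA 7+ where cudaStreamPerThread exists; the same
// pattern applies to cusparseSetStream().
#include <cublas_v2.h>
#include <cuda_runtime.h>

cublasStatus_t UsePerThreadStream(cublasHandle_t handle) {
  // cudaStreamPerThread gives each host thread its own default stream;
  // work queued on it does not synchronize with the legacy default
  // stream, so threads driving different devices stop serializing.
  return cublasSetStream(handle, cudaStreamPerThread);
}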

* [src] improve handling of low-rank covariance in ivector-compute-lda (kaldi-asr#2693)

* [egs] Changes to IAM handwriting-recognition recipe, including BPE encoding (kaldi-asr#2658)

* [scripts] Make sure pitch is not included in i-vector feats, in online decoding preparation (kaldi-asr#2699)

* [src] fix help message in post-to-smat (kaldi-asr#2703)

* [scripts] Fix to steps/cleanup/debug_lexicon.sh (kaldi-asr#2704)

* [egs] Cosmetic and file-mode fixes in HKUST recipe (kaldi-asr#2708)

* [scripts] nnet1: remove the log-print of args in 'make_nnet_proto.py', thx:mythilisharan@gmail.com (kaldi-asr#2706)

* [egs] update README in AISHELL-2 (kaldi-asr#2710)

* [src] Make constructor of CuDevice private (kaldi-asr#2711)

* [egs] fix sorting issue in aishell v1 (kaldi-asr#2705)

* [egs] Add soft links for CNN+TDNN scripts (kaldi-asr#2715)

* [build] Add missing packages in extras/check_dependencies.sh (kaldi-asr#2719)

* [egs] madcat arabic: clean scripts, tuning, use 6-gram LM (kaldi-asr#2718)

* [egs] Update WSJ run.sh: comment out outdated things, add run_tdnn.sh. (kaldi-asr#2723)

* [scripts,src] Fix potential issue in scripts; minor fixes. (kaldi-asr#2724)

The use of split() on latin-1-decoded data (latin-1 is sometimes used to handle other ASCII-compatible encoded data like utf-8) is not right, because character 160 (expressed here in decimal) is an NBSP in latin-1 encoding and also falls in the byte range UTF-8 uses for encoding. The same goes for strip(). Thanks @ChunChiehChang for finding the issue.
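
To make the collision concrete (a hypothetical two-byte example, not taken from the PR): the UTF-8 encoding of 'à' (U+00E0) is 0xC3 0xA0, and its second byte is exactly the latin-1 NBSP, so whitespace-aware splitting of the latin-1 view of the data can cut the character in half.

// Standalone C++ illustration of the byte collision described above.
#include <cstdio>

int main() {
  const unsigned char utf8_a_grave[2] = {0xC3, 0xA0};  // UTF-8 for 'à'
  // 0xA0 == 160 is NO-BREAK SPACE in latin-1, so an NBSP-aware
  // split()/strip() will treat this continuation byte as whitespace.
  std::printf("continuation byte = %u (0x%X)\n",
              (unsigned)utf8_a_grave[1], (unsigned)utf8_a_grave[1]);
  return 0;
}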

* [egs] add example script for RNNLM lattice rescoring for WSJ recipe (kaldi-asr#2727)

* [egs] add rnnlm example on tedlium+lm1b; add rnnlm rescoring results (kaldi-asr#2248)

* [scripts] Small fix to utils/data/convert_data_dir_to_whole.sh (RE backups) (kaldi-asr#2735)

* [src] fix memory bug in kaldi::~LatticeFasterDecoderTpl(), (kaldi-asr#2737)

- found it when running 'latgen-faster-mapped-parallel',
- core dumps came from the line decoder/lattice-faster-decoder.cc:52,
-- the line was doing 'delete &(FST*)', i.e. deleting the pointer to the FST instead of deleting the FST itself,
-- the bug was probably introduced by refactoring commit d0c68a6 from 2018-09-01,
-- after the change the code runs fine... (the unit tests for src/decoder are missing)
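
A minimal sketch of this bug class (hypothetical types, not the actual Kaldi code):

// Deleting the address of a pointer member frees the wrong thing;
// the destructor must delete what the member points to.
struct Fst { /* stand-in for the FST type */ };

struct Decoder {
  const Fst *fst_;
  explicit Decoder(const Fst *fst) : fst_(fst) {}
  ~Decoder() {
    // Buggy form (what the commit fixes, schematically):
    //   delete &fst_;   // deletes the pointer variable's address: UB
    delete fst_;         // correct: delete the FST itself
  }
};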

* [egs] Remove per-utt option from nnet3/align scripts (kaldi-asr#2717)

* [egs] Small Librispeech example fix, thanks: Yasasa Tennakoon. (kaldi-asr#2738)

* [egs] Aishell2 recipe: turn off jieba's new word discovery in word segmentation (kaldi-asr#2740)

* [egs] Add missing file local/join_suffix.py in TEDLIUM s5_r3; thx:anand@sayint.ai (kaldi-asr#2741)

* [egs,scripts] Add Tunisian Arabic (MSA) recipe; cosmetic fixes to pbs.pl (kaldi-asr#2725)

* [scripts] Fix missing import in utils/langs/grammar/augment_words_txt.py (kaldi-asr#2742)

* [scripts] Fix build_const_arpa_lm.sh w.r.t. the case where <s> appears inside words (kaldi-asr#2745)

* [scripts] Slight improvements to decode_score_fusion.sh usability (kaldi-asr#2746)

* [build] update configure to support cuda 10 (kaldi-asr#2747)

* [scripts] Fix bug in utils/data/resample_data_dir.sh (kaldi-asr#2749)

* [scripts] Fix bug in cleanup after steps/cleanup/clean_and_segment_data*.sh (kaldi-asr#2750)

* [egs] several updates of the tunisian_msa recipe  (kaldi-asr#2752)

* [egs] Small fix to Tunisian MSA TDNN script (RE train_stage) (kaldi-asr#2757)

* [src,scripts] Batched nnet3 computation (kaldi-asr#2726)

This PR adds the underlying utilities for much faster nnet3 inference on GPU, and a command-line binary (and script support) for nnet3 decoding and posterior computation.  TBD: a binary for x-vector computation.  This PR also contains unrelated decoder speedups (skipping range checks for transition ids... this may cause segfaults when graphs are mismatched).

* [build] Add python3 compatibility to install scripts (kaldi-asr#2748)

* [scripts] tfrnnlm: Modify TensorFlow flag format for compatibility with recent versions (kaldi-asr#2760)

* [egs] fix old style perl regex in egs/chime1/s5/local/chime1_prepare_data.sh (kaldi-asr#2762)

* [scripts] Fix bug in steps/cleanup/debug_lexicon.sh (kaldi-asr#2763)

* [egs] Add example for Yomdle Farsi OCR (kaldi-asr#2702)

* [scripts] debug_lexicon.sh: Fix bug introduced in kaldi-asr#2763. (kaldi-asr#2764)

* [egs] add missing online cmvn config in aishell2 (kaldi-asr#2767)

* [egs] Add CNN-TDNN-F script for Librispeech (kaldi-asr#2744)

* [src] Some minor cleanup/fixes regarding CUDA memory allocation; other small fixes. (kaldi-asr#2768)

* [scripts] Update reverberate_data_dir.py so that it works with python3 (kaldi-asr#2771)

* [egs] Chime5: fix total number of words for WER calculation  (kaldi-asr#2772)

* [egs] RNNLMs on Tedlium w/ Google 1Bword: Increase epochs, update results (kaldi-asr#2775)

* [scripts,egs] Added phonetisaurus-based g2p scripts (kaldi-asr#2730)

Phonetisaurus is much faster to train than Sequitur.

* [egs] madcat arabic: clean scripts, tuning, rescoring, text localization (kaldi-asr#2716)

* [scripts] Enhancements & minor bugfix to segmentation postprocessing (kaldi-asr#2776)

* [src] Update gmm-decode-simple to accept ConstFst (kaldi-asr#2787)

* [scripts] Update documentation of train_raw_dnn.py (kaldi-asr#2785)

* [src] nnet3: extend what descriptors can be parsed. (kaldi-asr#2780)

* [src] Small fix to 'fstrand' (make sure args are parsed) (kaldi-asr#2777)

* [src,scripts] Minor, mostly cosmetic updates (kaldi-asr#2788)

* [src,scripts] Add script to compare alignment directories. (kaldi-asr#2765)

* [scripts] Small fixes to script usage messages, etc. (kaldi-asr#2789)

* [egs] Update ami_download.sh after changes on Edinburgh website. (kaldi-asr#2769)

* [scripts] Update compare_alignments.sh to allow different lang dirs. (kaldi-asr#2792)

* [scripts] Change make_rttm.py so output is in deterministic order (kaldi-asr#2794)

* [egs] Fixes to yomdle_zh RE encoding direction, etc. (kaldi-asr#2791)

* [src] Add support for context independent phones in gmm-init-biphone (for e2e) (kaldi-asr#2779)

* [egs] Simplifying multi-condition version of AMI recipe (kaldi-asr#2800)

* [build] Fix openblas build for aarch64 (kaldi-asr#2806)

* [build] Make CUDA_ARCH configurable at configure-script level (kaldi-asr#2807)

* [src] Print maximum memory stats in CUDA allocator (kaldi-asr#2799)

* [src,scripts] Various minor code cleanups (kaldi-asr#2809)

* [scripts] Fix handling of UTF-8 in filenames, in wer_per_spk_details.pl (kaldi-asr#2811)

* [egs] Update AMI chain recipes (kaldi-asr#2817)

* [egs] Improvements to multi_en tdnn-opgru/lstm recipes (kaldi-asr#2824)

* [scripts] Fix initial prob of silence when lexicon has silprobs.  Thx:@agurianov (kaldi-asr#2823)

* [scripts,src] Fix to multitask nnet3 training (kaldi-asr#2818); cosmetic code change. (kaldi-asr#2827)

* [scripts] Create shared versions of get_ctm_conf.sh, add get_ctm_conf_fast.sh (kaldi-asr#2828)

* [src] Use cuda streams in matrix library (kaldi-asr#2821)

* [egs] Add online-decoding recipe to aishell1 (kaldi-asr#2829)

* [egs] Add DIHARD 2018 diarization recipe. (kaldi-asr#2822)

* [egs] add nnet3 online result for aishell1 (kaldi-asr#2836)

* [scripts] RNNLM scripts: don't die when features.txt is not present (kaldi-asr#2837)

* [src] Optimize cuda allocator for multi-threaded case (kaldi-asr#2820)

* [build] Add cub library for cuda projects (kaldi-asr#2819)

Not needed now, but will be in the future.

* [src] Make Cuda allocator statistics visible to program (kaldi-asr#2835)

* [src] Fix bug affecting scale in GeneralDropoutComponent (non-continuous case) (kaldi-asr#2815)

* [build] FIX kaldi-asr#2842: properly check $use_cuda against false. (kaldi-asr#2843)

* [doc] Add note about OOVs to data-prep. (kaldi-asr#2844)

* [scripts] Allow segmentation with nnet3 chain models (kaldi-asr#2845)

* [build] Remove -lcuda from cuda makefiles which breaks operation when no driver present (kaldi-asr#2851)

* [scripts] Fix error in analyze_lats.sh for long lattices (replace awk with perl) (kaldi-asr#2854)

* [egs] add rnnlm recipe for librispeech (kaldi-asr#2830)

* [build] change configure version from 9 to 10 (kaldi-asr#2853) (kaldi-asr#2855)

* [src] fixed compilation errors when built with -DDOUBLE_PRECISION=1 (kaldi-asr#2856)

* [build] Clarify instructions if cub is not found (kaldi-asr#2858)

* [egs] Limit MFCC feature extraction job number in Dihard recipe (kaldi-asr#2865)

* [egs] Added Bentham handwriting recognition recipe (kaldi-asr#2846)

* [src] Share roots of different tones of phones in aishell (kaldi-asr#2859)

* [egs] Fix path to sequitur in commonvoice egs (kaldi-asr#2868)

* [egs] Update reverb recipe (kaldi-asr#2753)

* [scripts] Fix error while analyzing lattice (parsing bugs) (kaldi-asr#2873)

* [src] Fix memory leak in OnlineCacheFeature; thanks @Worldexe (kaldi-asr#2872)

* [egs] TIMIT: fix mac compatibility of sed command (kaldi-asr#2874)

* [egs] mini_librispeech: fixing some bugs and limiting repeated downloads (kaldi-asr#2861)

* [src,scripts,egs] Speedups to GRU-based networks (special components) (kaldi-asr#2712)

* [src] Fix infinite recursion with -DDOUBLE_PRECISION=1. Thx: @hwiorn (kaldi-asr#2875) (kaldi-asr#2876)

* Revert "[src] Fix infinite recursion with -DDOUBLE_PRECISION=1. Thx: @hwiorn (kaldi-asr#2875) (kaldi-asr#2876)" (kaldi-asr#2877)

This reverts commit 84435ff.

* Revert "Revert "[src] Fix infinite recursion with -DDOUBLE_PRECISION=1. Thx: @hwiorn (kaldi-asr#2875) (kaldi-asr#2876)" (kaldi-asr#2877)" (kaldi-asr#2878)

This reverts commit b196b7f.

* Revert "[src] Fix memory leak in OnlineCacheFeature; thanks @Worldexe" (kaldi-asr#2882)

the fix was buggy.  apologies.

* [src] Remove unused code that caused Windows compile failure.  Thx:@btiplitz (kaldi-asr#2881)

* [src] Really fix memory leak in online decoding; thx:@Worldexe (kaldi-asr#2883)

* [src] Fix Windows cuda build failure (use C++11 standard include) (kaldi-asr#2880)

* [src] Add #include that caused build failure on Windows (kaldi-asr#2886)

* [scripts] Fix max duration check in sad_to_segments.py (kaldi-asr#2889)

* [scripts] Fix speech duration calculation in sad_to_segments.py (kaldi-asr#2891)

* [src] Fix Windows build problem (timer.h) (kaldi-asr#2888)

* [egs] add HUB4 spanish tdnn-f and cnn-tdnn script (kaldi-asr#2895)

* [egs] Fix Aishell2 dict prepare bug; should not affect results (kaldi-asr#2890)

* [egs] Self-contained example for KWS for mini_librispeech (kaldi-asr#2887)

* [egs,scripts] Fix bugs in Dihard 2018 (kaldi-asr#2897)

* [scripts] Check last character of files to match with newline (kaldi-asr#2898)

* [egs] Update Librispeech RNNLM results; use correct training data (kaldi-asr#2900)

* [scripts] RNNLM: old iteration model cleanup; save space (kaldi-asr#2885)

* [scripts] Make prepare_lang.sh cleanup beforehand (prevents certain failures) (kaldi-asr#2906)

* [scripts] Expose dim-range-node at xconfig level (kaldi-asr#2903)

* [scripts] Fix bug related to multi-task in train_raw_rnn.py (kaldi-asr#2907)

Thx: tessfu2001@gmail.com

* [scripts] Cosmetic fix/clarification to utils/prepare_lang.sh (kaldi-asr#2912)

* [scripts,egs] Added a new lexicon learning (adaptation) recipe for tedlium, in accordance with the IS17 paper. (kaldi-asr#2774)

* [egs] TDNN+LSTM example scripts, with RNNLM, for Librispeech (kaldi-asr#2857)

* [src] cosmetic fix in nnet1 code (kaldi-asr#2921)

* [src] Fix incorrect invocation of mutex in nnet-batch-compute code (kaldi-asr#2932)

* [egs,minor] Fix typo in comment in voxceleb script (kaldi-asr#2926)

* [src,egs] Mostly cosmetic changes; add some missing includes (kaldi-asr#2936)

* [egs] Fix path of rescoring binaries used in tfrnnlm scripts (kaldi-asr#2941)

* [src] Fix bug in nnet3-latgen-faster-batch for determinize=false (kaldi-asr#2945)

thx: Maxim Korenevsky.

* [egs] Add example for rimes handwriting database; Madcat arabic script cleanup  (kaldi-asr#2935)

* [egs] Add scripts for yomdle korean (kaldi-asr#2942)

* [build] Refactor/cleanup build system, easier build on ubuntu 18.04. (kaldi-asr#2947)

note: if this breaks someone's build we'll have to debug it then.

* [scripts,egs] Changes for Python 2/3 compatibility (kaldi-asr#2925)

* [egs] Add more modern DNN recipe for fisher_callhome_spanish (kaldi-asr#2951)

* [scripts] switch from bc to perl to reduce dependencies (diarization scripts) (kaldi-asr#2956)

* [scripts] Further fix for Python 2/3 compatibility  (kaldi-asr#2957)

* [egs] Remove no-longer-existing option in tedlium_r3 recipe (kaldi-asr#2959)

* [build] Handle dependencies for .cu files in addition to .cc files (kaldi-asr#2944)

* [src] remove duplicate test mode option from class GeneralDropoutComponent (kaldi-asr#2960)

* [egs] Fix minor bugs in WSJ's flat-start/e2e recipe (kaldi-asr#2968)

* [egs] Fix to BSD compatibility of TIMIT data prep (kaldi-asr#2966)

* [scripts] Fix RNNLM training script problem (chunk_length was ignored) (kaldi-asr#2969)

* [src] Fix bug in lattice-1best.cc RE removing insertion penalty (kaldi-asr#2970)

* [src] Compute a separate avg (start, end) interval for each sausage word (kaldi-asr#2972)

* [build] Move nvcc verbose flag to proper location (kaldi-asr#2962)

* [egs] Fix mini_librispeech download_lm.sh crash; thx:chris.keith.johnson@gmail.com (kaldi-asr#2974)

* [egs] minor fixes related to python2 vs python3 differences (kaldi-asr#2977)

* [src] Small fix in test code, avoid spurious failure (kaldi-asr#2978)

* [egs] Fix CSJ data-prep; minor path fix for USB version of data (kaldi-asr#2979)

* [egs] Add paper ref to README.txt in reverb example (kaldi-asr#2982)

* [egs] Minor fixes to sitw recipe (fix problem introduced in kaldi-asr#2925) (kaldi-asr#2985)

* [scripts]  Fix bug introduced in kaldi-asr#2957, RE integer division (kaldi-asr#2986)

* [egs] Update WSJ flat-start chain recipes to use TDNN-F not TDNN+LSTM (kaldi-asr#2988)

* [scripts] Fix typo introduced in kaldi-asr#2925 (kaldi-asr#2989)

* [build] Modify Makefile and travis script to fix Travis failures (kaldi-asr#2987)

* [src] Simplification and efficiency improvement in ivector-plda-scoring-dense (kaldi-asr#2991)

* [egs] Update madcat Arabic and Chinese egs, IAM (kaldi-asr#2964)

* [src] Fix overflow bug in convolution code (kaldi-asr#2992)

* [src] Fix nan issue in ctm times introduced in kaldi-asr#2972, thx: @vesis84 (kaldi-asr#2993)

* [src] Fix 'sausage-time' issue which occurs with disabled MBR decoding. (kaldi-asr#2996)

* [egs] Add scripts for yomdle Russian (OCR task) (kaldi-asr#2953)

* [egs] Simplify lexicon preparation in Fisher callhome Spanish (kaldi-asr#2999)

* [egs] Update GALE Arabic recipe (kaldi-asr#2934)

* [egs] Remove outdated NN results from Gale Arabic recipe (kaldi-asr#3002)

* [egs] Add RESULTS file for the tedlium s5_r3 (release 3) setup (kaldi-asr#3003)

* [src] Fixes to grammar-fst code to handle LM-disambig symbols properly (kaldi-asr#3000)

thanks: armando.muscariello@gmail.com

* [src] Cosmetic change to mel computation (fix option string) (kaldi-asr#3011)

* [src] Fix Visual Studio error due to alternate syntactic form of noreturn (kaldi-asr#3018)

* [egs] Fix location of sequitur installation (kaldi-asr#3017)

* [src] Fix w/ ifdef Visual Studio error from alternate syntactic form noreturn (kaldi-asr#3020)

* [egs] Some fixes to getting data in heroico recipe (kaldi-asr#3021)

* [egs] BABEL script fix: avoid make_L_align.sh generating invalid files (kaldi-asr#3022)

* [src] Fix to older online decoding code in online/ (OnlineFeInput; was broken by commit cc2469e). (kaldi-asr#3025)

* [scripts] Fix unset bash variable in make_mfcc.sh (kaldi-asr#3030)

* [scripts]  Extend limit_num_gpus.sh to support --num-gpus 0.  (kaldi-asr#3027)

* [scripts] fix bug in utils/add_lex_disambig.pl when sil-probs and pron-probs used (kaldi-asr#3033)

The bug would likely have resulted in determinization failure (only when not using word-position-dependent phones).

* [egs] Fix path in Tedlium r3 rnnlm training script (kaldi-asr#3039)

* [src] Thread-safety for GrammarFst (thx:armando.muscariello@gmail.com) (kaldi-asr#3040)

* [scripts] Cosmetic fix to get_degs.sh (kaldi-asr#3045)

* [egs] Small bug fixes for IAM and UW3 recipes (kaldi-asr#3048)

* [scripts] Nnet3 segmentation: fix default params (kaldi-asr#3051)

* [scripts] Allow perturb_data_dir_speed.sh to work with utt2lang (kaldi-asr#3055)

* [scripts] Make beam in monophone training configurable  (kaldi-asr#3057)

* [scripts] Allow reverberate_data_dir.py to support unicode filenames (kaldi-asr#3060)

* [scripts] Make some cleanup scripts work with python3 (kaldi-asr#3054)

* [scripts] bug fix to nnet2->3 conversion, fixes kaldi-asr#886 (kaldi-asr#3071)

* [src] Make copies occur in per-thread default stream (for GPUs)  (kaldi-asr#3068)

* [src] Add GPU version of MergeTaskOutput().. relates to batch decoding (kaldi-asr#3067)

* [src] Add device options to enable tensor core math mode. (kaldi-asr#3066)
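
For context, a hedged sketch of what enabling this looks like at the cuBLAS level (the CuDevice option plumbing is not shown, and the API names assume CUDA 9+):

#include <cublas_v2.h>

cublasStatus_t EnableTensorCoreMath(cublasHandle_t handle) {
  // Lets cuBLAS use Tensor Cores (e.g. fp16 multiplies with fp32
  // accumulation) on GPUs that support them.
  return cublasSetMathMode(handle, CUBLAS_TENSOR_OP_MATH);
}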

* [src] Log nnet3 computation to VLOG, not std::cout (kaldi-asr#3072)

* [src] Allow upsampling in compute-mfcc-feats, etc. (kaldi-asr#3014)

* [src] fix problem with rand_r being undefined on Android (kaldi-asr#3037)

* [egs] Update swbd1_map_words.pl, fix them_1's -> them's (kaldi-asr#3052)

* [src] Add const overload OnlineNnet2FeaturePipeline::IvectorFeature (kaldi-asr#3073)

* [src] Fix syntax error in egs/bn_music_speech/v1/local/make_musan.py (kaldi-asr#3074)

* [src] Memory optimization for online feature extraction of long recordings (kaldi-asr#3038)

* [build] fixed a bug in linux_configure_redhat_fat when use_cuda=no (kaldi-asr#3075)

* [scripts] Add missing '. ./path.sh' to get_utt2num_frames.sh (kaldi-asr#3076)

* [src,scripts,egs] Add count-based biphone tree tying for flat-start chain training (kaldi-asr#3007)

* [scripts,egs] Remove sed from various scripts (avoid compatibility problems)  (kaldi-asr#2981)

* [src] Rework error logging for safety and cleanliness (kaldi-asr#3064)

* [src] Change warp-synchronous to cub::BlockReduce (safer but slower) (kaldi-asr#3080)
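
For context, a generic cub::BlockReduce kernel of the shape this change moves to (a sketch, not the actual Kaldi kernel); unlike hand-rolled warp-synchronous code, the synchronization it needs is explicit and portable across GPU architectures:

// Assumes the cub headers (see tools/cub) are on the include path.
#include <cub/cub.cuh>

__global__ void BlockSumKernel(const float *in, float *out) {
  typedef cub::BlockReduce<float, 128> BlockReduce;  // 128 == blockDim.x
  __shared__ typename BlockReduce::TempStorage temp_storage;
  float thread_value = in[blockIdx.x * blockDim.x + threadIdx.x];
  float block_sum = BlockReduce(temp_storage).Sum(thread_value);
  if (threadIdx.x == 0) out[blockIdx.x] = block_sum;  // one result per block
}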

* [src] Fix && and || uses where & and | intended, and other weird errors (kaldi-asr#3087)

* [build] Some fixes to Makefiles (kaldi-asr#3088)

clang is unhappy with '-rdynamic' in the compile-only step, and the
switch is really unnecessary.

Also, the default location for MKL 64-bit libraries is intel64/. The
em64t/ directory was already declared obsolete by an Intel rep in 2010:
https://software.intel.com/en-us/forums/intel-math-kernel-library/topic/285973

* [src] Fixed -Wreordered warnings in feat (kaldi-asr#3090)

* [egs] Replace bc with perl -e (kaldi-asr#3093)

* [scripts] Fix python3 compatibility issue in data-perturbing script (kaldi-asr#3084)

* [doc] fix some typos in doc. (kaldi-asr#3097)

* [build] Make sure expf() speed probe times sensibly (kaldi-asr#3089)

* [scripts] Make sure merge_targets.py works in python3 (kaldi-asr#3094)

* [src] ifdef to fix compilation failure on CUDA 8 and earlier (kaldi-asr#3103)

* [doc] fix typos and broken links in doc. (kaldi-asr#3102)

* [scripts] Fix frame_shift bug in egs/swbd/s5c/local/score_sclite_conf.sh (kaldi-asr#3104)

* [src] Fix wrong assertion failure in nnet3-am-compute (kaldi-asr#3106)

* [src] Cosmetic changes to natural-gradient code (kaldi-asr#3108)

* [src,scripts] Python2 compatibility fixes and code cleanup for nnet1 (kaldi-asr#3113)

* [doc] Small documentation fixes; update on Kaldi history (kaldi-asr#3031)

* [src] Various mostly-cosmetic changes (copying from another branch) (kaldi-asr#3109)

* [scripts]  Simplify text encoding in RNNLM scripts (now only support utf-8) (kaldi-asr#3065)

* [egs] Add "formosa_speech" recipe (Taiwanese Mandarin ASR) (kaldi-asr#2474)

* [egs] python3 compatibility in csj example script (kaldi-asr#3123)

* [egs] python3 compatibility in example scripts (kaldi-asr#3126)

* [scripts] Bug-fix for removing deleted words (kaldi-asr#3116)

The type of --max-deleted-words-kept-when-merging in segment_ctm_edits.py
was a string, which prevented the mechanism from working altogether.

* [scripts] Add fix regarding num-jobs for segment_long_utterances*.sh (kaldi-asr#3130)

* [src] Enable allow_{upsample,downsample} with online features (kaldi-asr#3139)

* [src] Fix bad assert in fstmakecontextsyms (kaldi-asr#3142)

* [src] Fix to "Fixes to grammar-fst & LM-disambig symbols" (kaldi-asr#3000) (kaldi-asr#3143)

* [build] Make sure PaUtils exported from portaudio (kaldi-asr#3144)

* [src] cudamatrix: fixing a synchronization bug in 'normalize-per-row' (kaldi-asr#3145)

The bug was only apparent when using large matrices.

* [src] Fix typo in comment (kaldi-asr#3147)

* [src] Add binary that functions as a TCP server (kaldi-asr#2938)

* [scripts] Fix bug in comment (kaldi-asr#3152)

* [scripts] Fix bug in steps/segmentation/ali_to_targets.sh (kaldi-asr#3155)

* [scripts] Avoid holding out more data than the requested num-utts (due to utt2uniq) (kaldi-asr#3141)

* [src,scripts] Add support for two-pass agglomerative clustering. (kaldi-asr#3058)

* [src] Disable unget warning in PeekToken (and other small fix) (kaldi-asr#3163)

* [build] Add new nvidia tools to windows build (kaldi-asr#3159)

* [doc] Fix documentation errors and add more docs for tcp-server decoder  (kaldi-asr#3164)
chenzhehuai committed Jun 3, 2019
1 parent e26347f commit d39b6de
Showing 1,856 changed files with 97,559 additions and 21,321 deletions.
15 changes: 11 additions & 4 deletions .gitignore
@@ -73,15 +73,17 @@ GSYMS
/src/kaldi.mk.bak

# /egs/
/egs/*/s*/mfcc
/egs/*/s*/plp
/egs/*/s*/exp
/egs/*/s*/data
/egs/*/*/mfcc
/egs/*/*/plp
/egs/*/*/exp
/egs/*/*/data

# /tools/
/tools/pocolm/
/tools/ATLAS/
/tools/atlas3.8.3.tar.gz
/tools/irstlm/
/tools/mitlm/
/tools/openfst
/tools/openfst-1.3.2.tar.gz
/tools/openfst-1.3.2/
@@ -101,6 +103,8 @@ GSYMS
/tools/openfst-1.6.2/
/tools/openfst-1.6.5.tar.gz
/tools/openfst-1.6.5/
/tools/openfst-1.6.7.tar.gz
/tools/openfst-1.6.7/
/tools/BeamformIt/
/tools/libsndfile-1.0.25.tar.gz
/tools/libsndfile-1.0.25/
@@ -141,3 +145,6 @@ GSYMS
/tools/mmseg-1.3.0.tar.gz
/tools/mmseg-1.3.0/
/kaldiwin_vs*
/tools/cub-1.8.0.zip
/tools/cub-1.8.0/
/tools/cub
5 changes: 3 additions & 2 deletions .travis.yml
@@ -21,6 +21,7 @@ addons:
- gfortran-4.9
- liblapack-dev
- clang-3.8
- sox

branches:
only:
@@ -47,8 +48,8 @@ script:
# http://peter.eisentraut.org/blog/2014/12/01/ccache-and-clang-part-3/
# for the explanation why extra switches needed for clang with ccache.
- CXX="ccache clang++-3.8 -Qunused-arguments -fcolor-diagnostics -Wno-tautological-compare"
CFLAGS="-march=native"
LDFLAGS="-llapack"
CFLAGS=""
LDFLAGS="-llapack -Wl,-fuse-ld=gold"
INCDIRS="$XROOT/usr/include"
LIBDIRS="$XROOT/usr/lib"
tools/extras/travis_script.sh
12 changes: 6 additions & 6 deletions COPYING
@@ -56,7 +56,7 @@ contributors and original source material as well as the full text of the Apache
License v 2.0 are set forth below.

Individual Contributors (in alphabetical order)

Mohit Agarwal
Tanel Alumae
Gilles Boulianne
@@ -123,7 +123,7 @@ Individual Contributors (in alphabetical order)
Haihua Xu
Hainan Xu
Xiaohui Zhang

Other Source Material

This project includes a port and modification of materials from JAMA: A Java
@@ -136,9 +136,9 @@ Other Source Material
"Signal processing with lapped transforms," Artech House, Inc., 1992. The
current copyright holder, Henrique S. Malvar, has given his permission for the
release of this modified version under the Apache License 2.0.
This project includes material from the OpenFST Library v1.2.7 available at
http://www.openfst.org and released under the Apache License v. 2.0.

This project includes material from the OpenFST Library v1.2.7 available at
http://www.openfst.org and released under the Apache License v. 2.0.

[OpenFst COPYING file begins here]

@@ -147,7 +147,7 @@ Other Source Material
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
3 changes: 1 addition & 2 deletions README.md
@@ -1,5 +1,4 @@
[![Build Status](https://travis-ci.org/kaldi-asr/kaldi.svg?branch=master)](https://travis-ci.org/kaldi-asr/kaldi)

[![Build Status](https://travis-ci.com/kaldi-asr/kaldi.svg?branch=master)](https://travis-ci.com/kaldi-asr/kaldi)
Kaldi Speech Recognition Toolkit
================================

26 changes: 18 additions & 8 deletions egs/aishell/s5/RESULTS
@@ -1,8 +1,18 @@
%WER 33.82 [ 35432 / 104765, 743 ins, 3991 del, 30698 sub ] exp/mono/decode_test/cer_12_0.0
%WER 19.39 [ 20310 / 104765, 903 ins, 1452 del, 17955 sub ] exp/tri1/decode_test/cer_13_0.5
%WER 19.23 [ 20147 / 104765, 910 ins, 1287 del, 17950 sub ] exp/tri2/decode_test/cer_14_0.5
%WER 17.14 [ 17961 / 104765, 812 ins, 1024 del, 16125 sub ] exp/tri3a/decode_test/cer_14_0.0
%WER 13.64 [ 14294 / 104765, 669 ins, 736 del, 12889 sub ] exp/tri4a/decode_test/cer_14_0.5
%WER 12.23 [ 12809 / 104765, 656 ins, 580 del, 11573 sub ] exp/tri5a/decode_test/cer_13_1.0
%WER 8.45 [ 8849 / 104765, 312 ins, 538 del, 7999 sub ] exp/nnet3/tdnn_sp/decode_test/cer_13_1.0
%WER 7.46 [ 7813 / 104765, 287 ins, 472 del, 7054 sub ] exp/chain/tdnn_1a_sp/decode_test/cer_10_1.0
%WER 36.41 [ 38146 / 104765, 837 ins, 3114 del, 34195 sub ] exp/mono/decode_test/cer_10_0.0
%WER 18.76 [ 19654 / 104765, 949 ins, 1152 del, 17553 sub ] exp/tri1/decode_test/cer_13_0.5
%WER 18.64 [ 19531 / 104765, 941 ins, 1159 del, 17431 sub ] exp/tri2/decode_test/cer_14_0.5
%WER 17.04 [ 17849 / 104765, 810 ins, 1021 del, 16018 sub ] exp/tri3a/decode_test/cer_14_0.5
%WER 13.82 [ 14482 / 104765, 764 ins, 670 del, 13048 sub ] exp/tri4a/decode_test/cer_13_0.5
%WER 12.12 [ 12694 / 104765, 751 ins, 523 del, 11420 sub ] exp/tri5a/decode_test/cer_13_0.5
%WER 8.65 [ 9064 / 104765, 367 ins, 455 del, 8242 sub ] exp/nnet3/tdnn_sp/decode_test/cer_14_0.5
%WER 7.48 [ 7839 / 104765, 285 ins, 454 del, 7100 sub ] exp/chain/tdnn_1a_sp/decode_test/cer_10_1.0

# nnet3 tdnn with online pitch, local/nnet3/tuning/tun_tdnn_2a.sh
%WER 8.64 [ 9050 / 104765, 349 ins, 521 del, 8180 sub ] exp/nnet3/tdnn_sp/decode_test/cer_15_0.5
%WER 8.72 [ 9135 / 104765, 367 ins, 422 del, 8346 sub ] exp/nnet3/tdnn_sp_online/decode_test/cer_12_1.0
%WER 9.36 [ 9807 / 104765, 386 ins, 441 del, 8980 sub ] exp/nnet3/tdnn_sp_online/decode_test_per_utt/cer_13_1.0

# chain with online pitch, local/chain/tuning/run_tdnn_2a.sh
%WER 7.45 [ 7807 / 104765, 340 ins, 497 del, 6970 sub ] exp/chain/tdnn_2a_sp/decode_test/cer_11_0.5
%WER 7.43 [ 7780 / 104765, 341 ins, 469 del, 6970 sub ] exp/chain/tdnn_2a_sp_online/decode_test/cer_11_0.5
%WER 7.92 [ 8296 / 104765, 384 ins, 472 del, 7440 sub ] exp/chain/tdnn_2a_sp_online/decode_test_per_utt/cer_11_0.5
4 changes: 4 additions & 0 deletions egs/aishell/s5/conf/online_pitch.conf
@@ -0,0 +1,4 @@
--sample-frequency=16000
--simulate-first-pass-online=true
--normalization-right-context=25
--frames-per-chunk=10
18 changes: 3 additions & 15 deletions egs/aishell/s5/local/aishell_prepare_dict.sh
@@ -15,21 +15,9 @@ mkdir -p $dict_dir
cp $res_dir/lexicon.txt $dict_dir

cat $dict_dir/lexicon.txt | awk '{ for(n=2;n<=NF;n++){ phones[$n] = 1; }} END{for (p in phones) print p;}'| \
sort -u |\
perl -e '
my %ph_cl;
while (<STDIN>) {
$phone = $_;
chomp($phone);
chomp($_);
$phone = $_;
next if ($phone eq "sil");
if (exists $ph_cl{$phone}) { push(@{$ph_cl{$phone}}, $_) }
else { $ph_cl{$phone} = [$_]; }
}
foreach $key ( keys %ph_cl ) {
print "@{ $ph_cl{$key} }\n"
}
perl -e 'while(<>){ chomp($_); $phone = $_; next if ($phone eq "sil");
m:^([^\d]+)(\d*)$: || die "Bad phone $_"; $q{$1} .= "$phone "; }
foreach $l (values %q) {print "$l\n";}
' | sort -k1 > $dict_dir/nonsilence_phones.txt || exit 1;

echo sil > $dict_dir/silence_phones.txt
2 changes: 1 addition & 1 deletion egs/aishell/s5/local/aishell_train_lms.sh
@@ -23,7 +23,7 @@ kaldi_lm=`which train_lm.sh`
if [ -z $kaldi_lm ]; then
echo "$0: train_lm.sh is not found. That might mean it's not installed"
echo "$0: or it is not added to PATH"
echo "$0: Use the script tools/extra/install_kaldi_lm.sh to install it"
echo "$0: Use the script tools/extras/install_kaldi_lm.sh to install it"
exit 1
fi

2 changes: 1 addition & 1 deletion egs/aishell/s5/local/chain/tuning/run_tdnn_1a.sh
@@ -90,7 +90,7 @@ if [ $stage -le 10 ]; then
echo "$0: creating neural net configs using the xconfig parser";

num_targets=$(tree-info $treedir/tree |grep num-pdfs|awk '{print $2}')
learning_rate_factor=$(echo "print 0.5/$xent_regularize" | python)
learning_rate_factor=$(echo "print (0.5/$xent_regularize)" | python)

mkdir -p $dir/configs
cat <<EOF > $dir/configs/network.xconfig
211 changes: 211 additions & 0 deletions egs/aishell/s5/local/chain/tuning/run_tdnn_2a.sh
@@ -0,0 +1,211 @@
#!/bin/bash

# This script is based on run_tdnn_1a.sh.
# This setup used online pitch to train the neural network.
# It requires a online_pitch.conf in the conf dir.

set -e

# configs for 'chain'
affix=
stage=0
train_stage=-10
get_egs_stage=-10
dir=exp/chain/tdnn_2a # Note: _sp will get added to this
decode_iter=

# training options
num_epochs=4
initial_effective_lrate=0.001
final_effective_lrate=0.0001
max_param_change=2.0
final_layer_normalize_target=0.5
num_jobs_initial=2
num_jobs_final=12
minibatch_size=128
frames_per_eg=150,110,90
remove_egs=true
common_egs_dir=
xent_regularize=0.1

# End configuration section.
echo "$0 $@" # Print the command line for logging

. ./cmd.sh
. ./path.sh
. ./utils/parse_options.sh

if ! cuda-compiled; then
cat <<EOF && exit 1
This script is intended to be used with GPUs but you have not compiled Kaldi with CUDA
If you want to use GPUs (and have them), go to src/, and configure and make on a machine
where "nvcc" is installed.
EOF
fi

# The iVector-extraction and feature-dumping parts are the same as the standard
# nnet3 setup, and you can skip them by setting "--stage 8" if you have already
# run those things.

dir=${dir}${affix:+_$affix}_sp
train_set=train_sp
ali_dir=exp/tri5a_sp_ali
treedir=exp/chain/tri6_7d_tree_sp
lang=data/lang_chain


# if we are using the speed-perturbed data we need to generate
# alignments for it.
local/nnet3/run_ivector_common.sh --stage $stage --online true || exit 1;

if [ $stage -le 7 ]; then
# Get the alignments as lattices (gives the LF-MMI training more freedom).
# use the same num-jobs as the alignments
nj=$(cat $ali_dir/num_jobs) || exit 1;
steps/align_fmllr_lats.sh --nj $nj --cmd "$train_cmd" data/$train_set \
data/lang exp/tri5a exp/tri5a_sp_lats
rm exp/tri5a_sp_lats/fsts.*.gz # save space
fi

if [ $stage -le 8 ]; then
# Create a version of the lang/ directory that has one state per phone in the
# topo file. [note, it really has two states.. the first one is only repeated
# once, the second one has zero or more repeats.]
rm -rf $lang
cp -r data/lang $lang
silphonelist=$(cat $lang/phones/silence.csl) || exit 1;
nonsilphonelist=$(cat $lang/phones/nonsilence.csl) || exit 1;
# Use our special topology... note that later on may have to tune this
# topology.
steps/nnet3/chain/gen_topo.py $nonsilphonelist $silphonelist >$lang/topo
fi

if [ $stage -le 9 ]; then
# Build a tree using our new topology. This is the critically different
# step compared with other recipes.
steps/nnet3/chain/build_tree.sh --frame-subsampling-factor 3 \
--context-opts "--context-width=2 --central-position=1" \
--cmd "$train_cmd" 5000 data/$train_set $lang $ali_dir $treedir
fi

if [ $stage -le 10 ]; then
echo "$0: creating neural net configs using the xconfig parser";

num_targets=$(tree-info $treedir/tree |grep num-pdfs|awk '{print $2}')
learning_rate_factor=$(echo "print (0.5/$xent_regularize)" | python)

mkdir -p $dir/configs
cat <<EOF > $dir/configs/network.xconfig
input dim=100 name=ivector
input dim=43 name=input
# please note that it is important to have input layer with the name=input
# as the layer immediately preceding the fixed-affine-layer to enable
# the use of short notation for the descriptor
fixed-affine-layer name=lda input=Append(-1,0,1,ReplaceIndex(ivector, t, 0)) affine-transform-file=$dir/configs/lda.mat
# the first splicing is moved before the lda layer, so no splicing here
relu-batchnorm-layer name=tdnn1 dim=625
relu-batchnorm-layer name=tdnn2 input=Append(-1,0,1) dim=625
relu-batchnorm-layer name=tdnn3 input=Append(-1,0,1) dim=625
relu-batchnorm-layer name=tdnn4 input=Append(-3,0,3) dim=625
relu-batchnorm-layer name=tdnn5 input=Append(-3,0,3) dim=625
relu-batchnorm-layer name=tdnn6 input=Append(-3,0,3) dim=625
## adding the layers for chain branch
relu-batchnorm-layer name=prefinal-chain input=tdnn6 dim=625 target-rms=0.5
output-layer name=output include-log-softmax=false dim=$num_targets max-change=1.5
# adding the layers for xent branch
# This block prints the configs for a separate output that will be
# trained with a cross-entropy objective in the 'chain' models... this
# has the effect of regularizing the hidden parts of the model. we use
# 0.5 / args.xent_regularize as the learning rate factor- the factor of
# 0.5 / args.xent_regularize is suitable as it means the xent
# final-layer learns at a rate independent of the regularization
# constant; and the 0.5 was tuned so as to make the relative progress
# similar in the xent and regular final layers.
relu-batchnorm-layer name=prefinal-xent input=tdnn6 dim=625 target-rms=0.5
output-layer name=output-xent dim=$num_targets learning-rate-factor=$learning_rate_factor max-change=1.5
EOF
steps/nnet3/xconfig_to_configs.py --xconfig-file $dir/configs/network.xconfig --config-dir $dir/configs/
fi

if [ $stage -le 11 ]; then
if [[ $(hostname -f) == *.clsp.jhu.edu ]] && [ ! -d $dir/egs/storage ]; then
utils/create_split_dir.pl \
/export/b0{5,6,7,8}/$USER/kaldi-data/egs/aishell-$(date +'%m_%d_%H_%M')/s5c/$dir/egs/storage $dir/egs/storage
fi

steps/nnet3/chain/train.py --stage $train_stage \
--cmd "$decode_cmd" \
--feat.online-ivector-dir exp/nnet3/ivectors_${train_set} \
--feat.cmvn-opts "--norm-means=false --norm-vars=false" \
--chain.xent-regularize $xent_regularize \
--chain.leaky-hmm-coefficient 0.1 \
--chain.l2-regularize 0.00005 \
--chain.apply-deriv-weights false \
--chain.lm-opts="--num-extra-lm-states=2000" \
--egs.dir "$common_egs_dir" \
--egs.stage $get_egs_stage \
--egs.opts "--frames-overlap-per-eg 0" \
--egs.chunk-width $frames_per_eg \
--trainer.num-chunk-per-minibatch $minibatch_size \
--trainer.frames-per-iter 1500000 \
--trainer.num-epochs $num_epochs \
--trainer.optimization.num-jobs-initial $num_jobs_initial \
--trainer.optimization.num-jobs-final $num_jobs_final \
--trainer.optimization.initial-effective-lrate $initial_effective_lrate \
--trainer.optimization.final-effective-lrate $final_effective_lrate \
--trainer.max-param-change $max_param_change \
--cleanup.remove-egs $remove_egs \
--feat-dir data/${train_set}_hires_online \
--tree-dir $treedir \
--lat-dir exp/tri5a_sp_lats \
--dir $dir || exit 1;
fi

if [ $stage -le 12 ]; then
# Note: it might appear that this $lang directory is mismatched, and it is as
# far as the 'topo' is concerned, but this script doesn't read the 'topo' from
# the lang directory.
utils/mkgraph.sh --self-loop-scale 1.0 data/lang_test $dir $dir/graph
fi

graph_dir=$dir/graph
if [ $stage -le 13 ]; then
for test_set in dev test; do
steps/nnet3/decode.sh --acwt 1.0 --post-decode-acwt 10.0 \
--nj 10 --cmd "$decode_cmd" \
--online-ivector-dir exp/nnet3/ivectors_$test_set \
$graph_dir data/${test_set}_hires_online $dir/decode_${test_set} || exit 1;
done
fi

if [ $stage -le 14 ]; then
steps/online/nnet3/prepare_online_decoding.sh --mfcc-config conf/mfcc_hires.conf \
--add-pitch true \
$lang exp/nnet3/extractor "$dir" ${dir}_online || exit 1;
fi

dir=${dir}_online
if [ $stage -le 15 ]; then
for test_set in dev test; do
steps/online/nnet3/decode.sh --acwt 1.0 --post-decode-acwt 10.0 \
--nj 10 --cmd "$decode_cmd" \
--config conf/decode.config \
$graph_dir data/${test_set}_hires_online $dir/decode_${test_set} || exit 1;
done
fi

if [ $stage -le 16 ]; then
for test_set in dev test; do
steps/online/nnet3/decode.sh --acwt 1.0 --post-decode-acwt 10.0 \
--nj 10 --cmd "$decode_cmd" --per-utt true \
--config conf/decode.config \
$graph_dir data/${test_set}_hires_online $dir/decode_${test_set}_per_utt || exit 1;
done
fi

exit;