

# Utility scripts

This folder contains scripts for running training from the command line on the datasets fetched by the various included importers. This is useful for running training without a browser open, or unattended on a remote machine. The scripts should be run from the base directory of the repository. Note that the default settings assume a very well-specified machine. If out-of-memory errors occur, decreasing the values of `--train_batch_size`, `--dev_batch_size`, and `--test_batch_size` will allow training to continue, at the expense of speed.
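
For example, a minimal session might look like the sketch below. The `bin/` path and the assumption that the run scripts invoke `DeepSpeech.py` in the repository root are inferred from the repository layout; only the batch-size flags are taken from the note above.

```bash
# Run from the base directory of the repository.
# Smoke-test training on the tiny bundled LDC93S1 sample
# (path assumes this folder is bin/; adjust if it differs).
./bin/run-ldc93s1.sh

# On out-of-memory errors, shrink the batch sizes. Depending on the
# version, a run script may not forward extra flags to the trainer;
# if so, edit the script and change the flags where it invokes:
python -u DeepSpeech.py \
    --train_batch_size 1 \
    --dev_batch_size 1 \
    --test_batch_size 1
```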