(Moved from julius.osdn.jp in September 2015; this is the official repository.) (Since 2019/1/2, the master branch contains code converted to UTF-8. A snapshot in the old encoding, as of the 4.5 release, is kept in the branch "master-4.5-legacy".)
Julius: Open-Source Large Vocabulary Continuous Speech Recognition Engine
Copyright (c) 1991-2019 Kawahara Lab., Kyoto University
Copyright (c) 2005-2019 Julius project team, Lee Lab., Nagoya Institute of Technology
Copyright (c) 1997-2000 Information-technology Promotion Agency, Japan
Copyright (c) 2000-2005 Shikano Lab., Nara Institute of Science and Technology
"Julius" is a high-performance, small-footprint large vocabulary continuous speech recognition (LVCSR) decoder for speech-related researchers and developers. Based on word N-grams and context-dependent HMMs, it can perform real-time decoding on a wide range of computers and devices, from microcomputers to cloud servers. The algorithm is based on a 2-pass tree-trellis search, which fully incorporates major decoding techniques such as a tree-organized lexicon, 1-best / word-pair context approximation, rank/score pruning, N-gram factoring, cross-word context dependency handling, enveloped beam search, Gaussian pruning, and Gaussian selection. Besides search efficiency, it is modularized to be independent of model structures, and a wide variety of HMM structures are supported, such as shared-state triphones and tied-mixture models, with any number of mixtures, states, or phone sets. It can also run multi-instance recognition, performing dictation, grammar-based recognition, or isolated word recognition simultaneously in a single thread. Standard model formats are adopted for interoperability with other speech / language modeling toolkits such as HTK and SRILM. Recent versions also support Deep Neural Network (DNN) based real-time decoding.
The main platforms are Linux and other Unix-based systems, as well as Windows, macOS, Android, and others.
Julius has been developed as research software for Japanese LVCSR since 1997. The work was continued under the IPA Japanese dictation toolkit project (1997-2000), the Continuous Speech Recognition Consortium, Japan (CSRC) (2000-2003), and the Interactive Speech Technology Consortium (ISTC).
The main developer / maintainer is Akinobu Lee (firstname.lastname@example.org).
- An open-source LVCSR software (see the license terms and conditions).
- Real-time, high-speed, accurate recognition based on a 2-pass strategy.
- Low memory requirement: less than 32MBytes required for work area (<64MBytes for 20k-word dictation with on-memory 3-gram LM).
- Supports LM of N-gram with arbitrary N. Also supports rule-based grammar, and word list for isolated word recognition.
- Language and unit independent: any LM in ARPA standard format and AM in HTK ASCII hmm definition format can be used.
- Highly configurable: various search parameters can be set, and alternative decoding algorithms (1-best/word-pair approximation, word trellis/word graph intermediates, etc.) can be chosen.
- List of major supported features:
- On-the-fly recognition for microphone and network input
- GMM-based input rejection
- Successive decoding, delimiting input by short pauses
- N-best output
- Word graph output
- Forced alignment on word, phoneme, and state level
- Confidence scoring
- Server mode and control API
- Many search parameters for tuning its performance
- Character code conversion for result output.
- (Rev. 4) Engine becomes Library and offers simple API
- (Rev. 4) Long N-gram support
- (Rev. 4) Run with forward / backward N-gram only
- (Rev. 4) Confusion network output
- (Rev. 4) Arbitrary multi-model decoding in a single thread.
- (Rev. 4) Rapid isolated word recognition
- (Rev. 4) User-defined LM function embedding
- DNN-based decoding, using front-end module for frame-wise state probability calculation for flexibility.
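As a sketch of the server mode listed above: Julius can be started in module mode with the `-module` option, which accepts client connections over TCP (port 10500 is the commonly documented default; treat the exact jconf fragment below as an illustrative assumption and check your version's manual):

```
-input mic
-module 10500
```

A client connecting to that port receives recognition results as tagged text messages and can send control commands back to the engine.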
How to run Julius with an English DNN model. The procedure below is for Linux, but is almost the same on other OSes.
1. Build latest Julius
```
% sudo apt-get install build-essential zlib1g-dev libsdl2-dev libasound2-dev
% git clone https://github.com/julius-speech/julius.git
% cd julius
% ./configure --enable-words-int
% make -j4
% ls -l julius/julius
-rwxr-xr-x 1 ri lab 746056 May 26 13:01 julius/julius
```
2. Get English DNN model
Go to the JuliusModel page and download the English model (LM+DNN-HMM) named "ENVR-v5.4.Dnn.Bin.zip". Unzip it and cd into the unzipped folder.
```
% cd ..
% unzip /some/where/ENVR-v5.4.Dnn.Bin.zip
% cd ENVR-5.4.Dnn.Bin
```
3. Modify config file
Edit the dnn.jconf file in the unzipped folder to fit the latest version of Julius:
```diff
@@ -1,5 +1,5 @@
 feature_type MFCC_E_D_A_Z
-feature_options -htkconf wav_config -cvn -cmnload ENVR-v5.3.norm -cmnstatic
+feature_options -htkconf wav_config -cvn -cmnload ENVR-v5.3.norm -cvnstatic
 num_threads 1
 feature_len 48
 context_len 11
@@ -21,3 +21,4 @@
 output_B ENVR-v5.3.layerout_bias.npy
 state_prior_factor 1.0
 state_prior ENVR-v5.3.prior
+state_prior_log10nize false
```
4. Recognize audio file
Recognize the audio file "mozilla.wav" included in the zip file.
```
% ../julius/julius/julius -C julius.jconf -dnnconf dnn.jconf
```
You'll get tons of messages, but the final result of the first speech part will be output like this:
```
sentence1: <s> without the data said the article was useless </s>
wseq1: <s> without the data said the article was useless </s>
phseq1: sil | w ih dh aw t | dh ax | d ae t ah | s eh d | dh iy | aa r t ah k ah l | w ax z | y uw s l ah s | sil
cmscore1: 0.785 0.892 0.318 0.284 0.669 0.701 0.818 0.103 0.528 1.000
score1: 261.947144
```
The file "test.dbl" contains the list of audio files to be recognized. Edit the file and run again to test with other files.
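For example, to recognize your own recording instead, you can rewrite the list with standard shell tools (the audio filename below is hypothetical):

```shell
# test.dbl lists the audio files to recognize, one per line.
# Write a new list pointing at our own file (hypothetical filename).
cat > test.dbl <<'EOF'
my_speech.wav
EOF
# Show the resulting list
cat test.dbl
```

Re-running the same julius command line will then decode the files named in the new list.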
5. Run with live microphone input
To run Julius on live microphone input, save the following text as "mic.jconf":
```
-input mic
-htkconf wav_config
-h ENVR-v5.3.am
-hlist ENVR-v5.3.phn
-d ENVR-v5.3.lm
-v ENVR-v5.3.dct
-b 4000
-lmp 12 -6
-lmp2 12 -6
-fallback1pass
-multipath
-iwsp
-iwcd1 max
-spmodel sp
-no_ccd
-sepnum 150
-b2 360
-n 40
-s 2000
-m 8000
-lookuprange 5
-sb 80
-forcedict
```
and run Julius with mic.jconf instead of julius.jconf:
```
% ../julius/julius/julius -C mic.jconf -dnnconf dnn.jconf
```
Version 4.4 adds stand-alone DNN-HMM support, together with several new tools and bug fixes. See the "Release.txt" file for the full list of updates. Run with "-help" to see the full list of options.
Follow the instructions in INSTALL.txt.
Tools and Assets
There are also toolkits and assets for running Julius. They are maintained by the Julius development team and can be obtained from the following GitHub pages:
A set of Julius executables and a Japanese LM/AM. You can test 60k-word Japanese dictation with this kit. For the AM, triphone HMMs of both GMM and DNN types are included. For DNN, a front-end DNN module, separated from Julius, computes the state probabilities of the HMM for each input frame and sends them to Julius via a socket to perform real-time DNN decoding. For the LM, a 60k-word 3-gram trained on the BCCWJ corpus is included. You can get it from its GitHub page.
Documents, sample files and conversion tools to use and build a recognition grammar for Julius. You can get it from the GitHub page.
This is a handy toolkit for phoneme segmentation (i.e., phoneme alignment) of speech audio files using Julius. Given pairs of a speech audio file and its transcription, this toolkit performs Viterbi alignment to get the beginning and ending times of each phoneme. This toolkit is available at its GitHub page.
Prompter is a perl/Tkx based tiny program that displays recognition results of Julius in a scrolling caption style.
Since Julius itself is a language-independent decoding program, you can build a recognizer for a language given appropriate language and acoustic models for the target language. The recognition accuracy largely depends on the models. Julius adopts acoustic models in HTK ASCII format, pronunciation dictionaries in an almost-HTK format, and word 3-gram language models in ARPA standard format (forward 2-gram and reverse N-gram trained from the same corpus).
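For illustration, the ARPA standard format mentioned above is a plain-text listing of log10 probabilities and optional back-off weights. A toy, hand-written bigram file (not a real trained model) might look like this:

```
\data\
ngram 1=3
ngram 2=2

\1-grams:
-0.5 </s>
-99 <s> -0.3
-0.5 hello -0.3

\2-grams:
-0.2 <s> hello
-0.2 hello </s>

\end\
```

Each N-gram line is "log10-probability, the N words, and an optional back-off weight"; toolkits such as SRILM produce this format directly.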
We have already examined English dictation with Julius, and other researchers have reported that Julius also works well in English, Slovenian (see pp. 681--684 of Proc. ICSLP 2002), French, Thai, and many other languages.
Here you can get Japanese and English language/acoustic models.
Japanese language model (60k-word trained by balanced corpus) and acoustic models (triphone GMM/DNN) are included in the Japanese dictation kit. More various types of Japanese N-gram LM and acoustic models are available at CSRC. For more detail, please contact email@example.com.
There are some user-contributed English models for Julius available on the Web.
JuliusModels hosts English and Polish models for Julius. All of the models are based on the HTK modelling software and data sets freely available on the Internet. They can be downloaded from a project website created by the author for this purpose. Please note that the DNN versions of these models require minor changes, which the author has included in a modified version of Julius on GitHub at https://github.com/palles77/julius.
The VoxForge project is working on the creation of an open-source acoustic model for the English language. If you have any language or acoustic model that can be distributed as freeware, please contact us: we want to run the dictation kit on various languages other than Japanese and share them freely, to provide a free speech recognition system for various languages.
- Up-to-date document is now provided in markdown at doc/.
- All options are fully described in Options, listed in the sample configuration file Sample.jconf, and also output when Julius is invoked with "julius --help".
- The full history and short descriptions are in the Release Notes (JP version).
- For DNN-HMM, take a look at 00readme-DNN.txt for a how-to and Sample.dnnconf for an example.
Other, old documents:
- The Juliusbook 3 (English) - translated from Japanese for 3.x
- The Juliusbook 4 (Japanese) - full documentation in Japanese
- The grammar format of Julius
- Official web site (Japanese)
- Old development site, having old releases
- A. Lee and T. Kawahara. "Recent Development of Open-Source Speech Recognition Engine Julius" Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 2009.
- A. Lee, T. Kawahara and K. Shikano. "Julius --- an open source real-time large vocabulary recognition engine." In Proc. European Conference on Speech Communication and Technology (EUROSPEECH), pp. 1691--1694, 2001.
- T. Kawahara, A. Lee, T. Kobayashi, K. Takeda, N. Minematsu, S. Sagayama, K. Itou, A. Ito, M. Yamamoto, A. Yamada, T. Utsuro and K. Shikano. "Free software toolkit for Japanese large vocabulary continuous speech recognition." In Proc. Int'l Conf. on Spoken Language Processing (ICSLP) , Vol. 4, pp. 476--479, 2000.
Moving to UTF-8
The codebase has moved to UTF-8. The master branch after the release of 4.5 (2019/1/2) contains code converted to UTF-8: all files were converted, and future updates will also be committed in UTF-8.

For backward compatibility and log readability, we keep the old-encoding code in the branch "master-4.5-legacy", which preserves the legacy-encoding version of 4.5. If you want to inspect the code history from before the 4.5 release (2019/1/2), please check out that branch.
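Switching to the legacy branch uses ordinary git commands; for example, in a clone of the repository (following the `%`-prompt convention of the build step above):

```
% git checkout master-4.5-legacy
% git log -1 --oneline
```

Checking out master again returns you to the UTF-8 code.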