ctcdecode is an implementation of CTC (Connectionist Temporal Classification) beam search decoding for PyTorch. The C++ code is borrowed liberally from PaddlePaddle's DeepSpeech. It includes swappable scorer support, enabling both standard beam search and KenLM-based decoding. If you are new to the concepts of CTC and beam search, please visit the Resources section, where we link a few tutorials explaining why they are needed.
It seems like no one is maintaining the old repo anymore. I just want to fix some compatibility issues in third-party packages. The package was successfully installed with PyTorch 2.1.0, and I want to share it for anyone who is struggling with the installation. If you meet any other compatibility problem, feel free to raise issues!
The modification is very simple: compilation failed because dynamic exception specifications are no longer permitted since C++17, so I just replaced every `throw(xxx)` specifier with `noexcept(false)`.
Note that the functionality of this repo is NOT guaranteed; I don't have time to test everything, though I think it should work properly. :)
The library is largely self-contained and requires only PyTorch. Building the C++ library requires gcc or clang. KenLM language modeling support is also optionally included, and enabled by default.
The installation below also works on Google Colab.
```bash
# get the code
git clone --recursive https://github.com/WayenVan/ctcdecode.git
cd ctcdecode && pip install .
```
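If the install finishes without errors, a quick way to sanity-check that the C++ extension built correctly is to import the decoder (this snippet is just an illustration, not part of the package):

```python
# Verify the compiled extension imports cleanly
from ctcdecode import CTCBeamDecoder
print("ctcdecode imported successfully")
```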
```python
from ctcdecode import CTCBeamDecoder

decoder = CTCBeamDecoder(
    labels,
    model_path=None,
    alpha=0,
    beta=0,
    cutoff_top_n=40,
    cutoff_prob=1.0,
    beam_width=100,
    num_processes=4,
    blank_id=0,
    log_probs_input=False
)
beam_results, beam_scores, timesteps, out_lens = decoder.decode(output)
```
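As a self-contained sketch of the above (the label set and the random activations here are made up purely for illustration), an end-to-end call might look like:

```python
import torch
from ctcdecode import CTCBeamDecoder

# Hypothetical label set: blank token "_" at index 0, then space and letters
labels = ["_", " "] + list("abcdefghijklmnopqrstuvwxyz")
decoder = CTCBeamDecoder(labels, beam_width=100, blank_id=0, log_probs_input=False)

# Fake network output: BATCHSIZE x N_TIMESTEPS x N_LABELS probabilities
output = torch.nn.functional.softmax(torch.randn(1, 50, len(labels)), dim=2)

beam_results, beam_scores, timesteps, out_lens = decoder.decode(output)
best = beam_results[0][0][:out_lens[0][0]]
print("".join(labels[int(n)] for n in best))
```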
- `labels` are the tokens you used to train your model. They should be in the same order as your outputs. For example, if your tokens are the English letters and you used 0 as your blank token, then you would pass in `list("_abcdefghijklmnopqrstuvwxyz")` as your argument to `labels`.
- `model_path` is the path to your external KenLM language model (LM). Default is None.
- `alpha` is the weight associated with the LM's probabilities. A weight of 0 means the LM has no effect.
- `beta` is the weight associated with the number of words within our beam.
- `cutoff_top_n` is the cutoff number in pruning. Only the top `cutoff_top_n` characters with the highest probability in the vocab will be used in beam search.
- `cutoff_prob` is the cutoff probability in pruning. 1.0 means no pruning.
- `beam_width` controls how broad the beam search is. Higher values are more likely to find top beams, but they also make your beam search exponentially slower. Furthermore, the longer your outputs, the more time large beams will take. This is an important parameter that represents a tradeoff you need to make based on your dataset and needs.
- `num_processes` parallelizes the batch using `num_processes` workers. You probably want to pass the number of CPUs your computer has. You can find this in Python with `import multiprocessing` then `n_cpus = multiprocessing.cpu_count()`. Default 4.
- `blank_id` should be the index of the CTC blank token (probably 0).
- `log_probs_input`: if your outputs have passed through a softmax and represent probabilities, this should be False; if they passed through a LogSoftmax and represent log probabilities, you need to pass True. If you don't understand this, run `print(output[0][0].sum())`: if it's a negative number you've probably got log probabilities and need to pass True; if it sums to ~1.0 you should pass False. Default False.
`output` should be the output activations from your model. If your output has passed through a SoftMax layer, you shouldn't need to alter it (except maybe to transpose it), but if your `output` contains log probabilities (e.g. it passed through a LogSoftmax), pass `log_probs_input=True` to the decoder, and if it contains raw logits, pass it through an additional `torch.nn.functional.softmax` first. Your output should be BATCHSIZE x N_TIMESTEPS x N_LABELS, so you may need to transpose it before passing it to the decoder. Note that if you pass things in the wrong order, the beam search will probably still run; you'll just get back nonsense results.
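For instance, if your model emits raw logits in the common N_TIMESTEPS x BATCHSIZE x N_LABELS layout, the preparation might look like the sketch below (the shapes are assumptions for illustration; `labels` and `decoder` are from the snippet above):

```python
import torch

# Hypothetical raw logits in T x B x C layout (50 timesteps, batch of 1)
logits = torch.randn(50, 1, len(labels))
probs = torch.nn.functional.softmax(logits, dim=2)   # logits -> probabilities
probs = probs.transpose(0, 1).contiguous()           # -> B x T x C, as the decoder expects
beam_results, beam_scores, timesteps, out_lens = decoder.decode(probs)
```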
Four things are returned from `decode`:
- `beam_results` - Shape: BATCHSIZE x N_BEAMS x N_TIMESTEPS. A batch containing the series of characters (these are ints; you still need to decode them back to your text) representing the results from a given beam search. Note that the beams are almost always shorter than the total number of timesteps, and the additional data is nonsensical, so to see the top beam (as int labels) from the first item in the batch, you need to run `beam_results[0][0][:out_lens[0][0]]`.
- `beam_scores` - Shape: BATCHSIZE x N_BEAMS. A batch with the approximate CTC score of each beam (look at the code for more info). If these are negative log likelihoods, you can get the model's confidence that the beam is correct with `p = 1/np.exp(beam_score)`.
- `timesteps` - Shape: BATCHSIZE x N_BEAMS. The timestep at which the nth output character has peak probability. Can be used as an alignment between the audio and the transcript.
- `out_lens` - Shape: BATCHSIZE x N_BEAMS. `out_lens[i][j]` is the length of the jth beam result for item i of your batch.
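As a small sketch of how these fit together (assuming the `labels` list from above, and that `timesteps` can be indexed per beam the same way as `beam_results`), you could print a rough character-level alignment for the top beam:

```python
# Walk the top beam of the first batch item alongside its peak timesteps
n = out_lens[0][0]
for tok, t in zip(beam_results[0][0][:n], timesteps[0][0][:n]):
    print(labels[int(tok)], int(t))   # character and the frame where it peaked
```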
```python
import torch
import ctcdecode
from ctcdecode import OnlineCTCBeamDecoder

decoder = OnlineCTCBeamDecoder(
    labels,
    model_path=None,
    alpha=0,
    beta=0,
    cutoff_top_n=40,
    cutoff_prob=1.0,
    beam_width=100,
    num_processes=4,
    blank_id=0,
    log_probs_input=False
)

state1 = ctcdecode.DecoderState(decoder)
probs_seq = torch.FloatTensor([probs_seq])

beam_results, beam_scores, timesteps, out_seq_len = decoder.decode(probs_seq[:, :2], [state1], [False])
beam_results, beam_scores, timesteps, out_seq_len = decoder.decode(probs_seq[:, 2:], [state1], [True])
```
The online decoder mirrors the CTCBeamDecoder interface, but it additionally requires a list of states and a list of is_eos flags. States are used to accumulate sequences of chunks, each state corresponding to one data source. The is_eos flag tells the decoder whether chunks have stopped being pushed to the corresponding state.
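For example, decoding an arbitrary number of chunks from a single source might look like the sketch below (the `chunks` iterable is hypothetical; each chunk is BATCHSIZE x CHUNK_LEN x N_LABELS, and `decoder` is the OnlineCTCBeamDecoder from above):

```python
# Feed chunks to the online decoder as they arrive; pass True for is_eos on the last one
state = ctcdecode.DecoderState(decoder)
for i, chunk in enumerate(chunks):
    is_last = i == len(chunks) - 1
    beam_results, beam_scores, timesteps, out_seq_len = decoder.decode(
        chunk, [state], [is_last]
    )
# After the final call, the results cover the full accumulated sequence
```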
Get the top beam for the first item in your batch:

```python
beam_results[0][0][:out_lens[0][0]]
```
Get the top 50 beams for the first item in your batch:

```python
for i in range(50):
    print(beam_results[0][i][:out_lens[0][i]])
```
Note that these will be lists of ints that need decoding. You likely already have a function to decode from int to text, but if not you can do something like:

```python
"".join(labels[n] for n in beam_results[0][0][:out_lens[0][0]])
```

using the `labels` you passed in to CTCBeamDecoder.