Commit
* Initial squeezeformer impl
* Start time reduce and recovery
* Working commit of time reduction and time recovery modules
* Fix issue with number of params being incorrect
* Add initializations to the model
* Fix scheduler
* Remove float()
* Correct order of operations
* Correct order of operations
* Update time reduce PE to only update PE and nothing else
* Fix initialization
* Fix PE usage
* Comment out k2 for now
* Add usage comments to buffered ctc script
* Update docs
* Add squeezeformer configs for CTC
* Mark squeezeformer as experimental
* Add Jenkinsfile test
* Add Jenkinsfile test
* Fix style
* Replace all with /content/
* Try Jenkinsfile fix with closure
* Update ctc config
* Update ctc config
* Update ctc config
* Add squeezeformer
* Add squeezeformer
* Fix Jenkinsfile
* Fix Jenkinsfile
* Try closure
* Remove test
* Add back squeezeformer test
* Remove script tag
* Update for review comments
* Remove experimental
* Correct an issue with RNNT alignments
* Correct an issue with RNNT metrics
* Code formatting
* Correct offset calculation for no look ahead

Signed-off-by: smajumdar <smajumdar@nvidia.com>
Showing 22 changed files with 1,402 additions and 21 deletions.
examples/asr/conf/squeezeformer/squeezeformer_ctc_bpe.yaml (201 additions, 0 deletions)
@@ -0,0 +1,201 @@
# This config contains the default values for training a Squeezeformer-CTC ASR model, large size (~120M params), with CTC loss and sub-word encoding.

# Architecture and training config:
# Default learning parameters in this config are set for an effective batch size of 2K. To train with smaller effective
# batch sizes, you may need to re-tune the learning parameters or use a higher accumulate_grad_batches.
# Here are the recommended configs for different variants of Squeezeformer-CTC; other parameters are the same as in this config file.
# One extra layer (compared to the original paper) is added to the medium and large variants to compensate for replacing the LSTM decoder with a linear one.
#
# | Model        | d_model | n_layers | n_heads | time_masks | lr     | time_reduce_idx | GBS  |
# |--------------|---------|----------|---------|------------|--------|-----------------|------|
# | Extra-Small  | 144     | 16       | 4       | 5          | 2e-3   | 7               | 1024 |
# | Small        | 196     | 18       | 4       | 5          | 2e-3   | 8               | 1024 |
# | Small-Medium | 256     | 16       | 4       | 5          | 1.5e-3 | 7               | 1024 |
# | Medium       | 324     | 20       | 4       | 7          | 1.5e-3 | 9               | 1024 |
# | Medium-Large | 512     | 18       | 8       | 10         | 1e-3   | 8               | 2048 |
# | Large        | 640     | 22       | 8       | 10         | 5e-4   | 10              | 2048 |
#
# You may find more info about Squeezeformer-CTC here: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/models.html#squeezeformer-ctc
# Pre-trained models of Squeezeformer-CTC can be found here: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/results.html
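# Example launch command (a sketch, not part of the original config): the script location,
# relative config path, and manifest/tokenizer paths below are assumptions following common
# NeMo conventions; adjust them to your NeMo version and data layout.
#
#   python examples/asr/asr_ctc/speech_to_text_ctc_bpe.py \
#     --config-path=../conf/squeezeformer --config-name=squeezeformer_ctc_bpe \
#     model.train_ds.manifest_filepath=/path/to/train_manifest.json \
#     model.validation_ds.manifest_filepath=/path/to/dev_manifest.json \
#     model.tokenizer.dir=/path/to/tokenizer_dir \
#     trainer.devices=-1 trainer.accumulate_grad_batches=8
#
# The effective (global) batch size is train_ds.batch_size * (number of GPUs) * trainer.num_nodes
# * trainer.accumulate_grad_batches; pick accumulate_grad_batches (or batch_size) so it matches
# the GBS column of the table above for the chosen variant.
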
name: "Squeezeformer-CTC-BPE" | ||
|
||
model: | ||
sample_rate: 16000 | ||
log_prediction: true # enables logging sample predictions in the output during training | ||
ctc_reduction: 'mean_batch' | ||
skip_nan_grad: false | ||
|
||
train_ds: | ||
manifest_filepath: ??? | ||
sample_rate: ${model.sample_rate} | ||
batch_size: 8 # you may increase batch_size if your memory allows | ||
shuffle: true | ||
num_workers: 8 | ||
pin_memory: true | ||
use_start_end_token: false | ||
trim_silence: false | ||
max_duration: 16.7 # it is set for LibriSpeech, you may need to update it for your dataset | ||
min_duration: 0.1 | ||
# tarred datasets | ||
is_tarred: false | ||
tarred_audio_filepaths: null | ||
shuffle_n: 2048 | ||
# bucketing params | ||
bucketing_strategy: "synced_randomized" | ||
bucketing_batch_size: null | ||
|
||
validation_ds: | ||
manifest_filepath: ??? | ||
sample_rate: ${model.sample_rate} | ||
batch_size: 8 # you may increase batch_size if your memory allows | ||
shuffle: false | ||
num_workers: 8 | ||
pin_memory: true | ||
use_start_end_token: false | ||
|
||
test_ds: | ||
manifest_filepath: null | ||
sample_rate: ${model.sample_rate} | ||
batch_size: 8 # you may increase batch_size if your memory allows | ||
shuffle: false | ||
num_workers: 8 | ||
pin_memory: true | ||
use_start_end_token: false | ||
|
||
# recommend small vocab size of 128 or 256 when using 4x sub-sampling | ||
# you may find more detail on how to train a tokenizer at: /scripts/tokenizers/process_asr_text_tokenizer.py | ||
tokenizer: | ||
dir: ??? # path to directory which contains either tokenizer.model (bpe) or vocab.txt (wpe) | ||
type: bpe # Can be either bpe (SentencePiece tokenizer) or wpe (WordPiece tokenizer) | ||
|
||
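  # Example of building a tokenizer with the script referenced above (a sketch; the exact flag
  # names should be verified against the script in your NeMo version, and all paths are
  # placeholders):
  #
  #   python scripts/tokenizers/process_asr_text_tokenizer.py \
  #     --manifest=/path/to/train_manifest.json \
  #     --data_root=/path/to/tokenizers \
  #     --vocab_size=128 \
  #     --tokenizer=spe \
  #     --spe_type=bpe
  #
  # Then set model.tokenizer.dir to the directory the script generates.
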
  preprocessor:
    _target_: nemo.collections.asr.modules.AudioToMelSpectrogramPreprocessor
    sample_rate: ${model.sample_rate}
    normalize: "per_feature"
    window_size: 0.025
    window_stride: 0.01
    window: "hann"
    features: 80
    n_fft: 512
    log: true
    frame_splicing: 1
    dither: 0.00001
    pad_to: 0
    pad_value: 0.0

  spec_augment:
    _target_: nemo.collections.asr.modules.SpectrogramAugmentation
    freq_masks: 2 # set to zero to disable it
    # you may use lower time_masks for smaller models to have a faster convergence
    time_masks: 10 # set to zero to disable it
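    # Pick this value from the time_masks column of the variant table at the top of this file
    # (e.g. 5 for the Extra-Small/Small variants, 10 for the Medium-Large/Large variants).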
    freq_width: 27
    time_width: 0.05

  encoder:
    _target_: nemo.collections.asr.modules.SqueezeformerEncoder
    feat_in: ${model.preprocessor.features}
    feat_out: -1 # you may set it if you need an output size different from the default d_model
    n_layers: 18
    d_model: 512

    # Squeezeformer params
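    # A note on the two parameters below: in the Squeezeformer design the sequence is
    # downsampled 2x in time at one encoder layer and the original resolution is recovered
    # near the output. time_reduce_idx selects the layer where the reduction happens and
    # time_recovery_idx the layer where it is undone (null uses the implementation default,
    # typically the final block); see the SqueezeformerEncoder docstring for the authoritative
    # description.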
    adaptive_scale: true
    time_reduce_idx: 8
    time_recovery_idx: null

    # Sub-sampling params
    subsampling: dw_striding # dw_striding, vggnet, striding or stacking, vggnet may give better results but needs more memory
    subsampling_factor: 4 # must be power of 2
    subsampling_conv_channels: -1 # -1 sets it to d_model
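    # With the preprocessor's window_stride of 0.01 s and subsampling_factor of 4, each frame
    # entering the Squeezeformer blocks covers roughly 4 * 10 ms = 40 ms of audio.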
    # Feed forward module's params
    ff_expansion_factor: 4

    # Multi-headed Attention Module's params
    self_attention_model: rel_pos # rel_pos or abs_pos
    n_heads: 8 # may need to be lower for smaller d_models
    # [left, right] specifies the number of steps to be seen from left and right of each step in self-attention
    att_context_size: [-1, -1] # -1 means unlimited context
    xscaling: true # scales up the input embeddings by sqrt(d_model)
    untie_biases: true # unties the biases of the TransformerXL layers
    pos_emb_max_len: 5000

    # Convolution module's params
    conv_kernel_size: 31
    conv_norm_type: 'batch_norm' # batch_norm or layer_norm

    ### regularization
    dropout: 0.1 # The dropout used in most of the Conformer Modules
    dropout_emb: 0.0 # The dropout used for embeddings
    dropout_att: 0.1 # The dropout for multi-headed attention modules

  decoder:
    _target_: nemo.collections.asr.modules.ConvASRDecoder
    feat_in: null
    num_classes: -1
    vocabulary: []
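    # These placeholder values are normally filled in when the model is built: feat_in from the
    # encoder output size, and num_classes/vocabulary from the tokenizer, so they usually do not
    # need to be edited by hand.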
  optim:
    name: adamw
    lr: 0.001
    # optimizer arguments
    betas: [0.9, 0.98]
    # less necessity for weight_decay as we already have large augmentations with SpecAug
    # you may need weight_decay for large models, stable AMP training, small datasets, or when lower augmentations are used
    # weight decay of 0.0 with lr of 2.0 also works fine
    weight_decay: 4e-5

    # scheduler setup
    sched:
      name: NoamHoldAnnealing
      # scheduler config override
      warmup_steps: 5000 # paper uses ~ 6500 steps (20 epochs) out of 500 epochs.
      warmup_ratio: null
      hold_steps: 40000
      hold_ratio: null # paper uses ~ 40000 steps (160 epochs) out of 500 epochs.
      decay_rate: 1.0 # Noam decay = 0.5 and no hold steps. For Squeezeformer, use hold ~ 10-30% of training, then faster decay.
      min_lr: 1e-5
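      # Rough shape of this schedule: the LR warms up to optim.lr over warmup_steps, holds near
      # the peak for about hold_steps, then anneals toward min_lr, with decay_rate controlling
      # how aggressive the decay is (1.0 decays faster than the 0.5 exponent of a plain Noam
      # schedule). Check the NoamHoldAnnealing implementation for the exact behavior.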
trainer:
  devices: -1 # number of GPUs, -1 would use all available GPUs
  num_nodes: 1
  max_epochs: 1000
  max_steps: null # computed at runtime if not set
  val_check_interval: 1.0 # Set to 0.25 to check 4 times per epoch, or an int for number of iterations
  accelerator: auto
  strategy: ddp
  accumulate_grad_batches: 1
  gradient_clip_val: 0.0
  precision: 32 # Should be set to 16 for O1 and O2 to enable the AMP.
  log_every_n_steps: 10 # Interval of logging.
  progress_bar_refresh_rate: 10
  resume_from_checkpoint: null # The path to a checkpoint file to continue the training, restores the whole state including the epoch, step, LR schedulers, apex, etc.
  num_sanity_val_steps: 0 # number of validation steps to run before training to sanity-check the validation process, setting to 0 disables it
  check_val_every_n_epoch: 1 # number of evaluations on validation every n epochs
  sync_batchnorm: true
  enable_checkpointing: False # Provided by exp_manager
  logger: false # Provided by exp_manager
  benchmark: false # needs to be false for models with variable-length speech input as it slows down training

exp_manager:
  exp_dir: null
  name: ${name}
  create_tensorboard_logger: true
  create_checkpoint_callback: true
  checkpoint_callback_params:
    # in case of multiple validation sets, first one is used
    monitor: "val_wer"
    mode: "min"
    save_top_k: 5
    always_save_nemo: True # saves the checkpoints as nemo files instead of PTL checkpoints

  # you need to set these two to True to continue the training
  resume_if_exists: false
  resume_ignore_no_checkpoint: false

  # You may use this section to create a W&B logger
  create_wandb_logger: false
  wandb_logger_kwargs:
    name: null
    project: null