Sslmultitask (#8)
* Megatron BART BOS / EOS bug fix (#4495)

* 1. Debugging.

Signed-off-by: Micha Livne <mlivne@cs.toronto.edu>

* 1. BART dataset fixes missing <EOS> for decoder output.

Signed-off-by: Micha Livne <mlivne@cs.toronto.edu>

* 1. Debugging.

Signed-off-by: Micha Livne <mlivne@cs.toronto.edu>

* 1. Debugging.

Signed-off-by: Micha Livne <mlivne@cs.toronto.edu>

* 1. Removed extra padding from BARTDataset.

Signed-off-by: Micha Livne <mlivne@cs.toronto.edu>
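
The decoder <EOS> fix above is easiest to see in a small sketch. The following is a minimal, hypothetical illustration (names and token ids are not the actual NeMo BARTDataset code) of appending <EOS> to the decoder-output labels while <BOS> starts the decoder input:

```python
def build_decoder_tensors(target_ids, bos_id, eos_id, pad_id, max_len):
    """Build decoder input (<BOS> + target) and labels (target + <EOS>), padded to max_len."""
    decoder_input = [bos_id] + target_ids
    labels = target_ids + [eos_id]  # the <EOS> that was missing from the decoder output

    # Pad (or truncate) both sequences to a fixed length so they can be batched.
    decoder_input = (decoder_input + [pad_id] * max_len)[:max_len]
    labels = (labels + [pad_id] * max_len)[:max_len]
    return decoder_input, labels


# Example: target "hello world" tokenized as [17, 42], with bos=0, eos=1, pad=2
inputs, labels = build_decoder_tensors([17, 42], bos_id=0, eos_id=1, pad_id=2, max_len=5)
# inputs -> [0, 17, 42, 2, 2]; labels -> [17, 42, 1, 2, 2]
```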

* GPT Prompt Learning Improvements (#4496)

* Updated pipeline parallel code to speed up training

Signed-off-by: Virginia Adams <vadams@nvidia.com>

* Load global batch size not local mini batch size

Signed-off-by: Virginia Adams <vadams@nvidia.com>

* Python reformatting

Signed-off-by: Virginia Adams <vadams@nvidia.com>

* Megatron perceiver with tensor parallelism only (#4318)

* Temp

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Add megatron dataset

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Update config and fix global batch fetcher

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Add dataset class

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Update comments

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Style

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Update yaml

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Fix duplicate yaml key

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Translate method and preprocess script for raw text

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Style

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Remove pdb

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Fix arg name

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Fix other arg

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Change sampler back

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Move back to global batch fetcher to use distributed sampler

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Add text memmap data

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Update monitor

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Fixes for PP

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Remove unused import

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Truncate examples in text memmap

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* NMT training batch interpolation key

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* tarred data fix

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Change dataset type check

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Fix sampler

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Pass dataset cfg to determine type

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Log global step on validation step as well

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Fix NMT model saving with artifacts

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Initialize DDP in decode if not initialized. Needed for inference only mode

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Megatron NMT inference script

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Inference config file

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* hardcode max delta temporarily

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Style

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* detokenizer if processor is not none

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Sampler config

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Compat with configs without sampler arg

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Style

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Comment for validation dataset type

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Fix tokenizer building

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* CI test for megatron nmt

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Fix tokenizer in restore

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Style

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* O2 restore from fix

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Remove print

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Change tokenizer model name in config

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Logging

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Set seed for distributed sampler

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Cluster debugging messages

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Style

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Fix max generation delta

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* No LM Init

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Use nlp save restore connector

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Remove useless infer args

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Typo

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* UTF8 safe print of translation result

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Style

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Add save restore connector back with comment

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Refactor

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Fix CI test

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Add missing args

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Address comments

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Empty to restart

* Fix CI test

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Check for test ds

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* set fusion to false

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Initial perceiver encoder

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Perceiver with PP=1

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Remove init cross attn

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* CI test and remove init cross attn arg

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Remove init cross attn layers from file

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Style

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Clean up

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* update branch

Signed-off-by: ericharper <complex451@gmail.com>

* Set headscale false (#4364)

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Add wandb as dependency (#4365)

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Raise trainer error (#4356)

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

Co-authored-by: Micha Livne <michalivne@users.noreply.github.com>

* Set headscale false (#4364) (#4366)

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Finetuning changes for BART (#4003)

* Temp

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Checkpoint converter to nemo for bart

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Style

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

Co-authored-by: Micha Livne <michalivne@users.noreply.github.com>

* Make position embedding expansion specific to a batch to avoid checkpoint size mismatches (#4357)

* Style

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Fix logging warning

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

Co-authored-by: Micha Livne <michalivne@users.noreply.github.com>
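
A rough, hypothetical sketch of the idea behind the batch-specific position-embedding expansion above: produce a longer table transiently for an oversized batch instead of resizing the stored parameter, which would change its checkpointed shape. The interpolation strategy and function names here are illustrative assumptions, not the actual Megatron/NeMo implementation.

```python
import torch
import torch.nn.functional as F

def positions_for_batch(pos_emb_weight: torch.Tensor, seq_len: int) -> torch.Tensor:
    """Return position embeddings for seq_len positions without touching the stored table.

    pos_emb_weight: [max_positions, hidden] parameter loaded from the checkpoint.
    """
    max_positions, hidden = pos_emb_weight.shape
    if seq_len <= max_positions:
        return pos_emb_weight[:seq_len]

    # Build a temporary, batch-local expansion; the parameter itself keeps its
    # original shape, so saving/restoring the checkpoint stays consistent.
    expanded = F.interpolate(
        pos_emb_weight.detach().t().unsqueeze(0),  # [1, hidden, max_positions]
        size=seq_len,
        mode="linear",
        align_corners=False,
    )
    return expanded.squeeze(0).t()                 # [seq_len, hidden]
```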

* Refactor bias act fusion

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Update NMT config

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Fix electronic bug, new time ITN rule (#4355)

* fix electronic bug

Signed-off-by: ekmb <ebakhturina@nvidia.com>

* add new itn time rule

Signed-off-by: ekmb <ebakhturina@nvidia.com>

* revert domain changes

Signed-off-by: ekmb <ebakhturina@nvidia.com>

* remove repetition

Signed-off-by: ekmb <ebakhturina@nvidia.com>

* Update ci tests

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Correct support for dataclasses in default module dim (#4372)

* Correct support for dataclasses in default module dim

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Fix path for save of results

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* fix pad id bug (#4377)

Signed-off-by: Yi Dong <yidong@nvidia.com>

* Question answering bug fix (#4381)

* refactor dialogue state tracking for modelling/dataset interoperability

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* fix style changes

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* fix typo

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* fix style raised by lgtm

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* fix style formatting

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update template to include description of intent

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkinsfile

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* changes based on requests in review

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* add compatibility with assistant dataset

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkins

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* remove dialogue_state_tracking

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update huggingface utils for dialogue

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* rename dialogue_state_tracking_hybrid to dialogue_state_tracking_sgdqa

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* fix style

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix nemo/collections/nlp/models/dialogue_state_tracking_sgdqa/__init__.py

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkinsfile for SGDGEN

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkinsfile for SGDGEN

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkinsfile for SGDGEN

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkinsfile for SGDGEN

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkinsfile for SGDGEN

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* fix typo

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* add docstrings for assistant data processor

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkins for SGDGEN local checkpoint

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update style

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* use local vocab file for Jenkinsfile

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* patch for Jenkins CI using local file

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* add slot filling prediction and metrics

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* remove unused code

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* refactor metrics code out of Dialogue GPT Model

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* integrate backward compatible support for IntentSlotClassificationModel (bert model)

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* save prediction file for IntentSlotClassification

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update dialogue gpt model training for megatron gpt

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* remove batch generate for HF GPT2, which causes lower performance

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* add few shot capability to dialogue gpt model

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkinsfile and remove unused import

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update code description and clarity

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* address PR comments

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* integrate compatibility with ZeroShotIntentModel

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* rename folder to dialogue due to increased scope and further refactor for clarity

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* added dialogue GPT for sequence generation task (e.g. answer extender)

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* add CI test for DialogueGPTGenerationModel

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* integrate DialogueS2SGenerationModel for generation task (e.g. answer extender)

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* modify huggingface utils to support HF t5/BART models

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* remove unused imports

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkinsfile

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkinsfile

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update bleu metric

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* fix bleu metric style

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* debug bleu metric

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* debug bleu metric

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update based on PR #3893

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update 2 based on PR #3893

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update 3 based on PR #3893

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* integrate sgd generation based on user utterance and system slot-values to generate system utterance

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* add validation model saving capabilities

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* cleaned up code for SGD Based Answer extender

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Dialogue Generation CI

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkinsfile

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkinsfile

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* fix Jenkins CI issue

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* add support for design dataset

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* remove unnecessary imports

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkins

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update jenkins

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update jenkins

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* support megatron for dialogue_s2s_generation_model

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* reduce loaded samples in MSMarcoDataProcessor to 64 when cfg.model.dataset.debug_mode=True

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update CI

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update checkpoint and predictions filename to include epoch number

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* integrate HF BART MNLI into zero shot intent model

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* integrate Dialogue Nearest Neighbour Model

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkins

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkins

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* refactor Dialogue SGD Data Processor to make interface for models cleaner

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update jenkins

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Dialogue S2S Generation model for DialogueSGDDataProcessor interface

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update jenkins

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update jenkins

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* support sgd and drive thru datasets by zero shot model and nearest neighbour model

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* add prediction saving code to nearest neighbour and zero shot intent models

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* fix typo in sgd data processor

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* integrate Dialogue Mellon QA Data Processor

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update mellon qa

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update dialogue.py to remove outdated info

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update dialogue_config.yaml

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update dialogue_config.yaml

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* add dialogue docs

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* address review comments

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix for cfg

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* make dependency on apex optional

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* change NLPDDPPlugin calling logic to make it possible to run without apex

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* add first draft of tutorial

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* reduce ms marco size by removing lines without wellFormedAnswers

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* address pr comments

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update colab tutorial link in dialogue docs

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* include unit test and some refactor to facilitate unit test

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* address pr issues

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* remove typos in dialogue tutorial

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* support larger files for question answering

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* remove unnecessary artifacts to reduce memory use

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* put 0 tensor to device

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update link within dialogue tutorial

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* restore previously deleted files

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update error handling when loss = nan

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update nan handling

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update spanning loss func

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update spanning loss

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* fix type error raised in qa_dataset.py

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* add error checking message

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* revert back to float32

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* revert back to float32

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update error msgs

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update error msgs

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update error msgs

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update error msgs

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update error msgs

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update error msgs

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update error msgs

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update error msgs

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update exp logging

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update error msgs

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update loading of large file from pickle to json

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update loading of large file from pickle to json

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* limit number of negative samples

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* revert post processing

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* revert post processing

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* remove unused methods and style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* add more documentation

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* remove unused imports

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* changes based on PR review

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* set wandb logger false by default

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

* style fix

* correct typo

* style fix

* style fix

Co-authored-by: Zhilin Wang <zhilinw@nvidia.com>
Co-authored-by: Oleksii Kuchaiev <okuchaiev@users.noreply.github.com>
Co-authored-by: Yang Zhang <yzhang123@users.noreply.github.com>
Co-authored-by: Eric Harper <complex451@gmail.com>
Co-authored-by: Sandeep Subramanian <sandeep.subramanian.1@umontreal.ca>

* Fix ASR Typos in tutorials (#4384)

* Fix typos

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Quick wav2vec fix. In-place operation adding convolutional positions to encoder was overwriting leaf history. Wasn't caught on previous torch versions. (#4383)

Signed-off-by: tbartley94 <tbartley@nvidia.com>

Co-authored-by: tbartley94 <tbartley@nvidia.com>
(cherry picked from commit 0322b158f26a0b690edca7a84714e33752283923)

Co-authored-by: Travis Bartley <Travismbartley@gmail.com>
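
The wav2vec fix above is about autograd rather than the arithmetic: adding the convolutional position embeddings in place clobbered the gradient history of the tensor it modified. A tiny, self-contained PyTorch illustration of the pattern (unrelated to the actual wav2vec module code):

```python
import torch

x = torch.randn(2, 8, 16, requires_grad=True)  # stand-in for encoder features (a leaf tensor)
pos = torch.randn(2, 8, 16)                    # stand-in for convolutional position embeddings

# Problematic pattern: an in-place add on a leaf that requires grad.
# Recent torch versions raise
#   "a leaf Variable that requires grad is being used in an in-place operation",
# while older versions could silently corrupt the gradient history.
# x += pos

# Safe pattern: out-of-place addition returns a new tensor and leaves history intact.
y = x + pos
y.sum().backward()
print(x.grad.shape)  # torch.Size([2, 8, 16])
```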

* Add Docs for NeMo Adapters (#4369)

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Update NeMo docs (#4397)

Signed-off-by: smajumdar <smajumdar@nvidia.com>

Co-authored-by: Eric Harper <complex451@gmail.com>

* Punctuation and capitalization tests race condition (#4399)

* Add draft of race condition fixes

Signed-off-by: PeganovAnton <peganoff2@mail.ru>

* Minor improvements

Signed-off-by: PeganovAnton <peganoff2@mail.ru>

* More race condition fixes

Signed-off-by: PeganovAnton <peganoff2@mail.ru>

* Improve error message

Signed-off-by: PeganovAnton <peganoff2@mail.ru>

* Improve error message

Signed-off-by: PeganovAnton <peganoff2@mail.ru>

* Improve error message

Signed-off-by: PeganovAnton <peganoff2@mail.ru>

* bias act fusion changes

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Address comments

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Fix geglu without fusion

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Reset files to main

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Remove hidden blocks

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Fix style

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

Co-authored-by: Micha Livne <michalivne@users.noreply.github.com>
Co-authored-by: Abhinav Khattar <aklife97@gmail.com>
Co-authored-by: ericharper <complex451@gmail.com>
Co-authored-by: Somshubra Majumdar <titu1994@gmail.com>
Co-authored-by: Evelina <10428420+ekmb@users.noreply.github.com>
Co-authored-by: Yi Dong <43824965+yidong72@users.noreply.github.com>
Co-authored-by: Zhilin Wang <wangzhilin12061996@hotmail.com>
Co-authored-by: Zhilin Wang <zhilinw@nvidia.com>
Co-authored-by: Oleksii Kuchaiev <okuchaiev@users.noreply.github.com>
Co-authored-by: Yang Zhang <yzhang123@users.noreply.github.com>
Co-authored-by: Travis Bartley <Travismbartley@gmail.com>
Co-authored-by: PeganovAnton <peganoff2@mail.ru>

* NMESC speaker counting algorithm update (#4500)

* initial commit

Signed-off-by: Taejin Park <tango4j@gmail.com>

* style fix

Signed-off-by: Taejin Park <tango4j@gmail.com>

* Default maj_vote = False, max_rp=0.25

Signed-off-by: Taejin Park <tango4j@gmail.com>

* doc strings and style fix

Signed-off-by: Taejin Park <tango4j@gmail.com>

* Docstring minor edit

Signed-off-by: Taejin Park <tango4j@gmail.com>

* Default False in the functions

Signed-off-by: Taejin Park <tango4j@gmail.com>

* fixed repeated variable

Signed-off-by: Taejin Park <tango4j@gmail.com>

* Default as maj_vote=False

Signed-off-by: Taejin Park <tango4j@gmail.com>

* removed redundant part in write_rttm func

Signed-off-by: Taejin Park <tango4j@gmail.com>

* Removed unused function

Signed-off-by: Taejin Park <tango4j@gmail.com>

* Updated and tested silence and very short samples

Signed-off-by: Taejin Park <tango4j@gmail.com>

* style fix

Signed-off-by: Taejin Park <tango4j@gmail.com>

* Style fix and removing unnecessary parts

Signed-off-by: Taejin Park <tango4j@gmail.com>

* unused variables are removed

Signed-off-by: Taejin Park <tango4j@gmail.com>

* Fixed commented torch.jit.script

Signed-off-by: Taejin Park <tango4j@gmail.com>

* majority voting update

Signed-off-by: Taejin Park <tango4j@gmail.com>

* cancelling the update on speaker_utils and clus_diarizer

Signed-off-by: Taejin Park <tango4j@gmail.com>

* style fix

Signed-off-by: Taejin Park <tango4j@gmail.com>

* bug fix

Signed-off-by: Taejin Park <tango4j@gmail.com>

* Added fp32 conversion for torch.mm

Signed-off-by: Taejin Park <tango4j@gmail.com>

Co-authored-by: Nithin Rao <nithinrao.koluguri@gmail.com>

* Fix dataset parameter typo on tacotron2 example yaml (#4471)

Signed-off-by: saarus72 <saarus72@gmail.com>

Co-authored-by: Xuesong Yang <1646669+XuesongYang@users.noreply.github.com>

* Noam lr sched: do not force min_lr after max_steps (#4472)

Signed-off-by: Adrian Lancucki <alancucki@users.noreply.github.com>

Co-authored-by: Adrian Lancucki <alancucki@users.noreply.github.com>
Co-authored-by: Xuesong Yang <1646669+XuesongYang@users.noreply.github.com>
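
For reference, a hedged sketch of a Noam-style schedule in which min_lr acts only as a floor rather than being forced once max_steps is passed; the exact parameterisation of NeMo's scheduler may differ.

```python
def noam_lr(step: int, d_model: int = 512, warmup_steps: int = 4000,
            base_lr: float = 1.0, min_lr: float = 0.0) -> float:
    """Illustrative Noam schedule: warmup then step**-0.5 decay, with min_lr as a floor."""
    step = max(step, 1)
    lr = base_lr * (d_model ** -0.5) * min(step ** -0.5, step * warmup_steps ** -1.5)
    # min_lr clamps from below; it is not a value the schedule jumps to after max_steps.
    return max(lr, min_lr)

# Example: lr peaks around step == warmup_steps and then decays smoothly.
print(noam_lr(4000), noam_lr(40000))
```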

* Refactor for punctuation model (#4367)

* Dataloader, collector, loss and metric for multiscale diarization decoder  (#4187)

* First commit

Signed-off-by: Taejin Park <tango4j@gmail.com>

* Checked functionality and imports

Signed-off-by: Taejin Park <tango4j@gmail.com>

* fixed import issues

Signed-off-by: Taejin Park <tango4j@gmail.com>

* Removed the change made by mistake

Signed-off-by: Taejin Park <tango4j@gmail.com>

* Style fix

Signed-off-by: Taejin Park <tango4j@gmail.com>

* Fixed LGTM errors 001

Signed-off-by: Taejin Park <tango4j@gmail.com>

* Fixed LGTM and style fix

Signed-off-by: Taejin Park <tango4j@gmail.com>

* Changed docstrings

Signed-off-by: Taejin Park <tango4j@gmail.com>

* LGTM again

Signed-off-by: Taejin Park <tango4j@gmail.com>

* Removed unnecessary torch setting lines

Signed-off-by: Taejin Park <tango4j@gmail.com>

* Style fix and isort

Signed-off-by: Taejin Park <tango4j@gmail.com>

* jbalam-nv comments reflected

Signed-off-by: Taejin Park <tango4j@gmail.com>

* style fix

Signed-off-by: Taejin Park <tango4j@gmail.com>

* Reflected comments and created _diar_label.py

Signed-off-by: Taejin Park <tango4j@gmail.com>

* Typo fix and style fix

Signed-off-by: Taejin Park <tango4j@gmail.com>

* Fixed target_spks[0] index error

Signed-off-by: Taejin Park <tango4j@gmail.com>

* style fix

Signed-off-by: Taejin Park <tango4j@gmail.com>

* LGTM unused import IterDataset

Signed-off-by: Taejin Park <tango4j@gmail.com>

* revert collection doc year

Signed-off-by: Taejin Park <tango4j@gmail.com>

* Code format error in collections.py

Signed-off-by: Taejin Park <tango4j@gmail.com>

* fix collections space format error

Signed-off-by: Taejin Park <tango4j@gmail.com>

* merged main correctly

Signed-off-by: Taejin Park <tango4j@gmail.com>

* style fix

Signed-off-by: Taejin Park <tango4j@gmail.com>

* Reflected all comments and tested

Signed-off-by: Taejin Park <tango4j@gmail.com>

* style fix and LGTM

Signed-off-by: Taejin Park <tango4j@gmail.com>

* renamed rttm_filepath to rttm_file and removed self-included funcs, tested

Signed-off-by: Taejin Park <tango4j@gmail.com>

Co-authored-by: Nithin Rao <nithinrao.koluguri@gmail.com>
Signed-off-by: Matvei Novikov <mattyson.so@gmail.com>

* removed references to data_dir

Signed-off-by: Matvei Novikov <mattyson.so@gmail.com>

* added missing parameters to data preparation script

Signed-off-by: Matvei Novikov <mattyson.so@gmail.com>

* removed unnecessary file extension check

Signed-off-by: Matvei Novikov <mattyson.so@gmail.com>

* Add ASR CTC Decoding module (#4342)

* Initial commit

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Full support for decoding strategy

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Temp

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Fix labels of y_sequence

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Set support for sentencepiece subword merging

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Fix char and word based token merge alignment

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Revert incorrect change

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Update docstring

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Improve compatibility with greedy tokens and log probs

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Update scripts to use decoding strategy

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Add tests and docs

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Add tests and docs

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Fix speaker decoder timestamps

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Fix speaker decoder timestamps

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Fix decoding of ctc models

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Address reviewer comments

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Address reviewer comments

Signed-off-by: smajumdar <smajumdar@nvidia.com>
Signed-off-by: Matvei Novikov <mattyson.so@gmail.com>

* Option to disable mp in VAD via num_workers=1 (#4317)

* Option to disable mp in VAD via num_workers=1

In certain environments Python multiprocessing can deadlock. This adds a convenient way to disable it by setting num_workers to 1.

Signed-off-by: Georg Kucsko <gkucsko@gmail.com>

* add none handling

Signed-off-by: Georg Kucsko <gkucsko@gmail.com>

* additional none handling

Signed-off-by: Georg Kucsko <gkucsko@gmail.com>

Co-authored-by: fayejf <36722593+fayejf@users.noreply.github.com>
Signed-off-by: Matvei Novikov <mattyson.so@gmail.com>
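
The num_workers=1 escape hatch described above amounts to falling back to a plain loop instead of a multiprocessing pool. A minimal sketch of the pattern (helper names are illustrative, not the actual NeMo VAD utilities):

```python
import multiprocessing


def process_segments(segments, worker_fn, num_workers=1):
    """Apply worker_fn to each segment, serially when num_workers == 1.

    In environments where Python multiprocessing can deadlock, num_workers=1
    avoids creating a pool altogether.
    """
    if num_workers == 1:
        return [worker_fn(seg) for seg in segments]
    with multiprocessing.Pool(processes=num_workers) as pool:
        return pool.map(worker_fn, segments)
```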

* remove redundant bias expand (#4382)

* remove redundant bias expand

Signed-off-by: Xiaowei Ren <xren@nvidia.com>

* delete redundant code

Signed-off-by: Xiaowei Ren <xren@nvidia.com>
Signed-off-by: Matvei Novikov <mattyson.so@gmail.com>

* fixed style

Signed-off-by: Matvei Novikov <mattyson.so@gmail.com>

* Add option for specifying wandb save_dir from config (#4379)

* give option to user to specify wandb save dir via config

Signed-off-by: Shantanu Acharya <shantanua@nvidia.com>

* create save_dir directory for wandb logger if not exists

Signed-off-by: Shantanu Acharya <shantanua@nvidia.com>

* update save_dir get method with a default value

Signed-off-by: Shantanu Acharya <shantanua@nvidia.com>
Signed-off-by: Matvei Novikov <mattyson.so@gmail.com>

* Quick wav2vec fix. In-place operation adding convolutional positions to encoder was overwriting leaf history. Wasn't caught on previous torch versions. (#4383)

Signed-off-by: tbartley94 <tbartley@nvidia.com>

Co-authored-by: tbartley94 <tbartley@nvidia.com>
Signed-off-by: Matvei Novikov <mattyson.so@gmail.com>

* [Bugfix][TTS] wrong order of returned tuple for general_collate_fn. (#4388)

Signed-off-by: Xuesong Yang <1646669+XuesongYang@users.noreply.github.com>
Signed-off-by: Matvei Novikov <mattyson.so@gmail.com>

* Merge r1.10.0 main (#4398)

* update branch

Signed-off-by: ericharper <complex451@gmail.com>

* Set headscale false (#4364)

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Add wandb as dependency (#4365)

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Raise trainer error (#4356)

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

Co-authored-by: Micha Livne <michalivne@users.noreply.github.com>

* Set headscale false (#4364) (#4366)

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Finetuning changes for BART (#4003)

* Temp

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Checkpoint converter to nemo for bart

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Style

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

Co-authored-by: Micha Livne <michalivne@users.noreply.github.com>

* Make position embedding expansion specific to a batch to avoid checkpoint size mismatches (#4357)

* Style

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Fix logging warning

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

Co-authored-by: Micha Livne <michalivne@users.noreply.github.com>

* Fix electronic bug, new time ITN rule (#4355)

* fix electronic bug

Signed-off-by: ekmb <ebakhturina@nvidia.com>

* add new itn time rule

Signed-off-by: ekmb <ebakhturina@nvidia.com>

* revert domain changes

Signed-off-by: ekmb <ebakhturina@nvidia.com>

* remove repetition

Signed-off-by: ekmb <ebakhturina@nvidia.com>

* Correct support for dataclasses in default module dim (#4372)

* Correct support for dataclasses in default module dim

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Fix path for save of results

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* fix pad id bug (#4377)

Signed-off-by: Yi Dong <yidong@nvidia.com>

* Question answering bug fix (#4381)

* refactor dialogue state tracking for modelling/dataset interoperability

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* fix style changes

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* fix typo

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* fix style raised by lgtm

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* fix style formatting

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update template to include description of intent

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkinsfile

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* changes based on requests in review

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* add compatibility with assistant dataset

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkins

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* remove dialogue_state_tracking

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update huggingface utils for dialogue

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* rename dialogue_state_tracking_hybrid to dialogue_state_tracking_sgdqa

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* fix style

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix nemo/collections/nlp/models/dialogue_state_tracking_sgdqa/__init__.py

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkinsfile for SGDGEN

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkinsfile for SGDGEN

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkinsfile for SGDGEN

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkinsfile for SGDGEN

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkinsfile for SGDGEN

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* fix typo

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* add docstrings for assistant data processor

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkins for SGDGEN local checkpoint

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update style

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* use local vocab file for Jenkinsfile

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* patch for Jenkins CI using local file

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* add slot filling prediction and metrics

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* remove unused code

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* refactor metrics code out of Dialogue GPT Model

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* integrate backward compatible support for IntentSlotClassificationModel (bert model)

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* save prediction file for IntentSlotClassification

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update dialogue gpt model training for megatron gpt

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* remove batch generate for HF GPT2, which causes lower performance

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* add few shot capability to dialogue gpt model

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkinsfile and remove unused import

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update code description and clarity

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* address PR comments

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* integrate compatibility with ZeroShotIntentModel

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* rename folder to dialogue due to increased scope and further refactor for clarity

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* added dialogue GPT for sequence generation task (e.g. answer extender)

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* add CI test for DialogueGPTGenerationModel

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* integrate DialogueS2SGenerationModel for generation task (e.g. answer extender)

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* modify huggingface utils to support HF t5/BART models

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* remove unused imports

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkinsfile

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkinsfile

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update bleu metric

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* fix bleu metric style

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* debug bleu metric

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* debug bleu metric

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update based on PR #3893

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update 2 based on PR #3893

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update 3 based on PR #3893

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* integrate sgd generation based on user utterance and system slot-values to generate system utterance

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* add validation model saving capabilities

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* cleaned up code for SGD Based Answer extender

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Dialogue Generation CI

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkinsfile

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkinsfile

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* fix Jenkins CI issue

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* add support for design dataset

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* remove unnecessary imports

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkins

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update jenkins

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update jenkins

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* support megatron for dialogue_s2s_generation_model

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* reduce loaded samples in MSMarcoDataProcessor to 64 when cfg.model.dataset.debug_mode=True

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update CI

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update checkpoint and predictions filename to include epoch number

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* integrate HF BART MNLI into zero shot intent model

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* integrate Dialogue Nearest Neighbour Model

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkins

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkins

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* refactor Dialogue SGD Data Processor to make interface for models cleaner

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update jenkins

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Dialogue S2S Generation model for DialogueSGDDataProcessor interface

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update jenkins

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update jenkins

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* support sgd and drive thru datasets by zero shot model and nearest neighbour model

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* add prediction saving code to nearest neighbour and zero shot intent models

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* fix typo in sgd data processor

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* integrate Dialogue Mellon QA Data Processor

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update mellon qa

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update dialogue.py to remove outdated info

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update dialogue_config.yaml

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update dialogue_config.yaml

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* add dialogue docs

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* address review comments

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix for cfg

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* make dependency on apex optional

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* change NLPDDPPlugin calling logic to make it possible to run without apex

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* add first draft of tutorial

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* reduce ms marco size by removing lines without wellFormedAnswers

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* address pr comments

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update colab tutorial link in dialogue docs

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* include unit test and some refactor to facilitate unit test

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* address pr issues

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* remove typos in dialogue tutorial

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* support larger files for question answering

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* remove unnecessary artifacts to reduce memory use

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* put 0 tensor to device

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update link within dialogue tutorial

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* restore previously deleted files

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update error handling when loss = nan

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update nan handling

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update spanning loss func

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update spanning loss

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* fix type error raised in qa_dataset.py

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* add error checking message

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* revert back to float32

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* revert back to float32

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update error msgs

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update error msgs

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update error msgs

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update error msgs

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update error msgs

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update error msgs

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update error msgs

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update error msgs

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update exp logging

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update error msgs

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update loading of large file from pickle to json

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update loading of large file from pickle to json

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* limit number of negative samples

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* revert post processing

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* revert post processing

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* remove unused methods and style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* add more documentation

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* remove unused imports

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* changes based on PR review

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* set wandb logger false by default

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

* style fix

* correct typo

* style fix

* style fix

Co-authored-by: Zhilin Wang <zhilinw@nvidia.com>
Co-authored-by: Oleksii Kuchaiev <okuchaiev@users.noreply.github.com>
Co-authored-by: Yang Zhang <yzhang123@users.noreply.github.com>
Co-authored-by: Eric Harper <complex451@gmail.com>
Co-authored-by: Sandeep Subramanian <sandeep.subramanian.1@umontreal.ca>

* Fix ASR Typos in tutorials (#4384)

* Fix typos

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Quick wav2vec fix. In-place operation adding convolutional positions to encoder was overwriting leaf history. Wasn't caught on previous torch versions. (#4383)

Signed-off-by: tbartley94 <tbartley@nvidia.com>

Co-authored-by: tbartley94 <tbartley@nvidia.com>
(cherry picked from commit 0322b158f26a0b690edca7a84714e33752283923)

Co-authored-by: Travis Bartley <Travismbartley@gmail.com>

* Add Docs for NeMo Adapters (#4369)

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Update NeMo docs (#4397)

Signed-off-by: smajumdar <smajumdar@nvidia.com>

Co-authored-by: Eric Harper <complex451@gmail.com>

* update branch

Signed-off-by: ericharper <complex451@gmail.com>

* remove Copy of

Signed-off-by: ericharper <complex451@gmail.com>

Co-authored-by: Sandeep Subramanian <sandeep.subramanian.1@umontreal.ca>
Co-authored-by: Somshubra Majumdar <titu1994@gmail.com>
Co-authored-by: Micha Livne <michalivne@users.noreply.github.com>
Co-authored-by: Evelina <10428420+ekmb@users.noreply.github.com>
Co-authored-by: Yi Dong <43824965+yidong72@users.noreply.github.com>
Co-authored-by: Zhilin Wang <wangzhilin12061996@hotmail.com>
Co-authored-by: Zhilin Wang <zhilinw@nvidia.com>
Co-authored-by: Oleksii Kuchaiev <okuchaiev@users.noreply.github.com>
Co-authored-by: Yang Zhang <yzhang123@users.noreply.github.com>
Co-authored-by: Travis Bartley <Travismbartley@gmail.com>
Signed-off-by: Matvei Novikov <mattyson.so@gmail.com>

* [bugfix][TTS] pitch, voiced_mask, prob_voiced have the same values. (#4392)

Signed-off-by: Xuesong Yang <1646669+XuesongYang@users.noreply.github.com>
Signed-off-by: Matvei Novikov <mattyson.so@gmail.com>

* Fixing import error in some cases (#4401)

Signed-off-by: Boris Fomitchev <bfomitchev@nvidia.com>
Signed-off-by: Matvei Novikov <mattyson.so@gmail.com>

* Fixing bugs in calling method ctc_decoder_predictions_tensor. (#4414)

* updated ctc decoding calls.

Signed-off-by: Vahid <vnoroozi@nvidia.com>

* fixed the ones for timestamp_utils.py

Signed-off-by: Vahid <vnoroozi@nvidia.com>

* fixed the ones for timestamp_utils.py

Signed-off-by: Vahid <vnoroozi@nvidia.com>

* fixed the ones for timestamp_utils.py

Signed-off-by: Vahid <vnoroozi@nvidia.com>
Signed-off-by: Matvei Novikov <mattyson.so@gmail.com>

* Update with new conformer checkpoints. (#4417)

Signed-off-by: Matvei Novikov <mattyson.so@gmail.com>

* [TTS] add static method decorator. (#4443)

* [TTS] add static method decorator.

Signed-off-by: Xuesong Yang <1646669+XuesongYang@users.noreply.github.com>

* remove protect prefix

Signed-off-by: Xuesong Yang <1646669+XuesongYang@users.noreply.github.com>

* fixed style error

Signed-off-by: Xuesong Yang <1646669+XuesongYang@users.noreply.github.com>
Signed-off-by: Matvei Novikov <mattyson.so@gmail.com>

Co-authored-by: Taejin Park <tango4j@gmail.com>
Co-authored-by: Nithin Rao <nithinrao.koluguri@gmail.com>
Co-authored-by: Somshubra Majumdar <titu1994@gmail.com>
Co-authored-by: Georg Kucsko <gkucsko@users.noreply.github.com>
Co-authored-by: fayejf <36722593+fayejf@users.noreply.github.com>
Co-authored-by: Xiaowei Ren <103958965+xrennvidia@users.noreply.github.com>
Co-authored-by: Shantanu Acharya <shantanua@nvidia.com>
Co-authored-by: Travis Bartley <Travismbartley@gmail.com>
Co-authored-by: tbartley94 <tbartley@nvidia.com>
Co-authored-by: Xuesong Yang <1646669+XuesongYang@users.noreply.github.com>
Co-authored-by: Eric Harper <complex451@gmail.com>
Co-authored-by: Sandeep Subramanian <sandeep.subramanian.1@umontreal.ca>
Co-authored-by: Micha Livne <michalivne@users.noreply.github.com>
Co-authored-by: Evelina <10428420+ekmb@users.noreply.github.com>
Co-authored-by: Yi Dong <43824965+yidong72@users.noreply.github.com>
Co-authored-by: Zhilin Wang <wangzhilin12061996@hotmail.com>
Co-authored-by: Zhilin Wang <zhilinw@nvidia.com>
Co-authored-by: Oleksii Kuchaiev <okuchaiev@users.noreply.github.com>
Co-authored-by: Yang Zhang <yzhang123@users.noreply.github.com>
Co-authored-by: Boris Fomitchev <borisfom@users.noreply.github.com>
Co-authored-by: Vahid Noroozi <VahidooX@users.noreply.github.com>

* Add ITN pt (#4516)

* Add ITN pt

Signed-off-by: Guilherme Steinmann <guist@linse.ufsc.br>

* Fix style

Signed-off-by: Guilherme Steinmann <guist@linse.ufsc.br>

* Fix style

Signed-off-by: Guilherme Steinmann <guist@linse.ufsc.br>

* Update copyright year to 2022 on ITN pt rules and tests

Signed-off-by: Guilherme Steinmann <guist@linse.ufsc.br>

* Fixed WER initialization in ASR_with_Nemo notebook (#4523)

Signed-off-by: Ante Jukić <ajukic@nvidia.com>

Co-authored-by: Ante Jukić <ajukic@nvidia.com>

* Update cmudict (#4510)

phoneme IY1 -> IH1 in NVIDIA
Added phonemes for CUSTOMIZABLE

Update cmudict file revision and its reference.

Signed-off-by: Jason Roche <jroche@nvidia.com>

Co-authored-by: Jason Roche <jroche@nvidia.com>
Co-authored-by: Xuesong Yang <1646669+XuesongYang@users.noreply.github.com>

* [Add] Support for Different LRs with Param Groups (#4508)

* add support for param groups

Signed-off-by: stevehuang52 <heh@nvidia.com>

* make config more general

Signed-off-by: stevehuang52 <heh@nvidia.com>

Co-authored-by: Eric Harper <complex451@gmail.com>
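
For context, per-group learning rates of this kind are usually expressed through PyTorch optimizer parameter groups; a generic example follows (not the NeMo config schema itself):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.Linear(32, 4))

# Two parameter groups with different learning rates, e.g. a small LR for a
# pretrained encoder and a larger LR for a freshly initialised head.
optimizer = torch.optim.Adam(
    [
        {"params": model[0].parameters(), "lr": 1e-5},  # "encoder" group
        {"params": model[1].parameters(), "lr": 1e-3},  # "head" group
    ],
    weight_decay=1e-4,  # defaults shared by every group unless overridden
)

for group in optimizer.param_groups:
    print(group["lr"])
```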

* Weighted bucketing (#4474)

* Add silence handling for speaker diarization pipeline (#4512)

* initial commit

Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>

* fixed silence wav file issue causing clustering to evaluate on null embeddings

Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>

* fixed zero duration issue

Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>

* updated with comments

Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>

* minor doc change

Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>

* update log

Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
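
A rough sketch of the silence-handling idea above: drop (near) zero-duration segments so clustering never runs on empty embeddings. Thresholds and names are illustrative assumptions, not the actual diarization pipeline code.

```python
import numpy as np


def filter_silent_segments(embeddings: np.ndarray, durations, min_duration: float = 0.05):
    """Keep only segments long enough to have produced a meaningful embedding.

    embeddings: [num_segments, emb_dim]; durations: per-segment lengths in seconds.
    Returns the filtered embeddings and the indices that were kept.
    """
    keep = [i for i, d in enumerate(durations) if d > min_duration]
    if not keep:
        # All silence: return an empty matrix so the caller can short-circuit clustering.
        return np.zeros((0, embeddings.shape[1])), []
    return embeddings[keep], keep
```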

* Fix runtime check (#4501)

* Runtime check refinements

Signed-off-by: Boris Fomitchev <bfomitchev@nvidia.com>

* Added fp32 casting for ASR nets export

Signed-off-by: Boris Fomitchev <bfomitchev@nvidia.com>

* style

Signed-off-by: Boris Fomitchev <bfomitchev@nvidia.com>

* Used torch.float32 for clarity

Signed-off-by: Boris Fomitchev <bfomitchev@nvidia.com>

* Fixing parameters passing

Signed-off-by: Boris Fomitchev <bfomitchev@nvidia.com>

* Update finetune label models (#4504)

* initial_script

Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>

* move old script

Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>

* remove finetune func from label models

Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>

* style clean

Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>

* updated config

Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>

* update tutorial

Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>

* lgtm fixes

Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>

* updated based on comments

Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>

* update doc

Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>

* [ASR][Breaking Change] Update signature of Hypothesis alignments (#4511)

* Preserve logprobs when preserving alignments

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Update tests for rnnt gredy and beam search

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Update all dependents of alignments

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Update docs

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Weighted bucketing (#4530)

* Additional sentencepiece args - Byte fallback, split digits, split_on_whitespace (#4525)

* Fix geglu without fusion

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Add extra args

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Reset transformer

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Style

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Fix spm arg

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Fix help string

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
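
The three options named in the PR title map onto standard SentencePiece trainer flags; a hedged example of passing them directly to the sentencepiece library (how NeMo's tokenizer-training script surfaces them may differ):

```python
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="train_text.txt",      # hypothetical corpus path
    model_prefix="tokenizer",
    vocab_size=32000,
    byte_fallback=True,          # decompose unknown characters into byte pieces
    split_digits=True,           # split numbers into single-digit pieces
    split_by_whitespace=True,    # do not form pieces that cross whitespace
)
```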

* Add support for ASR Adapter Auxiliary Losses (#4480)

* Add support for access mixin registry of custom losses

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* add support for asr custom losses

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Update for l2 loss

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Add unittests

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Add unittests

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Add unittests

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Update registration of tensors to reset after finishing step

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Remove comment

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Remove comment

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Update SSL models

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Add support for validation step properly registering tensors

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* Move reset of registry outside

Signed-off-by: smajumdar <smajumdar@nvidia.com>

* update (#4520)

Signed-off-by: stevehuang52 <heh@nvidia.com>

* fix duplex inference with grammars (#4517)

* fix duplex inference with grammars

Signed-off-by: ekmb <ebakhturina@nvidia.com>

* add ci test for duplex, fix electronic last sym bug

Signed-off-by: ekmb <ebakhturina@nvidia.com>

* test fix

Signed-off-by: ekmb <ebakhturina@nvidia.com>

* fix jenkins

Signed-off-by: ekmb <ebakhturina@nvidia.com>

* update jenkins grammars

Signed-off-by: ekmb <ebakhturina@nvidia.com>

* add pt to the docs

Signed-off-by: ekmb <ebakhturina@nvidia.com>

* fix jenkins

Signed-off-by: ekmb <ebakhturina@nvidia.com>

* disable test

Signed-off-by: ekmb <ebakhturina@nvidia.com>

* fix jenkins

Signed-off-by: ekmb <ebakhturina@nvidia.com>

* jenkins refactor

Signed-off-by: ekmb <ebakhturina@nvidia.com>

* fix jenkins

Signed-off-by: ekmb <ebakhturina@nvidia.com>

* fix jenkins

Signed-off-by: ekmb <ebakhturina@nvidia.com>

* fix jenkins

Signed-off-by: ekmb <ebakhturina@nvidia.com>

* jenkins

Signed-off-by: ekmb <ebakhturina@nvidia.com>

* jenkins

Signed-off-by: ekmb <ebakhturina@nvidia.com>

* jenkins

Signed-off-by: ekmb <ebakhturina@nvidia.com>

* jenkins

Signed-off-by: ekmb <ebakhturina@nvidia.com>

* jenkins

Signed-off-by: ekmb <ebakhturina@nvidia.com>

* jenkins

Signed-off-by: ekmb <ebakhturina@nvidia.com>

* test

Signed-off-by: ekmb <ebakhturina@nvidia.com>

* test

Signed-off-by: ekmb <ebakhturina@nvidia.com>

* test

Signed-off-by: ekmb <ebakhturina@nvidia.com>

* test

Signed-off-by: ekmb <ebakhturina@nvidia.com>

Co-authored-by: Yang Zhang <yzhang123@users.noreply.github.com>

* Add Bucketing support to TarredAudioToClassificationLabelDataset (#4465)

* Add Bucketing support to TarredAudioToClassificationLabelDataset

Signed-off-by: Ewald Enzinger <ewald.enzinger@entn.at>

* Add MTEncDec Finetune support (#4540)

* add FT support

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>

* rm preproc

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>

* review changes

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>

* add CI

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>

* newline fix

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>

* CI fix

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>

* clean up

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>

* post training cleanup

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>

* test

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>

* revert

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>

* CI test

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>

* revert CI changes

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>

* original CI

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>

Co-authored-by: Sandeep Subramanian <sandeep.subramanian.1@umontreal.ca>

* Add nsys profiling (#4539)

* add nsys profiling

Signed-off-by: ericharper <complex451@gmail.com>

* only access omegaconf in setup

Signed-off-by: ericharper <complex451@gmail.com>

* use robust get_rank function

Signed-off-by: ericharper <complex451@gmail.com>

* simplify

Signed-off-by: ericharper <complex451@gmail.com>

* Update megatron prompt learning interface to dialogue (#4545)

* refactor dialogue state tracking for modelling/dataset interoperability

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* fix style changes

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* fix typo

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* fix style raised by lgtm

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* fix style formatting

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update template to include description of intent

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkinsfile

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* changes based on requests in review

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* add compatibility with assistant dataset

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkins

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* remove dialogue_state_tracking

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update huggingface utils for dialogue

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* rename dialogue_state_tracking_hybrid to dialogue_state_tracking_sgdqa

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* fix style

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* style fix nemo/collections/nlp/models/dialogue_state_tracking_sgdqa/__init__.py

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkinsfile for SGDGEN

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkinsfile for SGDGEN

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkinsfile for SGDGEN

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkinsfile for SGDGEN

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkinsfile for SGDGEN

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* fix typo

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* add docstrings for assistant data processor

Signed-off-by: Zhilin Wang <zhilinw@nvidia.com>

* update Jenkins for SGDGEN local checkpoint

Signed-off-by: Zhilin Wang …
Showing 11 changed files with 1,202 additions and 11 deletions.
4 changes: 3 additions & 1 deletion examples/asr/conf/ssl/conformer/conformer_ssl.yaml
@@ -27,8 +27,10 @@ model:
 
   train_ds:
     manifest_filepath: ???
+    manifest_speaker_verification_fp: ???
+    manifest_content_fp: ???
     sample_rate: ${model.sample_rate}
-    batch_size: 16 # you may increase batch_size if your memory allows
+    batch_size: 32 # you may increase batch_size if your memory allows
     shuffle: true
     num_workers: 8
     pin_memory: false
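
Note: the two new manifest keys are required and ship with OmegaConf's `???` missing-value marker, so the config only resolves once they point at real manifests. A minimal sketch of filling them programmatically; the paths below are placeholders, not values from this commit:

from omegaconf import OmegaConf

# Load the updated SSL config (assumes a NeMo checkout as the working directory).
cfg = OmegaConf.load("examples/asr/conf/ssl/conformer/conformer_ssl.yaml")

# Fill every mandatory (???) field in train_ds before building the model; paths are illustrative only.
cfg.model.train_ds.manifest_filepath = "/data/asr/train_manifest.json"
cfg.model.train_ds.manifest_speaker_verification_fp = "/data/vox/train_formatted.json"
cfg.model.train_ds.manifest_content_fp = "/data/librispeech/train_clean_100.json"

# Raises if any '???' value in train_ds is still unresolved.
OmegaConf.to_container(cfg.model.train_ds, resolve=True, throw_on_missing=True)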
198 changes: 198 additions & 0 deletions examples/tts/conf/ssl_tts.yaml
@@ -0,0 +1,198 @@
# This config contains the default values for self-supervised pre-training of a Conformer ASR model, large size (~120M).

# Architecture and training config:
# Default learning parameters in this config are set for effective batch size of 2K. To train it with smaller effective
# batch sizes, you may need to re-tune the learning parameters or use higher accumulate_grad_batches.
# Here are the recommended configs for different variants of Conformer-CTC, other parameters are the same as in this config file.
# One extra layer (compared to original paper) is added to the medium and large variants to compensate for replacing the LSTM decoder with a linear one.
#
# +-------------+---------+---------+----------+------------+-----+
# | Model | d_model | n_heads | n_layers | time_masks | lr |
# +=============+=========+========+===========+============+=====+
# | Small (13M)| 176 | 4 | 16 | 5 | 5.0 |
# +-------------+---------+--------+-----------+------------+-----+
# | Medium (30M)| 256 | 4 | 18 | 5 | 5.0 |
# +-------------+---------+--------+-----------+------------+-----+
# | Large (121M)| 512 | 8 | 18 | 10 | 2.0 |
# +---------------------------------------------------------------+
#
# If you do not want to train with AMP, you may use weight decay of 0.0 or reduce the number of time maskings to 2
# with time_width=100. It may help when you want to train for fewer epochs and need faster convergence.
# With weight_decay=0.0, learning rate may need to get reduced to 2.0.

name: "Conformer-SSL"
phoneme_dict_path: "scripts/tts_dataset_files/cmudict-0.7b_nv22.01"
heteronyms_path: "scripts/tts_dataset_files/heteronyms-030921"
init_from_pretrained_model: "ssl_en_conformer_large"

model:
sample_rate: 16000
combined_loss: true
train_ds:
manifest_filepath: "/home/shehzeenh/datasets/speaker_verification_full/vox1/train_formatted.json"
manifest_speaker_verification_fp: "/home/shehzeenh/datasets/speaker_verification_full/vox1/train_formatted.json"
manifest_content_fp: "/home/shehzeenh/datasets/All_LibriSpeech/train_clean_100.json"
sample_rate: ${model.sample_rate}
batch_size: 16 # you may increase batch_size if your memory allows
shuffle: true
num_workers: 8
pin_memory: false
use_start_end_token: true
trim_silence: false
max_duration: 16.7
min_duration: 8.0
# tarred datasets
is_tarred: false
tarred_audio_filepaths: null
shuffle_n: 2048
# bucketing params
bucketing_strategy: "synced_randomized"
bucketing_batch_size: null

validation_ds:
manifest_filepath: "/home/shehzeenh/datasets/speaker_verification_full/vox1/dev_formatted.json"
manifest_speaker_verification_fp: "/home/shehzeenh/datasets/speaker_verification_full/vox1/dev_formatted.json"
manifest_content_fp: "/home/shehzeenh/datasets/All_LibriSpeech/dev_clean.json"
sample_rate: ${model.sample_rate}
batch_size: 16 # you may increase batch_size if your memory allows
shuffle: false
num_workers: 8
pin_memory: true
use_start_end_token: false
min_duration: 8.0

# text_tokenizer:
# _target_: nemo.collections.tts.torch.tts_tokenizers.EnglishPhonemesTokenizer
# punct: true
# stresses: true
# chars: true
# apostrophe: true
# pad_with_space: true
# g2p:
# _target_: nemo.collections.tts.torch.g2ps.EnglishG2p
# phoneme_dict: ${phoneme_dict_path}
# heteronyms: ${heteronyms_path}
# phoneme_probability: 0.5

preprocessor:
_target_: nemo.collections.asr.modules.AudioToMelSpectrogramPreprocessor
sample_rate: ${model.sample_rate}
normalize: "per_feature"
window_size: 0.025
window_stride: 0.01
window: "hann"
features: 80
n_fft: 512
log: true
frame_splicing: 1
dither: 0.00001
pad_to: 16
pad_value: 0.0

spec_augment:
_target_: nemo.collections.asr.modules.MaskedPatchAugmentation
freq_masks: 3
freq_width: 20
patch_size: 48
mask_patches: 0.5

downstream_heads:
task_names: ['speaker_verification', 'content']
speaker_embed_size: 256
num_speakers: 1211
content_embed_size: 256

encoder:
_target_: nemo.collections.asr.modules.ConformerEncoder
feat_in: ${model.preprocessor.features}
feat_out: -1 # you may set it if you need different output size other than the default d_model
n_layers: 18
d_model: 512

# Sub-sampling params
subsampling: striding # vggnet or striding, vggnet may give better results but needs more memory
subsampling_factor: 4 # must be power of 2
subsampling_conv_channels: -1 # -1 sets it to d_model

# Feed forward module's params
ff_expansion_factor: 4

# Multi-headed Attention Module's params
self_attention_model: rel_pos # rel_pos or abs_pos
n_heads: 8 # may need to be lower for smaller d_models
# [left, right] specifies the number of steps to be seen from left and right of each step in self-attention
att_context_size: [-1, -1] # -1 means unlimited context
xscaling: true # scales up the input embeddings by sqrt(d_model)
untie_biases: true # unties the biases of the TransformerXL layers
pos_emb_max_len: 5000

# Convolution module's params
conv_kernel_size: 31
conv_norm_type: 'batch_norm' # batch_norm or layer_norm

### regularization
dropout: 0.1 # The dropout used in most of the Conformer Modules
dropout_emb: 0.0 # The dropout used for embeddings
dropout_att: 0.1 # The dropout for multi-headed attention modules

decoder_out: 128


optim_backbone:
_target_: torch.optim.Adam
lr: 1e-5
# optimizer arguments
# betas: [0.9, 0.98]
sched:
min_lr: 1e-6
warmup_steps: 1000

optim_downstream:
_target_: torch.optim.Adam
lr: 1e-4
sched:
min_lr: 1e-6
warmup_steps: 1000


trainer:
devices: -1 # number of GPUs, -1 would use all available GPUs
num_nodes: 1
max_epochs: 1000
max_steps: null # computed at runtime if not set
val_check_interval: 1.0 # Set to 0.25 to check 4 times per epoch, or an int for number of iterations
accelerator: auto
strategy: ddp
accumulate_grad_batches: 1
# gradient_clip_val: 1.0
precision: 32 # Should be set to 16 for O1 and O2 to enable the AMP.
log_every_n_steps: 10 # Interval of logging.
progress_bar_refresh_rate: 10
resume_from_checkpoint: null # The path to a checkpoint file to continue the training, restores the whole state including the epoch, step, LR schedulers, apex, etc.
num_sanity_val_steps: 0 # number of steps to perform validation steps for sanity check the validation process before starting the training, setting to 0 disables it
check_val_every_n_epoch: 1 # number of evaluations on validation every n epochs
sync_batchnorm: true
enable_checkpointing: False # Provided by exp_manager
logger: false # Provided by exp_manager
benchmark: false # needs to be false for models with variable-length speech input as it slows down training

exp_manager:
exp_dir: null
name: ${name}
create_tensorboard_logger: true
create_checkpoint_callback: true
checkpoint_callback_params:
# in case of multiple validation sets, first one is used
monitor: "val_loss"
mode: "min"
save_top_k: 5

# you need to set these two to True to continue the training
resume_if_exists: false
resume_ignore_no_checkpoint: false

# You may use this section to create a W&B logger
create_wandb_logger: false
wandb_logger_kwargs:
name: null
project: null
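
Note: ssl_tts.yaml splits optimization into `optim_backbone` (Conformer encoder, lr 1e-5) and `optim_downstream` (task heads, lr 1e-4), echoing the "Different LRs with Param Groups" change earlier in this commit history. A rough sketch of building two such optimizers from the config; the parameter split below is a stand-in, not the model's actual code:

import torch
from omegaconf import OmegaConf

# Assumes a NeMo checkout as the working directory; this is a sketch, not NeMo's own setup code.
cfg = OmegaConf.load("examples/tts/conf/ssl_tts.yaml").model

# Stand-ins for the real parameter groups (encoder backbone vs. downstream heads).
backbone_params = [torch.nn.Parameter(torch.zeros(8))]
downstream_params = [torch.nn.Parameter(torch.zeros(4))]

# One Adam per group, each with its own learning rate from the config
# (float() guards against YAML parsing "1e-5" as a string).
opt_backbone = torch.optim.Adam(backbone_params, lr=float(cfg.optim_backbone.lr))
opt_downstream = torch.optim.Adam(downstream_params, lr=float(cfg.optim_downstream.lr))

# Each optimizer can then get its own warmup/min_lr schedule from the matching `sched` block.
print(opt_backbone.defaults["lr"], opt_downstream.defaults["lr"])  # -> 1e-05 0.0001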
188 changes: 188 additions & 0 deletions examples/tts/conf/ssl_tts_ngc.yaml
@@ -0,0 +1,188 @@
# This config contains the default values for self-supervised pre-training of a Conformer ASR model, large size (~120M).

# Architecture and training config:
# Default learning parameters in this config are set for effective batch size of 2K. To train it with smaller effective
# batch sizes, you may need to re-tune the learning parameters or use higher accumulate_grad_batches.
# Here are the recommended configs for different variants of Conformer-CTC, other parameters are the same as in this config file.
# One extra layer (compared to original paper) is added to the medium and large variants to compensate for replacing the LSTM decoder with a linear one.
#
# +-------------+---------+---------+----------+------------+-----+
# | Model | d_model | n_heads | n_layers | time_masks | lr |
# +=============+=========+========+===========+============+=====+
# | Small (13M)| 176 | 4 | 16 | 5 | 5.0 |
# +-------------+---------+--------+-----------+------------+-----+
# | Medium (30M)| 256 | 4 | 18 | 5 | 5.0 |
# +-------------+---------+--------+-----------+------------+-----+
# | Large (121M)| 512 | 8 | 18 | 10 | 2.0 |
# +---------------------------------------------------------------+
#
# If you do not want to train with AMP, you may use weight decay of 0.0 or reduce the number of time maskings to 2
# with time_width=100. It may help when you want to train for fewer epochs and need faster convergence.
# With weight_decay=0.0, learning rate may need to get reduced to 2.0.

name: "Conformer-SSL"
phoneme_dict_path: "scripts/tts_dataset_files/cmudict-0.7b_nv22.01"
heteronyms_path: "scripts/tts_dataset_files/heteronyms-030921"
init_from_pretrained_model: "ssl_en_conformer_large"

model:
sample_rate: 16000

train_ds:
manifest_filepath: "/raid/ssl_tts_tmp/train_vox_ngc.json"
manifest_speaker_verification_fp: "/raid/ssl_tts_tmp/train_vox_ngc.json"
manifest_content_fp: "/raid/ssl_tts_tmp/train_libri_ngc.json"
sample_rate: ${model.sample_rate}
batch_size: 12 # you may increase batch_size if your memory allows
shuffle: true
num_workers: 8
pin_memory: false
use_start_end_token: true
trim_silence: false
max_duration: 16.7
min_duration: 8.0
# tarred datasets
is_tarred: false
tarred_audio_filepaths: null
shuffle_n: 2048
# bucketing params
bucketing_strategy: "synced_randomized"
bucketing_batch_size: null

validation_ds:
manifest_filepath: "/raid/ssl_tts_tmp/dev_vox_ngc.json"
manifest_speaker_verification_fp: "/raid/ssl_tts_tmp/dev_vox_ngc.json"
manifest_content_fp: "/raid/ssl_tts_tmp/dev_libri_ngc.json"
sample_rate: ${model.sample_rate}
batch_size: 12 # you may increase batch_size if your memory allows
shuffle: false
num_workers: 8
pin_memory: true
use_start_end_token: false
min_duration: 8.0

# text_tokenizer:
# _target_: nemo.collections.tts.torch.tts_tokenizers.EnglishPhonemesTokenizer
# punct: true
# stresses: true
# chars: true
# apostrophe: true
# pad_with_space: true
# g2p:
# _target_: nemo.collections.tts.torch.g2ps.EnglishG2p
# phoneme_dict: ${phoneme_dict_path}
# heteronyms: ${heteronyms_path}
# phoneme_probability: 0.5

preprocessor:
_target_: nemo.collections.asr.modules.AudioToMelSpectrogramPreprocessor
sample_rate: ${model.sample_rate}
normalize: "per_feature"
window_size: 0.025
window_stride: 0.01
window: "hann"
features: 80
n_fft: 512
log: true
frame_splicing: 1
dither: 0.00001
pad_to: 16
pad_value: 0.0

spec_augment:
_target_: nemo.collections.asr.modules.MaskedPatchAugmentation
freq_masks: 3
freq_width: 20
patch_size: 48
mask_patches: 0.5

downstream_heads:
task_names: ['speaker_verification', 'content']
speaker_embed_size: 256
num_speakers: 1211
content_embed_size: 256

encoder:
_target_: nemo.collections.asr.modules.ConformerEncoder
feat_in: ${model.preprocessor.features}
feat_out: -1 # you may set it if you need different output size other than the default d_model
n_layers: 18
d_model: 512

# Sub-sampling params
subsampling: striding # vggnet or striding, vggnet may give better results but needs more memory
subsampling_factor: 4 # must be power of 2
subsampling_conv_channels: -1 # -1 sets it to d_model

# Feed forward module's params
ff_expansion_factor: 4

# Multi-headed Attention Module's params
self_attention_model: rel_pos # rel_pos or abs_pos
n_heads: 8 # may need to be lower for smaller d_models
# [left, right] specifies the number of steps to be seen from left and right of each step in self-attention
att_context_size: [-1, -1] # -1 means unlimited context
xscaling: true # scales up the input embeddings by sqrt(d_model)
untie_biases: true # unties the biases of the TransformerXL layers
pos_emb_max_len: 5000

# Convolution module's params
conv_kernel_size: 31
conv_norm_type: 'batch_norm' # batch_norm or layer_norm

### regularization
dropout: 0.1 # The dropout used in most of the Conformer Modules
dropout_emb: 0.0 # The dropout used for embeddings
dropout_att: 0.1 # The dropout for multi-headed attention modules

decoder_out: 128


optim:
name: adam
lr: 1e-5
# optimizer arguments
# betas: [0.9, 0.98]


trainer:
devices: -1 # number of GPUs, -1 would use all available GPUs
num_nodes: 1
max_epochs: 1000
max_steps: null # computed at runtime if not set
val_check_interval: 1.0 # Set to 0.25 to check 4 times per epoch, or an int for number of iterations
accelerator: auto
strategy: ddp
accumulate_grad_batches: 1
# gradient_clip_val: 1.0
precision: 32 # Should be set to 16 for O1 and O2 to enable the AMP.
log_every_n_steps: 10 # Interval of logging.
progress_bar_refresh_rate: 10
resume_from_checkpoint: null # The path to a checkpoint file to continue the training, restores the whole state including the epoch, step, LR schedulers, apex, etc.
num_sanity_val_steps: 0 # number of steps to perform validation steps for sanity check the validation process before starting the training, setting to 0 disables it
check_val_every_n_epoch: 1 # number of evaluations on validation every n epochs
sync_batchnorm: true
enable_checkpointing: False # Provided by exp_manager
logger: false # Provided by exp_manager
benchmark: false # needs to be false for models with variable-length speech input as it slows down training

exp_manager:
exp_dir: null
name: ${name}
create_tensorboard_logger: true
create_checkpoint_callback: true
checkpoint_callback_params:
# in case of multiple validation sets, first one is used
monitor: "val_loss"
mode: "min"
save_top_k: 5

# you need to set these two to True to continue the training
resume_if_exists: false
resume_ignore_no_checkpoint: false

# You may use this section to create a W&B logger
create_wandb_logger: false
wandb_logger_kwargs:
name: null
project: null
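
Note: ssl_tts_ngc.yaml mirrors ssl_tts.yaml but points at cluster paths, replaces the per-group optimizers with a single `optim` block, and lowers batch_size to 12. The header's note about an effective batch size of ~2K then has to come from device count and gradient accumulation; a small arithmetic sketch (GPU count and target are assumptions, not values from the commit):

# Illustrative only: how batch_size 12 could reach the ~2K effective batch mentioned in the header.
batch_size = 12              # model.train_ds.batch_size in ssl_tts_ngc.yaml
devices = 8                  # assumed GPUs per node; the config uses -1 (all available)
num_nodes = 1                # trainer.num_nodes
target_effective = 2048      # assumed target from the "effective batch size of 2K" comment

accumulate_grad_batches = round(target_effective / (batch_size * devices * num_nodes))
effective = batch_size * devices * num_nodes * accumulate_grad_batches
print(accumulate_grad_batches, effective)  # -> 21 2016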
