GPT extrapolatable position embedding (xpos/sandwich/alibi/kerple) and Flash Attention (#6666)

* move to nvidia megatron repo (#6465) (#6475)

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>
Co-authored-by: Abhinav Khattar <aklife97@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Megatron KERPLE positional embeddings (#6478) (#6480)

* [TTS] FastPitch adapter fine-tune and conditional layer normalization (#6416)

[TTS] FastPitch adapter fine-tune and conditional layer normalization (#6416)

---------




* [TTS] whitelist broken path fix. (#6412)

* [TTS] whitelist broken path fix.



* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------




* [TTS] FastPitch speaker encoder (#6417)

* Add initial codes



* Remove wemb



* Fix import



* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Restore aligner loss



* Add ConditionalInput



* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix error and support pre-trained config



* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Follow comments



* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Rename config



* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Change copyright and random weight test



* Add initial codes



* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci



* Fix import error



* Add initial codes



* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci



* Fix dataset error



* Remove reference speaker embedding



* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci



* Remove SV encoder



* Follow comments



* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci



* Fix length type



* Fix append



* Move error msg



* Add look-up into speaker encoder



* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci



* Add valueerror msg



* Move lookup



* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci



* Remove unused



* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci



* Fix error



* Rebase and Fix error



* Fix spk encoder



* Rename n_speakers



* Follow comments



* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix n_speakers None error



---------




* Sharded manifests for tarred datasets (#6395)

* testing sharded manifests



* compatibility



* proper fixes



* adding flag to convert_to_tarred_audio_dataset



* shard_manifests conf param



* propagating the shard_manifests param



* propagating the shard_manifests param



* distributed checks



* typo



* typo



* fixes



* fixes



* fixes



* fixes



* fixes



* fixes



* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fixes based on PR comments and tests



* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fixes to convert_to_tarred_audio_dataset.py



* reversing manifest shards flag



* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* tests



* excluding manifests from webdataset url expansion



* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* expand manifest paths before attempting to cache from datastore



* explicit use of UTF-8 for manifest i/o



---------




* Update wfst_text_normalization.rst (#6374)

Add Hungarian (incoming in NeMo-text-processing)



* Support Swiglu in TP PP Conversion (#6437) (#6451)

* Support Swiglu in TP PP Conversion



* Guard activation



* Guard activation



---------




* Update NeMo_TTS_Primer.ipynb (#6436)

* Update NeMo_TTS_Primer.ipynb

Fixed a mistake in line 782: instead of frequency band (i.e., pitch) it should say frequency bin. Note that frequency bins in the FFT are not related to pitch.
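For context, each FFT bin sits at a fixed analysis frequency determined only by the sample rate and FFT size, regardless of the pitch of the audio. A minimal numpy illustration (the sample rate and FFT size are arbitrary assumed values, not taken from the notebook):

import numpy as np

# Each rFFT bin center is k * sample_rate / n_fft: evenly spaced frequencies
# fixed by the analysis parameters, unrelated to the pitch of the signal.
sample_rate = 22050  # Hz, assumed for illustration
n_fft = 1024
bin_freqs = np.fft.rfftfreq(n_fft, d=1.0 / sample_rate)
print(bin_freqs[:4])  # [0.0, ~21.5, ~43.1, ~64.6] Hz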



* Update NeMo_TTS_Primer.ipynb

Corrected the description of the spectrogram and mel spectrogram calculations in lines 782 & 783, added a fourth point to the description, and added a reference with more mathematical details at the end of that point.



---------



* add rampup batch size support for Megatron GPT (#6424)

* added rampup batch size support



* added tests for rampup batch size



* fixed the typos



* added assertions



* changed assertion rules



* deleted unused imports



* changed tests for rampup batch size



* updated rampup batch size tests



* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fixed styling



* rampup batch size tests changes



---------







* Megatron encoder decoder fix for empty validation outputs (#6459) (#6461)

* 1. Megatron encoder decoder fix for empty validation outputs.



* 1. Debugging.

---------





* Code-Switching dataset creation - upgrading to aggregate tokenizer manifest format (#6448)

* added functionality to create an agg tokenizer compatible manifest for CS, and a flag to use this mode by default



* updated README with the new agg_tokenizer_manifest flag



* fixed typo in scripts/speech_recognition/code_switching/README.md



* changed agg_tokenizer_manifest to is_lid_manifest



---------




* Added/updated new Conformer configs (#6426) (#6467)

* Update script for ngram rnnt and hat beam search decoding (#6370)

* add rnnt ngram beamsearch script



* add return encoding embedding option



* update script



* add rnnt and hat ngram decoding script



* add some parameters



* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* add return_encoder_embeddings parameter to RNNTDecodingConfig



* replace return_encoder_embeddings parameter



* generalization of script behavior



* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* remove return_encoder_embeddings parameter



* remove return_encoder_embeddings parameter



* add manual encoder_embeddings calculation



* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fix beam_width value to 8



* fix rescoring description



---------






* BERT pre-training mp fork to spawn (#6442) (#6454)

* change bert fork to spawn



* num_workers=0 fix



---------




* fix replace_bos_with_pad not found (#6443) (#6450)




* reduce workers on NMT CI (#6472) (#6474)




* 1. Added KERPLE positional embeddings to encoder-decoder.



* 1. Added a missing file.



* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* 1. Fixing commits.



* 1. Debugging.

* 1. Debugging.

* 1. Debugging.

* 1. Debugging.

---------

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>
Signed-off-by: Xuesong Yang <1646669+XuesongYang@users.noreply.github.com>
Signed-off-by: Dima Rekesh <bmwshop@gmail.com>
Signed-off-by: Jim O’Regan <jaoregan@tcd.ie>
Signed-off-by: smajumdar <titu1994@gmail.com>
Signed-off-by: Mostafa Ghorbandoost <mos.ghorbandoost@gmail.com>
Signed-off-by: Dmytro Pykhtar <dpykhtar@nvidia.com>
Signed-off-by: Dmytro Pykhtar <37850217+dimapihtar@users.noreply.github.com>
Signed-off-by: Micha Livne <mlivne@nvidia.com>
Signed-off-by: Kunal Dhawan <kunaldhawan97@gmail.com>
Signed-off-by: andrusenkoau <andrusenkoau@gmail.com>
Signed-off-by: Andrei Andrusenko <52885736+andrusenkoau@users.noreply.github.com>
Signed-off-by: Abhinav Khattar <aklife97@gmail.com>
Co-authored-by: Micha Livne <michalivne@users.noreply.github.com>
Co-authored-by: Cheng-Ping Hsieh <37269846+hsiehjackson@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Xuesong Yang <1646669+XuesongYang@users.noreply.github.com>
Co-authored-by: Dima Rekesh <bmwshop@gmail.com>
Co-authored-by: Jim O’Regan <jaoregan@tcd.ie>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Somshubra Majumdar <titu1994@gmail.com>
Co-authored-by: Mostafa Ghorbandoost <mos.ghorbandoost@gmail.com>
Co-authored-by: Dmytro Pykhtar <37850217+dimapihtar@users.noreply.github.com>
Co-authored-by: Dmytro Pykhtar <dpykhtar@nvidia.com>
Co-authored-by: Eric Harper <complex451@gmail.com>
Co-authored-by: Micha Livne <mlivne@nvidia.com>
Co-authored-by: Kunal Dhawan <kunaldhawan97@gmail.com>
Co-authored-by: Andrei Andrusenko <52885736+andrusenkoau@users.noreply.github.com>
Co-authored-by: Abhinav Khattar <aklife97@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Fix an invalid link in get_data.py of ljspeech (#6456)

Using the link in line 63 downloads an HTML file rather than a TSV file, so we need to change it to a raw link.
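For context, this is the standard GitHub blob-vs-raw distinction; the URLs below are hypothetical placeholders, not the actual links in get_data.py:

# A "blob" URL serves GitHub's HTML page wrapping the file:
blob_url = "https://github.com/<org>/<repo>/blob/main/metadata.tsv"
# The corresponding "raw" URL serves the file contents directly:
raw_url = "https://raw.githubusercontent.com/<org>/<repo>/main/metadata.tsv"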

Signed-off-by: Mostafa Ghorbandoost <mos.ghorbandoost@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* 1. Added external index sample. (#6462) (#6483)

Signed-off-by: Micha Livne <mlivne@nvidia.com>
Co-authored-by: Micha Livne <michalivne@users.noreply.github.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Update README to add core installation (#6488) (#6489)

* update README for megatron-core



* fix



---------

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>
Co-authored-by: Abhinav Khattar <aklife97@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Fix cache aware hybrid bugs (#6466) (#6484)

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Fix typos (#6494) (#6495)

Signed-off-by: smajumdar <titu1994@gmail.com>
Co-authored-by: Somshubra Majumdar <titu1994@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Add disclaimer about dataset for ASR (#6496)

Signed-off-by: smajumdar <titu1994@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* fix (#6502)

datastore_path_to_webdataset_url(p) if is_datastore_path(p) and is_tarred_path(p) else p
NameError: name 'is_tarred_path' is not defined
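A self-contained sketch of the failure and one possible fix; the stub bodies below are assumptions for illustration, not the actual NeMo helpers:

def is_datastore_path(path: str) -> bool:
    return path.startswith("ais://")  # assumed: remote datastore scheme

def datastore_path_to_webdataset_url(path: str) -> str:
    return "webdataset:" + path  # assumed placeholder conversion

# The fix: ensure is_tarred_path is defined (or imported) before it is used,
# otherwise the comprehension below raises the NameError quoted above.
def is_tarred_path(path: str) -> bool:
    return path.endswith(".tar")  # assumed heuristic

paths = ["ais://bucket/audio_0.tar", "/local/manifest.json"]
urls = [datastore_path_to_webdataset_url(p) if is_datastore_path(p) and is_tarred_path(p) else p for p in paths]
print(urls)  # ['webdataset:ais://bucket/audio_0.tar', '/local/manifest.json']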

Co-authored-by: George <gzelenfroind@nvidia.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* fix broken links r1.18.0 (#6501) (#6504)

* fix broken links



* fix broken links



---------

Signed-off-by: Evelina <ebakhturina@nvidia.com>
Co-authored-by: Evelina <10428420+ekmb@users.noreply.github.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* [TTS] Create functions for TTS preprocessing without dataloader (#6317)

* [TTS] Create functions for TTS preprocessing without dataloader

Signed-off-by: Ryan <rlangman@nvidia.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Cache aware streaming nfa (#6209)

* add cache aware streaming to nemo aligner

Signed-off-by: Slyne Deng <slyned@nvidia.com>

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* [BugFix] Force _get_batch_preds() to keep logits in decoder timestamps generator (#6499)

* [BugFix] _get_batch_preds() is forced to keep logits in  decoder timestamps generators

Signed-off-by: Taejin Park <tango4j@gmail.com>

* Ignore keep_logits boolean in FrameASRBatchLogits

Signed-off-by: Taejin Park <tango4j@gmail.com>

---------

Signed-off-by: Taejin Park <tango4j@gmail.com>
Co-authored-by: Jagadeesh Balam <4916480+jbalam-nv@users.noreply.github.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* [TTS] Fix FastPitch energy code (#6511)

Signed-off-by: Ryan <rlangman@nvidia.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* fix custom forward_torch_softmax (#6512) (#6517)

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>
Co-authored-by: Abhinav Khattar <aklife97@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* [TTS] fixed broken path. (#6514) (#6518)

Signed-off-by: Xuesong Yang <1646669+XuesongYang@users.noreply.github.com>
Co-authored-by: Xuesong Yang <1646669+XuesongYang@users.noreply.github.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Fix normalization of impulse response in ImpulsePerturbation (#6505)

Signed-off-by: Ante Jukić <ajukic@nvidia.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Add interleaved pp support (#6498)

* Add support for Virtual Pipeline Parallel conversion

Signed-off-by: smajumdar <titu1994@gmail.com>

* Add support for Virtual Pipeline Parallel conversion

Signed-off-by: smajumdar <titu1994@gmail.com>

* Switch to megatron core

Signed-off-by: smajumdar <titu1994@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: smajumdar <titu1994@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Fix typos (#6523)

* Fix typos

Signed-off-by: smajumdar <titu1994@gmail.com>

* Fix typos

Signed-off-by: smajumdar <titu1994@gmail.com>

---------

Signed-off-by: smajumdar <titu1994@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* New noise_norm perturbation based on Riva work (#6445)

* Initial commit for new noise_norm perturbation

Signed-off-by: Daniel Egert <degert@nvidia.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Minor fix to random seed in perturb

Signed-off-by: Daniel Egert <degert@nvidia.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Updated code to reflect feedback

Signed-off-by: Daniel Egert <degert@nvidia.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Updates for feedback given by code reviewers

Signed-off-by: Daniel Egert <degert@nvidia.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Updates in response to PR feedback

Signed-off-by: Daniel Egert <degert@nvidia.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Added comment about ref_mic being None

Signed-off-by: Daniel Egert <degert@nvidia.com>

* Updated perturb to use inspect module

Signed-off-by: Daniel Egert <degert@nvidia.com>

---------

Signed-off-by: Daniel Egert <degert@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* [TTS] Add script for computing feature stats (#6508)

* [TTS] Add script for computing feature stats

Signed-off-by: Ryan <rlangman@nvidia.com>

* [TTS] Add overwrite config

Signed-off-by: Ryan <rlangman@nvidia.com>

---------

Signed-off-by: Ryan <rlangman@nvidia.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Add Frame-VAD model and datasets (#6441)

* add model, dataset, necessary utils and tests

Signed-off-by: stevehuang52 <heh@nvidia.com>

* fix tarred data

Signed-off-by: stevehuang52 <heh@nvidia.com>

* fix typo

Signed-off-by: stevehuang52 <heh@nvidia.com>

* update docstring

Signed-off-by: stevehuang52 <heh@nvidia.com>

* update doc

Signed-off-by: stevehuang52 <heh@nvidia.com>

* update doc

Signed-off-by: stevehuang52 <heh@nvidia.com>

* update pretrained model info

Signed-off-by: stevehuang52 <heh@nvidia.com>

---------

Signed-off-by: stevehuang52 <heh@nvidia.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Support dynamic length batches with GPT SFT (#6510)

* Support dynamic length with GPT SFT

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>

* make branch functional

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>

---------

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* added back the fast emit section to the configs. (#6540) (#6542)

* added back the fast emit section to the configs.



* added back the fast emit section to the configs.



---------

Signed-off-by: Vahid <vnoroozi@nvidia.com>
Co-authored-by: Vahid Noroozi <VahidooX@users.noreply.github.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* removing unnecessary avoid_bfloat16_autocast_context (#6481)

Signed-off-by: Dima Rekesh <bmwshop@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* FC models in menu (#6473)

* FC models in menu

Signed-off-by: Dima Rekesh <bmwshop@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: Dima Rekesh <bmwshop@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* [TTS] Add tutorials for FastPitch TTS speaker adaptation with adapters (#6431)

* Add tts adapter tutorial

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Update main tutorial

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Add tts adapter tutorial

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Update main tutorial

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Update tutorial

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Follow comments

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Follow comments

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Fix load .nemo error

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Support multi-speaker fine-tune

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Follow comments

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Use .nemo

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Follow Comments

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Fix bug

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Fix bug

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Fix bug

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Add precomputed speaker emb

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix space

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Remove repeated argument

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* optional batch size

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Fix comments in notebook

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

---------

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* [TTS] Create initial TTS dataset feature processors (#6507)

Signed-off-by: Ryan <rlangman@nvidia.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* fix (#6529) (#6546)

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>
Co-authored-by: Abhinav Khattar <aklife97@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Add FastConformer Hybrid ASR models for EN, ES, IT, DE, PL, HR, UA, BY (#6549) (#6553)

* Added fastconformer hybrid asr models for en, es, it, de, pl, hr, ua, by



* updated ASR docs with the fastconformer hybrid checkpoints



* added the fastconformer RNNT and CTC models



---------

Signed-off-by: KunalDhawan <kunaldhawan97@gmail.com>
Co-authored-by: Kunal Dhawan <kunaldhawan97@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Add scores for FastConformer models (#6557) (#6558)

Signed-off-by: smajumdar <titu1994@gmail.com>
Co-authored-by: Somshubra Majumdar <titu1994@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Fix fp16 (#6543) (#6544)

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
Co-authored-by: Sandeep Subramanian <sandeep.subramanian.1@umontreal.ca>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Patch transcribe and support offline transcribe for hybrid model (#6550) (#6559)

Signed-off-by: fayejf <fayejf07@gmail.com>
Co-authored-by: fayejf <36722593+fayejf@users.noreply.github.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Fix notebook bad json (#6561)

Signed-off-by: smajumdar <titu1994@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Change Megatron Enc Dec model to use persistent_workers (#6548) (#6552)

* persistent workers



* fix



---------

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>
Co-authored-by: Abhinav Khattar <aklife97@gmail.com>
Co-authored-by: Eric Harper <complex451@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Make KenLM with PC for AggregateTokenizer and merge it (#6081)

* do_lowercase, rm_punctuation

Signed-off-by: Nikolay Karpov <nkarpov@nvidia.com>

* support beam_strategy = beam

Signed-off-by: Nikolay Karpov <nkarpov@nvidia.com>

* black

Signed-off-by: Nikolay Karpov <nkarpov@nvidia.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fix config and punctuation capitalization

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* rm math

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* update kenlm

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* black

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* add opengrm

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* mv install_beamsearch_decoders

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* punctuation_to_preserve

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Only tokenizer option

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* Black

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* DEFAULT_TOKEN_OFFSET

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* aggregate_tokenizer

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* install kenlm with more than 5gram

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* install_beamsearch_decoders

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* ngram_bin_path kenlm_bin_path

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* black

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* fix greedy PC bug

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* move global params

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* fix description and perplexity

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* fix description

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* NEMO_PATH

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* nemo:23.01

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* License

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* description

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* isinstance

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* refactor kenlm stdin

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* black

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* add cmd arg

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* use new iter_files

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* EncDecHybridRNNTCTCModel

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* punctuation

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* train_kenlm args

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* add docstrings

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* add ngram_merge docs

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* ngram_prune

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* rename to ngram_merge

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* rename to ngram

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* add comments

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* Ngram

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* nemo_model_file

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* install_opengrm_ngram

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* install opengrm

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* rename to install_opengrm.sh

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* rm extra import

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* train_paths

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* text_processing

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* fix ngram_bin_path

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* DECODERS_PATH

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* farcompile

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* rm text processing

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* text_processing

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* AggregateTokenizer.DummyTokenizer

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* comments

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* TextProcessingConfig

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* typo

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* doc

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* types

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* nemo_model_file

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* rm assert

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* import kenlm_utils

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* return None

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* Copyright

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* 2022

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

* 2023

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>

---------

Signed-off-by: Nikolay Karpov <nkarpov@nvidia.com>
Signed-off-by: Nikolay Karpov <karpnv@gmail.com>
Co-authored-by: Nikolay Karpov <nkarpov@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* fix for running on 1 GPU.

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* temp rtd fix (#6568) (#6569)

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>
Co-authored-by: Abhinav Khattar <aklife97@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* [TTS] Add script for mapping speaker names to indices (#6509)

Signed-off-by: Ryan <rlangman@nvidia.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* whitespace (#6574)

Signed-off-by: Nikolay Karpov <karpnv@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Update manifest.py for speedup (#6565) (#6573)

* Update manifest.py

Re-order the checks for faster processing of audio filepaths that are already absolute paths
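A minimal sketch of the reordering idea, with assumed names rather than the actual manifest.py code:

import os

def get_full_path(audio_file: str, manifest_file: str) -> str:
    # Fast path first: an absolute path that exists needs no resolution,
    # so the common case returns before any further checks run.
    if os.path.isabs(audio_file) and os.path.exists(audio_file):
        return audio_file
    # Slow path: interpret the path relative to the manifest's directory.
    return os.path.join(os.path.dirname(os.path.abspath(manifest_file)), audio_file)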



* Update manifest.py



---------

Signed-off-by: He Huang (Steve) <105218074+stevehuang52@users.noreply.github.com>
Co-authored-by: He Huang (Steve) <105218074+stevehuang52@users.noreply.github.com>
Co-authored-by: Vahid Noroozi <VahidooX@users.noreply.github.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* More streaming conformer export fixes (#6567) (#6578)

Signed-off-by: Greg Clark <grclark@nvidia.com>
Co-authored-by: Greg Clark <grclark@nvidia.com>
Co-authored-by: Vahid Noroozi <VahidooX@users.noreply.github.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* user-selected max_seq_len should be less than the model's max_seq_len (#6333) (#6386)

* user selection should not break model max limit



* eval max seq length



---------

Signed-off-by: arendu <adithya.r@gmail.com>
Signed-off-by: Adi Renduchintala <108822655+arendu@users.noreply.github.com>
Co-authored-by: Adi Renduchintala <108822655+arendu@users.noreply.github.com>
Co-authored-by: Sandeep Subramanian <sandeep.subramanian.1@umontreal.ca>
Co-authored-by: Eric Harper <complex451@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Framework for PEFT via mixins  (#6391)

* init commit ptuning via mixin

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* updates

Signed-off-by: arendu <adithya.r@gmail.com>

* gpt ptuning places virtual tokens on the left only

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* encoder input modified when pre_process is true

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* optimizer group and state dict updates

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* adapter ptuning working for pp>1

Signed-off-by: arendu <adithya.r@gmail.com>

* adapter defaults

Signed-off-by: arendu <adithya.r@gmail.com>

* adapter ptuning config defaults

Signed-off-by: arendu <adithya.r@gmail.com>

* training works

Signed-off-by: arendu <adithya.r@gmail.com>

* loading and saving adapter only params during training

Signed-off-by: arendu <adithya.r@gmail.com>

* added checks and comments

Signed-off-by: arendu <adithya.r@gmail.com>

* clean up

Signed-off-by: arendu <adithya.r@gmail.com>

* checks for grad is None before calling all_reduce

Signed-off-by: arendu <adithya.r@gmail.com>

* load adapter .nemo file working

Signed-off-by: arendu <adithya.r@gmail.com>

* resume training for adapters

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* peft tuning

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* minor

Signed-off-by: arendu <adithya.r@gmail.com>

* file not needed

Signed-off-by: arendu <adithya.r@gmail.com>

* undo prompt learning dataset changes

Signed-off-by: arendu <adithya.r@gmail.com>

* undo updates to gpt prompt learning model

Signed-off-by: arendu <adithya.r@gmail.com>

* naming updates

Signed-off-by: arendu <adithya.r@gmail.com>

* decoding

Signed-off-by: arendu <adithya.r@gmail.com>

* predict_step in gpt_sft_model

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* updates

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* removed inference from tuning config

Signed-off-by: arendu <adithya.r@gmail.com>

* no test in peft training

Signed-off-by: arendu <adithya.r@gmail.com>

* answer only loss and correct defaults for val_loss

Signed-off-by: arendu <adithya.r@gmail.com>

* hybrid adapters and ptuning

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* eval working..

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* prepending tokens for ptuning

Signed-off-by: arendu <adithya.r@gmail.com>

* cleaned up eval config

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* clean up

Signed-off-by: arendu <adithya.r@gmail.com>

* update

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* default prompt template

Signed-off-by: arendu <adithya.r@gmail.com>

* Lora added

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Support dynamic length with GPT SFT

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>

* make branch functional

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>

* defaults to max_pad_length=False in GPT SFT dataset

Signed-off-by: arendu <adithya.r@gmail.com>

* adapted parallel_adapters to support Lora

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* added early stopping by default

Signed-off-by: arendu <adithya.r@gmail.com>

* eval script for peft and eval config. bug fixes in predict step and added out_features to t5 adapter config

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* updates

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* updates

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* docs

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* better defaults

Signed-off-by: arendu <adithya.r@gmail.com>

* updates

Signed-off-by: arendu <adithya.r@gmail.com>

* update

Signed-off-by: arendu <adithya.r@gmail.com>

* docs

Signed-off-by: arendu <adithya.r@gmail.com>

---------

Signed-off-by: arendu <adithya.r@gmail.com>
Signed-off-by: Abhinav Khattar <aklife97@gmail.com>
Signed-off-by: Adi Renduchintala <108822655+arendu@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Abhinav Khattar <aklife97@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* cache and reuse inputs (#6422) (#6452)

Co-authored-by: Sangkug Lym <slym@nvidia.com>
Co-authored-by: Eric Harper <complex451@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Add patches for Virtual Parallel conversion (#6589)

* Add patches for Virtual Parallel conversion

Signed-off-by: smajumdar <titu1994@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: smajumdar <titu1994@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Pass `.scale` instead of scaler object to core (#6551)

* pass .scale instead of scaler object to core (#6545)

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>
Co-authored-by: Eric Harper <complex451@gmail.com>

* Update megatron_gpt_model.py

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>

* scale changes for main

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>

---------

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>
Co-authored-by: Abhinav Khattar <aklife97@gmail.com>
Co-authored-by: Eric Harper <complex451@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Documentation for ASR-TTS models (#6594) (#6595)

* Add docs about hybrid ASR-TTS models



* Add docs about text-only datasets



* Add docs about ASR-TTS checkpoints



* Add docs about ASR-TTS configs and training



* Clean up



* ASR-TTS docs: add to api, fix imports



* Clean up



* Wrap optional import



* Revert general ASR import



---------

Signed-off-by: Vladimir Bataev <vbataev@nvidia.com>
Co-authored-by: Vladimir Bataev <vbataev@nvidia.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* [TTS] Fix aligner nan loss in fp32 (#6435)

* Fix nan loss in fp32

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Update SDP docs (#6485) (#6596)

* add info about SDP e.g. processor classes in docs



* add link to SDP docs in README



* address code review comments and add SDP overview diagram



* Fix spelling typo



---------

Signed-off-by: Elena Rastorgueva <erastorgueva@nvidia.com>
Co-authored-by: Elena Rastorgueva <80532067+erastorgueva-nv@users.noreply.github.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Bug/typo fixes (#6599)

Signed-off-by: Igor Gitman <igitman@nvidia.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Manual garbage collection with an interval (#6469) (#6482)

* Manual garbage collection with an interval



* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* use trainer.global_step for tracking the interval of GC (see the sketch after this list)



---------
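A sketch of the interval-based collection idea referenced above (names assumed, not the actual NeMo implementation):

import gc

class ManualGCCallback:
    # Disable Python's automatic collector and run gc.collect() every
    # `gc_interval` steps, keyed off trainer.global_step so the collection
    # schedule stays consistent across training and validation.
    def __init__(self, gc_interval: int):
        self.gc_interval = gc_interval
        if gc_interval > 0:
            gc.disable()  # take collection timing away from the automatic scheduler

    def maybe_collect(self, global_step: int):
        if self.gc_interval > 0 and global_step % self.gc_interval == 0:
            gc.collect()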

Signed-off-by: Sangkug Lym <slym@nvidia.com>
Co-authored-by: Sangkug Lym <slym@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Eric Harper <complex451@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Make tensor split contiguous (#6580) (#6593)

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>
Co-authored-by: Abhinav Khattar <aklife97@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* [ASR] Fix for old models in change_attention_model (#6608)

* fixes

Signed-off-by: sam1373 <samuelkriman@gmail.com>

* done already

Signed-off-by: sam1373 <samuelkriman@gmail.com>

---------

Signed-off-by: sam1373 <samuelkriman@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Update manifest.py to use os.path for get_full_path (#6598)

* Update manifest.py to use os.path for get_full_path

Signed-off-by: He Huang (Steve) <105218074+stevehuang52@users.noreply.github.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update manifest.py to get rid of pathlib

Signed-off-by: He Huang (Steve) <105218074+stevehuang52@users.noreply.github.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update manifest.py

Signed-off-by: He Huang (Steve) <105218074+stevehuang52@users.noreply.github.com>

* Update manifest.py

Signed-off-by: He Huang (Steve) <105218074+stevehuang52@users.noreply.github.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: He Huang (Steve) <105218074+stevehuang52@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Vahid Noroozi <VahidooX@users.noreply.github.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Cherry pick commits in #6601 to main (#6611)

* fix write

Signed-off-by: fayejf <fayejf07@gmail.com>

* decoding ctc

Signed-off-by: fayejf <fayejf07@gmail.com>

* temp set rnnt decoding return_best_hypothesis to true

Signed-off-by: fayejf <fayejf07@gmail.com>

* add WER calculation back to transcribe_speech as requested

Signed-off-by: fayejf <fayejf07@gmail.com>

* add WER calculation back to speech_to_text_buffered_infer_rnnt as requested

Signed-off-by: fayejf <fayejf07@gmail.com>

* add WER calculation back to speech_to_text_buffered_infer_ctc as requested

Signed-off-by: fayejf <fayejf07@gmail.com>

* style fix

Signed-off-by: fayejf <fayejf07@gmail.com>

* reflect change in asr_evaluator

Signed-off-by: fayejf <fayejf07@gmail.com>

* reflect Som and Vahid's comments

Signed-off-by: fayejf <fayejf07@gmail.com>

* remove return_best_hy=true in transcribe_speech

Signed-off-by: fayejf <fayejf07@gmail.com>

* no text skip

Signed-off-by: fayejf <fayejf07@gmail.com>

* revert partial

Signed-off-by: fayejf <fayejf07@gmail.com>

---------

Signed-off-by: fayejf <fayejf07@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Create dummy iters to satisfy len checks (#6600) (#6603)

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>
Co-authored-by: Abhinav Khattar <aklife97@gmail.com>
Co-authored-by: Eric Harper <complex451@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* add GPT eval mode fix for interleaved to main (#6610)

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Fix batch size reconf for T5 FT for multi-validation (#6582) (#6588)

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>
Co-authored-by: Abhinav Khattar <aklife97@gmail.com>
Co-authored-by: Eric Harper <complex451@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Not doing CastToFloat by default (#6524) (#6563)

* Not doing CastToFloat by default



* Added docstring



* Dummy commit



---------

Signed-off-by: Boris Fomitchev <bfomitchev@nvidia.com>
Co-authored-by: Boris Fomitchev <borisfom@users.noreply.github.com>
Co-authored-by: Eric Harper <complex451@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Turn autocast off when precision is fp32 (#6576)

* Turn autocast off when precision is fp32 (#6554)

* Turn autocast off when precision is fp32

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>
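At fp32 an autocast region only adds per-op dtype bookkeeping with no benefit, so the context can be made a no-op. A sketch of the idea with assumed precision flags, not NeMo's exact code:

import contextlib
import torch

def forward_context(precision):
    # Enter autocast only for mixed-precision settings; return a no-op
    # context manager when running in full fp32.
    if precision in (16, "16", "bf16"):
        dtype = torch.bfloat16 if precision == "bf16" else torch.float16
        return torch.autocast(device_type="cuda", dtype=dtype)
    return contextlib.nullcontext()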

* address review

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fixes

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>

* merge

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>

---------

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Eric Harper <complex451@gmail.com>

* correct auto-merge

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>

* correct auto-merge

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>

* add to GPT SFT

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>

---------

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>
Co-authored-by: Abhinav Khattar <aklife97@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Eric Harper <complex451@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* update core commit hash in readme (#6622) (#6623)

Signed-off-by: Abhinav Khattar <aklife97@gmail.com>
Co-authored-by: Abhinav Khattar <aklife97@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* add hat image to docs (#6619) (#6621)

Signed-off-by: andrusenkoau <andrusenkoau@gmail.com>
Co-authored-by: Andrei Andrusenko <52885736+andrusenkoau@users.noreply.github.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Allow indices exchange via distributed (#6618) (#6624)

Signed-off-by: Mikołaj Błaż <mblaz@nvidia.com>
Co-authored-by: mikolajblaz <mikolajblaz@users.noreply.github.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Offline and streaming inference support for hybrid model (#6570)

* streaming buffered for hybrid + ctc

Signed-off-by: fayejf <fayejf07@gmail.com>

* change default model_stride in eval.yaml

Signed-off-by: fayejf <fayejf07@gmail.com>

* add fc model_stride

Signed-off-by: fayejf <fayejf07@gmail.com>

* small fix

Signed-off-by: fayejf <fayejf07@gmail.com>

* check whether model and decoding match

Signed-off-by: fayejf <fayejf07@gmail.com>

* small fix

Signed-off-by: fayejf <fayejf07@gmail.com>

* streaming buffered for hybrid + rnnt

Signed-off-by: fayejf <fayejf07@gmail.com>

* style fix

Signed-off-by: fayejf <fayejf07@gmail.com>

* fix yaml

Signed-off-by: fayejf <fayejf07@gmail.com>

* reflect comment wip

Signed-off-by: fayejf <fayejf07@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fix

Signed-off-by: fayejf <fayejf07@gmail.com>

* refactor and verified

Signed-off-by: fayejf <fayejf07@gmail.com>

* add get_full_path to buffered

Signed-off-by: fayejf <fayejf07@gmail.com>

* small fix

Signed-off-by: fayejf <fayejf07@gmail.com>

* add RNNTDecodingConfig

Signed-off-by: fayejf <fayejf07@gmail.com>

* model name & instruction of changing decoding

Signed-off-by: fayejf <fayejf07@gmail.com>

---------

Signed-off-by: fayejf <fayejf07@gmail.com>
Signed-off-by: fayejf <36722593+fayejf@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Patch decoding for PC models (#6630) (#6631)

* Patch decoding logic for PC models



* Patch decoding logic for PC models



---------

Signed-off-by: smajumdar <titu1994@gmail.com>
Co-authored-by: Somshubra Majumdar <titu1994@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Fix wer.py where the 'errors' variable was not set (#6633) (#6634)

Fix wer.py where the 'errors' variable was not set when both reference and hypothesis are empty strings
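The failure mode and one possible guard, as a self-contained sketch (the function and names below are illustrative assumptions, not the actual wer.py code):

def word_error_counts(reference: str, hypothesis: str):
    # When both strings are empty there is nothing to align, so `errors`
    # must be set explicitly rather than left undefined.
    ref_words, hyp_words = reference.split(), hypothesis.split()
    if not ref_words and not hyp_words:
        errors = 0  # the branch that previously left `errors` unset
    elif not ref_words:
        errors = len(hyp_words)  # every hypothesis word is an insertion
    else:
        errors = edit_distance(ref_words, hyp_words)
    return errors, len(ref_words)

def edit_distance(ref, hyp):
    # Word-level Levenshtein distance via a single-row dynamic program.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
    return d[len(hyp)]

assert word_error_counts("", "") == (0, 0)  # no longer raises NameError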

Signed-off-by: He Huang (Steve) <105218074+stevehuang52@users.noreply.github.com>
Co-authored-by: He Huang (Steve) <105218074+stevehuang52@users.noreply.github.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Restore GPT support for interleaved pipeline parallelism (#6528) (#6613)

* Restore logic for data-parallel communication with pipeline parallelism in GPT



* Support dynamic attention masks in GPT



* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Debug typos



* Debug data iterator caching with interleaved pipeline parallelism

Each model chunk accesses the data iterator multiple times, so we need to cache multiple samples.
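A sketch of one possible caching design (assumed, not the actual NeMo code): each virtual-pipeline model chunk gets a proxy iterator, and a sample is fetched from the underlying iterator once, then cached so every chunk can replay it.

from collections import deque

class CachingDataIterator:
    def __init__(self, iterator, num_chunks):
        self._iterator = iterator
        self._queues = [deque() for _ in range(num_chunks)]

    def proxy(self, chunk_id):
        def gen():
            while True:
                if not self._queues[chunk_id]:
                    sample = next(self._iterator)  # fetch once from the shared iterator
                    for q in self._queues:
                        q.append(sample)  # cache the sample for every chunk
                yield self._queues[chunk_id].popleft()
        return gen()

# Hypothetical usage: two model chunks see the same stream of microbatches.
it = CachingDataIterator(iter(range(4)), num_chunks=2)
chunk0, chunk1 = it.proxy(0), it.proxy(1)
assert next(chunk0) == next(chunk1) == 0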



* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update Megatron-LM commit



* Distinguish between list of data iterators and data iterator that is a list



* Create dummy iters to satisfy len checks



* Kludge while waiting for Megatron-LM update



* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* set transformers offline to avoid rate limiting



---------

Signed-off-by: Tim Moon <tmoon@nvidia.com>
Signed-off-by: Eric Harper <complex451@gmail.com>
Signed-off-by: Abhinav Khattar <aklife97@gmail.com>
Signed-off-by: ericharper <complex451@gmail.com>
Signed-off-by: Tim Moon <4406448+timmoon10@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Eric Harper <complex451@gmail.com>
Co-authored-by: Abhinav Khattar <aklife97@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Add FA

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Fix XPOS

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Add warning

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Fix bugs

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Fix attention

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Fix comment

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Fix cast dtype

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Undo xpos

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* bugfix (#6636)

Signed-off-by: fayejf <fayejf07@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Disable interctc tests (#6638)

Signed-off-by: Igor Gitman <igitman@nvidia.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Add megatron_core to requirements (#6639) (#6640)

* add megatron_core to requirements



* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: ericharper <complex451@gmail.com>
Co-authored-by: Eric Harper <complex451@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Remove from jenkins (#6642)

* Remove from jenkins (#6641)

* add megatron_core to requirements

Signed-off-by: ericharper <complex451@gmail.com>

* remove from jenkins

Signed-off-by: ericharper <complex451@gmail.com>

---------

Signed-off-by: ericharper <complex451@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* remove dup

Signed-off-by: ericharper <complex451@gmail.com>

---------

Signed-off-by: ericharper <complex451@gmail.com>
Co-authored-by: Eric Harper <complex451@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* sft model can use this script for eval (#6637)

* sft model can use this script for eval

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* please fix me

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* minor

Signed-off-by: arendu <adithya.r@gmail.com>

---------

Signed-off-by: arendu <adithya.r@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* [TTS] Fix TTS audio preprocessing bugs (#6628)

Signed-off-by: Ryan <rlangman@nvidia.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Move black parameters to pyproject.toml (#6647)

Signed-off-by: Vladimir Bataev <vbataev@nvidia.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* ASR-TTS Models: Support hybrid RNNT-CTC, improve docs. (#6620)

* ASR-TTS: support hybrid RNNT-CTC models
* Do not warn on optional import
* Explain adding options to config
* Fix import guard docs
* Add docs for ConcatDataset
* Add explanation for sampling parameters
* Initial docs for the enhancer model
* Fix use_start_end_token parameter usage

---------

Signed-off-by: Vladimir Bataev <vbataev@nvidia.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* fix conversion and eval (#6648)

* fix conversion and eval

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: arendu <adithya.r@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Confidence ensembles implementation (#6614)

* Working version to train conf model + save ensemble class

Signed-off-by: Igor Gitman <igitman@nvidia.com>

* Working version

Signed-off-by: Igor Gitman <igitman@nvidia.com>

* Remove copy of transcribe_speech.py

Signed-off-by: Igor Gitman <igitman@nvidia.com>

* Move models parameter to config

Signed-off-by: Igor Gitman <igitman@nvidia.com>

* Add explicit parameters to transcribe

Signed-off-by: Igor Gitman <igitman@nvidia.com>

* Small cleanups

Signed-off-by: Igor Gitman <igitman@nvidia.com>

* Add temperature and integration tests

Signed-off-by: Igor Gitman <igitman@nvidia.com>

* Add more tests

Signed-off-by: Igor Gitman <igitman@nvidia.com>

* Add pc removal config

Signed-off-by: Igor Gitman <igitman@nvidia.com>

* Cleanup

Signed-off-by: Igor Gitman <igitman@nvidia.com>

* Fix typo

Signed-off-by: Igor Gitman <igitman@nvidia.com>

* Address review comments

Signed-off-by: Igor Gitman <igitman@nvidia.com>

---------

Signed-off-by: Igor Gitman <igitman@nvidia.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Patch memory used for NeMo Megatron models (#6615)

* Patch memory used for NeMo Megatron models

Signed-off-by: smajumdar <titu1994@gmail.com>

* Cleanup the dtype of embeddings

Signed-off-by: smajumdar <titu1994@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Refactor util function for parsing precision

Signed-off-by: smajumdar <titu1994@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Refactor util function for parsing precision

Signed-off-by: smajumdar <titu1994@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Try patch for Megatron O2

Signed-off-by: smajumdar <titu1994@gmail.com>

* Refactor to incorporate megatron amp O2 state

Signed-off-by: smajumdar <titu1994@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Refactor to incorporate megatron amp O2 state

Signed-off-by: smajumdar <titu1994@gmail.com>

* Correct indent

Signed-off-by: smajumdar <titu1994@gmail.com>

* Correct utils import

Signed-off-by: smajumdar <titu1994@gmail.com>

---------

Signed-off-by: smajumdar <titu1994@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* handle artifacts when path is dir (#6658)

Signed-off-by: arendu <adithya.r@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* remove upgrading setuptools in reinstall.sh (#6659)

Signed-off-by: Xuesong Yang <1646669+XuesongYang@users.noreply.github.com>
Co-authored-by: fayejf <36722593+fayejf@users.noreply.github.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* merge lora weights into base model (#6597)

* merge lora weights into base model

Signed-off-by: arendu <adithya.r@gmail.com>
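The usual merge math folds the low-rank update into the frozen weight, W <- W + (alpha / r) * B A, after which the adapter can be dropped at inference. A minimal sketch with assumed shapes and scaling, not NeMo's exact code:

import torch

def merge_lora_weights(base_weight, lora_a, lora_b, alpha, rank):
    # Fold the scaled low-rank product into the base weight matrix.
    return base_weight + (alpha / rank) * (lora_b @ lora_a)

# Hypothetical usage for a 16x32 linear layer with rank-4 adapters:
w = torch.randn(16, 32)
a = torch.randn(4, 32)   # lora_a: (rank, in_features)
b = torch.randn(16, 4)   # lora_b: (out_features, rank)
merged = merge_lora_weights(w, a, b, alpha=8, rank=4)  # same shape as w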

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* typo fix

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* minor update

Signed-off-by: arendu <adithya.r@gmail.com>

* update copyright

Signed-off-by: arendu <adithya.r@gmail.com>

* eval needs to know the PEFT class

Signed-off-by: arendu <adithya.r@gmail.com>

* add target class in training script so that we can use it in eval

Signed-off-by: arendu <adithya.r@gmail.com>

* update

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* update to work for tp1

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* set restore model path

Signed-off-by: arendu <adithya.r@gmail.com>

* peft can be none

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* updated merge script so that eval works easily

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* eval with peft or sft model

Signed-off-by: arendu <adithya.r@gmail.com>

* keep sentences in jsonl format

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* convert sft using correct classpath

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* updated to force sft yaml to have the correct target

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* updated docs

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fix conversion and eval

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: arendu <adithya.r@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* upgrade to 23.04 (#6660)

Signed-off-by: ericharper <complex451@gmail.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Merge r1.18.0 bugfixes and doc updates to main (#6655)

* update branch

Signed-off-by: ericharper <complex451@gmail.com>

* Remove from jenkins (#6641)

* add megatron_core to requirements

Signed-off-by: ericharper <complex451@gmail.com>

* remove from jenkins

Signed-off-by: ericharper <complex451@gmail.com>

---------

Signed-off-by: ericharper <complex451@gmail.com>

* remove dup

Signed-off-by: ericharper <complex451@gmail.com>

* update branch

Signed-off-by: ericharper <complex451@gmail.com>

* [TTS] reformat NeMo versions in the TTS logging messages to avoid batch-processing them when upgrading NeMo versions.

Signed-off-by: Xuesong Yang <1646669+XuesongYang@users.noreply.github.com>

---------

Signed-off-by: ericharper <complex451@gmail.com>
Signed-off-by: Xuesong Yang <1646669+XuesongYang@users.noreply.github.com>
Co-authored-by: Xuesong Yang <1646669+XuesongYang@users.noreply.github.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Confidence ensembles: fix issues and add tuning functionality (#6657)

* Implement confidence computation to properly handle blanks

Signed-off-by: Igor Gitman <igitman@nvidia.com>

* Implement proper confidence for transducers

Signed-off-by: Igor Gitman <igitman@nvidia.com>

* Implement tuning logic

Signed-off-by: Igor Gitman <igitman@nvidia.com>

* Add tests for confidence tuning

Signed-off-by: Igor Gitman <igitman@nvidia.com>

* Remove unused imports

Signed-off-by: Igor Gitman <igitman@nvidia.com>

* Add types/docs

Signed-off-by: Igor Gitman <igitman@nvidia.com>

* Add comment about the main conf compute loop

Signed-off-by: Igor Gitman <igitman@nvidia.com>

---------

Signed-off-by: Igor Gitman <igitman@nvidia.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* [TTS] Implement new TextToSpeech dataset (#6575)

* [TTS] Implement new TextToSpeech dataset

Signed-off-by: Ryan <rlangman@nvidia.com>

* [TTS] Add unit tests

Signed-off-by: Ryan <rlangman@nvidia.com>

* [TTS] Fix defaulting of use_log_energy

Signed-off-by: Ryan <rlangman@nvidia.com>

* [TTS] Fix TTS export test

Signed-off-by: Ryan <rlangman@nvidia.com>

---------

Signed-off-by: Ryan <rlangman@nvidia.com>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Dialogue dataset (#6654)

* chatbot interface

Signed-off-by: Yi Dong <yidong@nvidia.com>

* latest gradio

Signed-off-by: Yi Dong <yidong@nvidia.com>

* default greedy

Signed-off-by: Yi Dong <yidong@nvidia.com>

* better chatbot

Signed-off-by: Yi Dong <yidong@nvidia.com>

* handle preamble

Signed-off-by: Yi Dong <yidong@nvidia.com>

* added chatbot training capability

Signed-off-by: Yi Dong <yidong@nvidia.com>

* added chatbot ui

Signed-off-by: Yi Dong <yidong@nvidia.com>

* remove debug code

Signed-off-by: Yi Dong <yidong@nvidia.com>

* default human

Signed-off-by: Yi Dong <yidong@nvidia.com>

* use special token for roles

Signed-off-by: Yi Dong <yidong@nvidia.com>

* special tokens

Signed-off-by: Yi Dong <yidong@nvidia.com>

* fix name

Signed-off-by: Yi Dong <yidong@nvidia.com>

* new chat dataset

Signed-off-by: Yi Dong <yidong@nvidia.com>

* fix the system token

Signed-off-by: Yi Dong <yidong@nvidia.com>

* upgrade gradio

Signed-off-by: Yi Dong <yidong@nvidia.com>

* save the chat history

Signed-off-by: Yi Dong <yidong@nvidia.com>

* update ui

Signed-off-by: root <you@example.com>

* update chat interface

Signed-off-by: Yi Dong <yidong@nvidia.com>

* handles canonical form

Signed-off-by: Yi Dong <yidong@nvidia.com>

* new sft chatbot

Signed-off-by: Yi Dong <yidong@nvidia.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* change format

Signed-off-by: Yi Dong <yidong@nvidia.com>

* check extra_id in the tokenizer

Signed-off-by: Yi Dong <yidong@nvidia.com>

* added vocab property check

Signed-off-by: Yi Dong <yidong@nvidia.com>

* added missing file

Signed-off-by: Yi Dong <yidong@nvidia.com>

---------

Signed-off-by: Yi Dong <yidong@nvidia.com>
Signed-off-by: root <you@example.com>
Co-authored-by: root <you@example.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Sandeep Subramanian <sandeep.subramanian.1@umontreal.ca>
Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>

* Add support for RNNT/hybrid models to partial transcribe (#6609)

* Add support for RNNT/hybrid models to partial transcribe

Signed-off-by: He Huang (Steve) <105218074+stevehuang52@users.noreply.github.com>

* Update transcribe_utils.py

Signed-off-by: He Huang (Steve) <105218074+stevehuang52@users.noreply.github.com>

* Update transcribe_speech.py

Signed-off-by: He Huang (Steve) <105218074+stevehuang52@users.noreply.github.com>

* Update transcr…
Showing 36 changed files with 1,843 additions and 277 deletions.
5 changes: 5 additions & 0 deletions Dockerfile
@@ -72,6 +72,11 @@ WORKDIR /tmp/nemo
COPY requirements .
RUN for f in $(ls requirements*.txt); do pip3 install --disable-pip-version-check --no-cache-dir -r $f; done

# install flash attention dependencies
RUN pip install flash-attn
# pinned triton version for flash-attention https://github.com/HazyResearch/flash-attention/blob/main/flash_attn/flash_attn_triton.py#L3
RUN pip install triton==2.0.0.dev20221202

# install k2, skip if installation fails
COPY scripts /tmp/nemo/scripts/
RUN INSTALL_MSG=$(/bin/bash /tmp/nemo/scripts/speech_recognition/k2/setup.sh); INSTALL_CODE=$?; \
346 changes: 346 additions & 0 deletions Jenkinsfile

Large diffs are not rendered by default.

10 changes: 10 additions & 0 deletions README.rst
@@ -280,6 +280,16 @@ It is highly recommended to use the NVIDIA PyTorch or NeMo container if having i

Transformer Engine requires PyTorch to be built with CUDA 11.8.


Flash Attention
~~~~~~~~~~~~~~~~~~~~
Transformer Engine already supports Flash Attention for GPT models. If you want to use Flash Attention for non-causal models, or with an attention bias (introduced by position encodings such as Alibi), please install `flash-attn <https://github.com/HazyResearch/flash-attention>`_.

.. code-block:: bash

    pip install flash-attn
    pip install triton==2.0.0.dev20221202

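A quick way to confirm the install is to mirror the ``HAVE_FLASH_ATTENTION`` guard this PR adds to ``nemo.collections.nlp.modules.common.megatron.attention`` — a minimal sketch, not part of the diff:

.. code-block:: python

    # Availability probe: the same try/except pattern the attention module
    # uses to set HAVE_FLASH_ATTENTION at import time.
    try:
        import flash_attn  # noqa: F401

        HAVE_FLASH_ATTENTION = True
    except ImportError:
        HAVE_FLASH_ATTENTION = False

    print(f"flash-attn available: {HAVE_FLASH_ATTENTION}")
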
NeMo Text Processing
~~~~~~~~~~~~~~~~~~~~
NeMo Text Processing, specifically (Inverse) Text Normalization, is now a separate repository `https://github.com/NVIDIA/NeMo-text-processing <https://github.com/NVIDIA/NeMo-text-processing>`_.
5 changes: 4 additions & 1 deletion examples/nlp/language_modeling/conf/megatron_gpt_config.yaml
@@ -77,7 +77,7 @@ model:
transformer_block_type: 'pre_ln' # Options ['pre_ln', 'post_ln', 'normformer']
openai_gelu: False # Use OpenAI's GELU instead of the default GeLU
normalize_attention_scores: True # Whether to scale the output Q * K^T by 1 / sqrt(hidden_size_per_head). This arg is provided as a configuration option mostly for compatibility with models that have been weight-converted from HF. You almost always want to set this to True.
position_embedding_type: 'learned_absolute' # Position embedding type. Options ['learned_absolute', 'rope']
position_embedding_type: 'learned_absolute' # Position embedding type. Options ['learned_absolute', 'rope', 'alibi', 'kerple', 'xpos', 'sandwich']. xpos and sandwich are experimental.
rotary_percentage: 1.0 # If using position_embedding_type=rope, then the per head dim is multiplied by this.
attention_type: 'multihead' # Attention type. Options ['multihead']
share_embeddings_and_output_weights: True # Share embedding and output layer weights.
@@ -167,6 +167,9 @@ model:
reduce_amax: True # Perform reduction to sync amax tensors across GPUs after every iteration
use_emha: False # Use fused multi-head attention for large sequence-length. Note this is not yet supported. Please set to False.

## Flash Attention
use_flash_attention: False # Use flash attention in the self-attention module; this config has no effect when transformer_engine=True

data:
# Path to data must be specified by the user.
# Supports List, String and Dictionary
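
Taken together, the two additions to this config (the extended ``position_embedding_type`` options and the ``use_flash_attention`` flag) can be flipped programmatically. A hedged OmegaConf sketch — the path is the config file shown above, and nothing here is validated against a running trainer:

.. code-block:: python

    from omegaconf import OmegaConf

    cfg = OmegaConf.load("examples/nlp/language_modeling/conf/megatron_gpt_config.yaml")
    cfg.model.use_flash_attention = True         # ignored when transformer_engine=True
    cfg.model.position_embedding_type = "alibi"  # or 'kerple'; 'xpos'/'sandwich' are experimental
    print(OmegaConf.to_yaml(cfg.model))
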
@@ -36,4 +36,5 @@ megatron_legacy: False # Whether to use the legacy Megatron model. This affects
normalize_attention_scores: True # Whether to scale the output Q * K^T by 1 / sqrt(hidden_size_per_head). This arg is provided as a configuration option mostly for compatibility with models that have been weight-converted from HF. You almost always want to set this to True.
num_moe_experts: 1 # When >1, FFNs are changed to MoE layers
moe_frequency: 1 # every Nth ffn layer will be made MoE
moe_dropout: 0.0 # Dropout value for MoE layers
use_flash_attention: false # Use flash attention in self-attention module
@@ -129,4 +129,5 @@ inference:
repetition_penalty: 1.2 # The parameter for repetition penalty. 1.0 means no penalty.
min_tokens_to_generate: 0 # The minimum length of the sequence to be generated.
compute_logprob: False # Flag to compute the logprob of all the input text; a very special case of running inference. Default: False
outfile_path: output.txt
compute_attention_mask: True
@@ -151,6 +151,7 @@ def __init__(
gradient_accumulation_fusion=False,
persist_layer_norm=False,
openai_gelu=False,
megatron_legacy=False,
onnx_safe=False,
sequence_parallel=False,
transformer_engine=False,
@@ -163,6 +164,7 @@
fp8_amax_compute_algo='most_recent',
reduce_amax=True,
use_emha=False,
use_flash_attention=False,
):
super(GPTModel, self).__init__(share_token_embeddings=share_embeddings_and_output_weights)

@@ -232,6 +234,7 @@ def __init__(
persist_layer_norm=persist_layer_norm,
openai_gelu=openai_gelu,
onnx_safe=onnx_safe,
megatron_legacy=megatron_legacy,
sequence_parallel=sequence_parallel,
transformer_engine=transformer_engine,
fp8=fp8,
@@ -243,6 +246,7 @@
fp8_amax_compute_algo=fp8_amax_compute_algo,
reduce_amax=reduce_amax,
use_emha=use_emha,
use_flash_attention=use_flash_attention,
)

if self.share_embeddings_and_output_weights:
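
The GPTModel changes above are pure plumbing: two new keyword arguments (``megatron_legacy``, ``use_flash_attention``) enter the outer constructor with safe defaults and are forwarded to the inner language model untouched. A stripped-down sketch of the pattern, with hypothetical class names:

.. code-block:: python

    # Hypothetical stand-ins for GPTModel and its inner language model.
    class InnerLM:
        def __init__(self, hidden_size, megatron_legacy=False, use_flash_attention=False):
            self.hidden_size = hidden_size
            self.megatron_legacy = megatron_legacy
            self.use_flash_attention = use_flash_attention


    class OuterGPT:
        def __init__(self, hidden_size, megatron_legacy=False, use_flash_attention=False):
            # New flags default to False and are forwarded verbatim, exactly
            # as the hunk above forwards them into the language model.
            self.language_model = InnerLM(
                hidden_size=hidden_size,
                megatron_legacy=megatron_legacy,
                use_flash_attention=use_flash_attention,
            )
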
@@ -25,6 +25,7 @@
from pytorch_lightning.trainer.trainer import Trainer

from nemo.collections.nlp.models.nlp_model import NLPModel
from nemo.collections.nlp.modules.common.megatron.attention import HAVE_FLASH_ATTENTION
from nemo.collections.nlp.modules.common.megatron.clip_grads import (
clip_grad_norm_distributed_optimizer,
clip_grad_norm_fp32,
@@ -84,6 +85,12 @@ def __init__(self, cfg: DictConfig, trainer: Trainer, no_lm_init=True):
if trainer is None:
raise ValueError("Trainer cannot be None for Megatron-based models. Please provide a PTL trainer object.")

if cfg.get('use_flash_attention', False) and not HAVE_FLASH_ATTENTION:
raise ImportError(
"flash_attn was not found. Please see the installation instructions: https://github.com/HazyResearch/flash-attention."
"If you use flash_attn with triton. Please install triton==2.0.0.dev20221202."
)

# this prevents base constructor from initializing tokenizer
self.tokenizer = None

@@ -205,9 +212,10 @@ def _build_tokenizer(self):
self.tokenizer = get_nmt_tokenizer(
library=self._cfg.tokenizer.library,
model_name=self._cfg.tokenizer.type,
tokenizer_model=self.register_artifact("tokenizer.model", self._cfg.tokenizer.model),
vocab_file=self.register_artifact("tokenizer.vocab_file", self._cfg.tokenizer.vocab_file),
merges_file=self.register_artifact("tokenizer.merge_file", self._cfg.tokenizer.merge_file),
tokenizer_model=self.register_artifact("tokenizer.model", self._cfg.tokenizer.get('model', None)),
vocab_file=self.register_artifact("tokenizer.vocab_file", self._cfg.tokenizer.get('vocab_file', None)),
merges_file=self.register_artifact("tokenizer.merge_file", self._cfg.tokenizer.get('merge_file', None)),
use_fast=self.cfg.tokenizer.get('use_fast', False),
delimiter=self.cfg.tokenizer.get('delimiter', None),
legacy=legacy,
)
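
The ``_build_tokenizer`` change swaps direct attribute access for ``.get(..., None)``, so tokenizer configs that omit optional fields (``model``, ``vocab_file``, ``merge_file``) no longer raise under struct configs. An illustrative sketch, not NeMo code:

.. code-block:: python

    from omegaconf import OmegaConf

    tok_cfg = OmegaConf.create({"library": "sentencepiece", "model": "tokenizer.model"})
    OmegaConf.set_struct(tok_cfg, True)  # Hydra-style configs reject unknown keys

    # Attribute access (tok_cfg.vocab_file) would raise under struct mode;
    # .get() returns the default instead.
    print(tok_cfg.get("vocab_file", None))  # -> None
    print(tok_cfg.get("model", None))       # -> tokenizer.model
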
@@ -300,7 +300,7 @@ def get_inference_config(self):
def model_provider_func(self, pre_process, post_process):
"""Model depends on pipeline paralellism."""
model = GPTModel(
vocab_size=self.padded_vocab_size,
vocab_size=self.cfg.get('override_vocab_size', self.padded_vocab_size),
hidden_size=self.cfg.hidden_size,
max_position_embeddings=self.cfg.max_position_embeddings,
num_layers=self.cfg.num_layers,
@@ -357,6 +357,8 @@ def model_provider_func(self, pre_process, post_process):
fp8_amax_compute_algo=self.cfg.get('fp8_amax_compute_algo', 'most_recent'),
reduce_amax=self.cfg.get('reduce_amax', True),
use_emha=self.cfg.get('use_emha', False),
use_flash_attention=self.cfg.get('use_flash_attention', False),
megatron_legacy=self.cfg.get('megatron_legacy', False),
)

return model
@@ -765,7 +767,6 @@ def fwd_output_and_loss_func(dataloader_iter, model, checkpoint_activations_all_
if self.get_attention_mask_from_fusion:
required_keys.remove('attention_mask')
batch = {key: val.cuda(non_blocking=True) if key in required_keys else None for key, val in batch.items()}

# Model forward pass
output_tensor = model(
batch['tokens'],
@@ -822,9 +823,10 @@ def fwd_output_only_func(dataloader_iter, model):
inference_max_sequence_len,
) = batch
tokens = tokens.cuda()
attention_mask = attention_mask.cuda()
position_ids = position_ids.cuda()
attention_mask = attention_mask[0:1]
if attention_mask is not None:
attention_mask = attention_mask.cuda()
attention_mask = attention_mask[0:1]
extra_arg['set_inference_key_value_memory'] = set_inference_key_value_memory[0].item()
extra_arg['inference_max_sequence_len'] = inference_max_sequence_len[0].item()
output_tensor = model(tokens, position_ids, attention_mask, **extra_arg)
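
``fwd_output_only_func`` now treats the attention mask as optional — presumably so inference can skip building a mask entirely (see ``compute_attention_mask`` below). A minimal sketch of the same guard:

.. code-block:: python

    def prepare_attention_mask(attention_mask):
        # Only move and slice the mask when one was provided; keep just the
        # first mask in the batch, as in the hunk above.
        if attention_mask is not None:
            attention_mask = attention_mask.cuda()
            attention_mask = attention_mask[0:1]
        return attention_mask
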
@@ -753,6 +753,7 @@ def predict_step(self, batch: Any, batch_idx: int, dataloader_idx: Optional[int]
"add_BOS": inference_config["add_BOS"],
"all_probs": inference_config["all_probs"],
"compute_logprob": inference_config["compute_logprob"],
"compute_attention_mask": inference_config.get("compute_attention_mask", True),
}

task_ids, processed_inputs = batch
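
``predict_step`` reads the new ``compute_attention_mask`` key with a default of True, so existing inference configs keep their behavior and only opted-in configs skip mask construction. A sketch of the fallback lookup:

.. code-block:: python

    inference_config = {"greedy": True, "add_BOS": False, "compute_logprob": False}

    # A missing key falls back to True, matching inference_config.get(...) above.
    compute_attention_mask = inference_config.get("compute_attention_mask", True)
    print(compute_attention_mask)  # -> True
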
