Add NeVA_Mixtral Tutorial (with new NeVA features) #9912

Closed
paul-gibbons wants to merge 87 commits into r2.0.0rc1

Conversation

paul-gibbons
Collaborator

What does this PR do?

This PR adds an additional notebook covering new features in NeVA (see the configuration sketch below):
  • Mistral and Mixtral support
  • Token Fusion support
  • SigLIP support
  • Video modality support
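
For orientation, the sketch below shows how these features might be selected through the existing NeVA example config. The config path, the `model.mm_cfg.*` keys, and the SigLIP checkpoint id are assumptions based on the current NeVA examples in NeMo; the notebook itself is the authoritative reference.

```python
# A minimal sketch, not code from the notebook: it assumes the NeVA example config
# layout (model.mm_cfg.*) and uses illustrative values for the new features.
from omegaconf import OmegaConf

cfg = OmegaConf.load("examples/multimodal/multimodal_llm/neva/conf/neva_config.yaml")

# Use a Mistral/Mixtral language model instead of the default (key name is an assumption).
cfg.model.mm_cfg.llm.model_type = "mistral"

# Point the vision encoder at a SigLIP checkpoint instead of CLIP (illustrative value).
cfg.model.mm_cfg.vision_encoder.from_pretrained = "google/siglip-so400m-patch14-384"

print(OmegaConf.to_yaml(cfg.model.mm_cfg))
```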

Collection: Multimodal (NeVA)

Changelog

  • Adds a new tutorial notebook for NeVA demonstrating Mistral/Mixtral, Token Fusion, SigLIP, and video modality support.

Usage

  • A usage sketch is included below; the notebook path in it is an assumption until the PR lands.
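
For example, once the notebook is merged it could be executed headlessly with nbconvert. The notebook filename below is a placeholder; use the file actually added under tutorials/ by this PR.

```python
# Hypothetical headless run of the tutorial; the notebook path is a placeholder and
# should be replaced with the file added by this PR.
import subprocess

subprocess.run(
    [
        "jupyter", "nbconvert", "--to", "notebook", "--execute",
        "tutorials/multimodal/NeVA_Mixtral_Tutorial.ipynb",  # placeholder path
        "--output", "neva_mixtral_executed.ipynb",
    ],
    check=True,
)
```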

GitHub Actions CI

The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.

The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI, remove the label and add it again.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".
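
If you use the GitHub CLI, the remove-and-re-add step can be scripted as sketched below. This assumes `gh` is installed and authenticated with write access; it is a convenience example, not part of the NeMo tooling.

```python
# Hypothetical convenience script for re-triggering CI on this PR; assumes the GitHub
# CLI (`gh`) is installed and authenticated with write access to the repository.
import subprocess

PR_NUMBER = "9912"
CI_LABEL = "Run CICD"

# Removing and re-adding the label triggers a fresh GitHub Actions CI run.
subprocess.run(["gh", "pr", "edit", PR_NUMBER, "--remove-label", CI_LABEL], check=True)
subprocess.run(["gh", "pr", "edit", PR_NUMBER, "--add-label", CI_LABEL], check=True)
```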

Before your PR is "Ready for review"

Pre checks:

  • Make sure you have read and followed the Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (e.g. Numba, Pynini, Apex, etc.)
    • Reviewer: Does the PR have correct import guards for all optional libraries? (See the import-guard sketch below.)
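
For reference, a minimal import-guard pattern for an optional dependency is sketched below; the `HAVE_APEX` flag and the error message are illustrative rather than NeMo's exact convention.

```python
# Minimal import-guard sketch for an optional dependency; the flag name and message
# are illustrative, not NeMo's exact convention.
try:
    import apex  # optional dependency

    HAVE_APEX = True
except (ImportError, ModuleNotFoundError):
    apex = None
    HAVE_APEX = False


def fused_op(*args, **kwargs):
    """Example entry point that fails with a clear error when Apex is missing."""
    if not HAVE_APEX:
        raise ImportError("This code path requires Apex; install it or use the unfused path.")
    # ... a real implementation would dispatch to apex here ...
```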

PR Type:

  • New Feature
  • Bugfix
  • Documentation

If you haven't finished some of the above items, you can still open a "Draft" PR.

Who can review?

Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list the specific people who can review PRs to various areas.

Additional Information

  • Related to # (issue)

borisfom and others added 30 commits July 8, 2024 14:50
* Nemotron ONNX export fixed

Signed-off-by: Boris Fomitchev <bfomitchev@nvidia.com>

* Cleanup

Signed-off-by: Boris Fomitchev <bfomitchev@nvidia.com>

* Addressing code review comments

Signed-off-by: Boris Fomitchev <bfomitchev@nvidia.com>

---------

Signed-off-by: Boris Fomitchev <bfomitchev@nvidia.com>
Co-authored-by: Eric Harper <complex451@gmail.com>
Signed-off-by: huvunvidia <86480512+huvunvidia@users.noreply.github.com>
* add slurm files to .gitignore

* add differentiable decode to SDXL VAE

* Optionally return predicted noise during the single step sampling process
* also factor `get_gamma` out as a new function for use inside other
  functions which may interact with sampling (e.g. draft+)

* debugging sdunet converter script

* Added SD/SDXL conversion script from HF to NeMo
* added 'from_nemo' config for VAE

* tmp commit, please make changes (oci is super slow, cannot even run vim)

* new inference yaml works

* add logging to autoencoder

* !(dont squash) Added enabling support for LinearWrapper for SDLoRA

* added samples_per_batch and fsdp arguments to SDXL inference

* added extra optionally wrapper to FSDP

* remove unnecessary comments

* remove unnecessary comments

* Apply isort and black reformatting

Signed-off-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>

---------

Signed-off-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>
Co-authored-by: Rohit Jena <rohitkumarj@nvidia.com>
Co-authored-by: Yu Yao <54727607+yaoyu-33@users.noreply.github.com>
Co-authored-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>
* add NemoQueryLLMPyTorch class for triton query of in-framework models

* nemo_export.py changes to better support in-framework models

* separate out in-framework version of triton deploy script

* add generate() function to MegatronLLMDeployable to allow for direct use in export tests

* use NemoQueryLLMPyTorch in deploy tests

* add warning message for when MegatronLLMDeployable overrides transformer_engine

* remove enable_streaming argument from deploy_inframework_triton.py since MegatronLLMDeployable does not support streaming
add query_inframework.py since original query.py does not work with in-framework deployments

* Apply isort and black reformatting

Signed-off-by: jukim-nv <jukim-nv@users.noreply.github.com>

* skip trtllm support check if in_framework testing

* remove unused imports

* run_existing_checkpoints was passing wrong prompts argument for in-framework mode

* fix unused import in query_inframework.py

---------

Signed-off-by: jukim-nv <jukim-nv@users.noreply.github.com>
Co-authored-by: jukim-nv <jukim-nv@users.noreply.github.com>
Co-authored-by: Onur Yilmaz <35306097+oyilmaz-nvidia@users.noreply.github.com>
* Use FP8 in GPT TP2 test

Signed-off-by: Jan Baczek <jbaczek@nvidia.com>

* Add hydra options to use TE, TP overlap and FP8

Signed-off-by: Jan Baczek <jbaczek@nvidia.com>

* Override presence checks in hydra

Signed-off-by: Jan Baczek <jbaczek@nvidia.com>

* WIP: Add debug code

Signed-off-by: Jan Baczek <jbaczek@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: jbaczek <jbaczek@users.noreply.github.com>

* Add more debug code

Signed-off-by: Jan Baczek <jbaczek@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: jbaczek <jbaczek@users.noreply.github.com>

* Add more debug code

Signed-off-by: Jan Baczek <jbaczek@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: jbaczek <jbaczek@users.noreply.github.com>

* Remove debug code and change underlying transformer layer to TE

Signed-off-by: Jan Baczek <jbaczek@nvidia.com>

* Override hydra error

Signed-off-by: Jan Baczek <jbaczek@nvidia.com>

* Remove tp overlap from the test

Signed-off-by: Jan Baczek <jbaczek@nvidia.com>

* Change runner for fp8 tests

Signed-off-by: Jan Baczek <jbaczek@nvidia.com>

* fix

Signed-off-by: Jan Baczek <jbaczek@nvidia.com>

* Add tp overlap test

Signed-off-by: Jan Baczek <jbaczek@nvidia.com>

* Remove TP overlap from tests. It is unsupported in the Docker environment

Signed-off-by: Jan Baczek <jbaczek@nvidia.com>

* Adjust GPT PP2 test to use FP8. Change optimizer in TP2 test

Signed-off-by: Jan Baczek <jbaczek@nvidia.com>

* Remove env overrides from GPT PP2 test

Signed-off-by: Jan Baczek <jbaczek@nvidia.com>

---------

Signed-off-by: Jan Baczek <jbaczek@nvidia.com>
Signed-off-by: jbaczek <jbaczek@users.noreply.github.com>
Co-authored-by: jbaczek <jbaczek@users.noreply.github.com>
Co-authored-by: Pablo Garay <palenq@gmail.com>
…variety of tensors (NVIDIA#9641)

* enables default data step in megatron parallel to operate on a wider variety of tensors coming out of the dataloader

* handles the case where a batch is empty

* Apply isort and black reformatting

Signed-off-by: jomitchellnv <jomitchellnv@users.noreply.github.com>

* Allows the default data step to operate on more types
than just dictionaries

Signed-off-by: Jonathan Mitchell <jomitchell@nvidia.com>

---------

Signed-off-by: jomitchellnv <jomitchellnv@users.noreply.github.com>
Signed-off-by: Jonathan Mitchell <jomitchell@nvidia.com>
Co-authored-by: jomitchellnv <jomitchellnv@users.noreply.github.com>
Co-authored-by: Marc Romeyn <mromeijn@nvidia.com>
* wip contrastive reranker

Signed-off-by: arendu <adithya.r@gmail.com>

* wip

Signed-off-by: arendu <adithya.r@gmail.com>

* wip

Signed-off-by: arendu <adithya.r@gmail.com>

* working reranker training and validation

Signed-off-by: arendu <adithya.r@gmail.com>

* default peft for reranker

Signed-off-by: arendu <adithya.r@gmail.com>

* validation time update

Signed-off-by: arendu <adithya.r@gmail.com>

* reranker test

Signed-off-by: arendu <adithya.r@gmail.com>

* reranker inference

Signed-off-by: arendu <adithya.r@gmail.com>

* reranker inference

Signed-off-by: arendu <adithya.r@gmail.com>

* Apply isort and black reformatting

Signed-off-by: arendu <arendu@users.noreply.github.com>

* updates

Signed-off-by: arendu <adithya.r@gmail.com>

* Apply isort and black reformatting

Signed-off-by: arendu <arendu@users.noreply.github.com>

* updates

Signed-off-by: arendu <adithya.r@gmail.com>

* Apply isort and black reformatting

Signed-off-by: arendu <arendu@users.noreply.github.com>

* also can support rlhf style reward model loss

Signed-off-by: arendu <adithya.r@gmail.com>

* Apply isort and black reformatting

Signed-off-by: arendu <arendu@users.noreply.github.com>

* Apply isort and black reformatting

Signed-off-by: arendu <arendu@users.noreply.github.com>

* typo in cicd

Signed-off-by: arendu <adithya.r@gmail.com>

---------

Signed-off-by: arendu <adithya.r@gmail.com>
Signed-off-by: arendu <arendu@users.noreply.github.com>
Signed-off-by: Adi Renduchintala <adithya.r@gmail.com>
Co-authored-by: arendu <arendu@users.noreply.github.com>
* unpin transformers

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* guard deprecated imports

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* Apply isort and black reformatting

Signed-off-by: dimapihtar <dimapihtar@users.noreply.github.com>

* fix import guards

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* fix import guards

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* Apply isort and black reformatting

Signed-off-by: dimapihtar <dimapihtar@users.noreply.github.com>

* try fixing

Signed-off-by: Chen Cui <chcui@nvidia.com>

* disable HF tests

Signed-off-by: Dmytro Pykhtar <dpykhtar@login-eos01.eos.clusters.nvidia.com>

* try fixing

Signed-off-by: Chen Cui <chcui@nvidia.com>

* hard code model lists

Signed-off-by: Chen Cui <chcui@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: cuichenx <cuichenx@users.noreply.github.com>

* hard code model lists

Signed-off-by: Chen Cui <chcui@nvidia.com>

---------

Signed-off-by: dimapihtar <dpihtar@gmail.com>
Signed-off-by: dimapihtar <dimapihtar@users.noreply.github.com>
Signed-off-by: Chen Cui <chcui@nvidia.com>
Signed-off-by: Dmytro Pykhtar <dpykhtar@login-eos01.eos.clusters.nvidia.com>
Signed-off-by: cuichenx <cuichenx@users.noreply.github.com>
Co-authored-by: dimapihtar <dimapihtar@users.noreply.github.com>
Co-authored-by: Chen Cui <chcui@nvidia.com>
Co-authored-by: Dmytro Pykhtar <dpykhtar@login-eos01.eos.clusters.nvidia.com>
Co-authored-by: cuichenx <cuichenx@users.noreply.github.com>
* Added CPU offloading docs

Signed-off-by: Selvaraj Anandaraj <selvaraja@login-eos02.eos.clusters.nvidia.com>

* Tech writer review

Signed-off-by: Selvaraj Anandaraj <selvaraja@login-eos02.eos.clusters.nvidia.com>

---------

Signed-off-by: Selvaraj Anandaraj <selvaraja@login-eos02.eos.clusters.nvidia.com>
Co-authored-by: Selvaraj Anandaraj <selvaraja@login-eos02.eos.clusters.nvidia.com>
Co-authored-by: Yu Yao <54727607+yaoyu-33@users.noreply.github.com>
* Update llama-3 PEFT notebook to download model from NGC

Signed-off-by: Shashank Verma <shashank3959@gmail.com>

* Fix broken link in llama-3 PEFT tutorial README

Signed-off-by: Shashank Verma <shashank3959@gmail.com>

* Fix broken code block in llama 3 PEFT tutorial README

Signed-off-by: Shashank Verma <shashank3959@gmail.com>

* Copy-edits to Llama-3 8B PEFT tutorial README

Signed-off-by: Shashank Verma <shashank3959@gmail.com>

* Fix broken link

Signed-off-by: Shashank Verma <shashank3959@gmail.com>

* Minor formatting fixes

Signed-off-by: Shashank Verma <shashank3959@gmail.com>

---------

Signed-off-by: Shashank Verma <shashank3959@gmail.com>
Signed-off-by: ashors1 <ashors@nvidia.com>
Co-authored-by: Anna Shors <71393111+ashors1@users.noreply.github.com>
Co-authored-by: Marc Romeyn <mromeijn@nvidia.com>
Co-authored-by: ashors1 <ashors@nvidia.com>
* add lita

Signed-off-by: Slyne Deng <slyned@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Slyne <Slyne@users.noreply.github.com>

* add part of the tutorial and fix format

Signed-off-by: slyne deng <slyned@nvidia.com>

* add tutorial

Signed-off-by: slyne deng <slyned@nvidia.com>

* fix Tutorial ckpt conversion

Signed-off-by: slyne deng <slyned@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Slyne <Slyne@users.noreply.github.com>

* update cicd

Signed-off-by: Slyne Deng <slyned@nvidia.com>

* add to CICD test

Signed-off-by: Slyne Deng <slyned@nvidia.com>

* changes based on review comments

Signed-off-by: Slyne Deng <slyned@nvidia.com>

* fix bot warning

Signed-off-by: Slyne Deng <slyned@nvidia.com>

* update cicd main

Signed-off-by: Slyne Deng <slyned@nvidia.com>

* fix cicd ckpt conversion

Signed-off-by: Slyne Deng <slyned@nvidia.com>

---------

Signed-off-by: Slyne Deng <slyned@nvidia.com>
Signed-off-by: Slyne <Slyne@users.noreply.github.com>
Signed-off-by: slyne deng <slyned@nvidia.com>
Co-authored-by: Slyne Deng <slyned@nvidia.com>
Co-authored-by: Slyne <Slyne@users.noreply.github.com>
Co-authored-by: Yu Yao <54727607+yaoyu-33@users.noreply.github.com>
* Parametrize FPS group



* Apply isort and black reformatting



* Change default to False



* Add logic to new ckptIO



* Turn on parallel save by default



---------

Signed-off-by: Mikołaj Błaż <mblaz@nvidia.com>
Signed-off-by: mikolajblaz <mikolajblaz@users.noreply.github.com>
Co-authored-by: mikolajblaz <mikolajblaz@users.noreply.github.com>
Co-authored-by: Dmytro Pykhtar <37850217+dimapihtar@users.noreply.github.com>
* huvu/mcore_t5 first commit from local

* removing DEBUGGING prints

* cleaning megatron_lm_encoder_decoder_model.py code

* cleaning code

* adding Github action test

* only run mcore T5 test

* only run mcore T5 test

* only run mcore T5 test

* only run mcore T5 test

* reset .github/workflows/cicd-main.yml

* reset .github/workflows/cicd-main.yml

* adding condition self.mcore_t5 when running self.build_transformer_config()

* refactor megatron_lm_encoder_decoder_model.py to not use self.model

* only run T5-related tests

* remove all self.model

* reset cicd file

* reset cicd file

* updating code: remove duplicate if/else; add mcore/transformer_engine to config file

* adjust +model.mcore_t5=True

* fix training for non-mcore, bf16, O2

* reset cicd-main.yml

---------

Co-authored-by: Huy Vu2 <huvu@login-eos01.eos.clusters.nvidia.com>
Signed-off-by: Oliver Koenig <okoenig@nvidia.com>
Signed-off-by: Pablo Garay <pagaray@nvidia.com>
* adding mamba support

* fix import mixins

* rm convert jamba

* Apply isort and black reformatting

Signed-off-by: JRD971000 <JRD971000@users.noreply.github.com>

* more cleanups

* use GPT text gen

* Apply isort and black reformatting

Signed-off-by: JRD971000 <JRD971000@users.noreply.github.com>

* fixing gbs in TP converter

* Apply isort and black reformatting

Signed-off-by: JRD971000 <JRD971000@users.noreply.github.com>

* add reqs

* add tutorial

* minor fix to tutorial

* moving finetuning files

Signed-off-by: arendu <adithya.r@gmail.com>

* moving finetuning files

Signed-off-by: arendu <adithya.r@gmail.com>

* address comments

* Apply isort and black reformatting

Signed-off-by: JRD971000 <JRD971000@users.noreply.github.com>

* address comments

* Apply isort and black reformatting

Signed-off-by: JRD971000 <JRD971000@users.noreply.github.com>

* address comments

* add mamba dependencies

* add mcore tag

* modify dockerfile ci

* modify dockerfile ci

* fix TP>1 to TP1

* add inference, update based on latest mcore commits

* Apply isort and black reformatting

Signed-off-by: JRD971000 <JRD971000@users.noreply.github.com>

* minor fix

* Apply isort and black reformatting

Signed-off-by: JRD971000 <JRD971000@users.noreply.github.com>

* minor fix

* Apply isort and black reformatting

Signed-off-by: JRD971000 <JRD971000@users.noreply.github.com>

* bug fix, tutorial update

---------

Signed-off-by: JRD971000 <JRD971000@users.noreply.github.com>
Signed-off-by: arendu <adithya.r@gmail.com>
Co-authored-by: Ali Taghibakhshi <ataghibakhsh@login-eos01.eos.clusters.nvidia.com>
Co-authored-by: JRD971000 <JRD971000@users.noreply.github.com>
Co-authored-by: arendu <adithya.r@gmail.com>
Signed-off-by: Ryan <rlangman@nvidia.com>
* commit to eval/sft/peft

* update MCORE_COMMIT

* address Chen's comments, updating retro unit test

* Apply isort and black reformatting

Signed-off-by: huvunvidia <huvunvidia@users.noreply.github.com>

---------

Signed-off-by: huvunvidia <huvunvidia@users.noreply.github.com>
Co-authored-by: Huy Vu2 <huvu@login-eos01.eos.clusters.nvidia.com>
Co-authored-by: huvunvidia <huvunvidia@users.noreply.github.com>
…IDIA#9715)

* Allow non-strict load



* Point to non-strict load MCore branch



* Avoid module level StrictHandling



* Use MCore fork



* Update to MCore fix



* Restore backward compatibility



* Update flag defaults



* Update MCore tag



* Update PyT Dist interface



* Update to latest core_r0.8.0



---------

Signed-off-by: Mikołaj Błaż <mblaz@nvidia.com>
Co-authored-by: mikolajblaz <mikolajblaz@users.noreply.github.com>
Signed-off-by: Oliver Koenig <okoenig@nvidia.com>
* fix legacy ds padding bug

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* Apply isort and black reformatting

Signed-off-by: dimapihtar <dimapihtar@users.noreply.github.com>

* avoid code repetition

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* fix typo

Signed-off-by: dimapihtar <dpihtar@gmail.com>

---------

Signed-off-by: dimapihtar <dpihtar@gmail.com>
Signed-off-by: dimapihtar <dimapihtar@users.noreply.github.com>
Co-authored-by: dimapihtar <dimapihtar@users.noreply.github.com>
…variety of tensors - second try (NVIDIA#9671)

* enables default data step in megatron parallel to operate on a wider variety of tensors coming out of the dataloader

Signed-off-by: Jonathan Mitchell <jomitchell@nvidia.com>

* handles the case where a batch is empty

Signed-off-by: Jonathan Mitchell <jomitchell@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: jomitchellnv <jomitchellnv@users.noreply.github.com>
Signed-off-by: Jonathan Mitchell <jomitchell@nvidia.com>

* Allows the default data step to operate on more types
than just dictionaries

Signed-off-by: Jonathan Mitchell <jomitchell@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: jomitchellnv <jomitchellnv@users.noreply.github.com>

---------

Signed-off-by: Jonathan Mitchell <jomitchell@nvidia.com>
Signed-off-by: jomitchellnv <jomitchellnv@users.noreply.github.com>
Co-authored-by: jomitchellnv <jomitchellnv@users.noreply.github.com>
Co-authored-by: John St. John <jstjohn@users.noreply.github.com>
…A#9647)

* Fix when optimizers are setup for PEFT

* Apply isort and black reformatting



* Init DDP inside PEFT

* Apply isort and black reformatting



* Some fixes, loss seems to become nan with peft for some reason

* Apply isort and black reformatting



* Loss goes down on fp32

* Apply isort and black reformatting



* Simplifying FNMixin

* Apply isort and black reformatting



* Fix bug with new checkpoint-io

* Apply isort and black reformatting



* Fix failing test: test_peft_on_train_epoch_start_with_adapter

* Apply isort and black reformatting



---------

Signed-off-by: marcromeyn <marcromeyn@users.noreply.github.com>
Signed-off-by: ashors1 <ashors@nvidia.com>
Co-authored-by: Marc Romeyn <mromeijn@nvidia.com>
Co-authored-by: marcromeyn <marcromeyn@users.noreply.github.com>
Co-authored-by: Chen Cui <chcui@nvidia.com>
Co-authored-by: ashors1 <ashors@nvidia.com>
* refactor: README
* refactor: Use new README in `setup.py`

Signed-off-by: Oliver Koenig <okoenig@nvidia.com>
* Remove mask if use fusion mask

Signed-off-by: Cheng-Ping Hsieh <chsieh@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: hsiehjackson <hsiehjackson@users.noreply.github.com>

---------

Signed-off-by: Cheng-Ping Hsieh <chsieh@nvidia.com>
Signed-off-by: hsiehjackson <hsiehjackson@users.noreply.github.com>
Co-authored-by: hsiehjackson <hsiehjackson@users.noreply.github.com>
BuyuanCui and others added 18 commits July 23, 2024 17:31
* adding japanese text preprocessing
* japanese phoneme tokenizer
* japanese tests
* japanese g2p model
* japanese word to ipa dictionary
* add requirements

Signed-off-by: Alex Cui <alexcui1994@gmail.com>

---------

Signed-off-by: Alex Cui <alexcui1994@gmail.com>
Signed-off-by: BuyuanCui <BuyuanCui@users.noreply.github.com>
Co-authored-by: BuyuanCui <BuyuanCui@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Xuesong Yang <1646669+XuesongYang@users.noreply.github.com>
)

* Query TransformerConfig attributes when copying between configs

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* test

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: akoumpa <akoumpa@users.noreply.github.com>

---------

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>
Signed-off-by: akoumpa <akoumpa@users.noreply.github.com>
Co-authored-by: akoumpa <akoumpa@users.noreply.github.com>
* MoE docs

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* fix

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* fix

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* additional fixes

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

---------

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>
Co-authored-by: Yu Yao <54727607+yaoyu-33@users.noreply.github.com>
* Update Huggingface Hub support



* Update hf hub



* Update hf hub



* Apply isort and black reformatting



---------

Signed-off-by: smajumdar <titu1994@gmail.com>
Signed-off-by: Somshubra Majumdar <titu1994@gmail.com>
Signed-off-by: titu1994 <titu1994@users.noreply.github.com>
Co-authored-by: Somshubra Majumdar <titu1994@gmail.com>
* Make alignments tests work on any machine



---------

Signed-off-by: Vladimir Bataev <vbataev@nvidia.com>
Signed-off-by: artbataev <artbataev@users.noreply.github.com>
Co-authored-by: artbataev <artbataev@users.noreply.github.com>
* Update arch check for SD

Signed-off-by: Jaemin Choi <jaeminc@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: minitu <minitu@users.noreply.github.com>

---------

Signed-off-by: Jaemin Choi <jaeminc@nvidia.com>
Signed-off-by: minitu <minitu@users.noreply.github.com>
Co-authored-by: Jaemin Choi <jaeminc@nvidia.com>
Co-authored-by: minitu <minitu@users.noreply.github.com>
* Revert "Jpg2p jun18 (NVIDIA#9538)"

This reverts commit 53d7a91.

* Apply isort and black reformatting

Signed-off-by: pablo-garay <pablo-garay@users.noreply.github.com>

---------

Signed-off-by: pablo-garay <pablo-garay@users.noreply.github.com>
Co-authored-by: pablo-garay <pablo-garay@users.noreply.github.com>
* Revert "Jpg2p jun18 (NVIDIA#9538)"

This reverts commit 53d7a91.

* Need first jobs to succeed

* Make failing jobs optional

* Apply isort and black reformatting

Signed-off-by: pablo-garay <pablo-garay@users.noreply.github.com>

---------

Signed-off-by: pablo-garay <pablo-garay@users.noreply.github.com>
Co-authored-by: pablo-garay <pablo-garay@users.noreply.github.com>
* Change decord to guard import

* Apply isort and black reformatting

Signed-off-by: meatybobby <meatybobby@users.noreply.github.com>

---------

Signed-off-by: meatybobby <meatybobby@users.noreply.github.com>
Co-authored-by: meatybobby <meatybobby@users.noreply.github.com>
)

Signed-off-by: Nithin Rao Koluguri <nithinraok>
Co-authored-by: Nithin Rao <nithinrao.koluguri@gmail.com>
* add nemo fundamentals page



* remove unused reference tag



* add link to checkpoints intro



* clarify postprocessing and mention loss function



* rephrase key parameters



* fix typo



* mention trainer accelerator param



* fix bulletpoint formatting



* fix bullet points part 2



* quick formatting fixes



* fix phrasing



* update based on review plus other small fixes



---------

Signed-off-by: Elena Rastorgueva <erastorgueva@nvidia.com>
Co-authored-by: Elena Rastorgueva <80532067+erastorgueva-nv@users.noreply.github.com>
* Torch major and minor versions set to current year and month if YY.MM formatting is not met

Signed-off-by: Dong Hyuk Chang <donghyukc@nvidia.com>

* Update nvidia torch version check

Signed-off-by: Dong Hyuk Chang <donghyukc@nvidia.com>

* Remove redundant import

Signed-off-by: Dong Hyuk Chang <donghyukc@nvidia.com>

* Formatting fix

Signed-off-by: Dong Hyuk Chang <donghyukc@nvidia.com>

---------

Signed-off-by: Dong Hyuk Chang <donghyukc@nvidia.com>
Co-authored-by: Dong Hyuk Chang <donghyukc@nvidia.com>
* fix arg name

Signed-off-by: Sangkug Lym <slym@nvidia.com>

* cleanup

Signed-off-by: Sangkug Lym <slym@nvidia.com>

* cleanup

Signed-off-by: Sangkug Lym <slym@nvidia.com>

---------

Signed-off-by: Sangkug Lym <slym@nvidia.com>
Co-authored-by: Alexandros Koumparoulis <153118171+akoumpa@users.noreply.github.com>
* Added defer wgrad support with mcore optim

Signed-off-by: Selvaraj Anandaraj <selvaraja@login-eos02.eos.clusters.nvidia.com>

* Apply isort and black reformatting

Signed-off-by: sanandaraj5597 <sanandaraj5597@users.noreply.github.com>

---------

Signed-off-by: Selvaraj Anandaraj <selvaraja@login-eos02.eos.clusters.nvidia.com>
Signed-off-by: sanandaraj5597 <sanandaraj5597@users.noreply.github.com>
Co-authored-by: Selvaraj Anandaraj <selvaraja@login-eos02.eos.clusters.nvidia.com>
Co-authored-by: sanandaraj5597 <sanandaraj5597@users.noreply.github.com>
…, videoneva

Signed-off-by: paul-gibbons <paul@gibbonspaul.com>
Signed-off-by: paul-gibbons <paul@gibbonspaul.com>
Signed-off-by: paul-gibbons <paul@gibbonspaul.com>
Signed-off-by: paul-gibbons <paul@gibbonspaul.com>
paul-gibbons changed the base branch from main to r2.0.0rc1 on July 26, 2024 at 16:39