
Conversation

@chtruong814 (Collaborator)

ci: Update gpu runners to use self-hosted-nemo

Signed-off-by: Charlie Truong <chtruong@nvidia.com>
@abhinavg4 (Contributor) left a comment


I need to revert my changes


# Build the command for the mock run
cmd = [
    "uv",
    ...
]
Contributor

Small request: can you add this to L2_Function_Tests_GPU_Wan_Mock_Data.sh, please? That way we use uv in one place only and it's not confusing. I verified that it works, too.
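
For context, a minimal sketch of how the mock-run command could be assembled once the uv invocation lives in L2_Function_Tests_GPU_Wan_Mock_Data.sh. The script path and everything beyond the uv flags named in this thread are assumptions, not the actual test code:

import subprocess

# Sketch only: command shape assumed from this review thread.
# "--group megatron-bridge" is the uv dependency group named in this PR;
# the script path below is hypothetical.
cmd = [
    "uv", "run",
    "--group", "megatron-bridge",
    "bash", "tests/functional_tests/L2_Function_Tests_GPU_Wan_Mock_Data.sh",
]
result = subprocess.run(cmd, capture_output=True, text=True, timeout=300)
print(result.stdout)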

@abhinavg4 (Contributor) left a comment

Need to revert my changes before merging

capture_output=True,
text=True,
- timeout=300,  # 5 minute timeout
+ timeout=3000, # 5 minute timeout
@chtruong814 (Collaborator, Author)

@abhinavg4 why did we need to change this?
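
As an aside, one way to keep the timeout value and its comment from drifting apart (a sketch, not the code in this PR) is to derive the value from a named constant:

import subprocess

TEST_TIMEOUT_MINUTES = 5  # single source of truth for the timeout

cmd = ["uv", "run", "pytest", "tests/"]  # placeholder command for illustration

result = subprocess.run(
    cmd,
    capture_output=True,
    text=True,
    timeout=TEST_TIMEOUT_MINUTES * 60,  # 300 seconds
)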

…meout in test_mcore_wan_pretrain

Signed-off-by: Charlie Truong <chtruong@nvidia.com>
Signed-off-by: Pablo Garay <pagaray@nvidia.com>
- Increased the number of processes per node from 1 to 2 for distributed training.
- Set the number of training iterations to 10 to enhance the training process.
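
Assuming the mock test launches training through torchrun, the two changes above would translate roughly as follows (a sketch; the flag, option names, and script path are assumptions, not copied from the diff):

# Sketch: approximate shape of the two configuration changes above.
cmd = [
    "torchrun",
    "--nproc-per-node", "2",             # was 1; two processes for distributed training
    "tests/test_mcore_wan_pretrain.py",  # hypothetical script path
    "train.train_iters=10",              # run 10 training iterations
]
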
@abhinavg4
Contributor

/ok to tests a209623

@abhinavg4
Contributor

\ok to test a209623

@abhinavg4 abhinavg4 requested review from a team and removed request for a team November 16, 2025 02:54
@abhinavg4
Contributor

/ok to test f2a61c1

@abhinavg4 (Contributor) left a comment

Looks good except the commented code, which should be uncommented.

@abhinavg4 abhinavg4 merged commit 1cb4679 into pablo-garay/mbridge-test-init Nov 16, 2025
10 of 13 checks passed
pablo-garay added a commit that referenced this pull request Nov 16, 2025
* ci: Update gpu runners to use self-hosted-nemo

Signed-off-by: Charlie Truong <chtruong@nvidia.com>

* Use uv run in test_mcore_wan_pretrain

Signed-off-by: Charlie Truong <chtruong@nvidia.com>

* Ensure uv group megatron-bridge is used for test_mcore_wan_pretrain

Signed-off-by: Charlie Truong <chtruong@nvidia.com>

* Update TRANSFORMERS_OFFLINE environment variable to 0 and increase timeout in test_mcore_wan_pretrain

* Update TRANSFORMERS_OFFLINE environment variable to 0 and increase timeout in test_mcore_wan_pretrain

Signed-off-by: Charlie Truong <chtruong@nvidia.com>
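
For reference, TRANSFORMERS_OFFLINE is a standard Hugging Face environment variable; setting it to 0 allows model downloads at test time. A minimal sketch of the toggle:

import os

# Allow Hugging Face transformers to reach the network during the test.
os.environ["TRANSFORMERS_OFFLINE"] = "0"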

* Revert GHA changes

Signed-off-by: Charlie Truong <chtruong@nvidia.com>

* Move uv run group call to L2_Mcore_Mock_Tests_GPU

Signed-off-by: Charlie Truong <chtruong@nvidia.com>

* Set test back to 5 minute timeout

Signed-off-by: Charlie Truong <chtruong@nvidia.com>

* Megatron fixes (#49)

* Enhance DiT and Wan layer specifications

- Updated `get_query_key_value_tensors` method in `dit_attention.py` to include an `output_gate` parameter and set `split_qkv` to default to `True`.
- Modified `WanLayerWithAdaLN` class in `wan_layer_spec.py` to add `rotary_pos_cos_sin` parameter for improved positional encoding handling.
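
For illustration, the signature change described above might take roughly this shape (a sketch; the surrounding class, parameter order, and types are assumptions, not the real dit_attention.py code):

class DiTAttentionSketch:
    """Hypothetical stand-in for the attention class in dit_attention.py."""

    def get_query_key_value_tensors(self, hidden_states, output_gate=None, split_qkv=True):
        # output_gate is the newly added parameter; split_qkv now defaults to True.
        ...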

* Implement ProcessGroupCollection initialization in DiT and Wan models

- Added initialization of `pg_collection` in both `DiTCrossAttentionModel` and `WanModel` to ensure proper handling of process groups.
- This change checks if `pg_collection` exists and is not None before assigning it, enhancing the robustness of the models.
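
One plausible reading of that guard, as a sketch rather than the actual model code (names assumed):

class WanModelSketch:
    """Hypothetical stand-in for WanModel / DiTCrossAttentionModel."""

    def __init__(self, pg_collection=None):
        # Adopt the process-group collection only when one was provided,
        # leaving any existing value untouched otherwise.
        if pg_collection is not None:
            self.pg_collection = pg_collection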

* Update CONTRIBUTING.md to include detailed setup instructions for development environment and Docker container usage. Added sections for building and running the container, as well as setting the PYTHONPATH for DFM.

* Refactor import statements in dit_model.py to streamline dependencies. Removed redundant import of ProcessGroupCollection, enhancing code clarity and maintainability.

* Refactor code style in DiT and Wan models

- Updated string quotes in `dit_model.py` and `wan_model.py` for consistency, changing from single to double quotes.
- Reformatted the `get_query_key_value_tensors` method call in `dit_attention.py` for improved readability by breaking it into multiple lines.

* Revert M4 changes

* Ruff

* Ruff

* Lint

---------

Co-authored-by: Abhinav Garg <abhinavg@stanford.edu>

* Revert "Revert GHA changes"

This reverts commit d7ad1ab.

* tempfortest: timeout setting

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* workflow dispatch

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* update

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* add logging

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* Update test configuration for Mcore WAN pretraining

- Increased the number of processes per node from 1 to 2 for distributed training.
- Set the number of training iterations to 10 to enhance the training process.

* More changes

* Lint

---------

Signed-off-by: Charlie Truong <chtruong@nvidia.com>
Signed-off-by: Pablo Garay <pagaray@nvidia.com>
Co-authored-by: Abhinav Garg <abhinavg@stanford.edu>
Co-authored-by: Pablo Garay <pagaray@nvidia.com>
Signed-off-by: Pablo Garay <pagaray@nvidia.com>
pablo-garay added a commit that referenced this pull request Nov 17, 2025
* Explicit mcore path override to use Megatron-Bridge's pinned submodule commit

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* Update Megatron-Bridge submodule to latest main with correct Megatron-LM commit (3cbe5c68)

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* Add Mcore WAN pretrain mock test to CI/CD

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* lintfix

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* Fix slow Docker build from Megatron-LM source

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* ci: Update gpu runners to use self-hosted-nemo (#48)

* ci: Update gpu runners to use self-hosted-nemo

Signed-off-by: Charlie Truong <chtruong@nvidia.com>

* Use uv run in test_mcore_wan_pretrain

Signed-off-by: Charlie Truong <chtruong@nvidia.com>

* Ensure uv group megatron-bridge is used for test_mcore_wan_pretrain

Signed-off-by: Charlie Truong <chtruong@nvidia.com>

* Update TRANSFORMERS_OFFLINE environment variable to 0 and increase timeout in test_mcore_wan_pretrain

* Update TRANSFORMERS_OFFLINE environment variable to 0 and increase timeout in test_mcore_wan_pretrain

Signed-off-by: Charlie Truong <chtruong@nvidia.com>

* Revert GHA changes

Signed-off-by: Charlie Truong <chtruong@nvidia.com>

* Move uv run group call to L2_Mcore_Mock_Tests_GPU

Signed-off-by: Charlie Truong <chtruong@nvidia.com>

* Set test back to 5 minute timeout

Signed-off-by: Charlie Truong <chtruong@nvidia.com>

* Megatron fixes (#49)

* Enhance DiT and Wan layer specifications

- Updated `get_query_key_value_tensors` method in `dit_attention.py` to include an `output_gate` parameter and set `split_qkv` to default to `True`.
- Modified `WanLayerWithAdaLN` class in `wan_layer_spec.py` to add `rotary_pos_cos_sin` parameter for improved positional encoding handling.

* Implement ProcessGroupCollection initialization in DiT and Wan models

- Added initialization of `pg_collection` in both `DiTCrossAttentionModel` and `WanModel` to ensure proper handling of process groups.
- This change checks if `pg_collection` exists and is not None before assigning it, enhancing the robustness of the models.

* Update CONTRIBUTING.md to include detailed setup instructions for development environment and Docker container usage. Added sections for building and running the container, as well as setting the PYTHONPATH for DFM.

* Refactor import statements in dit_model.py to streamline dependencies. Removed redundant import of ProcessGroupCollection, enhancing code clarity and maintainability.

* Refactor code style in DiT and Wan models

- Updated string quotes in `dit_model.py` and `wan_model.py` for consistency, changing from single to double quotes.
- Reformatted the `get_query_key_value_tensors` method call in `dit_attention.py` for improved readability by breaking it into multiple lines.

* Revert M4 changes

* Ruff

* Ruff

* Lint

---------

Co-authored-by: Abhinav Garg <abhinavg@stanford.edu>

* Revert "Revert GHA changes"

This reverts commit d7ad1ab.

* tempfortest: timeout setting

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* workflow dispatch

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* update

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* add logging

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* Update test configuration for Mcore WAN pretraining

- Increased the number of processes per node from 1 to 2 for distributed training.
- Set the number of training iterations to 10 to enhance the training process.

* More changes

* Lint

---------

Signed-off-by: Charlie Truong <chtruong@nvidia.com>
Signed-off-by: Pablo Garay <pagaray@nvidia.com>
Co-authored-by: Abhinav Garg <abhinavg@stanford.edu>
Co-authored-by: Pablo Garay <pagaray@nvidia.com>
Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* Reapply "Revert GHA changes"

This reverts commit fdb911f.

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* update path per request

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* lintfix

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* update CONTRIBUTING.md

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* lintfix

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* adjustments

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* lintfix

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

---------

Signed-off-by: Pablo Garay <pagaray@nvidia.com>
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
Co-authored-by: Charlie Truong <chtruong@nvidia.com>
Co-authored-by: Abhinav Garg <abhinavg@stanford.edu>
huvunvidia added a commit that referenced this pull request Nov 18, 2025
* adding tests

* ruff lint

* ruff lint

* ruff lint

* Explicit mcore path override to use Megatron-Bridge's pinned submodule commit

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* Update Megatron-Bridge submodule to latest main with correct Megatron-LM commit (3cbe5c68)

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* Add Mcore WAN pretrain mock test to CI/CD

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* lintfix

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* Fix slow Docker build from Megatron-LM source

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* ci: Update gpu runners to use self-hosted-nemo (#48)

* ci: Update gpu runners to use self-hosted-nemo

Signed-off-by: Charlie Truong <chtruong@nvidia.com>

* Use uv run in test_mcore_wan_pretrain

Signed-off-by: Charlie Truong <chtruong@nvidia.com>

* Ensure uv group megatron-bridge is used for test_mcore_wan_pretrain

Signed-off-by: Charlie Truong <chtruong@nvidia.com>

* Update TRANSFORMERS_OFFLINE environment variable to 0 and increase timeout in test_mcore_wan_pretrain

* Update TRANSFORMERS_OFFLINE environment variable to 0 and increase timeout in test_mcore_wan_pretrain

Signed-off-by: Charlie Truong <chtruong@nvidia.com>

* Revert GHA changes

Signed-off-by: Charlie Truong <chtruong@nvidia.com>

* Move uv run group call to L2_Mcore_Mock_Tests_GPU

Signed-off-by: Charlie Truong <chtruong@nvidia.com>

* Set test back to 5 minute timeout

Signed-off-by: Charlie Truong <chtruong@nvidia.com>

* Megatron fixes (#49)

* Enhance DiT and Wan layer specifications

- Updated `get_query_key_value_tensors` method in `dit_attention.py` to include an `output_gate` parameter and set `split_qkv` to default to `True`.
- Modified `WanLayerWithAdaLN` class in `wan_layer_spec.py` to add `rotary_pos_cos_sin` parameter for improved positional encoding handling.

* Implement ProcessGroupCollection initialization in DiT and Wan models

- Added initialization of `pg_collection` in both `DiTCrossAttentionModel` and `WanModel` to ensure proper handling of process groups.
- This change checks if `pg_collection` exists and is not None before assigning it, enhancing the robustness of the models.

* Update CONTRIBUTING.md to include detailed setup instructions for development environment and Docker container usage. Added sections for building and running the container, as well as setting the PYTHONPATH for DFM.

* Refactor import statements in dit_model.py to streamline dependencies. Removed redundant import of ProcessGroupCollection, enhancing code clarity and maintainability.

* Refactor code style in DiT and Wan models

- Updated string quotes in `dit_model.py` and `wan_model.py` for consistency, changing from single to double quotes.
- Reformatted the `get_query_key_value_tensors` method call in `dit_attention.py` for improved readability by breaking it into multiple lines.

* Revert M4 changes

* Ruff

* Ruff

* Lint

---------

Co-authored-by: Abhinav Garg <abhinavg@stanford.edu>

* Revert "Revert GHA changes"

This reverts commit d7ad1ab.

* tempfortest: timeout setting

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* workflow dispatch

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* update

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* add logging

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* Update test configuration for Mcore WAN pretraining

- Increased the number of processes per node from 1 to 2 for distributed training.
- Set the number of training iterations to 10 to enhance the training process.

* More changes

* Lint

---------

Signed-off-by: Charlie Truong <chtruong@nvidia.com>
Signed-off-by: Pablo Garay <pagaray@nvidia.com>
Co-authored-by: Abhinav Garg <abhinavg@stanford.edu>
Co-authored-by: Pablo Garay <pagaray@nvidia.com>
Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* Reapply "Revert GHA changes"

This reverts commit fdb911f.

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* update path per request

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* lintfix

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* update CONTRIBUTING.md

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* lintfix

Signed-off-by: Pablo Garay <pagaray@nvidia.com>

* adding uv run --group megatron-bridge

* update test

* ruff lint

* restore Dockerfile.ci

* update .github/workflows/cicd-main.yml

---------

Signed-off-by: Pablo Garay <pagaray@nvidia.com>
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
Co-authored-by: Huy Vu2 <huvu@login-eos02.eos.clusters.nvidia.com>
Co-authored-by: Pablo Garay <pagaray@nvidia.com>
Co-authored-by: Charlie Truong <chtruong@nvidia.com>
Co-authored-by: Abhinav Garg <abhinavg@stanford.edu>
lbliii pushed a commit that referenced this pull request Nov 19, 2025
