
[do-not-merge] SpeechLLM dev branch #9474

Closed
wants to merge 17 commits into from

Conversation

pzelasko
Collaborator

What does this PR do?

This PR tracks the changes on the speech-llm main development branch relative to main.

Collection: multimodal

Changelog

  • Add specific line-by-line info of high-level changes in this PR.

Usage

  • You can potentially add a usage example below
# Add a code snippet demonstrating how to use this 

GitHub Actions CI

The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.

The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI, remove and add the label again.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".

Before your PR is "Ready for review"

Pre checks:

  • Make sure you have read and followed the Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (Ex: Numba, Pynini, Apex etc)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

PR Type:

  • New Feature
  • Bugfix
  • Documentation

If you haven't finished some of the above items, you can still open a "Draft" PR.

Who can review?

Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list specific people who can review PRs in various areas.

Additional Information

  • Related to # (issue)

pzelasko and others added 16 commits June 14, 2024 10:07
Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
predict

Signed-off-by: zhehuaichen <dian.chenzhehuai@gmail.com>
Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
…omized_round_robin

Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
…own batch settings that can be merged with the zip sampler to enjoy max batch sizes for both modalities in a single training step. Each modality runs fwd+bwd in turn to save GPU memory (instead of running fwd separately and bwd together); a sketch of this pattern follows the commit list.

Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
Signed-off-by: Piotr Żelasko <petezor@gmail.com>
Signed-off-by: pzelasko <pzelasko@users.noreply.github.com>
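The interleaved per-modality fwd+bwd pattern described in the commit above can be sketched as follows. This is only an illustrative outline, not the NeMo implementation: model, loss_fn, optimizer, and the shape of zipped_batch are hypothetical stand-ins. The point is that each modality's backward runs before the next modality's forward, so only one modality's activations are alive at a time, while the single optimizer.step() still applies the gradients accumulated from both.

def training_step(model, loss_fn, optimizer, zipped_batch):
    # zipped_batch is assumed to map modality name -> sub-batch,
    # e.g. {"audio": {...}, "text": {...}} as produced by a zip-style sampler.
    optimizer.zero_grad()
    total_loss = 0.0
    for modality, sub_batch in zipped_batch.items():
        output = model(sub_batch)                    # forward for this modality only
        loss = loss_fn(output, sub_batch["labels"])
        loss.backward()                              # frees this modality's activations
        total_loss += loss.item()
    optimizer.step()                                 # one update over the accumulated grads
    return total_loss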
Comment on lines +517 to +520
# elif cur_idx + tokenized_len < tgt_len:
#     # Check whether the mask is applied to the correct position, the first token is turn start tokens
#     if not torch.equal(target[cur_idx + 1 : cur_idx + tokenized_len], s_id[1:]):
#         logging.warning("a sentence mismatches the corresponding piece " "in the conversation")

Check notice (Code scanning / CodeQL): Commented-out code. This comment appears to contain commented-out code.
audio_batch = {k: v for k, v in batch.items() if not k.startswith("text_")}
text_batch = {k: v for k, v in batch.items() if k.startswith("text_")}

output, loss_mask = None, None

Check warning (Code scanning / CodeQL): Variable defined multiple times. This assignment to 'output' is unnecessary as it is redefined before this value is used.
audio_batch = {k: v for k, v in batch.items() if not k.startswith("text_")}
text_batch = {k: v for k, v in batch.items() if k.startswith("text_")}

output, loss_mask = None, None

Check warning (Code scanning / CodeQL): Variable defined multiple times. This assignment to 'loss_mask' is unnecessary as it is redefined before this value is used.
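One way to address both dead-store warnings, sketched under assumptions: drop the output, loss_mask = None, None initialization and assign each name exactly once in the branch that produces it. forward_audio and forward_text are hypothetical helpers standing in for whatever the model actually calls here; the real code may deliberately keep the None initialization (e.g. for a later is None check), in which case the warnings can simply be dismissed.

audio_batch = {k: v for k, v in batch.items() if not k.startswith("text_")}
text_batch = {k: v for k, v in batch.items() if k.startswith("text_")}

# Assign directly instead of pre-initializing to None and overwriting.
if audio_batch:
    output, loss_mask = forward_audio(audio_batch)  # hypothetical helper
else:
    output, loss_mask = forward_text(text_batch)    # hypothetical helper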
@@ -15,28 +15,30 @@
import warnings
from dataclasses import dataclass
from functools import partial
from typing import Any, Optional, TypeVar, Union
from typing import Any, List, Optional, TypeVar, Union

Check notice (Code scanning / CodeQL): Unused import. Import of 'List' is not used.
from lhotse.lazy import LazyFlattener
from lhotse.utils import fastcopy, fix_random_seed
from omegaconf import DictConfig, OmegaConf
from omegaconf import DictConfig, ListConfig, OmegaConf

Check notice (Code scanning / CodeQL): Unused import. Import of 'ListConfig' is not used.
@@ -1,7 +1,11 @@
from typing import Optional

Check notice (Code scanning / CodeQL): Unused import. Import of 'Optional' is not used.
from lhotse.utils import Pathlike

from nemo.collections.common.data.lhotse.nemo_adapters import expand_sharded_filepaths
from nemo.collections.common.tokenizers.aggregate_tokenizer import AggregateTokenizer, TokenizerWrapper
from nemo.collections.common.tokenizers.tokenizer_spec import TokenizerSpec
from nemo.utils import logging

Check notice (Code scanning / CodeQL): Unused import. Import of 'logging' is not used.
Comment on lines +388 to +392
def forward(
    self,
    batch,
    checkpoint_activations_all_layers,
):

Check warning (Code scanning / CodeQL): Signature mismatch in overriding method. Overriding method 'forward' has a signature mismatch with the overridden method.
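For context, a minimal made-up example of what this warning is about: an override whose required parameters differ from the base class breaks callers that only know the base signature. Giving the extra argument a default (or accepting **kwargs) keeps the override call-compatible; the class names below are illustrative, not NeMo's.

class BaseStep:
    def forward(self, batch):
        raise NotImplementedError

class MultimodalStep(BaseStep):
    # Call-compatible override: the extra argument is optional, so code that
    # only knows the BaseStep signature can still call forward(batch).
    def forward(self, batch, checkpoint_activations_all_layers=None):
        ...

Whether that is the right fix here depends on how the base class defines forward; the sketch only shows the general shape of the issue.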
Comment on lines +665 to +671
# if torch.distributed.is_initialized():
# global_max_len = torch.tensor([seq_length], dtype=torch.float32, device=device)

# Update across all ranks in the distributed system
torch.distributed.all_reduce(global_max_len, op=torch.distributed.ReduceOp.MAX)
# # Update across all ranks in the distributed system
# torch.distributed.all_reduce(global_max_len, op=torch.distributed.ReduceOp.MAX)

seq_length = global_max_len.int().item()
# seq_length = global_max_len.int().item()

Check notice (Code scanning / CodeQL): Commented-out code. This comment appears to contain commented-out code.
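For reference, the synchronization that the now commented-out block performed, as a self-contained sketch using only standard torch.distributed calls (the function name is illustrative): each rank reports its local max sequence length, and an all-reduce with ReduceOp.MAX gives every rank the same global padded length.

import torch
import torch.distributed as dist

def global_max_seq_length(local_seq_length: int, device: torch.device) -> int:
    # Without an initialized process group there is nothing to synchronize.
    if not (dist.is_available() and dist.is_initialized()):
        return local_seq_length
    global_max_len = torch.tensor([local_seq_length], dtype=torch.float32, device=device)
    dist.all_reduce(global_max_len, op=dist.ReduceOp.MAX)  # max across all ranks
    return int(global_max_len.item())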
Comment on lines +478 to +480
# if log_token_counts:
#     self.log('seq_length_padded', seq_length, prog_bar=True, batch_size=1)
#     self.log('tokens_avg', token_count_avg, prog_bar=True, sync_dist=True, batch_size=1)

Check notice (Code scanning / CodeQL): Commented-out code. This comment appears to contain commented-out code.
Contributor

This PR is stale because it has been open for 14 days with no activity. Remove the stale label, comment, or update the PR, or it will be closed in 7 days.

@github-actions github-actions bot added the stale label Jun 29, 2024
Contributor

github-actions bot commented Jul 7, 2024

This PR was closed because it has been inactive for 7 days since being marked as stale.

@github-actions github-actions bot closed this Jul 7, 2024