[speechm2] Support indexed sharegpt JSONL and webdataset formats#15410
Conversation
Signed-off-by: Piotr Żelasko <petezor@gmail.com>
[🤖]: Hi @pzelasko 👋, We wanted to let you know that a CICD pipeline for this PR just finished successfully. So it might be time to merge this PR or get some approvals.
tbartley94
left a comment
Formatting and structural changes. General logic looks good though.
```python
bits += 1
self._half = bits // 2
self._mask = (1 << self._half) - 1
self._rounds = 6
```
```python
left = (x >> self._half) & self._mask
right = x & self._mask
for key in self._keys:
    left, right = right, left ^ (((right * 2654435761) ^ key) >> 32 & self._mask)
```
make the hash constant a global var at the top of the file.
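For reference, the quoted round function can be assembled into a complete Feistel-network permutation for on-the-fly index shuffling. This is only a sketch: the class name, the cycle-walking helper, and the key derivation are illustrative, not the PR's actual API.

```python
import random

# Knuth's multiplicative hash constant, hoisted to module level as suggested.
_KNUTH_HASH = 2654435761


class FeistelPermutation:
    """Sketch of a seeded pseudo-random permutation over [0, n) (names are illustrative)."""

    def __init__(self, n: int, seed: int, num_rounds: int = 6):
        self.n = n
        bits = max(2, n.bit_length())
        if bits % 2:  # Feistel needs an even bit width to split in half
            bits += 1
        self._half = bits // 2
        self._mask = (1 << self._half) - 1
        rng = random.Random(seed)
        self._keys = [rng.getrandbits(32) for _ in range(num_rounds)]

    def _permute_once(self, x: int) -> int:
        # One pass of the round function quoted in the diff above.
        left = (x >> self._half) & self._mask
        right = x & self._mask
        for key in self._keys:
            left, right = right, left ^ ((((right * _KNUTH_HASH) ^ key) >> 32) & self._mask)
        return (left << self._half) | right

    def __call__(self, x: int) -> int:
        # Cycle-walk: the Feistel network permutes [0, 2**bits); re-apply it
        # until the result falls back inside [0, n).
        assert 0 <= x < self.n
        y = self._permute_once(x)
        while y >= self.n:
            y = self._permute_once(y)
        return y
```

Because each Feistel round is a bijection, cycle-walking always terminates and the result is a true permutation of [0, n), so every sample index appears exactly once per epoch.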
```python
for line in f_in:
    current_offset += len(line)
    write_buffer.extend(struct.pack('<Q', current_offset))
    if len(write_buffer) > 8 * 1024 * 1024:
```
very nitpicky, but just write out the full multiplication as a var above with a comment; no need to do the extra ops for every line.
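Putting the quoted loop and the suggestion together, index creation could look roughly like this; the function name and the leading sentinel offset are my assumptions, not necessarily the PR's exact implementation:

```python
import struct

# Pre-computed flush threshold, per the review suggestion above: flush the
# buffered offsets to disk every 8 MiB instead of recomputing the product
# on every line.
FLUSH_THRESHOLD = 8 * 1024 * 1024


def create_index(jsonl_path: str, idx_path: str) -> None:
    """Sketch: write a binary index of little-endian uint64 byte offsets.

    Offsets mark the start of each JSONL line, plus a final sentinel equal
    to the total file size, so offsets[i]..offsets[i+1] delimit sample i.
    """
    current_offset = 0
    write_buffer = bytearray()
    with open(jsonl_path, "rb") as f_in, open(idx_path, "wb") as f_out:
        # Leading 0 marks the start of the first sample.
        write_buffer.extend(struct.pack("<Q", current_offset))
        for line in f_in:
            current_offset += len(line)
            write_buffer.extend(struct.pack("<Q", current_offset))
            if len(write_buffer) > FLUSH_THRESHOLD:
                f_out.write(write_buffer)
                write_buffer.clear()
        f_out.write(write_buffer)  # flush whatever remains
```

Buffering the packed offsets avoids one small write syscall per line, which matters when indexing files with millions of samples.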
```diff
 {
-    "id": str,
+    "id": str,  # not optional, but we will tolerate if it's missing
```
bit cryptic, can you point to the line where we tolerate this?
```python
raise FileNotFoundError(f"No wids-meta.json and no .tar files found under {self.data_dir}")
self.audio_placeholders = _normalize_audio_placeholders(self.audio_placeholders)
self._has_index = all(Path(p + ".idx").exists() for p in self._shard_paths)
self.epoch = 0
```
hmm, is there any way we can sync this with the trainer? Having an adapter maintain its epoch on its lonesome sounds like a pending desync issue that will be annoying to hunt down.
No, there is no way. Each dataset iterator keeps track on its own, and there is no shared concept of a global epoch in our setups.
hmmm, for sanity can you add to the unit tests to check trainer.epoch == text_adapter.epoch (if not already done)? This seems like something potentially painful in a year.
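A minimal shape for the requested check might look like the following; the adapter attributes and the `set_epoch` hook are hypothetical stand-ins, to be wired to the real trainer and adapter classes from the PR:

```python
# Sketch of the requested sanity test. Names here (_StubAdapter, set_epoch,
# .epoch) are illustrative placeholders, not the PR's actual API.
class _StubAdapter:
    def __init__(self) -> None:
        self.epoch = 0

    def set_epoch(self, epoch: int) -> None:
        # In the real adapter this reseeds the index permutation for the epoch.
        self.epoch = epoch


def test_adapter_epoch_tracks_trainer_epoch() -> None:
    adapter = _StubAdapter()
    for trainer_epoch in range(3):  # simulate the trainer advancing epochs
        adapter.set_epoch(trainer_epoch)
        assert adapter.epoch == trainer_epoch
```

The useful assertion is the equality after each simulated epoch boundary; a real test would drive the actual trainer loop instead of the range above.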
Thanks. This PR looks good to me!
- Extract magic constant to module-level _KNUTH_HASH
- Make Feistel round count a configurable init arg (num_rounds)
- Refactor _load_index to always return sentinel-inclusive offsets and validate offset bounds, simplifying callers
- Replace raw tuple with TarSample NamedTuple for clarity
- Add docstring explaining why custom tar reading vs stdlib tarfile
- Pre-compute flush_threshold in create_index
- Raise ValueError on missing audio path instead of silent skip
- Clarify "tolerate" comment with concrete fallback reference
- Add comment explaining force_finite flag purpose

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
[🤖]: Hi @pzelasko 👋, We wanted to let you know that a CICD pipeline for this PR just finished successfully. So it might be time to merge this PR or get some approvals.
tbartley94
left a comment
request for simple unit test. else LGTM
Important
The "Update branch" button must only be pressed on very rare occasions. An outdated branch is never blocking the merge of a PR.
Please reach out to the automation team before pressing that button.
What does this PR do?
The first PR to support indexed datasets. It reads a binary index (a sequence of uint64 byte offsets marking the beginning of each sample in a file), generates a random permutation of indices on the fly, and looks up the right sample.
This implementation pretends to be a sequential-IO dataset for compatibility, but will be used as a building block for a resumable dataloader in the future.
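The offset-based lookup described above can be sketched as follows; the helper names are illustrative, not the PR's actual API, and the index layout (start offsets plus a final sentinel) follows the description above:

```python
import struct

def load_offsets(idx_path: str) -> list[int]:
    """Sketch: read a binary index of little-endian uint64 byte offsets."""
    with open(idx_path, "rb") as f:
        raw = f.read()
    n = len(raw) // 8  # each offset is 8 bytes
    return list(struct.unpack(f"<{n}Q", raw))


def read_sample(data_path: str, offsets: list[int], i: int) -> bytes:
    """Sketch: random-access read of sample i, delimited by adjacent offsets."""
    start, end = offsets[i], offsets[i + 1]
    with open(data_path, "rb") as f:
        f.seek(start)
        return f.read(end - start)
```

Combined with a seeded permutation of [0, num_samples), this gives shuffled random access over a plain JSONL file without loading it into memory.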
Supported formats:
- share_gpt: data.jsonl and data.jsonl.idx, where each JSONL line looks like:

```python
{
    "id": f"audio_convo",
    "sound": f"audio.wav",
    "conversations": [
        {"from": "human", "value": f"Listen to this: <sound> What do you think?"},
        {"from": "gpt", "value": f"Response"},
    ],
}
```

- share_gpt_webdataset: shard_0.tar + shard_0.tar.idx, with share_gpt-style JSON entries inside each shard

Collection: speechlm2
Changelog
Usage
```python
# Add a code snippet demonstrating how to use this
```

GitHub Actions CI
The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.
The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI remove and add the label again.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".
Before your PR is "Ready for review"
Pre checks:
PR Type:
If you haven't finished some of the above items you can still open a "Draft" PR.
Who can review?
Anyone in the community is free to review the PR once the checks have passed.
Contributor guidelines contain specific people who can review PRs to various areas.
Additional Information