fix LOCAL_RANK to be RANK in if_main_process #2506
Conversation
@Adel-Moumen this change is basically reverting #2101 and will most likely break DDP multi-node training if no other fixes were made in the meantime. The assumption here is that if we now have operations that should run only on the master node, we should then have two separate functions, one to check the global master process and one to check the local master process, and refactor the DDP code accordingly.
Hi @lucadellalib,
I really doubt this. Your fix using LOCAL_RANK was breaking multi-node training, since you ended up with N experiments running at the same time, where N is the number of nodes. I don't think you should use LOCAL_RANK for initialising DDP because it creates the aforementioned issue... Now everything works smoothly.
Why would the data be missing? If you are doing the data prep on the master node, which has always been the case, then the other nodes can also get access to the actual data, can't they? If your issue is due to wav paths that may be different, then you just have to use the
Ping @TParcollet on this.
@Adel-Moumen I see what you mean; this works on Compute Canada because the filesystem is shared. In general this might not be the case (in a recent use case I had multiple nodes with no shared storage, so I had to upload the data to each node, generate the manifest files on each node, let the local master process save checkpoints on each node, etc. Without the local master process doing the necessary I/O operations, things do not work properly in this setup). We are still correctly initializing DDP using the global rank: What error do you get on Compute Canada with the current implementation of
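For illustration, a rough sketch of the pattern described above, assuming the torchrun-style `RANK`/`LOCAL_RANK`/`WORLD_SIZE` environment variables (the helper name and structure are hypothetical, not SpeechBrain's actual code):

```python
import os

import torch.distributed as dist


def init_and_prepare(prepare_data_fn):
    # DDP itself is initialised from the *global* rank and world size.
    dist.init_process_group(
        backend="nccl",
        rank=int(os.environ["RANK"]),
        world_size=int(os.environ["WORLD_SIZE"]),
    )

    # Without a shared filesystem, data prep must happen once per node,
    # so it is gated on the *local* main process rather than the global one.
    if int(os.environ["LOCAL_RANK"]) == 0:
        prepare_data_fn()

    # Every process waits until the node-local prep is done.
    dist.barrier()
```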
OK, I see. I didn't have this use case in mind.
Yep, but here for instance: https://github.com/speechbrain/speechbrain/blob/develop/speechbrain/core.py#L161 we are using the definition of
It was mostly what I described previously. Having everything duplicated was very weird to me, since I wasn't expecting SpeechBrain to do so. Maybe we should try to isolate which operations we want to perform only on the global rank (e.g. creating the SB experiment, initialising WANDB, etc.) and which can be run on each node (e.g. data prep), and put them under different functions. Wdyt? (Basically what you suggested.)
I agree, probably logging should be done only on the
I tried to run the multi-node DDP training using what you did (i.e. LOCAL_RANK instead of RANK for if_main_proc), and I get the following error, which seems to be linked to checkpointing. I don't know whether all of our recipes are affected or only mine (which is just a wav2vec CTC LibriSpeech training), but it's a bit concerning. When you run the code with my definition of if_main_proc (i.e. RANK instead of LOCAL_RANK) there are no issues and I can continue the training (i.e. no ckpt issues etc.).
Looks like a race condition due to the shared storage... It's probably better, then, to revert the change since Compute Canada is the main platform... We should keep in mind that this solution is not general and can break things on other cluster setups.
I think in many libraries there is a separate check. As for the shared filesystem vs. node-local storage, recipes could state whether they are written with the assumption of a globally shared filesystem or with the assumption of node-local temporary file storage. Or perhaps we could even help switch between the two by adding a third checker function like
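A rough sketch of what those separate checkers (plus a third, storage-aware one) could look like; all names here are illustrative, not an agreed-upon API:

```python
import os


def if_global_main_process():
    # True for exactly one process across the whole job.
    return int(os.environ.get("RANK", "0")) == 0


def if_local_main_process():
    # True for one process *per node*.
    return int(os.environ.get("LOCAL_RANK", "0")) == 0


def if_io_main_process(shared_filesystem: bool):
    # On a globally shared filesystem only the global main should write;
    # with node-local storage, each node's local main has to do the I/O.
    return if_global_main_process() if shared_filesystem else if_local_main_process()
```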
Okay, so what I propose to do is first to merge this PR with
I agree with @Gastron. Here is my analysis of the situation: we agreed a while back that the use of functions like if_local_main or if_global_main should be avoided in user-visible code. Instead, we should have run_on_main-like functions. In that case, we need to create a run_on_local_main and a run_on_global_main, OR we could add a flag to run_on_main to toggle to the local main instead of the global one (which makes more sense imho, to avoid too many changes in the lib). @Adel-Moumen we should revert and do this in a single PR.
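Something along the lines of the flag idea, as a minimal sketch (the signature is hypothetical and does not reflect the API that was eventually merged into SpeechBrain):

```python
import os

import torch.distributed as dist


def run_on_main(func, args=None, kwargs=None, local=False):
    """Run func on the main process only, then synchronise all processes.

    With local=False only the global main (RANK == 0) executes func;
    with local=True the local main of every node (LOCAL_RANK == 0) does.
    """
    rank_var = "LOCAL_RANK" if local else "RANK"
    if int(os.environ.get(rank_var, "0")) == 0:
        func(*(args or []), **(kwargs or {}))
    if dist.is_available() and dist.is_initialized():
        dist.barrier()
```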
Hi, I implemented the requested feature using Titouan's proposal (i.e., using a flag). Let me know if you think this PoC is fine. I initially tried to implement multiple functions (one for local_rank and one for global_rank); however, I believe this would require a non-negligible commit, as there are many things to modify and adjust in our codebase (and it could introduce backward incompatibilities).
I had a more in-depth discussion with @TParcollet. We think that in the meantime it is better to just revert the
* Skip lazy imports when the caller is inspect.py
  This avoids having certain inspect functions import our lazy modules when we don't want them to. `getframeinfo` in particular appears to do it, and this gets called by PyTorch at some point. IPython might also be doing it but autocomplete still seems to work. This does not appear to break anything. Added test for hyperpyyaml to ensure we're not breaking that.
* SSL_Semantic_Token _ new PR (speechbrain#2509)
  * remove unnecassry files and move to dasb
  * remove extra recepie from test
  * update ljspeech qunatization recepie
  * add discrete_ssl and remove extra files
  * fix precommit
  * update kmeans and add tokeizer for postprocessing
  * fix precommit
  * Update discrete_ssl.py
  * fix clone warning
  Co-authored-by: Mirco Ravanelli <mirco.ravanelli@gmail.com>
* _ensure_module Raises docstring
* Expose `ensure_module` so that docs get generated for it
  This is already an internal class anyway, and this is safe to call.
* Update actions/setup-python
* Use `uv` in test CI + merge some dep installs
  The consequence is faster dependency installation. Merging some of the dependency installs helps avoid some packages being reinstalled from one line to the next. Additionally, CPU versions are specified when relevant, to avoid downloading CUDA stuff the CI can't use anyway.
* Use `uv` in doc CI + merge some dep installs
  Similar rationale as for the test CI.
* Parallelize doc generation with Sphinx
  This does not affect the entire doc generation process but should allow some minor multithreading even with the 2-core CI workers.
* Enable `uv` caching on the test CI
* Enable `uv` caching on the docs CI
* CTC-only training recipes for LibriSpeech (code from Samsung AI Cambridge) (speechbrain#2290)
  CTC-only pre-training of conformer and branchformer.
  Co-authored-by: Shucong Zhang/Embedded AI /SRUK/Engineer/Samsung Electronics <s1.zhang@sruk-ccn4.eu.corp.samsungelectronics.net>, Adel Moumen <adelmoumen.pro@gmail.com>, Adel Moumen <88119391+Adel-Moumen@users.noreply.github.com>, Parcollet Titouan <titouan.parcollet@univ-avignon.fr>
* Update CommonVoice transformer recipes (code from Samsung AI Center Cambridge) (speechbrain#2465)
  Update CV transformer recipes to match latest results with conformer.
  Co-authored-by: Titouan Parcollet/Embedded AI /SRUK/Engineer/Samsung Electronics <t.parcollet@sruk-ccn4.eu.corp.samsungelectronics.net>, Mirco Ravanelli <mirco.ravanelli@gmail.com>, Adel Moumen <adelmoumen.pro@gmail.com>
* Whisper improvements: flash attention, KV caching, lang_id, translation, training... (speechbrain#2450)
  Whisper improvements: flash attention, kv caching, lang identification, translation, finetuning amelioration ... and more ...
* Update README.md
* precommit
* update zed download link (speechbrain#2514)
* `RelPosEncXL` refactor and precision fixes (speechbrain#2498)
  * Add `RelPosEncXL.make_pe`, rework precision handling
  * Rework RelPosEncXL output dtype selection
* Fix in-place input normalization when using `sentence`/`speaker` norm (speechbrain#2504)
* fix LOCAL_RANK to be RANK in if_main_process (speechbrain#2506)
* Fix Separation and Enhancement recipes behavior when NaN encountered (speechbrain#2524)
  * Fix Separation and Enhancement recipes behavior when NaN encountered
  * Formatting using precommit hooks
* Lock torch version in requirements.txt (speechbrain#2528)
* Fix compatibility for torchaudio versions without `.io` (speechbrain#2532)
  This avoids having the Python interpreter attempt to resolve the type annotation directly.
* fix docstrings
* consistency tests - classification
* consistency tests - classification
* consistency tests - interpret
* default to no wham
* fix after tests pass
* fix after tests pass
* tests after that
* fix consistency

Co-authored-by: asu <sdelang@sdelang.fr>, Pooneh Mousavi <moosavi.pooneh@gmail.com>, Mirco Ravanelli <mirco.ravanelli@gmail.com>, shucongzhang <104781888+shucongzhang@users.noreply.github.com>, Shucong Zhang/Embedded AI /SRUK/Engineer/Samsung Electronics <s1.zhang@sruk-ccn4.eu.corp.samsungelectronics.net>, Adel Moumen <adelmoumen.pro@gmail.com>, Adel Moumen <88119391+Adel-Moumen@users.noreply.github.com>, Parcollet Titouan <titouan.parcollet@univ-avignon.fr>, Parcollet Titouan <parcollet.titouan@gmail.com>, Titouan Parcollet/Embedded AI /SRUK/Engineer/Samsung Electronics <t.parcollet@sruk-ccn4.eu.corp.samsungelectronics.net>, Yingzhi WANG <41187612+BenoitWang@users.noreply.github.com>, Peter Plantinga <plantinga.peter@protonmail.com>, Séverin <123748182+SevKod@users.noreply.github.com>
What does this PR do?
This PR fixes one issue that I encountered while using SpeechBrain on Compute Canada. Basically, I found that the `LOCAL_RANK` variable was `0` on two different processes, hence leading to two main processes. Why? Because our definition of the main process is `LOCAL_RANK == 0`. I went a bit further into the PyTorch and PyTorch Lightning documentation and found that we should not use `LOCAL_RANK` as a way to determine the main process. Indeed, as explained here: pytorch/pytorch#12042 (comment), `LOCAL_RANK` is actually the ID within a worker; multiple workers have a `LOCAL_RANK` of 0. As mentioned here: pytorch/pytorch#12042 (comment), we should use `RANK == 0` as a way to find the master process. This is also what is done in PyTorch Lightning here: pytorch/pytorch#12042 (comment) with `global_rank`.
With my fix, everything now works as expected: there is only one main process and everything is synchronised.
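To make the difference concrete, here is a minimal standalone sketch of the two checks (assuming the `RANK`/`LOCAL_RANK` environment variables exported by torchrun; this is not the exact SpeechBrain implementation):

```python
import os


def is_global_main_process():
    # RANK is the rank across *all* nodes: exactly one process in the job has RANK == 0.
    return int(os.environ.get("RANK", "0")) == 0


def is_local_main_process():
    # LOCAL_RANK is the rank *within* a node: every node has a process with
    # LOCAL_RANK == 0, which is why checking it yields one "main" process per node.
    return int(os.environ.get("LOCAL_RANK", "0")) == 0
```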
Logs to help better understand the issue
I launched one sbatch job on Compute Canada with 2 nodes and 1 GPU per node, and printed some information about each node:
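For reference, the kind of per-process information mentioned above can be printed with a snippet along these lines (purely illustrative, not the exact code used):

```python
import os
import socket

# Print which host this process runs on and its distributed ranks.
print(
    f"host={socket.gethostname()} "
    f"RANK={os.environ.get('RANK')} "
    f"LOCAL_RANK={os.environ.get('LOCAL_RANK')} "
    f"WORLD_SIZE={os.environ.get('WORLD_SIZE')}"
)
```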
However, if you print the `LOCAL_RANK`, you'll see that both processes have the same `LOCAL_RANK` of 0, which causes the issue of having two different SpeechBrain experiments. When you switch to the `RANK == 0` definition of the main process, everything works as expected: only one process is the master, and you get this for one epoch:
Before submitting
PR review
Reviewer checklist