
fix LOCAL_RANK to be RANK in if_main_process #2506

Merged
merged 10 commits into speechbrain:develop on Apr 25, 2024

Conversation

Adel-Moumen
Collaborator

@Adel-Moumen Adel-Moumen commented Apr 13, 2024

What does this PR do?

This PR fixes an issue that I encountered while using SpeechBrain on Compute Canada. Basically, I found that the LOCAL_RANK variable was 0 on two different processes, leading to two main processes. Why? Because our definition of the main process is LOCAL_RANK == 0. I dug a bit further into the PyTorch and PyTorch Lightning documentation and found that we should not use LOCAL_RANK to determine the main process. Indeed, as explained here: pytorch/pytorch#12042 (comment), LOCAL_RANK is actually the rank within a single worker (node); multiple workers each have a process with LOCAL_RANK == 0.

As mentioned here: pytorch/pytorch#12042 (comment), we should use RANK == 0 to identify the master process. This is also what PyTorch Lightning does here: pytorch/pytorch#12042 (comment), with global_rank.

With this fix, everything now works as expected: there is only one main process, and everything is synchronised.
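For illustration, here is a minimal sketch of the RANK-based check, assuming the usual torchrun environment variables; it is not the exact SpeechBrain implementation in speechbrain/utils/distributed.py:

```python
import os
import torch.distributed as dist

def if_main_process() -> bool:
    """Sketch: True only on the single global main process (RANK == 0)."""
    if dist.is_available() and dist.is_initialized():
        return dist.get_rank() == 0
    # torchrun exports RANK for every spawned process.
    return os.environ.get("RANK", "0") == "0"
```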

Logs to help better understand the issue

I launched one sbatch job on Compute Canada with 2 nodes and 1 GPU per node, and printed some information about each node:

***************************
r1 SLURM_TMPDIR: /localscratch/adelmou.28991044.0
nnodes=2
node_rank=1
master=172.16.146.25
master_port=3456
***************************

***************************
r0 SLURM_TMPDIR: /localscratch/adelmou.28991044.0
nnodes=2
node_rank=0
master=172.16.146.25
master_port=3456
***************************

However, if you print LOCAL_RANK, you'll see that both processes have the same LOCAL_RANK of 0, which causes the issue of having two different SpeechBrain experiments.

When you switch to the RANK == 0 definition of the main process, everything works as expected: only one process is the master, and you get this for one epoch:

1 GPU, 2 nodes
100%|██████████| 2379/2379 [14:15<00:00, 2.78it/s, train_loss=0.288]

1 GPU, 1 node
100%|███████████████| 4757/4757 [26:27<00:00, 3.00it/s, train_loss=0.234]
Before submitting
  • Did you read the contributor guideline?
  • Did you make sure your PR does only one thing, instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes? (if necessary)
  • Did you write any new necessary tests? (not for typos and docs)
  • Did you verify new and existing tests pass locally with your changes?
  • Did you list all the breaking changes introduced by this pull request?
  • Does your code adhere to project-specific code style and conventions?

PR review

Reviewer checklist
  • Is this pull request ready for review? (if not, please submit in draft mode)
  • Check that all items from Before submitting are resolved
  • Make sure the title is self-explanatory and the description concisely explains the PR
  • Add labels and milestones (and optionally projects) to the PR so it can be classified
  • Confirm that the changes adhere to compatibility requirements (e.g., Python version, platform)
  • Review the self-review checklist to ensure the code is ready for review

@Adel-Moumen Adel-Moumen marked this pull request as ready for review April 13, 2024 14:25
@Adel-Moumen Adel-Moumen added the bug Something isn't working label Apr 13, 2024
@lucadellalib
Collaborator

@Adel-Moumen this change is basically reverting #2101 and most likely will break DDP multi-node training if no other fixes were made in the meantime.

The assumption here is that the if_main_process function is the one that identifies the LOCAL master process. This is necessary when you do multi-node training; otherwise, your data preparation is done on the master node only and data will hence be missing on the other nodes.

If we now have operations that should run only on the master node, we should have two separate functions, one to check the global master process and one to check the local master process, and refactor the DDP code accordingly.

@Adel-Moumen
Collaborator Author

Hi @lucadellalib,

> most likely will break DDP multi-node training if no other fixes were made in the meantime.

I really doubt this. Your fix using LOCAL_RANK made multi-node training not work, as you had N experiments running at the same time, where N is the number of nodes. I don't think you should use LOCAL_RANK when setting up DDP because it creates the aforementioned issue... Now everything works smoothly.

> The assumption here is that the if_main_process function is the one that identifies the LOCAL master process. This is necessary when you do multi-node training; otherwise, your data preparation is done on the master node only and data will hence be missing on the other nodes.

Why would the data be missing? If you are doing the data prep on the master node, which has always been the case, then the other nodes can also access the actual data, right? If your issue is due to wav paths that may differ between nodes, then you just have to use the replacements feature on the CSV/JSON manifests, which maps a special token (e.g. $data_root) to your data_folder path on each node, as sketched below. And generally, you do the data prep on CPU nodes before running the training.
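For context, this is roughly how the replacements feature is used when loading a manifest; the file names and folder below are placeholders, not taken from any specific recipe:

```python
import speechbrain as sb

# Rows in the CSV reference audio via a $data_root placeholder, e.g.:
#   ID,duration,wav
#   1001,3.2,$data_root/LibriSpeech/train-clean-100/1001/1001-0001.flac
# The replacements dict expands the placeholder to this node's data folder.
train_data = sb.dataio.dataset.DynamicItemDataset.from_csv(
    csv_path="train.csv",                              # placeholder manifest path
    replacements={"data_root": "/localscratch/data"},  # node-local folder
)
```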

> If we now have operations that should run only on the master node, we should have two separate functions, one to check the global master process and one to check the local master process, and refactor the DDP code accordingly.

Ping @TParcollet on this.

@lucadellalib
Collaborator

@Adel-Moumen I see what you mean; this works on Compute Canada because the filesystem is shared. In general this might not be the case (in a recent use case I had multiple nodes with no shared storage, so I had to upload the data on each node, generate the manifest files on each node, let the local master process save checkpoints on each node, etc. Without the local master process doing the necessary I/O operations, things do not work properly in this setup).

We are still correctly initializing DDP stuff using the global rank:
https://github.com/speechbrain/speechbrain/blob/develop/speechbrain/core.py#L806

What error do you get on Compute Canada with the current implementation of if_main_process?

@Adel-Moumen
Collaborator Author

Adel-Moumen commented Apr 13, 2024

> @Adel-Moumen I see what you mean; this works on Compute Canada because the filesystem is shared. In general this might not be the case (in a recent use case I had multiple nodes with no shared storage, so I had to upload the data on each node, generate the manifest files on each node, let the local master process save checkpoints on each node, etc. Without the local master process doing the necessary I/O operations, things do not work properly in this setup).

OK, I see. I didn't have this use case in mind.

> We are still correctly initializing DDP stuff using the global rank: develop/speechbrain/core.py#L806

Yep, but here, for instance: https://github.com/speechbrain/speechbrain/blob/develop/speechbrain/core.py#L161 we use the LOCAL_RANK == 0 definition to decide whether the process will create a new SB experiment. This leads to the console being flooded with logs, since each process with LOCAL_RANK == 0 is going to create a new experiment, a new tqdm bar, etc.

> What error do you get on Compute Canada with the current implementation of if_main_process?

It was mostly what I described previously. Having everything duplicated was very weird to me, since I wasn't expecting SpeechBrain to behave that way.

Maybe we should try to isolate which operations we want to perform only on the global rank (e.g. creating the SB experiment, initialising WandB, etc.) and which can run on each node (e.g. data prep), and put them under different functions. Wdyt? (basically what you suggested)

@lucadellalib
Collaborator

I agree, logging should probably be done only on the RANK == 0 process, while other operations like data preparation should run on all LOCAL_RANK == 0 processes.
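For illustration, a self-contained sketch of that split; the two helper functions below are hypothetical stand-ins, not SpeechBrain code:

```python
import os

def init_experiment_logger():
    # Hypothetical stand-in for experiment/WandB setup.
    print("logger initialised once, on the global main process")

def prepare_data():
    # Hypothetical stand-in for a recipe's data preparation.
    print("manifests written once per node, by each local main process")

# Logging and experiment creation: exactly one process overall (RANK == 0).
if os.environ.get("RANK", "0") == "0":
    init_experiment_logger()

# Data preparation: one process per node (LOCAL_RANK == 0).
if os.environ.get("LOCAL_RANK", "0") == "0":
    prepare_data()
```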

@Adel-Moumen
Collaborator Author

I tried to run multi-node DDP training using what you did (i.e. LOCAL_RANK instead of RANK for if_main_process), and I got the following error, which seems to be linked to checkpointing. I don't know whether all of our recipes are affected or only mine (which is just a wav2vec CTC LibriSpeech training), but it's a bit concerning. When you run the code with my definition of if_main_process (i.e. RANK instead of LOCAL_RANK), there are no issues and I can continue the training (i.e. no checkpoint issues, etc.).

100%|██████████| 2379/2379 [14:45<00:00,  2.69it/s, train_loss=0.288]
 98%|█████████▊| 442/451 [01:4speechbrain.utils.train_logger - epoch: 1, lr_model: 9.00e-01, lr_wav2vec: 1.00e-04 - train loss: 2.85e-01 - valid loss: 1.19e-01, valid CER: 14.87, valid WER: 92.72
100%|██████████| 451/451 [01:54<00:00,  3.94it/s]
speechbrain.utils.train_logger - epoch: 1, lr_model: 9.00e-01, lr_wav2vec: 1.00e-04 - train loss: 2.88e-01 - valid loss: 1.19e-01, valid CER: 14.87, valid WER: 92.72
speechbrain.utils.checkpoints - Saved an end-of-epoch checkpoint in /home/adelmou/scratch/ddp/debug/save/CKPT+2024-04-14+07-30-22+00
speechbrain.utils.checkpoints - Saved an end-of-epoch checkpoint in /home/adelmou/scratch/ddp/debug/save/CKPT+2024-04-14+07-30-22+00
speechbrain.utils.checkpoints - Deleted checkpoint in /home/adelmou/scratch/ddp/debug/save/CKPT+2024-04-14+07-30-21+00
speechbrain.core - Exception:
Traceback (most recent call last):
  File "/home/adelmou/speechbrain/pr/branchformer_CTC_debug/speechbrain/recipes/LibriSpeech/ASR/CTC/train_with_wav2vec.py", line 394, in <module>
    asr_brain.fit(
  File "/home/adelmou/speechbrain/pr/branchformer_CTC_debug/speechbrain/speechbrain/core.py", line 1584, in fit
    self._fit_valid(valid_set=valid_set, epoch=epoch, enable=enable)
  File "/home/adelmou/speechbrain/pr/branchformer_CTC_debug/speechbrain/speechbrain/core.py", line 1494, in _fit_valid
    self.on_stage_end(Stage.VALID, avg_valid_loss, epoch)
  File "/home/adelmou/speechbrain/pr/branchformer_CTC_debug/speechbrain/recipes/LibriSpeech/ASR/CTC/train_with_wav2vec.py", line 183, in on_stage_end
    self.checkpointer.save_and_keep_only(
  File "/home/adelmou/speechbrain/pr/branchformer_CTC_debug/speechbrain/speechbrain/utils/checkpoints.py", line 707, in save_and_keep_only
    self.delete_checkpoints(
  File "/home/adelmou/speechbrain/pr/branchformer_CTC_debug/speechbrain/speechbrain/utils/checkpoints.py", line 1020, in delete_checkpoints
    Checkpointer._delete_checkpoint(ckpt, verbosity=verbosity)
  File "/home/adelmou/speechbrain/pr/branchformer_CTC_debug/speechbrain/speechbrain/utils/distributed.py", line 117, in main_proc_wrapped_func
    result = function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/adelmou/speechbrain/pr/branchformer_CTC_debug/speechbrain/speechbrain/utils/checkpoints.py", line 1031, in _delete_checkpoint
    raise RuntimeError("Checkpoint does not appear valid for deletion.")
RuntimeError: Checkpoint does not appear valid for deletion.
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 170285) of binary: /home/adelmou/debug_ddp/bin/python
ERROR:torch.distributed.elastic.agent.server.api:Error waiting on exit barrier. Elapsed: 301.00105118751526 seconds
Traceback (most recent call last):
  File "/home/adelmou/debug_ddp/lib/python3.11/site-packages/torch/distributed/elastic/agent/server/api.py", line 920, in _exit_barrier
    store_util.barrier(
  File "/home/adelmou/debug_ddp/lib/python3.11/site-packages/torch/distributed/elastic/utils/store.py", line 78, in barrier
    synchronize(store, data, rank, world_size, key_prefix, barrier_timeout)
  File "/home/adelmou/debug_ddp/lib/python3.11/site-packages/torch/distributed/elastic/utils/store.py", line 64, in synchronize
    agent_data = get_all(store, rank, key_prefix, world_size)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/adelmou/debug_ddp/lib/python3.11/site-packages/torch/distributed/elastic/utils/store.py", line 34, in get_all
    data = store.get(f"{prefix}{idx}")
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Socket Timeout
Traceback (most recent call last):
  File "/home/adelmou/debug_ddp/bin/torchrun", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/home/adelmou/debug_ddp/lib/python3.11/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "/home/adelmou/debug_ddp/lib/python3.11/site-packages/torch/distributed/run.py", line 794, in main
    run(args)
  File "/home/adelmou/debug_ddp/lib/python3.11/site-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/home/adelmou/debug_ddp/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/adelmou/debug_ddp/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
train_with_wav2vec.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-04-14_07:30:48
  host      : cdr2574.int.cedar.computecanada.ca
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 170285)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
Training finished.

@lucadellalib
Collaborator

Looks like a race condition due to the shared storage... it's probably better, then, to revert the change, since Compute Canada is the main platform... We should keep in mind that this solution is not general and can break things on other cluster setups.

@Gastron
Collaborator

Gastron commented Apr 15, 2024

I think in many libraries there is a separate check, is_local_main_process (or maybe if_local_main_process, since we have got used to that function name). Then if_main_process should be used for things that need to happen just once, while if_local_main_process should be used for things that need to happen once on every node.

As for the shared filesystem vs. node-local storage, recipes could state whether they are written with the assumption of a globally shared filesystem or of node-locally shared temporary storage. Or perhaps we could even help with switching between the two by adding a third checker function like def if_dataprep_writer_process(globally_shared_filesystem: bool), as sketched below.
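For illustration, a minimal sketch of what these checkers could look like; the names follow the suggestion above, but the bodies are assumptions, not the actual SpeechBrain API:

```python
import os

def if_main_process() -> bool:
    """True only for the single global main process (RANK == 0)."""
    return os.environ.get("RANK", "0") == "0"

def if_local_main_process() -> bool:
    """True once per node, i.e. for each node's local main (LOCAL_RANK == 0)."""
    return os.environ.get("LOCAL_RANK", "0") == "0"

def if_dataprep_writer_process(globally_shared_filesystem: bool) -> bool:
    """Pick the right writer: one process in total on a shared filesystem,
    one process per node when each node has its own storage."""
    if globally_shared_filesystem:
        return if_main_process()
    return if_local_main_process()
```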

@Adel-Moumen
Collaborator Author

Okay, so what I propose is to first merge this PR with RANK instead of LOCAL_RANK in if_main_process, so that we fix the race condition issues. Then, I will open a new PR, add two functions (e.g. if_global_rank and if_local_rank), and try to identify which parts of SpeechBrain should run with RANK and which with LOCAL_RANK.

@TParcollet
Collaborator

TParcollet commented Apr 15, 2024

I agree with @Gastron. Here is my analysis of the situation: we agreed a while back that the use of functions like if_local_main or if_global_main should be avoided in user-visible code. Instead, we should have run_on_main-like functions. In that case, we need to create a run_on_local_main and a run_on_global_main, OR we could add a flag to run_on_main to toggle to the local main instead of the global one (which makes more sense imho, to avoid too many changes in the lib). @Adel-Moumen we should revert and do this in a single PR.
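A sketch of the flag idea, under the assumption that run_on_main keeps its role of running a function on the main process and then synchronising; the run_on_local_main argument is hypothetical, not the current SpeechBrain API:

```python
import os
import torch.distributed as dist

def run_on_main(func, args=None, kwargs=None, run_on_local_main=False):
    """Run `func` once globally (RANK == 0) or, with the hypothetical flag,
    once per node (LOCAL_RANK == 0), then make every process wait."""
    args, kwargs = args or [], kwargs or {}
    rank_var = "LOCAL_RANK" if run_on_local_main else "RANK"
    if os.environ.get(rank_var, "0") == "0":
        func(*args, **kwargs)
    if dist.is_available() and dist.is_initialized():
        dist.barrier()  # keep the other processes in sync
```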

@Adel-Moumen
Collaborator Author

Hi, I implemented the requested feature using Titouan's proposal (i.e., using a flag). Let me know if you think this PoC is fine. I initially tried to implement multiple functions (one for local_rank and one for global_rank); however, I believe this would require a non-negligible commit, as there are many things to modify and adjust in our codebase (and it could introduce backward incompatibilities).

@Adel-Moumen
Collaborator Author

I had a more in-depth discussion with @TParcollet. We think that, in the meantime, it is better to just revert the LOCAL_RANK PR, release a new SB version, and then propose a more elegant solution on top of that (this is not really a fix, since it does not address new bugs; it's more of a new feature). I think having a new function that runs on the local main is what is required. I tried having a flag, but it was not that great.

@TParcollet TParcollet merged commit a024d3d into speechbrain:develop Apr 25, 2024
4 checks passed
fpaissan added a commit to fpaissan/speechbrain that referenced this pull request May 2, 2024
* Skip lazy imports when the caller is inspect.py

This avoids having certain inspect functions import our lazy modules when we don't want them to. `getframeinfo` in particular appears to do it, and this gets called by PyTorch at some point. IPython might also be doing it but autocomplete still seems to work.

This does not appear to break anything. Added test for hyperpyyaml to ensure we're not breaking that.

* SSL_Semantic_Token _ new PR (speechbrain#2509)

* remove unnecessary files and move to dasb

* remove extra recipe from test

* update ljspeech quantization recipe

* add discrete_ssl and remove extra files

* fix precommit

* update kmeans and add tokenizer for postprocessing

* fix precommit

* Update discrete_ssl.py

* fix clone warning

---------

Co-authored-by: Mirco Ravanelli <mirco.ravanelli@gmail.com>

* _ensure_module Raises docstring

* Expose `ensure_module` so that docs get generated for it

This is already an internal class anyway, and this is safe to call.

* Update actions/setup-python

* Use `uv` in test CI + merge some dep installs

The consequence is faster dependency installation. Merging some of the dependency installs helps avoid some packages being reinstalled from one line to the next. Additionally, CPU versions are specified when relevant, to avoid downloading CUDA stuff the CI can't use anyway.

* Use `uv` in doc CI + merge some dep installs

Similar rationale as for the test CI

* Parallelize doc generation with Sphinx

This does not affect the entire doc generation process but should allow some minor multithreading even with the 2-core CI workers.

* Enable `uv` caching on the test CI

* Enable `uv` caching on the docs CI

* CTC-only training recipes for LibriSpeech (code from Samsung AI Cambridge) (speechbrain#2290)

CTC-only pre-training of conformer and branchformer.

---------

Co-authored-by: Shucong Zhang/Embedded AI /SRUK/Engineer/Samsung Electronics <s1.zhang@sruk-ccn4.eu.corp.samsungelectronics.net>
Co-authored-by: Adel Moumen <adelmoumen.pro@gmail.com>
Co-authored-by: Adel Moumen <88119391+Adel-Moumen@users.noreply.github.com>
Co-authored-by: Parcollet Titouan <titouan.parcollet@univ-avignon.fr>

* Update CommonVoice transformer recipes (code from Samsung AI Center Cambridge) (speechbrain#2465)

* Update CV transformer recipes to match latest results with conformer.

---------

Co-authored-by: Titouan Parcollet/Embedded AI /SRUK/Engineer/Samsung Electronics <t.parcollet@sruk-ccn4.eu.corp.samsungelectronics.net>
Co-authored-by: Mirco Ravanelli <mirco.ravanelli@gmail.com>
Co-authored-by: Adel Moumen <adelmoumen.pro@gmail.com>

* Whisper improvements: flash attention, KV caching, lang_id, translation, training... (speechbrain#2450)

Whisper improvements:
- flash attention
- kv caching
- lang identification
- translation
- finetuning improvements
... and more ...

* Update README.md

* precommit

* update zed download link (speechbrain#2514)

* `RelPosEncXL` refactor and precision fixes (speechbrain#2498)

* Add `RelPosEncXL.make_pe`, rework precision handling

* Rework RelPosEncXL output dtype selection

* Fix in-place input normalization when using `sentence`/`speaker` norm (speechbrain#2504)

* fix LOCAL_RANK to be RANK in if_main_process (speechbrain#2506)

* Fix Separation and Enhancement recipes behavior when NaN encountered (speechbrain#2524)

* Fix Separation and Enhancement recipes behavior when NaN encountered

* Formatting using precommit hooks

* Lock torch version in requirements.txt (speechbrain#2528)

* Fix compatibility for torchaudio versions without `.io` (speechbrain#2532)

This avoids having the Python interpreter attempt to resolve the type annotation directly.

* fix docstrings

* consistency tests - classification

* consistency tests - classification

* consistency tests - interpret

* default to no wham

* fix after tests pass

* fix after tests pass

* tests after that

* fix consistency

---------

Co-authored-by: asu <sdelang@sdelang.fr>
Co-authored-by: Pooneh Mousavi <moosavi.pooneh@gmail.com>
Co-authored-by: Mirco Ravanelli <mirco.ravanelli@gmail.com>
Co-authored-by: shucongzhang <104781888+shucongzhang@users.noreply.github.com>
Co-authored-by: Shucong Zhang/Embedded AI /SRUK/Engineer/Samsung Electronics <s1.zhang@sruk-ccn4.eu.corp.samsungelectronics.net>
Co-authored-by: Adel Moumen <adelmoumen.pro@gmail.com>
Co-authored-by: Adel Moumen <88119391+Adel-Moumen@users.noreply.github.com>
Co-authored-by: Parcollet Titouan <titouan.parcollet@univ-avignon.fr>
Co-authored-by: Parcollet Titouan <parcollet.titouan@gmail.com>
Co-authored-by: Titouan Parcollet/Embedded AI /SRUK/Engineer/Samsung Electronics <t.parcollet@sruk-ccn4.eu.corp.samsungelectronics.net>
Co-authored-by: Yingzhi WANG <41187612+BenoitWang@users.noreply.github.com>
Co-authored-by: Peter Plantinga <plantinga.peter@protonmail.com>
Co-authored-by: Séverin <123748182+SevKod@users.noreply.github.com>