
WARNING:torch.distributed.elastic.agent.server.api:Received 2 death signal, shutting down workers WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 4797 closing signal SIGINT #31

Open
StevenD07 opened this issue Feb 22, 2024 · 0 comments

[2024-02-22 19:20:17,149][datasets.builder][WARNING] - Found cached dataset json (/root/.cache/huggingface/datasets/json/default-70dc00f935d3701b/0.0.0/8bb11242116d547c741b2e8a1f18598ffdd40a1d4f2a2872c7a28b697434bc96)
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 138.50it/s]
[2024-02-22 19:20:22,163][torch.nn.parallel.distributed][INFO] - Reducer buckets have been rebuilt in this iteration.

My training run gets stuck here. After I interrupt it, the output below appears. I want to ask whether this hang is caused by the code or by my environment.

WARNING:torch.distributed.elastic.agent.server.api:Received 2 death signal, shutting down workers
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 4797 closing signal SIGINT
Traceback (most recent call last):
File "main.py", line 84, in <module>
main()
File "/root/anaconda3/envs/biot5/lib/python3.8/site-packages/hydra/main.py", line 94, in decorated_main
_run_hydra(
File "/root/anaconda3/envs/biot5/lib/python3.8/site-packages/hydra/_internal/utils.py", line 394, in _run_hydra
_run_app(
File "/root/anaconda3/envs/biot5/lib/python3.8/site-packages/hydra/_internal/utils.py", line 457, in _run_app
run_and_report(
File "/root/anaconda3/envs/biot5/lib/python3.8/site-packages/hydra/_internal/utils.py", line 220, in run_and_report
return func()
File "/root/anaconda3/envs/biot5/lib/python3.8/site-packages/hydra/_internal/utils.py", line 458, in <lambda>
lambda: hydra.run(
File "/root/anaconda3/envs/biot5/lib/python3.8/site-packages/hydra/_internal/hydra.py", line 119, in run
ret = run_job(
File "/root/anaconda3/envs/biot5/lib/python3.8/site-packages/hydra/core/utils.py", line 186, in run_job
ret.return_value = task_function(task_cfg)
File "main.py", line 77, in main
train(model, train_dataloader, validation_dataloader, test_dataloader, accelerator,
File "/root/Biot5/biot5/utils/train_utils.py", line 289, in train
accelerator.backward(loss / args.optim.grad_acc)
File "/root/anaconda3/envs/biot5/lib/python3.8/site-packages/accelerate/accelerator.py", line 1966, in backward
loss.backward(**kwargs)
File "/root/anaconda3/envs/biot5/lib/python3.8/site-packages/torch/_tensor.py", line 487, in backward
torch.autograd.backward(
File "/root/anaconda3/envs/biot5/lib/python3.8/site-packages/torch/autograd/__init__.py", line 200, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
KeyboardInterrupt
Traceback (most recent call last):
File "/root/anaconda3/envs/biot5/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/root/anaconda3/envs/biot5/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
File "/root/anaconda3/envs/biot5/lib/python3.8/site-packages/torch/distributed/run.py", line 794, in main
run(args)
File "/root/anaconda3/envs/biot5/lib/python3.8/site-packages/torch/distributed/run.py", line 785, in run
elastic_launch(
File "/root/anaconda3/envs/biot5/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/root/anaconda3/envs/biot5/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 241, in launch_agent
result = agent.run()
File "/root/anaconda3/envs/biot5/lib/python3.8/site-packages/torch/distributed/elastic/metrics/api.py", line 129, in wrapper
result = f(*args, **kwargs)
File "/root/anaconda3/envs/biot5/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py", line 723, in run
result = self._invoke_run(role)
File "/root/anaconda3/envs/biot5/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py", line 864, in _invoke_run
time.sleep(monitor_interval)
File "/root/anaconda3/envs/biot5/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/api.py", line 62, in _terminate_process_handler
raise SignalException(f"Process {os.getpid()} got signal: {sigval}", sigval=sigval)
torch.distributed.elastic.multiprocessing.api.SignalException: Process 4763 got signal: 2
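One way to narrow down whether the hang is in the code or the environment (a suggested diagnostic, not part of the original report) is to rerun with PyTorch's distributed debug environment variables enabled; the `main.py` entry point is taken from the traceback above, and `--nproc_per_node` is a placeholder to fill in for your setup:

```shell
# NCCL_DEBUG=INFO prints NCCL communicator setup and transport errors,
# which usually reveals environment problems (NIC/IB/P2P issues).
# TORCH_DISTRIBUTED_DEBUG=DETAIL enables DDP's collective-mismatch checks,
# which usually reveal code problems (ranks calling different collectives).
NCCL_DEBUG=INFO TORCH_DISTRIBUTED_DEBUG=DETAIL \
  torchrun --nproc_per_node=<num_gpus> main.py
```

If the run hangs with no NCCL errors and no mismatch report, that points more toward an environment or interconnect issue than a bug in the training loop.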
