torchrun c10d backend doesn't seem to work with python 3.12, giving segmentation fault because of calling obmalloc without holding GIL #125990

Closed
TanyaAdams1 opened this issue May 11, 2024 · 10 comments
Labels: high priority, oncall: distributed, triage review

Comments

TanyaAdams1 commented May 11, 2024

🐛 Describe the bug

TL;DR: It seems that Python 3.12 changed the way the GIL works, and now using torch.distributed (especially the c10d rendezvous backend) triggers a segmentation fault. After debugging, I believe this error is caused by calling an object allocation function without holding the GIL.

To reproduce this bug, first create a new conda environment (conda create -n torch), then follow the installation instructions on the PyTorch website: conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia. During this step, conda by default installs a very recent version of Python (3.12.3 for me). Then run torchrun with any random script name: torchrun --standalone --nproc-per-node 4 random_name.py (the program crashes even before launching the script!). Here's the error message I got:

[2024-05-10 22:43:34,776] torch.distributed.run: [WARNING] *****************************************
[2024-05-10 22:43:34,776] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. 
[2024-05-10 22:43:34,776] torch.distributed.run: [WARNING] *****************************************
Fatal Python error: Segmentation fault

Current thread 0x00002b7933234740 (most recent call first):
  File ".../lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 113 in _call_store
  File ".../lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 64 in __init__
  File ".../lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 253 in create_backend
  File ".../lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/registry.py", line 36 in _create_c10d_handler
  File ".../lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/api.py", line 258 in create_handler
  File ".../lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/registry.py", line 66 in get_rendezvous_handler
  File ".../lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 238 in launch_agent
  File ".../lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 135 in __call__
  File ".../lib/python3.12/site-packages/torch/distributed/run.py", line 803 in run
  File ".../lib/python3.12/site-packages/torch/distributed/run.py", line 812 in main
  File ".../lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347 in wrapper
  File ".../bin/torchrun", line 33 in <module>

Extension modules: mkl._mklinit, mkl._py_mkl_service, numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, torch._C, torch._C._fft, torch._C._linalg, torch._C._nested, torch._C._nn, torch._C._sparse, torch._C._special (total: 22)
Segmentation fault (core dumped)

I tried to debug this using gdb: gdb --args python -m torch.distributed.launch --standalone --nproc-per-node 4 random_name.py, and here's the output:

0x000055555574498d in _PyInterpreterState_GET () at /usr/local/src/conda/python-3.12.3/Include/internal/pycore_pystate.h:133
warning: 133    /usr/local/src/conda/python-3.12.3/Include/internal/pycore_pystate.h: No such file or directory
(gdb) bt
#0  0x000055555574498d in _PyInterpreterState_GET () at /usr/local/src/conda/python-3.12.3/Include/internal/pycore_pystate.h:133
#1  get_state () at /usr/local/src/conda/python-3.12.3/Objects/obmalloc.c:866
#2  _PyObject_Malloc (nbytes=45, ctx=<optimized out>) at /usr/local/src/conda/python-3.12.3/Objects/obmalloc.c:1563
#3  PyObject_Malloc (size=45) at /usr/local/src/conda/python-3.12.3/Objects/obmalloc.c:801
#4  0x000055555575d125 in _PyBytes_FromSize (use_calloc=0, size=12) at /usr/local/src/conda/python-3.12.3/Objects/bytesobject.c:102
#5  PyBytes_FromStringAndSize (str=0x5555582ba040 "Y2FuaW1hZGFtUU", size=12) at /usr/local/src/conda/python-3.12.3/Objects/bytesobject.c:134
#6  0x00002aaab4aeda35 in pybind11::cpp_function::initialize<torch::distributed::c10d::(anonymous namespace)::c10d_init(_object*, _object*)::{lambda(c10d::Store&, std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)#28}, pybind11::bytes, c10d::Store&, std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::call_guard<pybind11::gil_scoped_release>, char [888]>(torch::distributed::c10d::(anonymous namespace)::c10d_init(_object*, _object*)::{lambda(c10d::Store&, std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)#28}&&, pybind11::bytes (*)(c10d::Store&, std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::call_guard<pybind11::gil_scoped_release> const&, char const (&) [888])::{lambda(pybind11::detail::function_call&)#3}::_FUN(pybind11::detail::function_call&) ()
   from .../lib/python3.12/site-packages/torch/lib/libtorch_python.so
#7  0x00002aaab42a7123 in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) ()
   from .../lib/python3.12/site-packages/torch/lib/libtorch_python.so
...

Downgrading Python back to 3.10 solves the problem for me for now, but given that conda installs 3.12.3 by default, updating how PyTorch handles the GIL seems like the right way to go.
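
For context, the backtrace above points at a pybind11 binding that is declared with py::call_guard<py::gil_scoped_release> and returns a pybind11::bytes object, so the Python bytes allocation runs while the GIL is released. Below is a minimal sketch of that shape (a hypothetical store_get binding, not the actual c10d code):

#include <pybind11/pybind11.h>
#include <string>

namespace py = pybind11;

// Hypothetical module with the same shape as the binding in the backtrace:
// the GIL is released for the whole call, but the lambda still returns
// py::bytes, whose constructor calls PyBytes_FromStringAndSize. Under
// Python 3.12 that allocation path reads the current thread state, which
// is detached while the GIL is released, hence the segfault.
PYBIND11_MODULE(gil_repro, m) {
    m.def(
        "store_get",
        [](const std::string& key) {
            std::string value = "value-for-" + key;  // stand-in for the blocking store round-trip
            return py::bytes(value);                 // Python object created without the GIL held
        },
        py::call_guard<py::gil_scoped_release>());
}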

Versions

Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.35

Python version: 3.12.2 | packaged by conda-forge | (main, Feb 16 2024, 20:50:58) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.114.2.el7.x86_64-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: 
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB

Nvidia driver version: 550.54.14
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Address sizes:       48 bits physical, 48 bits virtual
Byte Order:          Little Endian
CPU(s):              128
On-line CPU(s) list: 0-127
Vendor ID:           AuthenticAMD
Model name:          AMD EPYC 7763 64-Core Processor
CPU family:          25
Model:               1
Thread(s) per core:  1
Core(s) per socket:  64
Socket(s):           2
Stepping:            1
BogoMIPS:            4890.76
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl nonstop_tsc extd_apicid aperfmperf eagerfpu pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2 cpb cat_l3 cdp_l3 invpcid_single hw_pstate sme ssbd rsb_ctxsw ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq overflow_recov succor smca
Virtualization:      AMD-V
L1d cache:           4 MiB (128 instances)
L1i cache:           4 MiB (128 instances)
L2 cache:            64 MiB (128 instances)
L3 cache:            512 MiB (16 instances)
NUMA node(s):        8
NUMA node0 CPU(s):   0-15
NUMA node1 CPU(s):   16-31
NUMA node2 CPU(s):   32-47
NUMA node3 CPU(s):   48-63
NUMA node4 CPU(s):   64-79
NUMA node5 CPU(s):   80-95
NUMA node6 CPU(s):   96-111
NUMA node7 CPU(s):   112-127

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.2.1
[pip3] torchaudio==2.2.1
[pip3] torchmetrics==1.3.1
[pip3] torchnet==0.0.4
[pip3] torchvision==0.17.1
[conda] blas                      1.0                         mkl  
[conda] ffmpeg                    4.3                  hf484d3e_0    pytorch
[conda] mkl                       2023.1.0         h213fc3f_46344  
[conda] mkl-service               2.4.0           py312h5eee18b_1  
[conda] mkl_fft                   1.3.8           py312h5eee18b_0  
[conda] mkl_random                1.2.4           py312hdb19cb5_0  
[conda] numpy                     1.26.4          py312hc5e2394_0  
[conda] numpy-base                1.26.4          py312h0da6c21_0  
[conda] pytorch                   2.2.1           py3.12_cuda11.8_cudnn8.7.0_0    pytorch
[conda] pytorch-cuda              11.8                 h7e8668a_5    pytorch
[conda] pytorch-mutex             1.0                        cuda    pytorch
[conda] torchaudio                2.2.1               py312_cu118    pytorch
[conda] torchmetrics              1.3.1                    pypi_0    pypi
[conda] torchnet                  0.0.4                    pypi_0    pypi
[conda] torchvision               0.17.1              py312_cu118    pytorch

cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @fegin @XilunWu @wanchaol @fduwjj @wz337 @tianyu-l @wconstab @yf225 @chauhang @d4l3k

tringwald added the oncall: distributed label on May 12, 2024

tringwald (Collaborator) commented:

Thank you for your bug report. I can reproduce the crash in a clean Python 3.12 environment.

wconstab (Contributor) commented:

@kurman, is this bug specific to one rendezvous method? I'm not sure why that would be the case, but if so, I wonder whether we plan to keep this rendezvous method after the cleanup/consolidation work.

kurman (Contributor) commented May 13, 2024

is this bug specific to one rendezvous method?

I believe @XilunWu was able to isolate the segfault to TCPStore: #116423. If so, this could be a larger issue.

wconstab (Contributor) commented:

I wonder if this issue can be reproduced when specifying the USE_LIBUV=1 env var?

c-p-i-o (Contributor) commented May 13, 2024

I wonder if this issue can be reproduced when specifying the USE_LIBUV=1 env var?

The issue still reproduces with USE_LIBUV=1. Same core dump.

USE_LIBUV=1 torchrun --standalone --nproc-per-node 4 random_name.py
OR
export USE_LIBUV=1 && torchrun --standalone --nproc-per-node 4 random_name.py
OR
(torch-3.12) [cpio@devvm17556.vll0 ~]$ env |grep LIBUV
USE_LIBUV=1
(torch-3.12) [cpio@devvm17556.vll0 ~]$ torchrun --standalone --nproc-per-node 4 random_name.py

W0513 14:57:46.757000 140207180518464 torch/distributed/run.py:757] *****************************************
Fatal Python error: Segmentation fault

Current thread 0x00007f8487309440 (most recent call first):
  File "/home/cpio/.conda/envs/torch-3.12/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 113 in _call_store

kurman (Contributor) commented May 13, 2024

I tried isolating the Store type with a single test, and all of them segfault:

pytest test/distributed/test_store.py -k "FileStoreTest and test_compare_set"
pytest test/distributed/test_store.py -k "HashStoreTest and test_compare_set"
pytest test/distributed/test_store.py -k "PrefixFileStoreTest and test_compare_set"
pytest test/distributed/test_store.py -k "TCPStoreTest and test_compare_set"
pytest test/distributed/test_store.py -k "LibUvTCPStoreTest and test_compare_set"
pytest test/distributed/test_store.py -k "PrefixTCPStoreTest and test_compare_set"

kurman (Contributor) commented May 13, 2024

Basic repro on TCP store (both libuv and non-libuv):

import torch.distributed as dist
from datetime import timedelta

# Single-rank TCPStore acting as the master (host, port, world_size, is_master).
store = dist.TCPStore("localhost", 0, 1, True, timeout=timedelta(seconds=2))
# Segfaults while the C++ result is converted back to a Python bytes object.
store.compare_set('k', 'v1', 'v2')
Segmentation fault (core dumped)

GDB:

Thread 1 "pt_main_thread" received signal SIGSEGV, Segmentation fault.
0x00000000005042c9 in _PyInterpreterState_GET () at /usr/local/src/conda/python-3.12.0/Include/internal/pycore_pystate.h:118
118     /usr/local/src/conda/python-3.12.0/Include/internal/pycore_pystate.h: No such file or directory.

XilunWu (Contributor) commented Jun 5, 2024

@TanyaAdams1 Thanks a lot for the debugging info. Yeah, the per-interpreter GIL update in 3.12 is causing issues. Do you think adding a Python-level lock would solve the issue?

albanD (Collaborator) commented Jun 6, 2024

FYI, running the repro with a debug build of CPython points to the real issue: you're calling into CPython APIs without holding the GIL.

See the logs below for full details:

Fatal Python error: _PyMem_DebugMalloc: Python memory allocator called without holding the GIL
Python runtime state: initialized

Current thread 0x00007ffff7ce0740 (most recent call first):
  File "/home/albandes/local/pytorch/3.12_debug_source/test/foo.py", line 4 in <module>

Extension modules: numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, torch._C, torch._C._fft, torch._C._linalg, torch._C._nested, torch._C._nn, torch._C._sparse, torch._C._special (total: 20)

Thread 1 "pt_main_thread" received signal SIGABRT, Aborted.
__pthread_kill_implementation (threadid=<optimized out>, signo=signo@entry=6, no_tid=no_tid@entry=0)
    at pthread_kill.c:44
Downloading source file /usr/src/debug/glibc-2.38-18.fc39.x86_64/nptl/pthread_kill.c
44            return INTERNAL_SYSCALL_ERROR_P (ret) ? INTERNAL_SYSCALL_ERRNO (ret) : 0;                               
(gdb) bt
#0  __pthread_kill_implementation (threadid=<optimized out>, signo=signo@entry=6, no_tid=no_tid@entry=0)
    at pthread_kill.c:44
#1  0x00007ffff7d738a3 in __pthread_kill_internal (signo=6, threadid=<optimized out>) at pthread_kill.c:78
#2  0x00007ffff7d218ee in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
#3  0x00007ffff7d098ff in __GI_abort () at abort.c:79
#4  0x00000000005f4fa5 in fatal_error_exit (status=<optimized out>) at Python/pylifecycle.c:2735
#5  0x00000000005f669d in fatal_error (fd=2, header=header@entry=1, 
    prefix=prefix@entry=0x6c40b0 <__func__.5> "_PyMem_DebugMalloc", 
    msg=msg@entry=0x6c4938 "Python memory allocator called without holding the GIL", status=status@entry=-1)
    at Python/pylifecycle.c:2916
#6  0x00000000005f6707 in _Py_FatalErrorFunc (func=func@entry=0x6c40b0 <__func__.5> "_PyMem_DebugMalloc", 
    msg=msg@entry=0x6c4938 "Python memory allocator called without holding the GIL") at Python/pylifecycle.c:2932
#7  0x00000000004fe028 in _PyMem_DebugCheckGIL (func=func@entry=0x6c40b0 <__func__.5> "_PyMem_DebugMalloc")
    at Objects/obmalloc.c:2271
#8  0x00000000004fe03f in _PyMem_DebugMalloc (ctx=0x9b7d18 <_PyRuntime+312>, nbytes=35) at Objects/obmalloc.c:2280
#9  0x00000000004ff00c in PyObject_Malloc (size=size@entry=35) at Objects/obmalloc.c:801
#10 0x00000000004a5397 in _PyBytes_FromSize (size=size@entry=2, use_calloc=use_calloc@entry=0)
    at Objects/bytesobject.c:102
#11 0x00000000004a7357 in PyBytes_FromStringAndSize (str=0x4b04a80 "v1\260\004", size=2) at Objects/bytesobject.c:134
#12 0x00007fffe99e27dc in pybind11::cpp_function::initialize<torch::distributed::c10d::(anonymous namespace)::c10d_init(_object*, _object*)::{lambda(c10d::Store&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)#1}, pybind11::bytes, c10d::Store&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::call_guard<pybind11::gil_scoped_release>, char [888]>(torch::distributed::c10d::(anonymous namespace)::c10d_init(_object*, _object*)::{lambda(c10d::Store&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)#1}&&, pybind11::bytes (*)(c10d::Store&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::call_guard<pybind11::gil_scoped_release> const&, char const (&) [888])::{lambda(pybind11::detail::function_call&)#1}::_FUN(pybind11::detail::function_call&) ()
   from /home/albandes/local/pytorch/3.12_debug_source/torch/lib/libtorch_python.so
#13 0x00007fffe91eb133 in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) ()
   from /home/albandes/local/pytorch/3.12_debug_source/torch/lib/libtorch_python.so
#14 0x00000000004f6ee1 in cfunction_call (
    func=func@entry=<built-in method compare_set of PyCapsule object at remote 0x7fffd9931c10>, args=args@entry=Python Exception <class 'gdb.error'>: There is no member named ready.
, 
    kwargs=kwargs@entry=0x0) at Objects/methodobject.c:537
#15 0x00000000004afa0f in _PyObject_MakeTpCall (tstate=tstate@entry=0xa2bdf8 <_PyRuntime+475672>, 
    callable=callable@entry=<built-in method compare_set of PyCapsule object at remote 0x7fffd9931c10>, 
    args=args@entry=0x7ffff7fb6070, nargs=<optimized out>, keywords=keywords@entry=0x0) at Objects/call.c:240
#16 0x00000000004afc22 in _PyObject_VectorcallTstate (tstate=0xa2bdf8 <_PyRuntime+475672>, 
    callable=callable@entry=<built-in method compare_set of PyCapsule object at remote 0x7fffd9931c10>, 
    args=args@entry=0x7ffff7fb6070, nargsf=<optimized out>, kwnames=kwnames@entry=0x0)
    at ./Include/internal/pycore_call.h:90
#17 0x00000000004afc70 in PyObject_Vectorcall (
    callable=callable@entry=<built-in method compare_set of PyCapsule object at remote 0x7fffd9931c10>, 
    args=args@entry=0x7ffff7fb6070, nargsf=<optimized out>, kwnames=kwnames@entry=0x0) at Objects/call.c:325
#18 0x00000000005a02e3 in _PyEval_EvalFrameDefault (tstate=0xa2bdf8 <_PyRuntime+475672>, frame=0x7ffff7fb6020, 
    throwflag=0) at Python/bytecodes.c:2706
#19 0x00000000005a5bc8 in _PyEval_EvalFrame (tstate=tstate@entry=0xa2bdf8 <_PyRuntime+475672>, 
    frame=<optimized out>, throwflag=throwflag@entry=0) at ./Include/internal/pycore_ceval.h:89
#20 0x00000000005a5cdb in _PyEval_Vector (tstate=tstate@entry=0xa2bdf8 <_PyRuntime+475672>, 
    func=func@entry=0x7fffea4262d0, locals=locals@entry=Python Exception <class 'gdb.error'>: There is no member named ready.
, args=args@entry=0x0, argcount=argcount@entry=0, 
    kwnames=kwnames@entry=0x0) at Python/ceval.c:1683
#21 0x00000000005a5d8b in PyEval_EvalCode (co=co@entry=<code at remote 0x7fffea5f9a60>, globals=globals@entry=Python Exception <class 'gdb.error'>: There is no member named ready.
, 
--Type <RET> for more, q to quit, c to continue without paging--q
Quit
(gdb) py-bt
Traceback (most recent call first):
  <built-in method compare_set of PyCapsule object at remote 0x7fffd9931c10>
Python Exception <class 'gdb.error'>: There is no member named ready.
Error occurred in Python: There is no member named ready.
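
For what it's worth, the usual pybind11 remedy for this shape is to release the GIL only around the blocking C++ work and to build the Python return object with the GIL held again. A minimal sketch of that idiom, applied to the hypothetical store_get binding sketched earlier in the thread (this shows the general pattern only, not necessarily what the actual fix does):

#include <pybind11/pybind11.h>
#include <string>

namespace py = pybind11;

PYBIND11_MODULE(gil_fixed, m) {
    // Same hypothetical binding, but without py::call_guard<py::gil_scoped_release>
    // applied to the whole call.
    m.def("store_get", [](const std::string& key) {
        std::string value;
        {
            py::gil_scoped_release release;  // drop the GIL only for the blocking work
            value = "value-for-" + key;      // stand-in for the blocking store round-trip
        }                                    // GIL re-acquired when `release` goes out of scope
        return py::bytes(value);             // safe: Python bytes created with the GIL held
    });
}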

wconstab added a commit that referenced this issue Jun 7, 2024
Fixes #125990

ghstack-source-id: d5a6ca1739db27141a3fec192fd2cf6dd4011895
Pull Request resolved: #128212
wconstab added a commit that referenced this issue Jun 7, 2024
Fixes #125990

ghstack-source-id: 174676302e14274de7571ca1cc0acbb670008c67
Pull Request resolved: #128212
XilunWu (Contributor) commented Jun 7, 2024

Closing the issue with @wconstab's fix: #128212.

XilunWu closed this as completed on Jun 7, 2024