
Program is hanging with active threadpool even after drop(pool)? #776

Closed
jamestwebber opened this issue Jul 13, 2020 · 13 comments

Comments

@jamestwebber

I'm not sure if this is a rayon issue or something else, but it feels like an issue with the threads not exiting.

I am using rayon as part of a tool for processing sequencing data (code is here). It mostly works, but on occasion the program seems to hang at the end instead of exiting (specifically it's this program that hangs, the others seem okay).

Everything in the code has finished (I see the last logging message, and the output file is present) but the program never exits. When I check htop it appears the threadpool is still there, not doing anything, but the main thread is using CPU for something (not sure what, maybe spinning on a lock?).

I tried adding an explicit drop(pool) to the code; it executes that, but all the threads are still around, so I'm not sure what's happening. This program configures the global threadpool because I don't want to use all the threads on the machine, so maybe it can't be explicitly dropped.

Has anyone encountered behavior like this? If there's any useful info I can provide, let me know. I'm probably doing something wrong here, but I'm not sure what. I'm currently re-running with flamegraph to get an idea of what it's doing during this mysterious post-run time.
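
For reference, the setup is roughly the following (a simplified sketch, not the exact code from the linked repo; the thread count and the work inside the parallel iterator are placeholders):

```rust
use rayon::prelude::*;

fn main() {
    // Configure the *global* pool once, capped below the machine's core count.
    rayon::ThreadPoolBuilder::new()
        .num_threads(8) // placeholder value
        .build_global()
        .expect("global pool was already initialized");

    // Placeholder workload standing in for the sequencing-data processing.
    let data: Vec<u64> = (0..1_000_000).collect();
    let total: u64 = data.par_iter().sum();
    println!("checksum: {}", total);
}
```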

@cuviper
Member

cuviper commented Jul 13, 2020

It's known that dropping the pool doesn't actually wait for all threads to exit -- see #688. And we never stop the global pool (#483), because I was never able to find a safe way to do so. I haven't thought about it in a while though...

Still, I wouldn't expect that to block program exit. Can you attach a debugger and see what it's doing?
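
To be concrete, even for a locally built pool the drop behaves roughly like this (an illustrative sketch, not code from rayon itself; per #688 the drop returns without joining the worker threads):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

fn main() {
    let pool = rayon::ThreadPoolBuilder::new()
        .num_threads(4)
        .build()
        .unwrap();

    let counter = AtomicU64::new(0);
    pool.install(|| {
        rayon::scope(|s| {
            for _ in 0..4 {
                s.spawn(|_| {
                    counter.fetch_add(1, Ordering::Relaxed);
                });
            }
        });
    });

    // This signals the workers to terminate but does not join them (#688),
    // so they may still be winding down after drop() returns. The global
    // pool is never stopped at all (#483).
    drop(pool);
    println!("spawned work finished: {}", counter.load(Ordering::Relaxed));
}
```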

@jamestwebber
Author

I will look into it (the flamegraph was not useful). It's odd because it sometimes works fine, but certain inputs fail, which maybe means there's an unhandled error somewhere that prevents the exit; I'm not sure where that might be.

@jamestwebber
Author

jamestwebber commented Jul 14, 2020

I just ran the program with rust-gdb in debug mode, and I see the error ../sysdeps/unix/sysv/linux/futex-internal.h: No such file or directory, coming from condvar.rs. I didn't see that in release mode, although I had debug = true in my Cargo.toml. I also saw some additional debug messages, so maybe the compiler had done something fancy with optimizations earlier.

Going to rerun to confirm this is reproducible. I'll keep looking; not sure if this is a rayon issue or something in my code that was being hidden.

@cuviper
Member

cuviper commented Jul 14, 2020

That sounds like your current thread was captured at a point where it waits for a lock (implemented by futexes), and you just don't have the debug sources for glibc available. No big deal.

Try thread apply all backtrace in gdb to see the rough state of all threads.

@jamestwebber
Author

Yeah, I'm re-running (unfortunately debug mode is slower, and the run I know reproduces this is large). I'm also trying to get more error info, because I think I'm getting errors inside the threads that weren't showing up in the logs in release mode.

@jamestwebber
Author

Hm, I'm not sure this is a rayon issue, but I'm still seeing it. The output of thread apply all backtrace makes it look like all the threads are just waiting, and the main thread is blocked on dropping a vector.

Example of a thread (I believe they all look the same):

Thread 33 (Thread 0x7fff9edf6700 (LWP 17538)):
#0  0x00007ffff77b79f3 in futex_wait_cancelable (private=<optimized out>, expected=0, futex_word=0x5555558d5178) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
#1  __pthread_cond_wait_common (abstime=0x0, mutex=0x5555558d5120, cond=0x5555558d5150) at pthread_cond_wait.c:502
#2  __pthread_cond_wait (cond=0x5555558d5150, mutex=0x5555558d5120) at pthread_cond_wait.c:655
#3  0x00005555555d551a in std::sys::unix::condvar::Condvar::wait (mutex=0xfffffffffffffe00, self=<optimized out>)
    at /rustc/c7087fe00d2ba919df1d813c040a5d47e43b0fe7/src/libstd/sys/unix/condvar.rs:73
#4  std::sys_common::condvar::Condvar::wait (mutex=0x5555558d5120, self=<optimized out>)
    at /rustc/c7087fe00d2ba919df1d813c040a5d47e43b0fe7/src/libstd/sys_common/condvar.rs:50
#5  std::sync::condvar::Condvar::wait (self=<optimized out>, guard=...) at /rustc/c7087fe00d2ba919df1d813c040a5d47e43b0fe7/src/libstd/sync/condvar.rs:200
#6  rayon_core::sleep::Sleep::sleep (self=0x5555558d5398, worker_index=<optimized out>)
    at /home/seqbot/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.7.1/src/sleep/mod.rs:267
#7  0x00005555555d1466 in rayon_core::sleep::Sleep::no_work_found (self=0x5555558d5398, worker_index=31, yields=<optimized out>)
    at /home/seqbot/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.7.1/src/sleep/mod.rs:91
#8  rayon_core::registry::WorkerThread::wait_until_cold (self=0x7fff9edf5900, latch=0x5555558d53f0)
    at /home/seqbot/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.7.1/src/registry.rs:671
#9  0x00005555555d00e0 in rayon_core::registry::WorkerThread::wait_until (self=0x7fff9edf5900, latch=0x80)
    at /home/seqbot/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.7.1/src/registry.rs:643
#10 rayon_core::registry::main_loop (registry=..., index=31, worker=...)
    at /home/seqbot/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.7.1/src/registry.rs:763
#11 rayon_core::registry::ThreadBuilder::run (self=...) at /home/seqbot/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.7.1/src/registry.rs:56
#12 0x00005555555d5d11 in <rayon_core::registry::DefaultSpawn as rayon_core::registry::ThreadSpawn>::spawn::{{closure}} ()
    at /home/seqbot/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.7.1/src/registry.rs:101
#13 std::sys_common::backtrace::__rust_begin_short_backtrace (f=...) at /rustc/c7087fe00d2ba919df1d813c040a5d47e43b0fe7/src/libstd/sys_common/backtrace.rs:130
#14 0x00005555555d2d5f in std::thread::Builder::spawn_unchecked::{{closure}}::{{closure}} ()
    at /rustc/c7087fe00d2ba919df1d813c040a5d47e43b0fe7/src/libstd/thread/mod.rs:475
#15 <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once (self=..., _args=<optimized out>)
    at /rustc/c7087fe00d2ba919df1d813c040a5d47e43b0fe7/src/libstd/panic.rs:318
#16 std::panicking::try::do_call (data=<optimized out>) at /rustc/c7087fe00d2ba919df1d813c040a5d47e43b0fe7/src/libstd/panicking.rs:331
#17 std::panicking::try (f=...) at /rustc/c7087fe00d2ba919df1d813c040a5d47e43b0fe7/src/libstd/panicking.rs:274
#18 std::panic::catch_unwind (f=...) at /rustc/c7087fe00d2ba919df1d813c040a5d47e43b0fe7/src/libstd/panic.rs:394
#19 std::thread::Builder::spawn_unchecked::{{closure}} () at /rustc/c7087fe00d2ba919df1d813c040a5d47e43b0fe7/src/libstd/thread/mod.rs:474
#20 core::ops::function::FnOnce::call_once{{vtable-shim}} () at /rustc/c7087fe00d2ba919df1d813c040a5d47e43b0fe7/src/libcore/ops/function.rs:232
#21 0x0000555555642b4a in <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once ()
    at /rustc/c7087fe00d2ba919df1d813c040a5d47e43b0fe7/src/liballoc/boxed.rs:1008
#22 <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once () at /rustc/c7087fe00d2ba919df1d813c040a5d47e43b0fe7/src/liballoc/boxed.rs:1008
#23 std::sys::unix::thread::Thread::new::thread_start () at src/libstd/sys/unix/thread.rs:87
#24 0x00007ffff77b16db in start_thread (arg=0x7fff9edf6700) at pthread_create.c:463
#25 0x00007ffff72c288f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Main thread:

Thread 1 (Thread 0x7ffff7fd2880 (LWP 17503)):
#0  0x00007ffff72319a5 in malloc_consolidate (av=av@entry=0x7fffd0000020) at malloc.c:4456
#1  0x00007ffff723903b in _int_free (have_lock=0, p=<optimized out>, av=0x7fffd0000020) at malloc.c:4362
#2  __GI___libc_free (mem=0x7fffd13892d0) at malloc.c:3124
#3  0x000055555556a0e3 in alloc::alloc::dealloc (ptr=0x7fffd0000020 "\001", layout=...)
    at /rustc/c7087fe00d2ba919df1d813c040a5d47e43b0fe7/src/liballoc/alloc.rs:102
#4  <alloc::alloc::Global as core::alloc::AllocRef>::dealloc (self=<optimized out>, ptr=..., layout=...)
    at /rustc/c7087fe00d2ba919df1d813c040a5d47e43b0fe7/src/liballoc/alloc.rs:186
#5  <alloc::raw_vec::RawVec<T,A> as core::ops::drop::Drop>::drop (self=<optimized out>)
    at /rustc/c7087fe00d2ba919df1d813c040a5d47e43b0fe7/src/liballoc/raw_vec.rs:595
#6  core::ptr::drop_in_place () at /rustc/c7087fe00d2ba919df1d813c040a5d47e43b0fe7/src/libcore/ptr/mod.rs:177
#7  core::ptr::drop_in_place () at /rustc/c7087fe00d2ba919df1d813c040a5d47e43b0fe7/src/libcore/ptr/mod.rs:177
#8  core::ptr::drop_in_place () at /rustc/c7087fe00d2ba919df1d813c040a5d47e43b0fe7/src/libcore/ptr/mod.rs:177
#9  <alloc::vec::Vec<T> as core::ops::drop::Drop>::drop (self=<optimized out>) at /rustc/c7087fe00d2ba919df1d813c040a5d47e43b0fe7/src/liballoc/vec.rs:2382
#10 core::ptr::drop_in_place () at /rustc/c7087fe00d2ba919df1d813c040a5d47e43b0fe7/src/libcore/ptr/mod.rs:177
#11 core::ptr::drop_in_place () at /rustc/c7087fe00d2ba919df1d813c040a5d47e43b0fe7/src/libcore/ptr/mod.rs:177
#12 core::ptr::mut_ptr::<impl *mut T>::drop_in_place (self=<optimized out>) at /rustc/c7087fe00d2ba919df1d813c040a5d47e43b0fe7/src/libcore/ptr/mut_ptr.rs:746
#13 hashbrown::raw::Bucket<T>::drop (self=<optimized out>) at /cargo/registry/src/github.com-1ecc6299db9ec823/hashbrown-0.6.2/src/raw/mod.rs:307
#14 <hashbrown::raw::RawTable<T> as core::ops::drop::Drop>::drop (self=<optimized out>)
    at /cargo/registry/src/github.com-1ecc6299db9ec823/hashbrown-0.6.2/src/raw/mod.rs:1041
#15 0x000055555556a408 in core::ptr::drop_in_place () at /rustc/c7087fe00d2ba919df1d813c040a5d47e43b0fe7/src/libcore/ptr/mod.rs:177
#16 core::ptr::drop_in_place () at /rustc/c7087fe00d2ba919df1d813c040a5d47e43b0fe7/src/libcore/ptr/mod.rs:177
#17 core::ptr::drop_in_place () at /rustc/c7087fe00d2ba919df1d813c040a5d47e43b0fe7/src/libcore/ptr/mod.rs:177
#18 core::ptr::drop_in_place () at /rustc/c7087fe00d2ba919df1d813c040a5d47e43b0fe7/src/libcore/ptr/mod.rs:177
#19 0x000055555556c7af in bcl2index::main () at src/bin/index.rs:131

@cuviper
Member

cuviper commented Jul 22, 2020

If all the rayon threads look like the example you picked, then they're just idle.

For the dropping backtrace in your main thread, if you step through, does it make progress? Maybe this is just a lot of nested containers that take a while to drop in debug mode?

@jamestwebber
Author

It does seem like it's spending its time in malloc.c (specifically in malloc_consolidate), but I don't think it's making progress. I've seen the program sit like this for at least an hour in the past.

This program does allocate a very large shared ndarray that all of the threads use. Is it possible that it's having trouble freeing it because it was used by so many threads, even though they should be done?

@cuviper
Member

cuviper commented Jul 22, 2020

If that's one large allocation, I wouldn't think threads would matter much, but it's possible. If it's an ndarray where all the items have allocations, created on many different threads, then it seems even more plausible that it's triggering something bad in the allocator. Either way, it might be worth trying on a different target (different system allocator) or with something like jemallocator.
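
If it helps, the jemallocator swap is just the global allocator attribute (a minimal sketch; it also assumes the jemallocator crate is added as a dependency in Cargo.toml):

```rust
// Minimal sketch: route all Rust allocations through jemalloc instead of the
// system allocator (assumes `jemallocator` is listed in Cargo.toml).
use jemallocator::Jemalloc;

#[global_allocator]
static GLOBAL: Jemalloc = Jemalloc;

fn main() {
    // Placeholder for the real workload; every Box/Vec/HashMap in the program
    // now allocates and frees through jemalloc.
    let big: Vec<f32> = vec![0.0; 10_000_000];
    drop(big);
}
```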

@jamestwebber
Author

It's one big allocation, and then the threads operate on separate chunks of it in parallel. I'm going to try running with jemallocator. I can also try running this on a ppc64le machine, as we have those available... I haven't tried that in a while, though, so I'm not sure it still works.
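
The access pattern is roughly the following (a simplified sketch using a plain Vec and rayon's par_chunks_mut; the real code uses a large ndarray, and the sizes and per-chunk work are placeholders):

```rust
use rayon::prelude::*;

fn main() {
    // One big up-front allocation, as described above (size is a placeholder).
    let mut data = vec![0.0_f32; 64 * 1024 * 1024];
    let chunk_len = data.len() / 32;

    // Each rayon worker gets a disjoint chunk, so no locking is needed.
    data.par_chunks_mut(chunk_len)
        .enumerate()
        .for_each(|(i, chunk)| {
            for x in chunk.iter_mut() {
                *x += i as f32; // placeholder per-element work
            }
        });

    // The whole buffer is freed when main ends, which is roughly where the
    // hang was observed with glibc malloc.
}
```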

@jamestwebber
Author

Update: both of those attempts worked. With jemallocator the program exited cleanly, and running the same program on a ppc64le machine was also fine. So I guess it's something to do with how malloc is freeing up memory (or... not doing that).

I guess this is probably unrelated to rayon, unless something to do with shared memory access has caused it to lock? It seems like it can't obtain a mutex when it's trying to free.

@cuviper
Member

cuviper commented Jul 24, 2020

Rayon doesn't do anything to "lock" memory, and mere access from separate threads shouldn't be a problem. I'd suspect a race condition in the allocation, but even then it's pretty weird given that you allocate it all up front. So I don't know, but this problem doesn't seem to be a rayon issue.

Maybe you could try the LLVM address sanitizer, though I think that's only available on nightly x86_64 rustc. I guess if the system allocator is broken, it would also have to be compiled with the sanitizer to find this. Valgrind memcheck is more general, but it's not great with threads, especially with the spinning that rayon does for work stealing.

I'm going to close here, but good luck!

@cuviper cuviper closed this as completed Jul 24, 2020
@jamestwebber
Author

Yes, closing makes sense, and thanks for your help debugging a non-rayon problem!
