dust hangs on a single-core computer #52

Closed

tavianator opened this issue Jan 15, 2020 · 5 comments

@tavianator
Even on an empty directory, dust hangs:

$ mkdir empty
$ dust empty
^C

It works fine if I specify the number of threads:

$ dust -t1 empty
 4.0K ─┬ empty

It works fine on all my other boxes, so I'm guessing the fact that this one has only 1 core is to blame. Here's a stack trace:

Thread 1 "dust" received signal SIGINT, Interrupt.
0x00007ffff7e9562b in sched_yield () from /usr/lib/libc.so.6
(gdb) bt
#0  0x00007ffff7e9562b in sched_yield () from /usr/lib/libc.so.6
#1  0x00005555555ff045 in jwalk::core::ordered_queue::OrderedQueueIter<T>::next_strict (self=0x7fffffff6d38) at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/jwalk-0.4.0/src/core/ordered_queue.rs:150
#2  0x00005555555ff817 in <jwalk::core::ordered_queue::OrderedQueueIter<T> as core::iter::traits::iterator::Iterator>::next (self=0x7fffffff6d38) at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/jwalk-0.4.0/src/core/ordered_queue.rs:170
#3  0x0000555555618ec0 in <jwalk::core::iterators::ReadDirIter as core::iter::traits::iterator::Iterator>::next (self=0x7fffffff6d30) at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/jwalk-0.4.0/src/core/iterators.rs:40
#4  0x00005555555cd3b5 in <core::iter::adapters::Peekable<I> as core::iter::traits::iterator::Iterator>::next (self=0x7fffffff6d30) at /build/rust/src/rustc-1.40.0-src/src/libcore/iter/adapters/mod.rs:1260
#5  0x00005555556194a6 in jwalk::core::iterators::DirEntryIter::push_next_read_dir_iter (self=0x7fffffff6d18) at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/jwalk-0.4.0/src/core/iterators.rs:65
#6  0x0000555555619984 in <jwalk::core::iterators::DirEntryIter as core::iter::traits::iterator::Iterator>::next (self=0x7fffffff6d18) at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/jwalk-0.4.0/src/core/iterators.rs:96
#7  0x000055555559cf38 in dust::utils::examine_dir (top_dir=..., apparent_size=false, inodes=0x7fffffff7638, data=0x7fffffff7670, file_count_no_permission=0x7fffffff7630, threads=...) at src/utils/mod.rs:112
#8  0x000055555559cbc4 in dust::utils::get_dir_tree (top_level_names=0x7fffffffdd58, apparent_size=false, threads=...) at src/utils/mod.rs:79
#9  0x0000555555590fdb in dust::main () at src/main.rs:115
(gdb) thread 2
[Switching to thread 2 (Thread 0x7ffff7dac700 (LWP 4624))]
#0  0x00007ffff7e9562b in sched_yield () from /usr/lib/libc.so.6
(gdb) bt
#0  0x00007ffff7e9562b in sched_yield () from /usr/lib/libc.so.6
#1  0x00005555555fe9ad in jwalk::core::ordered_queue::OrderedQueueIter<T>::next_relaxed (self=0x7ffff7da9960) at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/jwalk-0.4.0/src/core/ordered_queue.rs:122
#2  0x00005555555ff7cb in <jwalk::core::ordered_queue::OrderedQueueIter<T> as core::iter::traits::iterator::Iterator>::next (self=0x7ffff7da9960) at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/jwalk-0.4.0/src/core/ordered_queue.rs:169
#3  0x000055555560ab64 in <rayon::iter::par_bridge::IterParallelProducer<Iter> as rayon::iter::plumbing::UnindexedProducer>::fold_with (self=..., folder=...) at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-1.3.0/src/iter/par_bridge.rs:177
#4  0x00005555555e4184 in rayon::iter::plumbing::bridge_unindexed_producer_consumer (migrated=false, splitter=..., producer=..., consumer=...) at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-1.3.0/src/iter/plumbing/mod.rs:482
#5  0x00005555555e468a in rayon::iter::plumbing::bridge_unindexed_producer_consumer::{{closure}} (context=...) at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-1.3.0/src/iter/plumbing/mod.rs:474
#6  0x00005555555f9abe in rayon_core::join::join_context::call_a::{{closure}} () at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.7.0/src/join/mod.rs:125
#7  0x00005555555d8a36 in <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once (self=..., _args=()) at /build/rust/src/rustc-1.40.0-src/src/libstd/panic.rs:317
#8  0x00005555555e30d1 in std::panicking::try::do_call (data=0x7ffff7da7e20 "\250\223\332\367\377\177\000") at /build/rust/src/rustc-1.40.0-src/src/libstd/panicking.rs:287
#9  0x000055555581463a in __rust_maybe_catch_panic ()
#10 0x00005555555e2778 in std::panicking::try (f=...) at /build/rust/src/rustc-1.40.0-src/src/libstd/panicking.rs:265
#11 0x00005555555d9bc6 in std::panic::catch_unwind (f=...) at /build/rust/src/rustc-1.40.0-src/src/libstd/panic.rs:396
#12 0x00005555555ffe23 in rayon_core::unwind::halt_unwinding (func=...) at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.7.0/src/unwind.rs:17
#13 0x00005555555f91df in rayon_core::join::join_context::{{closure}} (worker_thread=0x7ffff7dab100, injected=false) at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.7.0/src/join/mod.rs:146
#14 0x000055555560a2ff in rayon_core::registry::in_worker (op=...) at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.7.0/src/registry.rs:799
#15 0x00005555555f8ea7 in rayon_core::join::join_context (oper_a=..., oper_b=...) at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.7.0/src/join/mod.rs:133
#16 0x00005555555e3f49 in rayon::iter::plumbing::bridge_unindexed_producer_consumer (migrated=false, splitter=..., producer=..., consumer=...) at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-1.3.0/src/iter/plumbing/mod.rs:473
#17 0x00005555555e3804 in rayon::iter::plumbing::bridge_unindexed (producer=..., consumer=...) at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-1.3.0/src/iter/plumbing/mod.rs:452
#18 0x000055555560f44d in <rayon::iter::par_bridge::IterBridge<Iter> as rayon::iter::ParallelIterator>::drive_unindexed (self=..., consumer=...) at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-1.3.0/src/iter/par_bridge.rs:87
#19 0x00005555555f6d20 in <rayon::iter::map_with::MapWith<I,T,F> as rayon::iter::ParallelIterator>::drive_unindexed (self=..., consumer=...) at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-1.3.0/src/iter/map_with.rs:53
#20 0x00005555555f63c7 in rayon::iter::from_par_iter::<impl rayon::iter::FromParallelIterator<()> for ()>::from_par_iter (par_iter=...) at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-1.3.0/src/iter/from_par_iter.rs:226
#21 0x00005555555f6404 in rayon::iter::ParallelIterator::collect (self=...) at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-1.3.0/src/iter/mod.rs:1886
#22 0x000055555560ee79 in rayon::iter::ParallelIterator::for_each_with (self=..., init=..., op=...) at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-1.3.0/src/iter/mod.rs:393
#23 0x000055555561b334 in jwalk::core::multi_threaded_walk::{{closure}} () at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/jwalk-0.4.0/src/core/mod.rs:142
#24 0x00005555555d8ab5 in <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once (self=..., _args=()) at /build/rust/src/rustc-1.40.0-src/src/libstd/panic.rs:317
#25 0x00005555555e2ff1 in std::panicking::try::do_call (data=0x7ffff7daa800 "Њ\217UUU\000") at /build/rust/src/rustc-1.40.0-src/src/libstd/panicking.rs:287
#26 0x000055555581463a in __rust_maybe_catch_panic ()
#27 0x00005555555e2dd8 in std::panicking::try (f=...) at /build/rust/src/rustc-1.40.0-src/src/libstd/panicking.rs:265
#28 0x00005555555d9c26 in std::panic::catch_unwind (f=...) at /build/rust/src/rustc-1.40.0-src/src/libstd/panic.rs:396
#29 0x00005555555ffea3 in rayon_core::unwind::halt_unwinding (func=...) at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.7.0/src/unwind.rs:17
#30 0x00005555555fa018 in rayon_core::spawn::spawn_job::{{closure}} () at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.7.0/src/spawn/mod.rs:98
#31 0x00005555555fcb98 in <rayon_core::job::HeapJob<BODY> as rayon_core::job::Job>::execute (this=0x5555558f80b0) at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.7.0/src/job.rs:167
#32 0x0000555555631d06 in rayon_core::job::JobRef::execute (self=0x7ffff7daadc0) at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.7.0/src/job.rs:59
#33 0x00005555556233fd in rayon_core::registry::WorkerThread::execute (self=0x7ffff7dab100, job=...) at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.7.0/src/registry.rs:681
#34 0x0000555555622d1c in rayon_core::registry::WorkerThread::wait_until_cold (self=0x7ffff7dab100, latch=0x5555558f7cf0) at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.7.0/src/registry.rs:665
#35 0x0000555555622b16 in rayon_core::registry::WorkerThread::wait_until (self=0x7ffff7dab100, latch=0x5555558f7cf0) at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.7.0/src/registry.rs:639
#36 0x0000555555623c78 in rayon_core::registry::main_loop (worker=..., registry=..., index=0) at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.7.0/src/registry.rs:759
#37 0x00005555556206c0 in rayon_core::registry::ThreadBuilder::run (self=...) at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.7.0/src/registry.rs:56
#38 0x0000555555620d01 in <rayon_core::registry::DefaultSpawn as rayon_core::registry::ThreadSpawn>::spawn::{{closure}} () at /home/tavianator/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.7.0/src/registry.rs:101
#39 0x000055555562c652 in std::sys_common::backtrace::__rust_begin_short_backtrace (f=...) at /build/rust/src/rustc-1.40.0-src/src/libstd/sys_common/backtrace.rs:129
#40 0x00005555556478c1 in std::thread::Builder::spawn_unchecked::{{closure}}::{{closure}} () at /build/rust/src/rustc-1.40.0-src/src/libstd/thread/mod.rs:469
#41 0x00005555556246c1 in <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once (self=..., _args=()) at /build/rust/src/rustc-1.40.0-src/src/libstd/panic.rs:317
#42 0x000055555564122e in std::panicking::try::do_call (data=0x7ffff7dab900 "\000") at /build/rust/src/rustc-1.40.0-src/src/libstd/panicking.rs:287
#43 0x000055555581463a in __rust_maybe_catch_panic ()
#44 0x0000555555640ea8 in std::panicking::try (f=...) at /build/rust/src/rustc-1.40.0-src/src/libstd/panicking.rs:265
#45 0x0000555555625683 in std::panic::catch_unwind (f=...) at /build/rust/src/rustc-1.40.0-src/src/libstd/panic.rs:396
#46 0x00005555556476a6 in std::thread::Builder::spawn_unchecked::{{closure}} () at /build/rust/src/rustc-1.40.0-src/src/libstd/thread/mod.rs:468
#47 0x000055555562cdf4 in core::ops::function::FnOnce::call_once{{vtable-shim}} () at /build/rust/src/rustc-1.40.0-src/src/libcore/ops/function.rs:227
#48 0x000055555580699f in <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once ()
#49 0x0000555555813b10 in std::sys::unix::thread::Thread::new::thread_start ()
#50 0x00007ffff7f994cf in start_thread () from /usr/lib/libpthread.so.0
#51 0x00007ffff7eae2d3 in clone () from /usr/lib/libc.so.6
@tavianator (Author)

It works if I set RAYON_NUM_THREADS=2, so I think it's a jwalk bug: it seems to assume the pool has at least two threads.
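
For context, here is a minimal reproduction sketch (not part of the original report). It pins rayon's global pool to one thread, which is effectively what RAYON_NUM_THREADS=1 or a single-core machine does, and then walks an empty directory with jwalk; the jwalk calls are assumed from the 0.4 API and may differ slightly.

use jwalk::WalkDir;

fn main() {
    // Constrain the global rayon pool to a single worker thread,
    // mirroring RAYON_NUM_THREADS=1 / a one-core machine.
    rayon::ThreadPoolBuilder::new()
        .num_threads(1)
        .build_global()
        .expect("failed to build global rayon pool");

    std::fs::create_dir_all("empty").unwrap();

    // With only one worker, jwalk's ordered_queue consumer and the rayon
    // producer appear to yield to each other forever (see the traces above).
    for entry in WalkDir::new("empty") {
        println!("{:?}", entry.map(|e| e.path()));
    }

    println!("walk finished"); // never printed when the hang occurs
}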

@bootandy (Owner)

Hmm, that sucks.

Good find.

I can reproduce this by setting RAYON_NUM_THREADS=1.

I wonder if I can hack around this problem for now.

Thanks.

@bootandy (Owner)

I've looked at the jwalk code and I'm sure there's a multithreading edge-case bug in there, but I can't spot it.

I think I'll use the num_cpus library to detect a single-CPU machine and force threads=1 if the user hasn't set it to something else, along the lines of the sketch below. It's an ugly solution, but I can't think of anything else.

What do you think, @tavianator?
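
A minimal sketch of that workaround, using illustrative names rather than dust's actual internals (the hypothetical user_threads option stands in for the -t flag); num_cpus::get() is the real num_cpus API:

// Hypothetical helper, not dust's real code: decide how many walker threads to use.
fn effective_threads(user_threads: Option<usize>) -> Option<usize> {
    match user_threads {
        Some(n) => Some(n),                      // an explicit -tN always wins
        None if num_cpus::get() == 1 => Some(1), // single core: force 1 to avoid the hang
        None => None,                            // otherwise keep the default pool size
    }
}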

@bootandy (Owner)

Fixed in new version.

@tavianator (Author)

Yeah, I think that workaround makes sense. Using 1 thread rather than 2 would have been fine, I think, since -t1 explicitly works fine.
