Race between GC_reclaim_block and GC_realloc #240

Closed
ivmai opened this issue Oct 11, 2018 · 1 comment

Comments
@ivmai (Owner) commented Oct 11, 2018

Build: https://travis-ci.org/ivmai/bdwgc/jobs/439999104
Source: latest master
Host: Ubuntu/x64
How to build: ./autogen.sh && ./configure --enable-gc-assertions --enable-handle-fork=manual && make -j check CFLAGS_EXTRA="-fsanitize=thread -D NO_CANCEL_SAFE -D NO_INCREMENTAL -D USE_SPIN_LOCK -fno-omit-frame-pointer -D TEST_FORK_WITHOUT_ATFORK"

Race details:

WARNING: ThreadSanitizer: data race (pid=10411)
Read of size 8 at 0x7f5a7a5594a0 by main thread:
#0 GC_reclaim_block /home/travis/build/ivmai/bdwgc/extra/../reclaim.c:390:23 (libgc.so.1+0x33eb9)
#1 GC_apply_to_all_blocks /home/travis/build/ivmai/bdwgc/extra/../headers.c:330:21 (libgc.so.1+0x193dc)
#2 GC_start_reclaim /home/travis/build/ivmai/bdwgc/extra/../reclaim.c:662 (libgc.so.1+0x193dc)
#3 GC_finish_collection /home/travis/build/ivmai/bdwgc/extra/../alloc.c:1085:5 (libgc.so.1+0x16cf7)
#4 GC_try_to_collect_inner /home/travis/build/ivmai/bdwgc/extra/../alloc.c:553:5 (libgc.so.1+0x155e1)
#5 GC_try_to_collect_general /home/travis/build/ivmai/bdwgc/extra/../alloc.c:1161:14 (libgc.so.1+0x1ae0e)
#6 GC_gcollect /home/travis/build/ivmai/bdwgc/extra/../alloc.c:1185:11 (libgc.so.1+0x1b377)
#7 run_one_test /home/travis/build/ivmai/bdwgc/tests/test.c:1475:10 (lt-gctest+0x4a76e6)
#8 main /home/travis/build/ivmai/bdwgc/tests/test.c:2351:5 (lt-gctest+0x4a89b0)

Previous atomic write of size 8 at 0x7f5a7a5594a0 by thread T6:
#0 __tsan_atomic64_store /local/mnt/workspace/tmp/ubuntu_rel/llvm/utils/release/final/llvm.src/projects/compiler-rt/lib/tsan/rtl/tsan_interface_atomic.cc:567:3 (lt-gctest+0x467b21)
#1 GC_realloc /home/travis/build/ivmai/bdwgc/extra/../mallocx.c:126:11 (libgc.so.1+0x204da)
#2 reverse_test_inner /home/travis/build/ivmai/bdwgc/tests/test.c:766:18 (lt-gctest+0x4a5417)
#3 GC_call_with_gc_active /home/travis/build/ivmai/bdwgc/extra/../pthread_support.c:1452:19 (libgc.so.1+0x43565)
#4 reverse_test_inner /home/travis/build/ivmai/bdwgc/tests/test.c:722:14 (lt-gctest+0x4a58b1)
#5 GC_do_blocking_inner /home/travis/build/ivmai/bdwgc/extra/../pthread_support.c:1392:24 (libgc.so.1+0x3b2e5)
#6 GC_with_callee_saves_pushed /home/travis/build/ivmai/bdwgc/extra/../mach_dep.c:330:3 (libgc.so.1+0x32b6d)
#7 GC_do_blocking /home/travis/build/ivmai/bdwgc/extra/../misc.c:2242:5 (libgc.so.1+0x3b131)
#8 reverse_test /home/travis/build/ivmai/bdwgc/tests/test.c:839:11 (lt-gctest+0x4a7922)
#9 run_one_test /home/travis/build/ivmai/bdwgc/tests/test.c:1533 (lt-gctest+0x4a7922)
#10 thr_run_one_test /home/travis/build/ivmai/bdwgc/tests/test.c:2256:5 (lt-gctest+0x4a8743)
#11 GC_inner_start_routine /home/travis/build/ivmai/bdwgc/pthread_start.c:57:12 (libgc.so.1+0x459b0)
#12 GC_call_with_stack_base /home/travis/build/ivmai/bdwgc/extra/../misc.c:2130:14 (libgc.so.1+0x44b91)
#13 GC_start_routine /home/travis/build/ivmai/bdwgc/extra/../pthread_support.c:1818 (libgc.so.1+0x44b91)
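
For context, the conflicting accesses reduce to the pattern below. This is a hypothetical minimal sketch in C, not the actual bdwgc sources: the struct is a stand-in for the real block header, and AO_store/AO_load are the libatomic_ops primitives bdwgc builds on.

    #include "atomic_ops.h"

    struct hdr { volatile AO_t hb_sz; /* object size held in this block */ };

    /* Writer (the GC_realloc path): resizing in place updates the size
     * field with an atomic store -- this is the "atomic write of size 8"
     * shown in the report above. */
    void writer(struct hdr *hhdr, size_t new_sz)
    {
        AO_store(&hhdr->hb_sz, (AO_t)new_sz);
    }

    /* Reader (the GC_reclaim_block path, before the fix): a plain,
     * non-atomic read of the same field -- the access TSan flags as
     * racing with the store above. */
    size_t reader(struct hdr *hhdr)
    {
        return (size_t)hhdr->hb_sz;
    }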

@ivmai (Owner, Author) commented Oct 11, 2018

+ @hboehm

ivmai added a commit that referenced this issue Feb 26, 2019
Issue #240 (bdwgc).

GC_realloc might be changing the block size while GC_reclaim_block
is examining it.  The change to the size field itself is benign, i.e.
GC_reclaim would work correctly with either value, since we are not
changing the number of objects in the block.  But seeing a half-updated
value (though unlikely to occur in practice) could still be harmful.
Using an unordered atomic fetch of the hb_sz field should solve the
issue; a sketch of the pattern follows the change list below.

* reclaim.c (GC_block_nearly_full, GC_reclaim_small_nonempty_block):
Add sz argument; use sz instead of hhdr->hb_sz.
* reclaim.c (GC_reclaim_clear, GC_reclaim_uninit, GC_reclaim_check):
Skip the assertion about hhdr->hb_sz if THREADS.
* reclaim.c [ENABLE_DISCLAIM] (GC_disclaim_and_reclaim): Likewise.
* reclaim.c [AO_HAVE_load] (GC_reclaim_block): Use AO_load to access
hhdr->hb_sz; add comments.
* reclaim.c (GC_reclaim_block): Pass sz to
GC_reclaim_small_nonempty_block() and GC_block_nearly_full().
* reclaim.c (GC_continue_reclaim, GC_reclaim_all): Pass hhdr->hb_sz to
GC_reclaim_small_nonempty_block().
* reclaim.c [!EAGER_SWEEP && ENABLE_DISCLAIM]
(GC_reclaim_unconditionally_marked): Likewise.
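
The reader-side pattern, as a simplified C sketch (the field and macro names follow the changelog above; the surrounding function is an illustrative stand-in, not the actual patch):

    #include "atomic_ops.h"

    struct hdr { volatile AO_t hb_sz; };

    /* Fetch the block size once, atomically when AO_load is available,
     * then use the local copy for the rest of the sweep of this block
     * instead of re-reading hhdr->hb_sz. */
    static size_t fetch_block_size(struct hdr *hhdr)
    {
    #   ifdef AO_HAVE_load
            /* Unordered atomic fetch: either the old or the new value
             * is acceptable, so only atomicity (no torn read) is
             * needed and no memory ordering is imposed. */
            return (size_t)AO_load(&hhdr->hb_sz);
    #   else
            return (size_t)hhdr->hb_sz; /* plain read without AO_load */
    #   endif
    }

The fetched value is then threaded through GC_block_nearly_full() and GC_reclaim_small_nonempty_block() as the new sz parameter, which is why the changelog adds that argument to both functions.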
ivmai closed this as completed Feb 26, 2019
ivmai added a commit that referenced this issue Feb 27, 2019
Issue #240 (bdwgc).

(Same commit message and change list as above.)