add concurrent_hash_map concurrent erase defrag test #936
lukaszstolarczuk merged 1 commit into pmem:master
Conversation
lukaszstolarczuk
left a comment
Reviewable status: 0 of 1 files reviewed, 6 unresolved discussions (waiting on @JanDorniak99)
a discussion (no related file):
it's better to add ref. #xyz or fixes #xyz already in the commit message (see git workflow guideline)
tests/concurrent_hash_map_defrag/concurrent_hash_map_defrag.cpp, line 79 at r1 (raw file):
/*
 * insert_and_erase_test_str -- (internal) test insert and erase operations
please fix this description and also add some to the new test
tests/concurrent_hash_map_defrag/concurrent_hash_map_defrag.cpp, line 194 at r1 (raw file):
}
if (reversed_order)
a short comment perhaps, why you do this, pls (unless it will be described in the test case description on top of this function)
tests/concurrent_hash_map_defrag/concurrent_hash_map_defrag.cpp, line 195 at r1 (raw file):
if (reversed_order) std::reverse(elements.begin() + 100, elements.end());
what's this magic number (100)?
tests/concurrent_hash_map_defrag/concurrent_hash_map_defrag.cpp, line 200 at r1 (raw file):
for (ptrdiff_t i = 0; i < 10; i++) {
	threads.emplace_back([&, i]() {
		ptrdiff_t start_offset = i == 0 ? 100 : 0;
put this conditional if in parentheses to ease the reading
tests/concurrent_hash_map_defrag/concurrent_hash_map_defrag.cpp, line 207 at r1 (raw file):
static_cast<ptrdiff_t>( NUMBER_ITEMS_INSERT) / 10);
make this a const perhaps, like in the other test in this file
igchor
left a comment
Reviewed 1 of 1 files at r1.
Reviewable status: all files reviewed, 8 unresolved discussions (waiting on @JanDorniak99)
tests/concurrent_hash_map_defrag/concurrent_hash_map_defrag.cpp, line 204 at r1 (raw file):
elements.begin(), start_offset + i *
Shouldn't you also skip 100 first entries here? Anyway, I think it would be clearer to just remove the first 100 entries from elements (or just to store an iterator to the 100th element and use it here) instead of conditionally calculating i.
tests/concurrent_hash_map_defrag/concurrent_hash_map_defrag.cpp, line 236 at r1 (raw file):
} }
You can also verify size of the map here.
Codecov Report
@@ Coverage Diff @@
## master #936 +/- ##
==========================================
- Coverage 96.11% 95.91% -0.20%
==========================================
Files 48 48
Lines 6122 6122
==========================================
- Hits 5884 5872 -12
- Misses 238 250 +12
8e41bc2 to 4067630
JanDorniak99
left a comment
Reviewable status: 0 of 1 files reviewed, 8 unresolved discussions (waiting on @igchor and @lukaszstolarczuk)
a discussion (no related file):
Previously, lukaszstolarczuk (Łukasz Stolarczuk) wrote…
it's better to add ref. #xyz or fixes #xyz already in the commit message (see git workflow guideline)
Done.
tests/concurrent_hash_map_defrag/concurrent_hash_map_defrag.cpp, line 79 at r1 (raw file):
Previously, lukaszstolarczuk (Łukasz Stolarczuk) wrote…
please fix this description and also add some to the new test
Done.
tests/concurrent_hash_map_defrag/concurrent_hash_map_defrag.cpp, line 194 at r1 (raw file):
Previously, lukaszstolarczuk (Łukasz Stolarczuk) wrote…
a short comment perhaps, why you do this, pls (unless it will be described in the test case description on top of this function)
Done.
tests/concurrent_hash_map_defrag/concurrent_hash_map_defrag.cpp, line 195 at r1 (raw file):
Previously, lukaszstolarczuk (Łukasz Stolarczuk) wrote…
what's this magic number (100)?
Done.
tests/concurrent_hash_map_defrag/concurrent_hash_map_defrag.cpp, line 200 at r1 (raw file):
Previously, lukaszstolarczuk (Łukasz Stolarczuk) wrote…
put this conditional if in parentheses to ease the reading
Done.
tests/concurrent_hash_map_defrag/concurrent_hash_map_defrag.cpp, line 204 at r1 (raw file):
Previously, igchor (Igor Chorążewicz) wrote…
Shouldn't you also skip 100 first entries here? Anyway, I think it would be clearer to just remove the first 100 entries from elements (or just to store an iterator to the 100th element and use it here) instead of conditionally calculating i.
Done.
tests/concurrent_hash_map_defrag/concurrent_hash_map_defrag.cpp, line 207 at r1 (raw file):
Previously, lukaszstolarczuk (Łukasz Stolarczuk) wrote…
static_cast<ptrdiff_t>( NUMBER_ITEMS_INSERT) / 10);
make this a const perhaps, like in the other test in this file
Done.
tests/concurrent_hash_map_defrag/concurrent_hash_map_defrag.cpp, line 236 at r1 (raw file):
Previously, igchor (Igor Chorążewicz) wrote…
You can also verify size of the map here.
Done.
igchor
left a comment
Reviewable status: 0 of 1 files reviewed, 7 unresolved discussions (waiting on @igchor, @JanDorniak99, and @lukaszstolarczuk)
tests/concurrent_hash_map_defrag/concurrent_hash_map_defrag.cpp, line 177 at r2 (raw file):
*/
void erase_defrag_concurrent_test(nvobj::pool<root> &pop, bool reversed_order)
Could you make number of thread some parameter to this test?
It would be good to have test where there are multiple threads erasing (like you have now) and a case, where there is only one thread erasing (especially for reverse order).
4067630 to 203b918
JanDorniak99
left a comment
Reviewable status: 0 of 1 files reviewed, 7 unresolved discussions (waiting on @igchor and @lukaszstolarczuk)
tests/concurrent_hash_map_defrag/concurrent_hash_map_defrag.cpp, line 177 at r2 (raw file):
Previously, igchor (Igor Chorążewicz) wrote…
Could you make number of thread some parameter to this test?
It would be good to have test where there are multiple threads erasing (like you have now) and a case, where there is only one thread erasing (especially for reverse order).
Done.
igchor
left a comment
Reviewed 1 of 1 files at r3.
Reviewable status: all files reviewed, 6 unresolved discussions (waiting on @lukaszstolarczuk)
lukaszstolarczuk
left a comment
besides two small issues, it looks good to me ;)
Reviewable status: all files reviewed, 2 unresolved discussions (waiting on @JanDorniak99)
tests/concurrent_hash_map_defrag/concurrent_hash_map_defrag.cpp, line 180 at r3 (raw file):
size_t erase_threads_n)
{
	const size_t NUMBER_ITEMS_ERASE = 1000 * erase_threads_n;
shouldn't you use here BATCH_SIZE instead of a 1000?
tests/concurrent_hash_map_defrag/concurrent_hash_map_defrag.cpp, line 190 at r3 (raw file):
std::string str = " ";
for (size_t i = 0; i < NUMBER_ITEMS_ERASE + 100; i++) {
this 100 is still a magic number, make it const, pls
203b918 to 889f819
JanDorniak99
left a comment
Reviewable status: 0 of 1 files reviewed, 2 unresolved discussions (waiting on @igchor and @lukaszstolarczuk)
tests/concurrent_hash_map_defrag/concurrent_hash_map_defrag.cpp, line 180 at r3 (raw file):
Previously, lukaszstolarczuk (Łukasz Stolarczuk) wrote…
shouldn't you use here BATCH_SIZE instead of a 1000?
Done.
tests/concurrent_hash_map_defrag/concurrent_hash_map_defrag.cpp, line 190 at r3 (raw file):
Previously, lukaszstolarczuk (Łukasz Stolarczuk) wrote…
this 100 is still a magic number, make it const, pls
Done.
lukaszstolarczuk
left a comment
Reviewed 1 of 1 files at r4.
Reviewable status: complete! all files reviewed, all discussions resolved
#892