Unexpected behavior on unit test with assign/erase/assign #31

Closed
brandon-kohn opened this issue Dec 26, 2017 · 4 comments

Comments

@brandon-kohn

Apologies if this turns out to be due to my own mishandling of the library.

I've written a test in some of my own code to determine which container I should use for a concurrent hash map. The test is as follows (adapted to junction):

template <typename Pool>
void bash_junction_map(Pool& pool, const char* name)
{
	using namespace ::testing;
	using namespace stk;

	junction::ConcurrentMap_Leapfrog<int, int> m;

	// Pre-populate keys [2, nItems + 2) with i * 10.
	auto nItems = 10000;
	for (auto i = 2; i < nItems + 2; ++i)
	{
		m.assign(i, i * 10);
	}

	using future_t = typename Pool::template future<void>;
	std::vector<future_t> fs;
	fs.reserve(100000);
	{
		GEOMETRIX_MEASURE_SCOPE_TIME(name);
		// Hammer each key from 100000 concurrent tasks: assign, erase, then assign again.
		for (unsigned i = 2; i < 100000 + 2; ++i) {
			fs.emplace_back(pool.send([&m, i]() -> void
			{
				// nsubwork is a repeat-count constant defined elsewhere in the test harness (not shown).
				for (int q = 0; q < nsubwork; ++q)
				{
					m.assign(i, i * 20);
					m.erase(i);
					m.assign(i, i * 20);
				}
			}));
		}
		boost::for_each(fs, [](const future_t& f) { f.wait(); });
	}

	// After all tasks complete, every key should hold its final assigned value, i * 20.
	for (auto i = 2; i < 100000 + 2; ++i)
	{
		auto r = m.find(i);
		EXPECT_EQ(i * 20, r.getValue());
	}
}

My understanding is that an assign followed by an erase and then another assign of the same value should leave the key mapped to that value, so the operations ought to essentially cancel each other out. However, I'm getting spurious failures with junction:

Note: Google Test filter = timing.work_stealing_thread_pool_moodycamel_concurrentQ_bash_junction
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from timing
[ RUN      ] timing.work_stealing_thread_pool_moodycamel_concurrentQ_bash_junction
G:\Projects\simulation_suite\test\fiber_timing_tests.cpp(150): error: Expected equality of these values:
  i * 20
    Which is: 1032520
  r.getValue()
    Which is: 0
G:\Projects\simulation_suite\test\fiber_timing_tests.cpp(150): error: Expected equality of these values:
  i * 20
    Which is: 540200
  r.getValue()
    Which is: 0
G:\Projects\simulation_suite\test\fiber_timing_tests.cpp(150): error: Expected equality of these values:
  i * 20
    Which is: 1032520
  r.getValue()
    Which is: 0
[  FAILED  ] timing.work_stealing_thread_pool_moodycamel_concurrentQ_bash_junction (35310 ms)
[----------] 1 test from timing (35311 ms total)

[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (35313 ms total)
[  PASSED  ] 0 tests.
[  FAILED  ] 1 test, listed below:
[  FAILED  ] timing.work_stealing_thread_pool_moodycamel_concurrentQ_bash_junction

 1 FAILED TEST
Press any key to continue . . .

Are my expectations incorrect?
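
For reference, the invariant the final check relies on can be written as a minimal single-threaded sketch (a hypothetical standalone example, not part of the original test; the include path is assumed from the junction README):

#include <junction/ConcurrentMap_Leapfrog.h>
#include <cassert>

int main()
{
	// assign -> erase -> assign of the same key/value should leave the key
	// mapped to that value once all three operations have completed.
	junction::ConcurrentMap_Leapfrog<int, int> m;
	m.assign(2, 40);
	m.erase(2);
	m.assign(2, 40);
	auto r = m.find(2);
	assert(r.getValue() == 40); // the final assign should win
	return 0;
}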

@preshing
Owner

preshing commented Dec 27, 2017

Hmm, that's interesting. Could be a bug. Could you try ConcurrentMap_Linear and ConcurrentMap_Grampa?
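
For illustration, switching variants only requires changing the map type; the include paths here are assumed from the junction sources, as a minimal sketch:

#include <junction/ConcurrentMap_Linear.h>
#include <junction/ConcurrentMap_Grampa.h>

junction::ConcurrentMap_Linear<int, int> mLinear;	// alternative variant to test
junction::ConcurrentMap_Grampa<int, int> mGrampa;	// alternative variant to test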

@brandon-kohn
Author

brandon-kohn commented Dec 27, 2017

Sure. I've now tried both of those as well, and it happens with all three. Grampa appears to be the worst, with many failures, while Linear seems to fail less often than Leapfrog, though I don't have enough samples to say that with any confidence. Grampa stood out in that it produced over a dozen failures, whereas the others had 0-3 over 200 runs.

@preshing
Owner

It was indeed a bug in the maps. I submitted a fix. Thanks a lot for finding this repro case! Feel free to reopen if you have further issues.

@preshing
Owner

I also noticed you don't seem to be calling junction::DefaultQSBR.update() periodically from each thread that manipulates the map. You might want to do so, or else you will leak memory as mentioned here. Sorry the documentation isn't better.
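
A rough sketch of that per-thread QSBR bookkeeping follows, assuming the context-management calls (createContext, update, destroyContext) and the header path from the junction samples; check the library headers for the exact API:

#include <junction/QSBR.h>
#include <junction/ConcurrentMap_Leapfrog.h>

void worker(junction::ConcurrentMap_Leapfrog<int, int>& m, int key)
{
	// Register this thread with the QSBR system before touching the map.
	junction::QSBR::Context ctx = junction::DefaultQSBR.createContext();
	for (int q = 0; q < 1000; ++q)
	{
		m.assign(key, key * 20);
		m.erase(key);
		m.assign(key, key * 20);
		// Periodically declare a quiescent state so retired map cells can be reclaimed.
		junction::DefaultQSBR.update(ctx);
	}
	junction::DefaultQSBR.destroyContext(ctx);
}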
