
Conversation

@bitfaster (Owner) commented Jan 26, 2024

MemoryCache enqueues cache maintenance to the thread pool asynchronously. This leads to non-deterministic results in the hit rate tests, depending on whether cache maintenance has run.

ConcurrentLfu tests are run with a foreground scheduler (no background maintenance) to avoid this problem.
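For comparison, a minimal sketch of the foreground-scheduled setup, assuming the ConcurrentLfu constructor overload that accepts an IScheduler and the ForegroundScheduler type from BitFaster.Caching.Scheduler; the exact wiring in the test project may differ:

```csharp
using System.Collections.Generic;
using BitFaster.Caching.Lfu;
using BitFaster.Caching.Scheduler;

// ForegroundScheduler runs maintenance inline on the calling thread, so
// every lookup observes a fully maintained cache and the hit rate is repeatable.
var lfu = new ConcurrentLfu<long, long>(
    1,                              // concurrency level
    128,                            // capacity (illustrative)
    new ForegroundScheduler(),      // no background maintenance
    EqualityComparer<long>.Default);

long value = lfu.GetOrAdd(42, k => k);
```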

To make the comparison fair, wait for all thread pool work to complete after each MemoryCache read. This gives stable results with a small runtime penalty.
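A minimal sketch of that drain step, assuming .NET Core 3.0+ (where ThreadPool.PendingWorkItemCount is available); the DrainThreadPool helper name and loop structure are illustrative rather than the exact code in this PR:

```csharp
using System.Threading;

public static class ThreadPoolDrain
{
    // Spin until the thread pool has no queued work and every worker thread
    // is idle, so any cache maintenance MemoryCache posted to the pool has
    // finished before the next read is recorded.
    public static void DrainThreadPool()
    {
        ThreadPool.GetMaxThreads(out int maxWorkers, out _);

        while (true)
        {
            ThreadPool.GetAvailableThreads(out int freeWorkers, out _);

            if (ThreadPool.PendingWorkItemCount == 0 && freeWorkers == maxWorkers)
            {
                return;
            }

            // Sleep(0) yields the rest of the time slice but keeps the thread
            // runnable, avoiding the ~15ms minimum delay Sleep(1) adds per call.
            Thread.Sleep(0);
        }
    }
}
```

Calling this after every MemoryCache lookup in the analysis loop means each recorded hit or miss reflects a fully maintained cache.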

Result with fix:

[image: results glimpse]

Result before fix:

[image: results glimpse]

Thread.Sleep(1) increases the test runtime for glimpse from 0.31 secs to about 15 secs, i.e. roughly 4700% slower. Repeatedly calling Thread.Sleep(0) instead is only about 25% slower.

[image: results wiki]

[image: results arc database]

[image: results arc search]

[image: results arc oltp]

@coveralls commented Jan 26, 2024

Coverage Status: 99.164% (+0.08%) from 99.086% when pulling 58b2d9d on users/alexpeck/hittp into a60f44e on main.

@bitfaster changed the title from "Drain threads for memory cache hit rate" to "Drain threads for memory cache hit rate analysis" Jan 27, 2024
@bitfaster bitfaster marked this pull request as ready for review February 6, 2024 03:37
@bitfaster bitfaster merged commit 58eaa67 into main Mar 9, 2024
@bitfaster bitfaster deleted the users/alexpeck/hittp branch March 9, 2024 02:38
