
Improve tsm1 cache performance #7228

Closed
wants to merge 1 commit

Conversation

@stevenh (Contributor) commented Aug 28, 2016

  • Rebased/mergeable
  • Tests pass
  • CHANGELOG.md updated
  • Sign CLA (if not already signed)

Reduce the cache lock contention by widening the cache lock scope in WriteMulti. While this sounds counter-intuitive, the locking was previously:

  • 1 x Read Lock to read the size
  • 1 x Read Lock per set of values written
  • 1 x Write Lock per set of values on race
  • 1 x Write Lock to update the size

We now have:

  • 1 x Write Lock

This also reduces contention on the entries' Values lock, since we now hold the global cache lock for the whole operation.

Move the calculation of the added size to before taking the lock, as it takes time and doesn't need the lock.

This also fixes a race in WriteMulti caused by the lock not being held across the entire operation, which could leave the cache size with an invalid value if Snapshot ran between the addition of the values and the size update.

Fix the cache benchmarks, which were benchmarking the creation of the cache rather than its operation, and add a parallel benchmark for a more real-world scenario; this could still be improved.

Add a fast path in newEntryValues for the new-entry case, which avoids taking the values lock and the other calculations.
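The locking change described above can be sketched as follows. This is a minimal, hypothetical Go sketch, not the actual InfluxDB tsm1 code: the Cache fields, the 8-byte value size, and the WriteMulti signature here are stand-ins chosen for illustration. The point is that the added size is computed before the lock is taken, and a single write lock then covers both the store updates and the size update.

```go
package main

import (
	"fmt"
	"sync"
)

// Cache is a simplified stand-in for the tsm1 cache.
type Cache struct {
	mu    sync.RWMutex
	store map[string][]int64 // stand-in for the real entry type
	size  uint64
}

// WriteMulti takes one write lock for the whole operation. The added
// size is computed before the lock is acquired, since that work does
// not need it; the size update is then atomic with the writes.
func (c *Cache) WriteMulti(values map[string][]int64) {
	var addedSize uint64
	for _, v := range values {
		addedSize += uint64(len(v) * 8) // assumes 8 bytes per value
	}

	c.mu.Lock()
	for k, v := range values {
		c.store[k] = append(c.store[k], v...)
	}
	c.size += addedSize
	c.mu.Unlock()
}

func main() {
	c := &Cache{store: make(map[string][]int64)}
	c.WriteMulti(map[string][]int64{"cpu": {1, 2, 3}})
	fmt.Println(c.size) // 24
}
```

Under the old pattern each of the four lock acquisitions listed above was a separate contention point; here a concurrent Snapshot cannot observe the store and the size out of step.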

@mention-bot

@stevenh, thanks for your PR! By analyzing the annotation information on this pull request, we identified @e-dard to be a potential reviewer.

@e-dard (Contributor) commented Aug 29, 2016

@jwilder


Drop the lock before performing the sort in Cache.Keys().
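The Cache.Keys() change can be sketched as below. This is a hypothetical simplified version, not the actual InfluxDB code: copy the keys under the read lock, release it, and only then sort, so other goroutines are not blocked for the duration of the sort.

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// Cache is a simplified stand-in for the tsm1 cache.
type Cache struct {
	mu    sync.RWMutex
	store map[string][]int64
}

// Keys copies the key set under the read lock, then sorts after the
// lock has been released.
func (c *Cache) Keys() []string {
	c.mu.RLock()
	a := make([]string, 0, len(c.store))
	for k := range c.store {
		a = append(a, k)
	}
	c.mu.RUnlock()

	sort.Strings(a) // the lock is no longer held here
	return a
}

func main() {
	c := &Cache{store: map[string][]int64{"mem": nil, "cpu": nil}}
	fmt.Println(c.Keys()) // [cpu mem]
}
```

The returned slice is a snapshot of the key set at the moment the read lock was held; callers sorting or iterating it afterwards do not contend with writers.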
@stevenh (Contributor, Author) commented Sep 15, 2016

Rebased to eliminate conflict.

newSize := c.size + uint64(totalSz)
if c.maxSize > 0 && newSize+c.snapshotSize > c.maxSize {
    c.mu.RUnlock()
    c.mu.Lock()
Contributor

Why did you switch this from a read-only lock to a read-write lock?

Contributor

In general we want to use RLock whenever possible.

@stevenh (Contributor, Author) Sep 30, 2016

Because taking the lock multiple times was causing significant slowdown due to lock contention. Also, the way this was structured was racy, with the potential for c.size to become totally invalid.
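The race being described can be demonstrated deterministically. The sketch below is hypothetical and simplified, not the actual InfluxDB code: under the old pattern the size was read under an RLock, the lock was released, and a write lock was re-acquired, leaving a window in which a Snapshot could reset c.size before the stale value was written back.

```go
package main

import (
	"fmt"
	"sync"
)

// Cache is a simplified stand-in carrying only the racing field.
type Cache struct {
	mu   sync.RWMutex
	size uint64
}

// snapshot stands in for Cache.Snapshot resetting the size when the
// cache is flushed.
func (c *Cache) snapshot() {
	c.mu.Lock()
	c.size = 0
	c.mu.Unlock()
}

func main() {
	c := &Cache{size: 100}

	// Old pattern: read the size under a read lock, then release it.
	c.mu.RLock()
	newSize := c.size + 24
	c.mu.RUnlock()

	// A Snapshot runs in the window between the two locks.
	c.snapshot()

	// Re-acquire as a write lock and store the now-stale value,
	// clobbering the snapshot's reset.
	c.mu.Lock()
	c.size = newSize
	c.mu.Unlock()

	fmt.Println(c.size) // 124, though after the snapshot it should be 24
}
```

Go's sync.RWMutex has no atomic upgrade from RLock to Lock, so any read-then-upgrade sequence leaves this window; holding a single write lock across the whole operation closes it.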


var a []string
for k := range c.store {
    a = append(a, k)
}
c.mu.RUnlock()
Contributor

Eliminating defers is always good :-)

@jwilder jwilder mentioned this pull request Oct 5, 2016
@jwilder (Contributor) commented Oct 21, 2016

@stevenh This is working well in my tests. Can you rebase?

@jwilder jwilder added this to the 1.1.0 milestone Oct 21, 2016
@jwilder jwilder self-assigned this Oct 24, 2016
@jwilder (Contributor) left a comment

I'll rebase and merge.

@jwilder (Contributor) commented Oct 25, 2016

Fixed via 7cc95ce. Thanks @stevenh!
