
perf: remove unneeded cache memory allocations #1226

Closed

benbenw (Contributor) commented Jun 26, 2018

When the cache is enabled through 'preferQueryMode', the driver allocates more memory than when the cache is disabled (transient allocations, not the cache's own memory usage).

The overhead comes from the cache key's memory size estimation (CachedQuery#getSize): on every cache borrow/release, the size is estimated by generating a string from the key and computing its length.

In simple tests this accounts for 60% of the allocated memory.

This patch adds a custom method to BaseQueryKey that estimates the size without the extra allocations.

Note that the new estimated size may differ slightly from the one computed before this patch.
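The idea behind the patch can be shown with a minimal sketch (the class, field names, and size constants below are illustrative, not the actual pgjdbc BaseQueryKey code): the old path materializes a throwaway String on every borrow/release just to measure the key, while the new path sums the fields' sizes directly.

```java
// Hypothetical sketch of the idea behind the patch; QueryKey and the size
// constants are illustrative, not the actual pgjdbc BaseQueryKey code.
public class SizeEstimateDemo {

  interface CanEstimateSize {
    long getSize();
  }

  static final class QueryKey implements CanEstimateSize {
    final String sql;
    final boolean isParameterized;

    QueryKey(String sql, boolean isParameterized) {
      this.sql = sql;
      this.isParameterized = isParameterized;
    }

    @Override
    public String toString() {
      return "QueryKey{sql='" + sql + "', isParameterized=" + isParameterized + "}";
    }

    // Old approach: allocates a transient String on every cache borrow/release.
    long sizeViaToString() {
      return toString().length() * 2L; // 2 bytes per char
    }

    // New approach: estimate from the fields directly, with no allocation.
    @Override
    public long getSize() {
      return 16L                // rough object-header overhead (illustrative)
          + sql.length() * 2L   // char payload of the sql string
          + 1L;                 // the boolean flag
    }
  }

  public static void main(String[] args) {
    QueryKey key = new QueryKey("SELECT 1", true);
    System.out.println(key.sizeViaToString()); // allocates a string to measure
    System.out.println(key.getSize());         // allocation-free
  }
}
```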

codecov-io commented Jun 26, 2018

Codecov Report

Merging #1226 into master will increase coverage by 0.02%.
The diff coverage is 88.88%.

@@             Coverage Diff              @@
##             master    #1226      +/-   ##
============================================
+ Coverage     69.24%   69.26%   +0.02%     
- Complexity     3811     3815       +4     
============================================
  Files           171      171              
  Lines         15790    15798       +8     
  Branches       2583     2585       +2     
============================================
+ Hits          10933    10942       +9     
- Misses         3618     3619       +1     
+ Partials       1239     1237       -2

@benbenw benbenw force-pushed the benbenw:key-estimated-size-allocations branch from 78af5b5 to 3d0eade Jun 26, 2018


@benbenw benbenw force-pushed the benbenw:key-estimated-size-allocations branch from 3d0eade to f118ae4 Jun 26, 2018

vlsi (Member) commented Jun 26, 2018

Interesting catch.
What do you think of enforcing a rule that the Object key must either be a String or implement CanEstimateSize?

benbenw (Contributor, Author) commented Jun 26, 2018

There's something like that in CachedQueryCreateAction#create

assert key instanceof String || key instanceof BaseQueryKey
: "Query key should be String or BaseQueryKey. Given " + key.getClass() + ", sql: "
+ String.valueOf(key);

vlsi (Member) commented Jun 26, 2018

That's great.

What I mean is something like:

if (key instanceof String) {
  queryLength = ((String) key).length() * 2L /* 2 bytes per char */;
} else { // must implement CanEstimateSize
  queryLength = ((CanEstimateSize) key).getSize();
}
benbenw (Contributor, Author) commented Jun 26, 2018

You'd like BaseQueryKey to implement CanEstimateSize?

vlsi (Member) commented Jun 26, 2018

Well, I think it would make sense.
I'm not sure whether SimpleQuery should also implement CanEstimateSize, or if the current approach of using the same size as the key is good enough.

benbenw (Contributor, Author) commented Jun 26, 2018

OK, I understand your comment: you want a better memory size computation than the current one.
That should be another patch. This PR only deals with the unneeded memory allocations when evaluating the key size.

vlsi added a commit to vlsi/pgjdbc that referenced this pull request Jun 26, 2018

perf: avoid BaseQueryKey.toString in CachedQuery.getSize
LruCache uses .getSize to limit the size of the cache, so this method
should refrain from doing memory allocations.

closes pgjdbc#1226

vlsi added a commit to vlsi/pgjdbc that referenced this pull request Jun 27, 2018

perf: avoid BaseQueryKey.toString in CachedQuery.getSize
LruCache uses .getSize to limit the size of the cache, so this method
should refrain from doing memory allocations.

closes pgjdbc#1226
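The commit message notes that LruCache uses .getSize to bound the cache's total size, which is why the method runs on the hot borrow/release path and must not allocate. A minimal sketch of that pattern follows; the class name, the fallback size, and the eviction details are hypothetical, not pgjdbc's actual LruCache implementation.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal size-bounded LRU cache in the spirit of the commit message.
// Names and size constants are illustrative, not pgjdbc's actual LruCache.
public class SizeBoundedLru<K, V> {

  public interface CanEstimateSize {
    long getSize();
  }

  private final long maxSizeBytes;
  private long currentSizeBytes;
  // Access-ordered map: iteration starts at the least recently used entry.
  private final LinkedHashMap<K, V> map = new LinkedHashMap<>(16, 0.75f, true);

  public SizeBoundedLru(long maxSizeBytes) {
    this.maxSizeBytes = maxSizeBytes;
  }

  // Called on every put and eviction; this is why it must not allocate.
  private static long sizeOf(Object o) {
    if (o instanceof CanEstimateSize) {
      return ((CanEstimateSize) o).getSize();
    }
    if (o instanceof String) {
      return ((String) o).length() * 2L; // 2 bytes per char
    }
    return 16L; // rough fallback per-object guess
  }

  public void put(K key, V value) {
    V old = map.put(key, value);
    if (old != null) {
      currentSizeBytes -= sizeOf(key) + sizeOf(old);
    }
    currentSizeBytes += sizeOf(key) + sizeOf(value);
    // Evict least-recently-used entries until the budget is met.
    Iterator<Map.Entry<K, V>> it = map.entrySet().iterator();
    while (currentSizeBytes > maxSizeBytes && it.hasNext()) {
      Map.Entry<K, V> eldest = it.next();
      currentSizeBytes -= sizeOf(eldest.getKey()) + sizeOf(eldest.getValue());
      it.remove();
    }
  }

  public V get(K key) {
    return map.get(key);
  }

  public int entryCount() {
    return map.size();
  }
}
```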
bokken (Member) commented Jun 27, 2018

Would it make sense for BaseQueryKey to implement CanEstimateSize? I think this is what @vlsi was asking.
This is not about getting a better estimate; it simply changes CachedQuery to enforce that the key is either a String or a CanEstimateSize. The CachedQuery implementation of getSize then changes to:

long queryLength = 0;
if (key instanceof CanEstimateSize) {
  queryLength = ((CanEstimateSize) key).getSize();
} else if (key instanceof String) {
  queryLength = ((String) key).length() * 2L;
} else {
  queryLength = String.valueOf(key).length() * 2L;
}
...
bokken (Member) commented Jun 27, 2018

Should size estimates account for the byte representation used in JDK 9+?
Or should consumers simply increase/double the configured maximum when running on newer versions of Java, to account for the more efficient string representation?
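For context on the question above: since JDK 9's compact strings (JEP 254), a String whose characters all fit in Latin-1 is backed by one byte per char rather than two, so a flat 2-bytes-per-char estimate overestimates on newer JVMs. A hedged sketch of a representation-aware estimate follows; the helper names are hypothetical and not part of pgjdbc.

```java
// Illustrative sketch of the trade-off raised here: with JDK 9+ compact
// strings (JEP 254), Latin-1-only strings take 1 byte per char instead of 2.
// Helper names are hypothetical, not part of pgjdbc.
public class StringSizeEstimate {

  // True when every char fits in Latin-1, i.e. the compact representation.
  static boolean isLatin1(String s) {
    for (int i = 0; i < s.length(); i++) {
      if (s.charAt(i) > 0xFF) {
        return false;
      }
    }
    return true;
  }

  // Payload estimate that optionally accounts for compact strings.
  static long estimatedPayloadBytes(String s, boolean compactStrings) {
    if (compactStrings && isLatin1(s)) {
      return s.length();      // 1 byte per char (LATIN1 coder)
    }
    return s.length() * 2L;   // 2 bytes per char (UTF-16 coder)
  }

  public static void main(String[] args) {
    System.out.println(estimatedPayloadBytes("SELECT 1", true));
    System.out.println(estimatedPayloadBytes("SELECT 1", false));
  }
}
```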

benbenw (Contributor, Author) commented Jun 27, 2018

While I agree that a better memory size estimate is probably needed (especially for the value part of the cache), it should be done in another patch.
I don't have any problem with making BaseQueryKey implement CanEstimateSize.

@vlsi vlsi closed this in #1227 Jun 29, 2018

vlsi added a commit that referenced this pull request Jun 29, 2018

perf: avoid BaseQueryKey.toString in CachedQuery.getSize (#1227)
LruCache uses .getSize to limit the size of the cache, so this method
should refrain from doing memory allocations.

closes #1226


rhavermans added a commit to bolcom/pgjdbc that referenced this pull request Jul 13, 2018

perf: avoid BaseQueryKey.toString in CachedQuery.getSize (pgjdbc#1227)
LruCache uses .getSize to limit the size of the cache, so this method
should refrain from doing memory allocations.

closes pgjdbc#1226
