
perf: avoid BaseQueryKey.toString in CachedQuery.getSize #1227

Merged
vlsi merged 4 commits into pgjdbc:master from vlsi:query_cache_allocates_memory on Jun 29, 2018

Conversation

5 participants
@vlsi
Member

vlsi commented Jun 26, 2018

LruCache uses .getSize to limit the size of the cache, so this method
should refrain from doing memory allocations.

closes #1226
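
For context, a minimal sketch of the problem being fixed (class and field names are illustrative, not the actual pgjdbc sources): computing the size via toString() allocates a fresh String on every call, while summing the lengths of fields that already exist allocates nothing.

```java
// Hypothetical sketch; names are illustrative, not the actual pgjdbc code.
public class GetSizeSketch {
  private final String sql = "SELECT * FROM users WHERE id = ?";

  // Before: stringifying the object builds a new String on every call,
  // so each cache insertion and eviction produces avoidable garbage.
  long getSizeAllocating() {
    return toString().length() * 2L; // allocates a fresh String each time
  }

  // After: estimate the size from existing fields, allocation-free.
  long getSizeAllocationFree() {
    long size = 16;            // rough per-object overhead (assumption)
    size += sql.length() * 2L; // ~2 bytes per char (UTF-16 String)
    return size;
  }

  @Override
  public String toString() {
    return "GetSizeSketch{sql=" + sql + "}"; // concatenation allocates
  }

  public static void main(String[] args) {
    GetSizeSketch q = new GetSizeSketch();
    System.out.println(q.getSizeAllocationFree()); // no allocation on this path
  }
}
```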

@vlsi vlsi added this to the 42.2.3 milestone Jun 26, 2018

@vlsi vlsi added the needs-review label Jun 26, 2018

@benbenw
Contributor

benbenw commented Jun 26, 2018

@vlsi, what's wrong with PR #1226?
Why create a new one?

@pmouawad

pmouawad commented Jun 26, 2018

@vlsi, this PR looks very similar, if not identical, to #1226. Shouldn't you have accepted the initial PR, giving credit to the contributor, and then adapted it with a follow-up PR?

Not sure such a practice will encourage future contributions.

Just my 2 cents.

Regards

benbenw added some commits Jun 26, 2018

perf: avoid BaseQueryKey.toString in CachedQuery.getSize
LruCache uses .getSize to limit the size of the cache, so this method
should refrain from doing memory allocations.

closes #1226

@vlsi vlsi force-pushed the vlsi:query_cache_allocates_memory branch from bbf8bd3 to 3d7507b Jun 27, 2018

@vlsi
Member Author

vlsi commented Jun 27, 2018

@benbenw, the difference is exactly as I said in #1226 (comment) and #1226 (comment):

CachedQuery does not need to depend on BaseQueryKey.
I have absolutely no idea why you think that should come as a separate patch.

@pmouawad, indeed. I've updated the author tags; sorry for missing that.

@codecov-io

codecov-io commented Jun 27, 2018

Codecov Report

Merging #1227 into master will increase coverage by 0.09%.
The diff coverage is 57.89%.

@@             Coverage Diff              @@
##             master    #1227      +/-   ##
============================================
+ Coverage     69.24%   69.33%   +0.09%     
- Complexity     3811     3816       +5     
============================================
  Files           171      171              
  Lines         15790    15785       -5     
  Branches       2583     2581       -2     
============================================
+ Hits          10933    10945      +12     
+ Misses         3618     3609       -9     
+ Partials       1239     1231       -8
@@ -55,7 +56,12 @@ public int getExecuteCount() {

@Override
public long getSize() {

@bokken

bokken Jun 27, 2018

Member

How frequently is this method called?
Would it make more sense to calculate this value once (in the constructor) and simply return that calculated value each time?

@vlsi

vlsi Jun 27, 2018

Author Member

Twice in the lifetime of a query: once when adding the query to the cache, then once when removing it.
Note: this method is trivial, and I see no reason to cache its results.

The one with "returning columns" might indeed take some time; however, the number of binds is limited to 1<<16 or so, so it should still be "good enough".

@benbenw

benbenw Jun 27, 2018

Contributor

I'm not an expert on the code base, but are you sure it's twice in the lifetime?
It seems to be twice per execution of the cached query (borrow/release); that's why it generates so many allocations.
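
To make that borrow/release pattern concrete, here is a self-contained sketch of a size-bounded cache that consults getSize() on both operations; the cache shape is hypothetical, not pgjdbc's actual LruCache API. If getSize() allocates, every execution of a cached query allocates twice.

```java
// Hypothetical sketch: a weight-tracking cache that calls getSize() on
// every borrow and every release, i.e. twice per execution of a cached query.
import java.util.LinkedHashMap;
import java.util.Map;

public class BorrowReleaseSketch {
  interface Sized { long getSize(); }

  static class Cache<K, V extends Sized> {
    private final Map<K, V> map = new LinkedHashMap<>();
    private long weight; // running total of entry sizes

    V borrow(K key) {
      V v = map.remove(key);
      if (v != null) {
        weight -= v.getSize(); // called on every borrow
      }
      return v;
    }

    void release(K key, V v) {
      weight += v.getSize();   // called again on every release
      map.put(key, v);
    }
  }

  public static void main(String[] args) {
    Cache<String, Sized> cache = new Cache<>();
    Sized query = () -> 42L;
    cache.release("q1", query);          // getSize() call #1
    Sized borrowed = cache.borrow("q1"); // getSize() call #2
    System.out.println(borrowed.getSize());
  }
}
```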

@bokken

bokken Jun 27, 2018

Member

So that means 2x the number of times the "CachedQuery" is used over the life of the connection?

It was the "returning columns" bit that caused me some concern. Is this degree of exactness required? Would it be close enough to say columnNames.length * 32 (i.e. assume 16 chars on average)?

@vlsi

vlsi Jun 27, 2018

Author Member

That's right, I stand corrected.

@vlsi

vlsi Jun 27, 2018

Author Member

Technically speaking, Arrays.equals would have to be called anyway, so iterating over columnNames one more time would not hurt much.

What do you think of just adding a cached-size field to QueryWithReturningColumnsKey?

@bokken

bokken Jun 27, 2018

Member

> What do you think of just adding a cached-size field to QueryWithReturningColumnsKey?

And perhaps only for the portion representing the column names (i.e., what is unique to the class).
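
As a sketch of that idea (the class shape is illustrative, not the actual QueryWithReturningColumnsKey), the column-name contribution can be computed once in the constructor and reused by getSize():

```java
// Hypothetical sketch of a key that caches the column-name portion of its
// size at construction time; not the actual pgjdbc class.
public class ReturningColumnsKeySketch {
  private final String sql;
  private final String[] columnNames;
  private final long columnNamesSize; // computed once in the constructor

  ReturningColumnsKeySketch(String sql, String[] columnNames) {
    this.sql = sql;
    this.columnNames = columnNames;
    long s = 0;
    for (String name : columnNames) {
      s += name.length() * 2L; // ~2 bytes per char, summed exactly once
    }
    this.columnNamesSize = s;
  }

  public long getSize() {
    // Base query text plus the pre-computed column-name contribution;
    // no per-call iteration or allocation.
    return sql.length() * 2L + columnNamesSize;
  }

  public static void main(String[] args) {
    ReturningColumnsKeySketch key = new ReturningColumnsKeySketch(
        "INSERT INTO t (a, b) VALUES (?, ?)",
        new String[] {"id", "created_at"});
    System.out.println(key.getSize());
  }
}
```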

@vlsi vlsi removed the needs-review label Jun 28, 2018

@vlsi vlsi merged commit f46c86a into pgjdbc:master Jun 29, 2018

2 checks passed

codecov/project 69.33% (+0.09%) compared to b7fd9f3
continuous-integration/travis-ci/pr The Travis CI build passed

vlsi added a commit that referenced this pull request Jun 29, 2018

perf: avoid BaseQueryKey.toString in CachedQuery.getSize (#1227)
LruCache uses .getSize to limit the size of the cache, so this method
should refrain from doing memory allocations.

closes #1226

rhavermans added a commit to bolcom/pgjdbc that referenced this pull request Jul 13, 2018

perf: avoid BaseQueryKey.toString in CachedQuery.getSize (pgjdbc#1227)
LruCache uses .getSize to limit the size of the cache, so this method
should refrain from doing memory allocations.

closes pgjdbc#1226
