
8259886: Improve SSL session cache performance and scalability #2255

Closed
wants to merge 11 commits into from

Conversation

@djelinski (Member) commented Jan 27, 2021

Under certain load, MemoryCache operations take a substantial fraction of the time needed to complete SSL handshakes. This series of patches improves performance characteristics of MemoryCache, at the cost of a functional change: expired entries are no longer guaranteed to be removed before live ones. Unused entries are still removed before used ones, and cache performance no longer depends on its capacity.

First patch in the series contains a benchmark that can be run with make test TEST="micro:CacheBench".
Baseline results before any MemoryCache changes:

Benchmark       (size)  (timeout)  Mode  Cnt     Score    Error  Units
CacheBench.put   20480      86400  avgt   25    83.653 ±  6.269  us/op
CacheBench.put   20480          0  avgt   25     0.107 ±  0.001  us/op
CacheBench.put  204800      86400  avgt   25  2057.781 ± 35.942  us/op
CacheBench.put  204800          0  avgt   25     0.108 ±  0.001  us/op

There's a nonlinear performance drop between 20480 and 204800 entries, probably attributable to CPU cache thrashing. Beyond 204800 entries the cache scales more linearly.

Benchmark results after the 2nd and 3rd patches are pretty similar, so I'll only copy one:

Benchmark       (size)  (timeout)  Mode  Cnt  Score   Error  Units
CacheBench.put   20480      86400  avgt   25  0.146 ± 0.002  us/op
CacheBench.put   20480          0  avgt   25  0.108 ± 0.002  us/op
CacheBench.put  204800      86400  avgt   25  0.150 ± 0.001  us/op
CacheBench.put  204800          0  avgt   25  0.106 ± 0.001  us/op

The third patch improves worst-case times on a mostly idle cache by scattering removal of expired entries over multiple put calls. It does not affect performance of an overloaded cache.

The 4th patch removes all code that clears cached values before handing them over to the GC. This comment stated that clearing values was supposed to be a GC performance optimization. It wasn't. Benchmark results after that commit:

Benchmark       (size)  (timeout)  Mode  Cnt  Score   Error  Units
CacheBench.put   20480      86400  avgt   25  0.113 ± 0.001  us/op
CacheBench.put   20480          0  avgt   25  0.075 ± 0.002  us/op
CacheBench.put  204800      86400  avgt   25  0.116 ± 0.001  us/op
CacheBench.put  204800          0  avgt   25  0.072 ± 0.001  us/op

I wasn't expecting that much of an improvement, and don't know how to explain it.

The 40ns difference between cache with and without a timeout can be attributed to 2 System.currentTimeMillis() calls; they were pretty slow on my VM.


Progress

  • Change must not contain extraneous whitespace
  • Commit message must refer to an issue
  • Change must be properly reviewed

Issue

  • JDK-8259886: Improve SSL session cache performance and scalability

Reviewers

Download

$ git fetch https://git.openjdk.java.net/jdk pull/2255/head:pull/2255
$ git checkout pull/2255

@bridgekeeper bridgekeeper bot added the oca Needs verification of OCA signatory status label Jan 27, 2021
bridgekeeper bot commented Jan 27, 2021

Hi @djelinski, welcome to this OpenJDK project and thanks for contributing!

We do not recognize you as Contributor and need to ensure you have signed the Oracle Contributor Agreement (OCA). If you have not signed the OCA, please follow the instructions. Please fill in your GitHub username in the "Username" field of the application. Once you have signed the OCA, please let us know by writing /signed in a comment in this pull request.

If you already are an OpenJDK Author, Committer or Reviewer, please click here to open a new issue so that we can record that fact. Please use "Add GitHub user djelinski" as summary for the issue.

If you are contributing this work on behalf of your employer and your employer has signed the OCA, please let us know by writing /covered in a comment in this pull request.

openjdk bot commented Jan 27, 2021

@djelinski The following labels will be automatically applied to this pull request:

  • build
  • security

When this pull request is ready to be reviewed, an "RFR" email will be sent to the corresponding mailing lists. If you would like to change these labels, use the /label pull request command.

@openjdk openjdk bot added security security-dev@openjdk.org build build-dev@openjdk.org labels Jan 27, 2021
@djelinski (Member, Author) commented:

/covered

@bridgekeeper bridgekeeper bot added the oca-verify Needs verification of OCA signatory status label Jan 27, 2021
bridgekeeper bot commented Jan 27, 2021

Thank you! Please allow for a few business days to verify that your employer has signed the OCA. Also, please note that pull requests that are pending an OCA check will not usually be evaluated, so your patience is appreciated!

@bridgekeeper bridgekeeper bot removed oca Needs verification of OCA signatory status oca-verify Needs verification of OCA signatory status labels Feb 1, 2021
@djelinski djelinski marked this pull request as ready for review February 1, 2021 16:03
@openjdk openjdk bot added the rfr Pull request is ready for review label Feb 1, 2021
mlbridge bot commented Feb 1, 2021

Webrevs

@erikj79 (Member) left a comment:

Build change looks good, but I would like to hear from @cl4es too.

openjdk bot commented Feb 1, 2021

@djelinski This change now passes all automated pre-integration checks.

ℹ️ This project also has non-automated pre-integration requirements. Please see the file CONTRIBUTING.md for details.

After integration, the commit message for the final commit will be:

8259886: Improve SSL session cache performance and scalability

Reviewed-by: erikj, xuelei

You can use pull request commands such as /summary, /contributor and /issue to adjust it as needed.

At the time when this comment was updated there had been 545 new commits pushed to the master branch:

  • 5eb2091: 8261689: javax/swing/JComponent/7154030/bug7154030.java still fails with "Exception: Failed to hide opaque button"
  • 75a5be8: 8263054: [testbug] SharedArchiveConsistency.java reuses jsa files
  • 2afbd5d: 8250804: Can't set the application icon image for Unity WM on Linux.
  • fa43f92: 8261845: File permissions of packages built by jpackage
  • 23ee60d: 8261008: Optimize Xor
  • e1cad97: 8262862: Harden tests sun/security/x509/URICertStore/ExtensionsWithLDAP.java and krb5/canonicalize/Test.java
  • 2c0507e: 8261812: C2 compilation fails with assert(!had_error) failed: bad dominance
  • 9755782: 8157682: @inheritdoc doesn't work with @exception
  • 8c13d26: 8263050: move HtmlDocletWriter.verticalSeparator to IndexWriter
  • 8d3de4b: 8262844: (fs) FileStore.supportsFileAttributeView might return false negative in case of ext3
  • ... and 535 more: https://git.openjdk.java.net/jdk/compare/abd9310bff31b5fc1677ab02609641ecc8faf356...master

As there are no conflicts, your changes will automatically be rebased on top of these commits when integrating. If you prefer to avoid this automatic rebasing, please check the documentation for the /integrate command for further details.

As you do not have Committer status in this project an existing Committer must agree to sponsor your change. Possible candidates are the reviewers of this PR (@erikj79, @XueleiFan) but any other Committer may sponsor as well.

➡️ To flag this PR as ready for integration with the above commit message, type /integrate in a new comment. (Afterwards, your sponsor types /sponsor in a new comment to perform the integration).

@openjdk openjdk bot added the ready Pull request is ready to be integrated label Feb 1, 2021
@cl4es (Member) commented Feb 1, 2021

Adding an --add-exports to JAVAC_FLAGS is a bit iffy, but should be OK. Yes, all benchmarks will now be compiled with that package exported and visible, but that should have no unintentional effect on other compilations.

@XueleiFan (Member) commented Feb 2, 2021

The impact could extend beyond the JSSE implementation, and I will have a look as well.

@XueleiFan (Member) left a comment:

If I get the patch right, the benchmark performance improvement is a trade-off between CPU and memory, achieved by keeping expired entries while putting a new entry in the cache. I'm not very sure of the performance impact on memory and GC collections. Would you mind adding two more types of benchmark, getting entries and removing entries, for cases where 1/10, 1/4, 1/3 and 1/2 of the entries in the cache are expired? And increasing the size to some big scales, like 2M and 20M.

It looks like a spec update, as it may change the behavior of a few JDK components (TLS session cache, OCSP stapling response cache, cert store cache, certificate factory, etc.), because "expired entries are no longer guaranteed to be removed before live ones". I'm not very sure of the impact. I would suggest filing a CSR and having more eyes check the compatibility impact before moving forward.

@djelinski (Member, Author) commented:

the benchmark performance improvement is a trade-off between CPU and memory, by keeping expired entries while putting a new entry in the cache

Not exactly. The memory use is capped by the cache size. The patch is a trade-off between the cache's hit/miss ratio and CPU: we get faster cache access at the cost of more frequent cache misses.

All calls to put() remove expired items from the front of the queue, and never perform a full scan. get() calls shuffle the queue, moving the accessed item to the back. Compare this to original code where put() only removed expired items when the cache overflowed, and scanned the entire cache.
Let me give some examples.
Example 1: insertions at a fast pace leading to cache overflows and no expirations. Here the new implementation improves performance. Consider a cache with size=4, timeout=10, and the following sequence of events:
T=1, put(1);
T=2, put(2);
T=3, put(3);
T=4, put(4);
Cache contents after these calls (same in old and new scenario). Queue order: least recently accessed items on the left, most recently accessed on the right. K denotes cache key, exp denotes entry expiration time and is equal to insertion time T plus timeout:

|K=1, exp=11|K=2, exp=12|K=3, exp=13|K=4, exp=14|

If we now add another item to the queue, it will overflow. Here's where the implementations behave differently, but the outcome is identical: the old one scans the entire list for expired entries; the new one improves performance by ending the search for expired entries after encountering the first non-expired entry (which is the first entry in the above example). The end result is the same in both cases - the oldest (least recently accessed) item is dropped:
T=5, put(5)

|K=2, exp=12|K=3, exp=13|K=4, exp=14|K=5, exp=15|
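
For reference, the eviction in this example is what an access-ordered LinkedHashMap with a size-bounded removeEldestEntry produces. A minimal self-contained sketch (not the actual MemoryCache code; the names here are made up for illustration):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruSketch {
    // Hypothetical stand-in for the cache's backing store: access-ordered,
    // evicting the least recently accessed entry once capacity is exceeded.
    static <K, V> Map<K, V> boundedLru(int capacity) {
        return new LinkedHashMap<K, V>(16, 0.75f, /* accessOrder = */ true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity;
            }
        };
    }

    public static void main(String[] args) {
        Map<Integer, String> cache = boundedLru(4);
        for (int k : new int[] {1, 2, 3, 4, 5}) {
            cache.put(k, "v" + k); // put(5) overflows: eldest key 1 is dropped
        }
        System.out.println(cache.keySet()); // [2, 3, 4, 5]
    }
}
```

Touching an entry via get() moves it to the back of the access order, which is the queue-shuffling behavior described above.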

Example 2: insertions at a moderate pace, with interleaved reads. Here the new implementation improves performance, but at a possible cost of wasting cache capacity on expired entries. Consider a cache with size=4, timeout=7, and the following sequence of events:
T=1, put(1);
T=3, put(3);
T=5, put(5);
T=7, put(7);
T=7, get(1);
Cache contents after these calls:

|K=3, exp=10|K=5, exp=12|K=7, exp=14|K=1, exp=8|

get(1) operation moved item with K=1 to the back of the queue.

If we wait for the item with K=1 to expire and then add another item to the queue, it will overflow. Here's where the implementations behave differently, and the outcome is different: the old one scans the entire list for expired entries, finds the entry with K=1 and drops it; the new one gives up after the first non-expired entry (which is the first entry), and drops the first entry.

So, when we perform:
T=9, put(9);

Old implementation will get:
|K=3, exp=10|K=5, exp=12|K=7, exp=14|K=9, exp=16|

New implementation will get:
|K=5, exp=12|K=7, exp=14|K=1, exp=8(expired)|K=9, exp=16|

Note that:

  • an attempt to retrieve expired item (i.e. get(1)) will immediately remove that item from cache, making room for other items
  • retrieving a non-expired item will move it to the back of the queue, behind all expired items

Example 3: insertions at a slow pace, where most items expire before queue overflows. Here the new implementation improves memory consumption. Consider a cache with size=4, timeout=1, and the following sequence of events:
T=1, put(1);
T=3, put(3);
T=5, put(5);
T=7, put(7);
Every cache item is expired at the point when a new one is added. The old implementation only removes expired entries when the cache overflows, so all entries will still be there:

|K=1, exp=2(expired)|K=3, exp=4(expired)|K=5, exp=6(expired)|K=7, exp=8|

New implementation removes expired entries on every put, so after the last put only one entry is in the cache:

|K=7, exp=8|

After another put the old implementation will encounter a cache overflow and remove all expired items.
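
The new put() behavior in these examples can be sketched like this (hypothetical names and structure, not the actual patch; expiry uses the same "expired when exp <= now" convention as the examples):

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of the new behavior: every put() first drops expired
// entries from the front of the access-ordered queue, stopping at the
// first live entry, so it never performs a full scan.
public class ExpungeSketch {
    static final class Entry {
        final int key; final long expiry;
        Entry(int key, long expiry) { this.key = key; this.expiry = expiry; }
    }

    final Map<Integer, Entry> queue = new LinkedHashMap<>(16, 0.75f, true);
    final long timeout;

    ExpungeSketch(long timeout) { this.timeout = timeout; }

    void put(int key, long now) {
        // Remove expired entries from the front; stop at the first live one.
        for (Iterator<Entry> it = queue.values().iterator(); it.hasNext(); ) {
            if (it.next().expiry <= now) it.remove(); else break;
        }
        queue.put(key, new Entry(key, now + timeout));
    }
}
```

Running Example 3 through this sketch (timeout=1, puts at T=1, 3, 5, 7) leaves only K=7 in the queue, matching the diagram above.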

Let me know if that helped.

add two more types of benchmark: get the entries and remove the entries

Both these operations are constant-time, both before and after my changes. Do you expect to see some oddities here, or do we just want a benchmark that could be used to compare other implementations?

increase the size to some big scales, like 2M and 20M

Can do. Do you think it makes sense to also benchmark the scenario where GC kicks in and collects soft references?

it may change the behavior of a few JDK components

Of all uses of Cache, only SSLSessionContextImpl (TLS session cache), StatusResponseManager (OCSP stapling) and LDAPCertStoreImpl (I'm not familiar with that one) set expiration timeout; when the timeout is not set, the behavior is exactly the same as before.
StatusResponseManager is constantly querying the same keys, and is liberally sized, so I don't expect much of an impact.
TLS session cache changes may result in fewer session resumptions and more full handshakes; I expect the cache performance improvement to more than offset the CPU cycles lost on full handshakes.

How do I file a CSR?

Also, what do you think about the changes done in Do not invalidate objects before GC 5859a03 commit? They offer a minor performance improvement, but if clearing the values before GC is an important security feature of this cache, I'm prepared to drop that commit.

@djelinski (Member, Author) commented:

Added benchmarks for get & remove. Added tests for 5M cache size. Switched time units to nanoseconds. Results:

Benchmark           (size)  (timeout)  Mode  Cnt    Score   Error  Units
CacheBench.get       20480      86400  avgt   25   62.999 ± 2.017  ns/op
CacheBench.get       20480          0  avgt   25   41.519 ± 1.113  ns/op
CacheBench.get      204800      86400  avgt   25   67.995 ± 4.530  ns/op
CacheBench.get      204800          0  avgt   25   46.439 ± 2.222  ns/op
CacheBench.get     5120000      86400  avgt   25   72.516 ± 0.759  ns/op
CacheBench.get     5120000          0  avgt   25   53.471 ± 0.491  ns/op
CacheBench.put       20480      86400  avgt   25  117.117 ± 3.424  ns/op
CacheBench.put       20480          0  avgt   25   73.582 ± 1.484  ns/op
CacheBench.put      204800      86400  avgt   25  116.983 ± 0.743  ns/op
CacheBench.put      204800          0  avgt   25   73.945 ± 0.515  ns/op
CacheBench.put     5120000      86400  avgt   25  230.878 ± 7.582  ns/op
CacheBench.put     5120000          0  avgt   25  192.526 ± 7.048  ns/op
CacheBench.remove    20480      86400  avgt   25   39.048 ± 2.036  ns/op
CacheBench.remove    20480          0  avgt   25   36.293 ± 0.281  ns/op
CacheBench.remove   204800      86400  avgt   25   43.899 ± 0.895  ns/op
CacheBench.remove   204800          0  avgt   25   43.046 ± 0.759  ns/op
CacheBench.remove  5120000      86400  avgt   25   51.896 ± 0.640  ns/op
CacheBench.remove  5120000          0  avgt   25   51.537 ± 0.536  ns/op

@XueleiFan (Member) commented Feb 4, 2021

Thank you for the comment. The big picture is more clear to me now.

Example 2:
Old implementation will get:
|K=3, exp=10|K=5, exp=12|K=7, exp=14|K=9, exp=16|

New implementation will get:
|K=5, exp=12|K=7, exp=14|K=1, exp=8(expired)|K=9, exp=16|

K=3 is not expired yet, but gets removed, while K=1 is kept. This behavior change may cause more overall performance harm than the improvement to cache put/get performance gains, because the evicted value may need to be fetched remotely again. A full handshake or OCSP status fetch could counteract all the performance gained by the cache update.

All calls to put() remove expired items from the front of the queue, and never perform a full scan. get() calls shuffle the queue, moving the accessed item to the back. Compare this to original code where put() only removed expired items when the cache overflowed, and scanned the entire cache.

I think the idea of having put() remove expired items from the front of the queue is good. I was wondering if it is an option to have the get() method remove expired items up to the first non-expired item, without scanning the full queue or changing the order of the queue. But there is still an issue: the SoftReference may have cleared an item which is still valid.

In general, I think the get() performance is more important than put() method, as get() is called more frequently. So we should try to keep the cache small if possible.

increase the size to some big scales, like 2M and 20M

Can do. Do you think it makes sense to also benchmark the scenario where GC kicks in and collects soft references?

In the update, the SoftReference.clear() calls get removed. I'm no longer sure of the impact on enqueued objects. In theory, clearing could improve the memory use, which could counteract the performance gain in some situations.

Also, what do you think about the changes done in Do not invalidate objects before GC 5859a03 commit?

See above, it is a concern to me that the soft reference cannot be cleared with this update.

How do I file a CSR?

Could you edit the bug: https://bugs.openjdk.java.net/browse/JDK-8259886? In the "More" drop-down menu, there is a "Create CSR" option. You can do it once we have an agreement about the solution and impact.

@djelinski (Member, Author) commented Feb 4, 2021

Thanks for your review! Some comments below.

A full handshake or OCSP status grabbing could counteract all the performance gain with the cache update.

Yes, but that's unlikely. Note that K=3 is before K=1 in the queue only because 3 wasn't used since 1 was last used. This means that either K=3 is used less frequently than K=1, or that all cached items are in active use. In the former case we don't lose much by dropping K=3 (granted, there's nothing to offset that). In the latter we are dealing with full cache at all times, which means that most put()s would scan the queue, and we will gain a lot by finishing faster.

get() [..] without [..] change the order of the queue

If we do that, frequently used entries will be evicted at the same age as never used ones. This means we will have to recompute (full handshake/fresh OCSP) both the frequently used and the infrequently used entries. It's better to recompute only the infrequently used ones, and reuse the frequently used as long as possible - we will do less work that way.
That's probably the reason why a LinkedHashMap with accessOrder=true was chosen as the backing store implementation originally.

get() performance is more important [..] so we should try to keep the cache small if possible

I don't see the link; could you explain?

In the update, the SoftReference.clear() get removed. I'm not sure of the impact of the enqueued objects any longer. In theory, it could improve the memory use, which could counteract the performance gain in some situation.

That's the best part: no objects ever get enqueued! We only called clear() right before losing the last reference to SoftCacheEntry (which is the SoftReference). When GC collects the SoftReference, it does not enqueue anything. GC only enqueues the SoftReference when it collects the referenced object (session / OCSP response) without collecting the SoftReference (cache entry) itself.
This is documented behavior: If a registered reference becomes unreachable itself, then it will never be enqueued.
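
The deterministic parts of these java.lang.ref semantics are easy to check directly. This sketch is not from the patch; it only demonstrates documented Reference behavior (a reference to a strongly reachable object is neither cleared nor enqueued, and clear() itself never enqueues anything):

```java
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;

public class RefSketch {
    public static void main(String[] args) {
        ReferenceQueue<Object> rq = new ReferenceQueue<>();
        Object session = new Object(); // stands in for a cached SSLSession
        SoftReference<Object> ref = new SoftReference<>(session, rq);

        // While the referent is strongly reachable, the GC neither clears
        // nor enqueues the reference.
        System.out.println(ref.get() == session); // true
        System.out.println(rq.poll() == null);    // true

        // clear() only empties the reference; it never enqueues anything.
        ref.clear();
        System.out.println(ref.get() == null);    // true
        System.out.println(rq.poll() == null);    // true
    }
}
```

The non-deterministic half (a reference that becomes unreachable itself is never enqueued) cannot be asserted portably, since it depends on GC timing; it is the documented behavior quoted above.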

Could you edit the bug

I'd need an account on the bug tracker first.

@djelinski (Member, Author) commented:

So, how do we want to proceed here? Is the proposed solution acceptable? If not, what needs to change? If yes, what do I need to do next?

@XueleiFan (Member) commented:

Thanks for your review! Some comments below.

A full handshake or OCSP status grabbing could counteract all the performance gain with the cache update.

Yes, but that's unlikely. Note that K=3 is before K=1 in the queue only because 3 wasn't used since 1 was last used. This means that either K=3 is used less frequently than K=1, or that all cached items are in active use. In the former case we don't lose much by dropping K=3 (granted, there's nothing to offset that). In the latter we are dealing with full cache at all times, which means that most put()s would scan the queue, and we will gain a lot by finishing faster.

I may think about it differently. It may be hard to know the future frequency of a cached item based on past behavior. For the above case, I'm not sure that K=3 is used less frequently than K=1. Maybe, in the next few seconds, K=1 could be used more frequently.

I would like a solution that follows the timeout specification: keep the newer items if possible.

get() [..] without [..] change the order of the queue

If we do that, frequently used entries will be evicted at the same age as never used ones. This means we will have to recompute (full handshake/fresh OCSP) both the frequently used and the infrequently used entries. It's better to recompute only the infrequently used ones, and reuse the frequently used as long as possible - we will do less work that way.
That's probably the reason why a LinkedHashMap with accessOrder=true was chosen as the backing store implementation originally.

See above. It may be possible to determine the frequency for some cases, but Cache is a general class, and we may want to be more careful about whether we are really able to determine the frequency within the Cache implementation.

get() performance is more important [..] so we should try to keep the cache small if possible

I don't see the link; could you explain?

Link? Did you mean the link to the get() method? It is a method in the Cache class.

In the update, the SoftReference.clear() get removed. I'm not sure of the impact of the enqueued objects any longer. In theory, it could improve the memory use, which could counteract the performance gain in some situation.

That's the best part: no objects ever get enqueued! We only called clear() right before losing the last reference to SoftCacheEntry (which is the SoftReference). When GC collects the SoftReference, it does not enqueue anything. GC only enqueues the SoftReference when it collects the referenced object (session / OCSP response) without collecting the SoftReference (cache entry) itself.
This is documented behavior: If a registered reference becomes unreachable itself, then it will never be enqueued.

I need more time for this section.

Could you edit the bug

I'd need an account on the bug tracker first.

Okay. No worries, I will help you if we could get an agreement about the update.

@XueleiFan (Member) commented:

So, how do we want to proceed here? Is the proposed solution acceptable? If not, what needs to change? If yes, what do I need to do next?

For me, it is a pretty good solution, but I have some concerns. I would appreciate it if you could read my comments and see if we can reach an agreement.

@djelinski (Member, Author) commented:

I may think about it differently. It may be hard to know the future frequency of a cached item based on past behavior. For the above case, I'm not sure that K=3 is used less frequently than K=1. Maybe, in the next few seconds, K=1 could be used more frequently.

I agree that such a prediction might not be 100% accurate. But a quick Google search reveals many articles claiming that LRU caches offer better hit rates than FIFO, especially for in-memory caches.

I would like a solution to following the timeout specification: keep the newer items if possible.

That's a trivial change; all we need to do is change true to false here. But, as stated above, LRU is better than FIFO, so I wouldn't want to do that.

I could keep LRU and add another linked list that would store items in the order of their expiration dates; then we could quickly scan that list for expired items. Note: the order of expiration dates is not necessarily the order of insertion, because 1) System.currentTimeMillis() is not monotonic - it can move back when something changes the system time, 2) the expiration date is calculated at insertion time, so if someone changes the timeout on a non-empty cache, new items may have shorter expiration time than old ones. So, I'd either need to address that first (change currentTimeMillis to nanoTime and store creation time instead of expiration time), or use insertion sort for adding items (which would get very slow if either of the above mentioned situations happened).
Let me know your thoughts.

@djelinski (Member, Author) commented:

Well, if removing all expired items before evicting live ones is non-negotiable, implementing all operations in constant time is much easier with FIFO, where we only need to keep one item order.
The new commits contain the following changes:

  • use nanoTime instead of currentTimeMillis to make sure that time never goes back
  • store insertion time instead of expiration time, so that older items always expire before newer ones, even when timeout is changed
  • change internal hash map to store (and evict) items in insertion (FIFO) order
  • always stop scanning entries after finding the first non-expired item, because subsequent items are now guaranteed to have later expiration dates, and collected soft references are handled by reference queue.
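
The FIFO variant described by the bullets above can be sketched as follows (illustrative names only, not the actual patch; timestamps are compared by subtraction, which is the overflow-safe way to compare System.nanoTime() values):

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// FIFO variant: insertion order, insertion timestamps from a monotonic
// clock, and an expunge loop that can stop at the first live entry
// because older entries always expire first.
public class FifoCacheSketch<K, V> {
    static final class Entry<V> {
        final V value; final long insertedNanos;
        Entry(V value, long insertedNanos) {
            this.value = value; this.insertedNanos = insertedNanos;
        }
    }

    private final Map<K, Entry<V>> map = new LinkedHashMap<>(); // insertion order
    private final long timeoutNanos;

    public FifoCacheSketch(long timeoutNanos) { this.timeoutNanos = timeoutNanos; }

    public void put(K key, V value, long nowNanos) {
        for (Iterator<Entry<V>> it = map.values().iterator(); it.hasNext(); ) {
            // Entries are in insertion order, so the first live entry ends the scan.
            if (nowNanos - it.next().insertedNanos >= timeoutNanos) it.remove();
            else break;
        }
        map.put(key, new Entry<>(value, nowNanos));
    }

    public V get(K key, long nowNanos) {
        Entry<V> e = map.get(key);
        if (e == null) return null;
        if (nowNanos - e.insertedNanos >= timeoutNanos) { map.remove(key); return null; }
        return e.value; // note: no reordering - FIFO, not LRU
    }

    public int size() { return map.size(); }
}
```

In the real patch the clock is System.nanoTime(); the sketch takes the current time as a parameter so the behavior is testable.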

tier1 and jdk_security tests passed; benchmark results show only minimal changes. I verified that none of the classes using Cache mentions LRU; it looks like this was an implementation detail.

@djelinski (Member, Author) commented:

Actually there's a much easier solution to reduce the number of slow put()s without making any behavioral changes.
The cache object could store the earliest expire time, and then exit expungeExpiredEntries() early when current time is earlier than the earliest expire time - when it is, we know that there are no expired items in the queue and we can skip the scan entirely.
@XueleiFan do you think the above is worth exploring?
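
A sketch of that idea (field and method names are hypothetical, loosely following the expungeExpiredEntries name mentioned above): track the minimum expiration time across all entries and bail out of the scan whenever nothing can have expired yet.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the earliest-expire-time shortcut: expungeExpiredEntries()
// returns immediately unless some entry may already have expired.
public class EarliestExpirySketch {
    private final Map<Integer, Long> expiry = new LinkedHashMap<>(16, 0.75f, true);
    private long nextExpirationTime = Long.MAX_VALUE;
    int scans; // instrumentation: how many full scans actually ran

    void expungeExpiredEntries(long now) {
        if (now < nextExpirationTime) return; // no entry can be expired yet
        nextExpirationTime = Long.MAX_VALUE;
        scans++;
        for (Iterator<Long> it = expiry.values().iterator(); it.hasNext(); ) {
            long exp = it.next();
            if (exp <= now) it.remove();
            else nextExpirationTime = Math.min(nextExpirationTime, exp);
        }
    }

    void put(int key, long now, long timeout) {
        expungeExpiredEntries(now);
        long exp = now + timeout;
        expiry.put(key, exp);
        nextExpirationTime = Math.min(nextExpirationTime, exp);
    }
}
```

The observable contents are identical to a cache that scans on every put(); only the number of scans changes.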

@XueleiFan (Member) commented Feb 22, 2021

Actually there's a much easier solution to reduce the number of slow put()s without making any behavioral changes.
The cache object could store the earliest expire time, and then exit expungeExpiredEntries() early when current time is earlier than the earliest expire time - when it is, we know that there are no expired items in the queue and we can skip the scan entirely.
@XueleiFan do you think the above is worth exploring?

Definitely, I think it is a good improvement. Actually, it is a surprise to me that the current code does not already work this way.

Sorry, I was/am on vacation, and the review could be delayed for a few days.

@djelinski (Member, Author) commented:

I reverted all earlier Cache changes, and added a new commit that caches the earliest expire time of all cached items. The observable behavior of the new code is identical to original - items are removed from cache at exactly the same time as before; we only skip scanning the cache when we know that there are no expired items inside.

The performance is substantially improved. There can be at most [cache size] scans in every [timeout] period, which is roughly one scan every 4 seconds with the default SSL session cache settings (86400 s timeout / 20480 entries ≈ 4.2 s). This is much better than before the changes, when potentially every put() could trigger a scan.

My reduced set of benchmarks produced the following values:

Benchmark       (size)  (timeout)  Mode  Cnt    Score   Error  Units
CacheBench.put   20480      86400  avgt   25  148.345 ± 1.970  ns/op
CacheBench.put   20480          0  avgt   25  108.598 ± 3.787  ns/op
CacheBench.put  204800      86400  avgt   25  151.318 ± 1.872  ns/op
CacheBench.put  204800          0  avgt   25  106.650 ± 1.080  ns/op

which is comparable to what was observed with the previous commits.

@djelinski (Member, Author) commented:

ping @XueleiFan, I'd appreciate another review.

@XueleiFan (Member) left a comment:

I would also update the copyright year to 2021. Otherwise, looks good to me. Thank you!

@djelinski (Member, Author) commented:

/integrate

@openjdk openjdk bot added the sponsor Pull request is ready to be sponsored label Mar 6, 2021
openjdk bot commented Mar 6, 2021

@djelinski
Your change (at version d5c39a4) is now ready to be sponsored by a Committer.

@XueleiFan (Member) commented:

/sponsor

@openjdk openjdk bot closed this Mar 7, 2021
@openjdk openjdk bot added integrated Pull request has been integrated and removed sponsor Pull request is ready to be sponsored ready Pull request is ready to be integrated rfr Pull request is ready for review labels Mar 7, 2021
openjdk bot commented Mar 7, 2021

@XueleiFan @djelinski Since your change was applied there have been 548 commits pushed to the master branch:

  • 3844ce4: 8261247: some compiler/whitebox/ tests fail w/ DeoptimizeALot
  • f2d0152: 8263043: Add test to verify order of tag output
  • 7182985: 8263104: fix warnings for empty paragraphs
  • 5eb2091: 8261689: javax/swing/JComponent/7154030/bug7154030.java still fails with "Exception: Failed to hide opaque button"
  • 75a5be8: 8263054: [testbug] SharedArchiveConsistency.java reuses jsa files
  • 2afbd5d: 8250804: Can't set the application icon image for Unity WM on Linux.
  • fa43f92: 8261845: File permissions of packages built by jpackage
  • 23ee60d: 8261008: Optimize Xor
  • e1cad97: 8262862: Harden tests sun/security/x509/URICertStore/ExtensionsWithLDAP.java and krb5/canonicalize/Test.java
  • 2c0507e: 8261812: C2 compilation fails with assert(!had_error) failed: bad dominance
  • ... and 538 more: https://git.openjdk.java.net/jdk/compare/abd9310bff31b5fc1677ab02609641ecc8faf356...master

Your commit was automatically rebased without conflicts.

Pushed as commit 18fc350.

💡 You may see a message that your pull request was closed with unmerged commits. This can be safely ignored.

Labels
build build-dev@openjdk.org integrated Pull request has been integrated security security-dev@openjdk.org