
Memory leak when @Retryable is used on a @Scope("prototype") bean. #141

Closed
russellfrancis opened this issue Feb 4, 2019 · 21 comments

@russellfrancis

If the @Retryable annotation is used on a bean with @Scope("prototype"), the AnnotationAwareRetryOperationsInterceptor delegates map fills up with an entry for each instance of the bean that is created. There appears to be no mechanism through which these cached entries are removed or expired, eventually leading to exhaustion of heap space.
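For context, a simplified sketch of the cache's shape (not the exact spring-retry source; names abbreviated) and why each prototype instance pins an entry:

```java
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import org.aopalliance.intercept.MethodInterceptor;

// Simplified sketch, not the exact source: the outer map is keyed by the
// target bean instance, so every new prototype instance adds an entry that is
// never removed while the (singleton) interceptor is alive.
public class DelegatesCacheSketch {

    private final Map<Object, ConcurrentMap<Method, MethodInterceptor>> delegates =
            new HashMap<Object, ConcurrentMap<Method, MethodInterceptor>>();

    void cache(Object target, Method method, MethodInterceptor delegate) {
        ConcurrentMap<Method, MethodInterceptor> perTarget = this.delegates.get(target);
        if (perTarget == null) {
            perTarget = new ConcurrentHashMap<Method, MethodInterceptor>();
            this.delegates.put(target, perTarget);
        }
        // The target instance now stays strongly referenced by this cache,
        // so the GC can never reclaim a prototype bean that passed through here.
        perTarget.put(method, delegate);
    }
}
```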

@krolen
Contributor

krolen commented Feb 6, 2019

Confirmed, I just faced the same issue. Does it make sense to change the delegates map to use WeakReferences?

@russellfrancis
Author

I'm not super familiar with the code, but from looking at it, the delegates map seems to provide caching functionality only. If that is the case, the delegates map itself could be removed from the class entirely and the getDelegate(Object target, Method method) method could be rewritten as:

```java
private MethodInterceptor getDelegate(Object target, Method method) {
    Retryable retryable = getRetryableAnnotation(target, method);

    MethodInterceptor delegate = null;
    if (retryable != null) {
        if (StringUtils.hasText(retryable.interceptor())) {
            delegate = this.beanFactory.getBean(retryable.interceptor(), MethodInterceptor.class);
        } else if (retryable.stateful()) {
            delegate = getStatefulInterceptor(target, method, retryable);
        } else {
            delegate = getStatelessInterceptor(target, method, retryable);
        }
    }
    return delegate;
}

private Retryable getRetryableAnnotation(Object target, Method method) {
    Retryable retryable = AnnotationUtils.findAnnotation(method, Retryable.class);
    if (retryable == null) {
        retryable = AnnotationUtils.findAnnotation(method.getDeclaringClass(), Retryable.class);
        if (retryable == null) {
            retryable = findAnnotationOnTarget(target, method);
        }
    }
    return retryable;
}
```

This would eliminate the memory leak and wouldn't incur the additional complexity of managing WeakReferences.

@russellfrancis
Author

I would be happy to create a pull request, if that solution seems appropriate.

russellfrancis pushed a commit to russellfrancis/spring-retry that referenced this issue Feb 7, 2019
@russellfrancis
Author

russellfrancis commented Feb 7, 2019

Not sure how to link the pull request to the issue, so here goes the old-fashioned way.

Pull Request: #143

krolen added a commit to krolen/spring-retry that referenced this issue Feb 10, 2019
@krolen
Contributor

krolen commented Feb 10, 2019

I think this solution would work perfectly for prototypes, but it would add extra, unnecessary load in the case of regular beans. I would say that in 90 percent of cases people actually use regular beans.
Ideally, if we were able to identify at runtime what kind of object we are dealing with (prototype or singleton), that would make life much easier: we could use hard caching for regular beans and weak/soft caching (or no caching at all) for prototypes. I do not know how to do that, so I propose using soft references for caching (see the attached pull request). I am still hesitant about whether soft references are better than weak ones, though. Thoughts?

#144

@krolen
Contributor

krolen commented Feb 10, 2019

Another approach would be an LRU cache with a last-access-time eviction policy.
In addition, the maximum cache size could be specified at the application level, since you roughly know how many retryable beans you'll have.

If we were allowed to use Java 8 and the Caffeine cache, the solution would be really easy using LRU and proper cache actions... Ehhh :)

@dsyer
Member

dsyer commented Feb 15, 2019

Given the use of soft references in SoftReferenceMapRetryContextCache already in this project, maybe it makes sense just to use that here as well?

@dsyer dsyer changed the title from "Memory leak when @Retryable is used on a @Scope("prorotype") bean." to "Memory leak when @Retryable is used on a @Scope("prototype") bean." on Feb 17, 2019
@russellfrancis
Author

Are there any performance metrics that back up the claim that the caching which was in place was a performance benefit, or that we could use to measure the cost of this proposed change?

Looking at the implementation of AnnotationUtils.findAnnotation, it appears that the result it returns is already cached in spring-core. That leaves the reflective method lookup, `Method targetMethod = target.getClass().getMethod(method.getName(), method.getParameterTypes());`, and the creation of the interceptor, which we could short-circuit in exchange for greater complexity and GC load. Before we head down that road, it would be good to know that we aren't making things worse with premature optimization.

@dsyer
Member

dsyer commented Feb 20, 2019

I don't have any metrics. If you want to build some, I would recommend this tool. I think it's a lot more work to create an interceptor than just looking up one method in the target (e.g. you have to scan all the methods looking for a recovery method).

@russellfrancis
Author

russellfrancis commented Feb 20, 2019 via email

@krolen
Contributor

krolen commented Feb 20, 2019

Perfect. Just to throw some more thoughts on it: I might be wrong, but as far as I know soft references are not really a great approach, since nothing restricts the GC from collecting them even on minor cycles, which might make all this caching pretty much worthless.
To me the ideal scenario would be a cache with last-access-time eviction and a maximum number of entries.
Having Java 1.6+ as the requirement makes life harder, but we could go with something like LinkedHashMap and maybe read-write locks.
On the other hand, this should be a pretty common type of cache; however, I was not able to find one in the Spring project.
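For illustration only (not from either PR), a bounded, access-ordered cache can be built on pre-Java-8 from LinkedHashMap:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of a bounded, access-ordered (LRU) cache that works on
// older Java versions; the eldest entry is evicted once maxEntries is exceeded.
public class LruCache<K, V> extends LinkedHashMap<K, V> {

    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true -> LRU ordering
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > this.maxEntries;
    }
}
```

Wrapped in Collections.synchronizedMap(...) or guarded by a read-write lock, something like this would cap the number of cached delegates instead of letting them grow without bound.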

@krolen
Contributor

krolen commented Feb 26, 2019

If we decide to drop the cache, we could still keep some kind of caching for almost all singletons:

  1. get bean names by class and
  2. if all bean names represent singleton objects

=> use caching
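A rough sketch of that check (illustrative names only; assumes the interceptor can see a ListableBeanFactory):

```java
import org.springframework.beans.factory.ListableBeanFactory;

// Illustrative sketch: cache delegates only when every bean of the target's
// type is registered as a singleton, so prototypes never enter the cache.
final class ScopeCheck {

    static boolean allSingletons(ListableBeanFactory beanFactory, Class<?> targetType) {
        String[] beanNames = beanFactory.getBeanNamesForType(targetType);
        for (String beanName : beanNames) {
            if (!beanFactory.isSingleton(beanName)) {
                return false;
            }
        }
        return beanNames.length > 0;
    }
}
```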

@mnisius

mnisius commented Jan 22, 2020

Hi everyone. @dsyer asked me to look at this particular bug, so here it comes ;-).

I don't think it should be within the scope of this project to implement a caching solution to solve this bug.
If I had to fix it, I would use a library like https://github.com/ben-manes/caffeine to implement the caching.

This would improve performance, because you would get rid of the synchronized block, and it would also solve the memory leak.

Just replace the delegates map with a Caffeine cache. Keys and values can be held as weak references, and you could even use features like

  • max cache size
  • size-based eviction when a maximum is exceeded based on frequency and recency
  • time-based expiration of entries

I can provide a PR, but I wanted to check first whether I'm allowed to use a third-party library like Caffeine.
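For illustration, assuming Caffeine were added as a dependency, the delegates cache could look roughly like this (the class and method names here are just for the sketch, not from any PR):

```java
import java.util.concurrent.TimeUnit;
import java.util.function.Function;

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

import org.aopalliance.intercept.MethodInterceptor;

// Illustrative sketch only: weak keys let the GC drop entries for dead
// prototype instances, while size- and time-based eviction bound the cache.
public class CaffeineDelegateCache {

    private final Cache<Object, MethodInterceptor> delegates = Caffeine.newBuilder()
            .weakKeys()                              // target beans held weakly (identity comparison)
            .maximumSize(10_000)                     // size-based eviction
            .expireAfterAccess(30, TimeUnit.MINUTES) // time-based expiration
            .build();

    public MethodInterceptor get(Object target, Function<Object, MethodInterceptor> loader) {
        // Computes and caches the delegate on first access for this target.
        return this.delegates.get(target, loader);
    }
}
```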

@dsyer
Member

dsyer commented Jan 22, 2020

I wanted to check first whether I'm allowed to use a third-party library like Caffeine.

Definitely not as a mandatory dependency - we want to keep it lightweight. We could provide an extension point that takes a Map to be injected, for instance. Does that work?

@mnisius

mnisius commented Jan 22, 2020

Well, Caffeine uses its own interface for caches, so we would have to create a wrapper class which implements Map.
But is that really a good idea? If you want to keep spring-retry #lightweight, altering its behaviour depending on whether some library is on the classpath seems like a contradiction.

And what if there is no Caffeine on the classpath? What would be the fallback?

So if I understand it correctly, there are already two PRs that solve this issue.

#143
@russellfrancis fixes the memory leak by removing the caching. This fixes the memory leak and makes the library more #lightweight, but it reduces performance.

#144
@krolen uses a ConcurrentReferenceHashMap as the cache. This map holds its entries through soft references (by default) and cleans up garbage-collected entries each time a new value is added.
It solves the memory leak and stays fast because of the caching.

So I would just go with @krolen's solution. I see no downside to it.

Or do you think there is an actual measurable benefit if you could somehow determine the scope of the bean and then apply different caching strategies according to the scope?

@dsyer
Member

dsyer commented Jan 23, 2020

I'm happy to go with #144 if people like it. My understanding was it might not actually work very well under pressure (the effect is to switch off caching when the heap grows). But I don't have any data to support that (just the comments in this issue and the PR).

FWIW we already have a Map wrapper (RetryContextCache) in this project. Another one wouldn't be a terrible idea.

@mnisius

mnisius commented Jan 24, 2020

Hi again.

Can you explain to me why you think the cache is switched off if the heap grows? From my understanding, the cache should work exactly as before, just without the memory leak,
because the garbage collector can then remove all the dead objects that were created once but are no longer referenced.

@dsyer
Member

dsyer commented Jan 24, 2020

It's a fine line between "dead" and "dormant". The objects in our cache are only ever used briefly, but that doesn't mean they are dead (never to be used again). Under pressure they will all be evicted, so the net effect is no cache. That was my reading anyway. Happy to be wrong.

@mnisius

mnisius commented Jan 24, 2020

From my understanding, that is not correct.

Right now you cache every instance that uses the retry annotation, no matter what.
Some objects have only short lifespans, for example prototype beans or request-scoped beans. In a normal application they get instantiated, do their job, the Spring container dereferences them, and the garbage collector can remove them.

But if an object is annotated with the retry annotation, it will be put in your cache. So a hard reference to it remains in the application and the garbage collector can never remove it -> memory leak.

If you go with #144, all objects will still be cached, but because of the soft references the garbage collector can remove them once they are no longer referenced in the application or the Spring container.

But this will never happen for a singleton bean, because there will always be a reference to the specific instance in the application or the Spring container, so the garbage collector cannot remove it from the cache.

Have a look at
https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/util/ConcurrentReferenceHashMap.html
and
https://docs.oracle.com/javase/8/docs/api/java/lang/ref/SoftReference.html?is-external=true

This is exactly what you want to use if you build a cache like yours in the application.
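As a sketch of that direction (illustrative only; the actual change is the one proposed in PR #144), the delegates field could simply be declared as a ConcurrentReferenceHashMap, which holds both keys and values through soft references by default:

```java
import java.lang.reflect.Method;
import java.util.Map;
import java.util.concurrent.ConcurrentMap;

import org.aopalliance.intercept.MethodInterceptor;
import org.springframework.util.ConcurrentReferenceHashMap;

public class SoftDelegatesSketch {

    // Same map shape as before, but entries are softly referenced, so the GC
    // can reclaim delegates for prototype beans once nothing else refers to them.
    private final Map<Object, ConcurrentMap<Method, MethodInterceptor>> delegates =
            new ConcurrentReferenceHashMap<Object, ConcurrentMap<Method, MethodInterceptor>>();
}
```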

@dsyer
Member

dsyer commented Feb 1, 2020

That makes sense. So the prototype beans will be evicted under pressure and the singletons will stay cached.

dsyer pushed a commit that referenced this issue Feb 1, 2020
Replaced hashmap with soft reference map
@dsyer
Member

dsyer commented Feb 1, 2020

Fixed with #144

@dsyer dsyer closed this as completed Feb 1, 2020
@dsyer dsyer added this to the 1.2.6 milestone Feb 1, 2020
@snicoll snicoll removed this from the 1.2.6 milestone Mar 3, 2022
@snicoll snicoll added this to the 1.3.1 milestone Mar 3, 2022