Memory Leak in MicroProfile Open API's SchemaRegistry.current #24577
Did you have a chance to look into this problem? Sorry for insisting, but my experience developing with Open Liberty has been terrible for many months now. Several times a day, all 24 GB of memory in my computer get filled, rendering it unusable and often forcing me to hard-reset it.
Reproduced the problem with the application provided, and it appears to be in the Open API function: it is the SchemaRegistry.current ThreadLocal that is the culprit. It would appear that remove() is not called in SchemaRegistry. Not sure whose responsibility that is.
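For illustration, a minimal standalone sketch of the leak pattern being described (hypothetical names, not Open Liberty's actual code): a value set in a ThreadLocal on a long-lived or pooled thread stays strongly referenced by that thread, along with everything reachable from it, such as the application's class loader, until remove() is called on that same thread.

```java
// Hypothetical sketch of the ThreadLocal leak pattern; names are invented.
public class SchemaRegistryLeakSketch {
    // Mirrors the shape of a registry held in a ThreadLocal.
    private static final ThreadLocal<Object> CURRENT = new ThreadLocal<>();

    static void startScan(Object registry) {
        CURRENT.set(registry); // value is now pinned to this (possibly pooled) thread
    }

    static void endScan() {
        // Without this call, the registry (and anything reachable from it,
        // such as the app class loader) stays referenced for the thread's lifetime.
        CURRENT.remove();
    }

    public static void main(String[] args) {
        startScan(new Object());
        System.out.println(CURRENT.get() != null); // value retained while set
        endScan();
        System.out.println(CURRENT.get() == null); // cleared after remove()
    }
}
```

On a thread pool, endScan() must run on the same thread that called startScan(), typically in a finally block, or the entry survives across unrelated requests.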
@Azquelt Can you please take a look at this? Perhaps something can be done in the openapi glue code?
@inad9300 Daniel, you should be able to work around this by removing the microProfile-5.0 feature from your server.xml. If you need MicroProfile features, you could specify only the ones you need, and as long as that list does not include the mpOpenAPI feature, you should be good to go.
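As a sketch of what that workaround could look like in server.xml (the exact feature list depends on what the application actually uses; the features shown here are assumptions):

```xml
<server>
  <featureManager>
    <!-- Instead of the microProfile-5.0 convenience feature, list only
         the features the app needs, omitting mpOpenAPI: -->
    <feature>restfulWS-3.0</feature>
    <feature>cdi-3.0</feature>
    <feature>mpConfig-3.0</feature>
  </featureManager>
</server>
```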
Looks good, Andrew. Thanks
I'm sorry to report that my application keeps crashing my computer multiple times a day while developing, due to its unbounded memory growth. In fact, I have the impression that it happens faster than it used to. I'm not sure which criteria you used to determine that the issue was fixed but, as far as I can tell, my demo app still reproduces the problem in 23.0.0.5... In fact, removing every file within …
Hi @inad9300, thank you for the follow-up. Running the application locally, I see many instances of the … This led me to spot an issue in your application here:

```java
scheduledPing = newSingleThreadScheduledExecutor().scheduleAtFixedRate(() -> {
```

You're creating a new executor on init, but never shutting it down. Because you're using …, you could probably just shut it down in …:

```java
public class SseController {

    @Resource
    private ManagedScheduledExecutorService executor;

    private Sse sse;
    private final List<SseEventSink> sinks = new CopyOnWriteArrayList<>();
    private ScheduledFuture<?> scheduledPing;

    void onInit(@Observes @Initialized(ApplicationScoped.class) Object __) {
        scheduledPing = executor.scheduleAtFixedRate(() -> {
            ...
        ...
    ...
```

With this change I still see some memory growth early on (a heap dump shows some soft references from BaseTraceService.earlierMessages), but this settles down after a while, and a later heap dump shows lower usage and only one instance of the …
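The cleanup side of that suggestion can be sketched in isolation (a standalone illustration, not the actual application code; the lifecycle hook where this runs is an assumption): cancel the scheduled task when the application shuts down, and shut the executor itself down only if the application created it, since a server-managed executor is the container's to manage.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Standalone sketch: cancel the periodic task on shutdown so its thread
// does not keep application classes (and their class loader) alive.
public class PingLifecycleSketch {
    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
        ScheduledFuture<?> scheduledPing =
                executor.scheduleAtFixedRate(() -> { /* ping clients */ }, 0, 1, TimeUnit.SECONDS);

        // Roughly what an application-shutdown observer would do:
        scheduledPing.cancel(true);
        executor.shutdown(); // only needed when the app created the executor itself
        System.out.println(executor.awaitTermination(5, TimeUnit.SECONDS));
    }
}
```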
Thanks for the tip. This is a repository that I am using to report on different Open Liberty bugs, so I did not pay too much attention to cleaning up resources of … By the way, can you help me understand why calling …
I was a bit surprised by this too. It looks like the … There's no mention of using any replacement mechanism to guard against executors not being shut down in that issue. I can't say for sure that there isn't one, but going by what I saw in the memory analyser, it looks like they aren't getting cleaned up.
I keep trying to understand why my application leaks memory during development; please, allow me to continue posting on this issue to see if I can get any more help from you. I have recently run … Is this to be expected? Does this give you any sort of hint? Do you have any suggestions as to what to look for next, and/or profiling tool recommendations?
18 instances of …

In MemoryAnalyser, you should be able to look to see what has references to each AppClassLoader. Every class will hold a reference to its class loader, so any instance of any class from the app which is kept alive will result in the whole class loader, and all its classes, not being garbage collected. Class loaders for running apps may be referenced from lots of places, but when an app is stopped, nothing should retain a reference to its class loader and it should be garbage collected.

If you can, I would take a dump on the first load, then again after developing and redeploying the app for a while, then again after more development, so you can look at the AppClassLoader ids and see which are not being cleaned up correctly. Then take an AppClassLoader that should have been cleaned up and show all the references to GC roots to see what has a reference to it.

In the case I looked at before, I could see that the reference path led to a thread that you had created. You might also find that it leads to an instance of a class being stored in a …
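A small standalone illustration of the point above (not Open Liberty code): every loaded class strongly references its defining class loader, which is why a single retained application object is enough to pin an entire AppClassLoader and everything it loaded. Heap dumps for before/after comparison can be taken with `jcmd <pid> GC.heap_dump <file>`.

```java
public class ClassLoaderRetention {
    public static void main(String[] args) {
        // Any class loaded by an application class loader keeps that loader
        // reachable for as long as the class (or any instance of it) is alive.
        ClassLoader appLoader = ClassLoaderRetention.class.getClassLoader();
        System.out.println(appLoader != null); // application classes have a loader

        // Bootstrap classes such as String report null instead, so they
        // never pin an application class loader.
        System.out.println(String.class.getClassLoader() == null);
    }
}
```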
I am opening this ticket as a follow-up to OpenLiberty/ci.maven#1587, as it was determined that the issue belonged to the Open Liberty runtime rather than the Maven plugin.
Describe the bug
A field such as this:

```java
static final ByteBuffer buf = ByteBuffer.allocate(100_000_000);
```

placed on any @ApplicationScoped (or perhaps just any) class will be leaked on application restart, meaning that with every application restart there will be 100 MB of additional memory used. This is particularly noticeable during development through the Maven plugin, as the application is restarted automatically with every source change.

Steps to Reproduce
From OpenLiberty/ci.maven#1587 (comment):
This will just prove that there are memory leaks of some origin. Additionally, to specifically show the problem with static fields, modify the source and add

```java
static final ByteBuffer buf = ByteBuffer.allocate(100_000_000);
```

to RestApplication.java.

Expected behavior
Memory allocated in static fields should be freed during application teardown.
Diagnostic information:
eclipse-temurin:17.0.4.1_1-jdk-jammy