
Some sort of memory leak or just significant memory growth in 0.11.2 manifesting in Python Poetry #225

Closed
dsalisbury opened this issue Aug 10, 2022 · 12 comments


@dsalisbury

Hey!

This is a spin-off of python-poetry/poetry#6144

I'm running Poetry in a docker build within Bitbucket Pipelines CI (with a memory limit of 1GB applied on the Docker service). I see the container's memory grow and grow before the container gets killed at 1GB of usage. Another person on that issue was seeing the same behaviour in Circle CI.

I've replicated it locally using regular docker run commands too, and narrowed it down to commit 0449c53. My crude test harness (which just installs poetry with custom versions of tomlkit and then tries to install packages) gets killed on 0449c53 but not on the previous commit, f8099e6.

There's some reproduction info here: https://github.com/dsalisbury/poetry-6144

I'm attempting to dig into #212 to see more specifically where the problem arose.

@frostming
Contributor

It is possibly due to lru_cache; using it on methods may lead to memory leaks.
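
For context, here is a minimal sketch of that failure mode, using an illustrative class rather than anything from tomlkit: when lru_cache decorates a method, the shared cache's keys include self, so every instance that ever hits the cache is kept alive.

```python
import functools
import gc
import weakref


class Example:
    # Hypothetical class for illustration. The decorator builds ONE cache, stored
    # alongside the class and shared by all instances; every cache key includes
    # `self`, so cached calls keep their instances alive (the B019 concern).
    @functools.lru_cache(maxsize=None)
    def double(self, value):
        return value * 2


obj = Example()
obj.double(1)
ref = weakref.ref(obj)
del obj
gc.collect()
print(ref() is not None)  # True: the class-level cache still references the instance
```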

@dsalisbury
Author

I reverted the lru_cache changes on a test branch and it made no noticeable change to the memory use.

@dsalisbury
Author

I was wrong; it does appear to have fixed it. I'd left PYTHONTRACEMALLOC set in my test script, and that was causing it to blow through the memory limit regardless.

This very basic revert of the lru_cache stuff (Secrus@aef5000) makes the memory problem go away.
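
As an aside (not from the thread): tracemalloc records a traceback for every live allocation, so running with PYTHONTRACEMALLOC set inflates the process's memory footprint on top of whatever the code under test allocates. A small sketch of how to see that overhead directly:

```python
import tracemalloc

tracemalloc.start()  # same effect as running with PYTHONTRACEMALLOC=1
data = [bytes(64) for _ in range(100_000)]  # stand-in for the real workload

current, peak = tracemalloc.get_traced_memory()   # memory of the tracked allocations
overhead = tracemalloc.get_tracemalloc_memory()   # memory used by tracemalloc itself
print(f"traced allocations: current {current / 2**20:.1f} MiB, peak {peak / 2**20:.1f} MiB")
print(f"tracemalloc bookkeeping: {overhead / 2**20:.1f} MiB")
tracemalloc.stop()
```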

@frostming
Contributor

> I was wrong; it does appear to have fixed it. I'd left PYTHONTRACEMALLOC set in my test script, and that was causing it to blow through the memory limit regardless.
>
> This very basic revert of the lru_cache stuff (Secrus@aef5000) makes the memory problem go away.

We may not need those lru_caches; the method is fast enough. Can you submit a PR for it?

@TNonet

TNonet commented Aug 10, 2022

@dsalisbury Thank you for opening this issue!

Would a workaround be to downgrade to 0.11.1?

EDIT:
Can confirm: downgrading from 0.11.3 to 0.11.1 alleviates the issue.

@radoering
Member

Weird. Shouldn't it be the other way around?

There is even a flake8-bugbear warning about it:

B019: Use of functools.lru_cache or functools.cache on methods can lead to memory leaks. The cache may retain instance references, preventing garbage collection.

I assume we have a special case here and one big cache at class level requires less memory than many little caches at instance level?

@Secrus
Member

Secrus commented Aug 11, 2022

> Weird. Shouldn't it be the other way around?
>
> There is even a flake8-bugbear warning about it:
>
> B019: Use of functools.lru_cache or functools.cache on methods can lead to memory leaks. The cache may retain instance references, preventing garbage collection.
>
> I assume we have a special case here and one big cache at class level requires less memory than many little caches at instance level?

this exact warning was the reason I changed it. It's weird that it behaves this way...

@Secrus
Member

Secrus commented Aug 11, 2022

> > I was wrong; it does appear to have fixed it. I'd left PYTHONTRACEMALLOC set in my test script, and that was causing it to blow through the memory limit regardless.
> > This very basic revert of the lru_cache stuff (Secrus@aef5000) makes the memory problem go away.
>
> We may not need those lru_caches; the method is fast enough. Can you submit a PR for it?

Since it was my change that introduced the leak, I will try to provide a PR for it today.

@frostming
Contributor

> > Weird. Shouldn't it be the other way around?
> >
> > There is even a flake8-bugbear warning about it:
> >
> > B019: Use of functools.lru_cache or functools.cache on methods can lead to memory leaks. The cache may retain instance references, preventing garbage collection.
> >
> > I assume we have a special case here and one big cache at class level requires less memory than many little caches at instance level?
>
> this exact warning was the reason I changed it. It's weird that it behaves this way...

Indeed, the changed version constructs more objects than before: each String object creates several lru_cache instances that are not shared with others, while the previous version created only one cache at the class level.
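
A rough sketch of the two shapes being compared (class and method names are made up, not tomlkit's actual String implementation):

```python
import functools


class SharedCache:
    # Pre-0.11.2 shape: one cache per decorated method, stored with the class and
    # shared by all instances. The trade-off is the B019 concern quoted above.
    @functools.lru_cache(maxsize=None)
    def unwrapped(self):
        return str(self)


class PerInstanceCache:
    # Roughly the shape the discussion attributes to the 0.11.2 change: a fresh
    # lru_cache wrapper is built per instance and per method, so many short-lived
    # objects mean many small caches instead of one shared one.
    def __init__(self):
        self.unwrapped = functools.lru_cache(maxsize=None)(self._unwrapped)

    def _unwrapped(self):
        return str(self)


a, b = SharedCache(), SharedCache()            # a and b share a single cache
c, d = PerInstanceCache(), PerInstanceCache()  # c and d each carry their own caches
```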

@Secrus
Member

Secrus commented Aug 11, 2022

Since two solutions were presented (dropping lru_cache altogether and reverting to the per-method way), I created two PRs: #226 and #227. I leave it up to the maintainers to choose the preferred option.

@frostming
Contributor

Closed by #227

@PabloEmidio

PabloEmidio commented Feb 28, 2023

Wouldn't it be better to yank the affected versions on PyPI?
