Memory leaks with UnitOfWork instances #131
quantus added a commit to quantus/sqlalchemy-continuum that referenced this issue (Mar 18, 2016)
For the love of god, PLEASE release a new version on PyPI. I've been fighting this issue for months, assuming it was fixed. Goddammit.

Done, and sorry for keeping you waiting.

Awesome. I can remove the git dependency in my build process. Thanks a bunch!
Continuum can leak `UnitOfWork` instances stored in the `VersioningManager`. First, the items are never removed from the dictionary in `VersioningManager.clear`. This is simple to fix by adding the line `del self.units_of_work[conn]`.
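A minimal sketch of the leak and the one-line fix. The class below is a hypothetical stand-in mirroring the names in this issue, not the actual sqlalchemy-continuum source:

```python
class UnitOfWork:
    """Stand-in for continuum's per-transaction bookkeeping object."""
    pass

class Manager:
    """Hypothetical sketch of the relevant part of VersioningManager."""
    def __init__(self):
        self.units_of_work = {}  # keyed by connection

    def unit_of_work(self, conn):
        # Create (or reuse) the UnitOfWork for this connection.
        return self.units_of_work.setdefault(conn, UnitOfWork())

    def clear(self, conn):
        # Without this del, entries accumulate for every connection
        # and the UnitOfWork instances are never garbage collected.
        if conn in self.units_of_work:
            del self.units_of_work[conn]  # the fix proposed in this issue

manager = Manager()
conn = object()  # stands in for a SQLAlchemy Connection
manager.unit_of_work(conn)
assert len(manager.units_of_work) == 1
manager.clear(conn)
assert len(manager.units_of_work) == 0  # entry removed, no leak
```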
The other issue is that, when using Continuum with Flask-SQLAlchemy, `VersioningManager.clear` gets called with a `SignallingSession` after the transaction commit phase has completed. `VersioningManager` uses connections as keys to find the active `UnitOfWork` instance in the `units_of_work` dictionary, and when the `SignallingSession`'s bind is an `Engine` instead of a `Connection`, the manager fails to find the right unit of work. Thus the `UnitOfWork` instance never gets removed from the dictionary, even when the `del` statement is added as above.

The second issue doesn't happen when the `Session` is created with a `Connection` as its bind value, but this isn't possible when using Flask-SQLAlchemy.