Migrated issue, originally created by jvanasco (jvanasco)
I ran into an issue while troubleshooting my read-through cache.
In production we use pylibmc; in development we usually use dbm, but sometimes switch to memory, as it is much faster. Our cached data has quite a bit of processing applied to it, and on a recent test run, unwanted per-request data was somehow persisting. I spent a few hours upgrading logging and trying to track down phantom cache "sets".
Then I realized what was going on: we weren't calling a set at all. The data was persisting because of the memory backend. We weren't pulling a stored copy out of a cache; we were getting back the very same object each time.
In order to get our tests to pass and ensure some parity between development and production, I put together a quick "MemoryPickle" backend. I couldn't think of a better way to handle this; it's the memory backend with get/set wrapped in pickle dumps/loads.
This does nothing but ensure that what you get out of the cache contains only the data that was "set". There are probably better ways to handle this, which is why I'm proposing it with a gist rather than a pull request.
This sort of thing could be handled with a wrapper or a custom backend, but I think something like this is very useful and belongs in core, if only because it lets users leverage the speed of a memory backend while keeping the same behavior as external backends (dbm, pylibmc, redis, etc.), and it would require only a one-line config change.
Here's the working copy: https://gist.github.com/jvanasco/7651368
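In outline, the gist amounts to something like the following (a condensed sketch against dogpile.cache's backend API; the `*_multi` methods are omitted for brevity, and the gist itself is the authoritative version):

```python
import pickle

from dogpile.cache.api import CacheBackend, NO_VALUE


class MemoryPickleBackend(CacheBackend):
    """In-memory backend that round-trips every value through pickle,
    so a get() never hands back the same object that was set()."""

    def __init__(self, arguments):
        # same construction hook the stock memory backend uses
        self._cache = arguments.pop("cache_dict", {})

    def get(self, key):
        value = self._cache.get(key, NO_VALUE)
        if value is NO_VALUE:
            return NO_VALUE
        # deserialize a fresh, independent copy on every read
        return pickle.loads(value)

    def set(self, key, value):
        # serialize on write, so later in-place mutations can't leak in
        self._cache[key] = pickle.dumps(value, pickle.HIGHEST_PROTOCOL)

    def delete(self, key):
        self._cache.pop(key, None)
```

Registered via `register_backend()`, something like this drops in anywhere the stock memory backend does.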
I've literally spent all day on a bug here that is essentially the same issue. What I'm doing at the moment, since it's an ORM/Session cache, is loading the objects with a different Session, so that when they get merged into the return value they are distinct from those the default Session would load. Over here I think I'm going to stick with that, since it doesn't have the significant overhead and error-proneness of pickle.
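That workaround reads roughly like this (a sketch only; `Widget`, `engine`, and `request_session` are hypothetical stand-ins for the real application objects):

```python
from sqlalchemy.orm import Session

# Load the cacheable objects with a throwaway Session, so the instances
# that land in the cache are never the ones the request's default
# Session is tracking.  (Widget/engine/request_session are placeholders.)
cache_loader = Session(bind=engine)
cached_objs = cache_loader.query(Widget).all()
cache_loader.expunge_all()  # detach them from the loading Session
cache_loader.close()

# On a cache hit, merge the detached copies into the working Session;
# merge() returns *new* instances bound to request_session.
merged = [request_session.merge(obj, load=False) for obj in cached_objs]
```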
Anyway, I think the most straightforward approach would be to just add a "pickle" flag to the existing "memory" backend. Send me a pullreq.
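For illustration, that flag might surface in configuration like this (the `pickle` argument is only the proposal above, not existing dogpile.cache API):

```python
from dogpile.cache import make_region

region = make_region().configure(
    "dogpile.cache.memory",
    arguments={
        "pickle": True,  # hypothetical flag per the suggestion above
    },
)
```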