Multi-processing aiocache #426
Comments
Interestingly, if I don't use noself=True, Redis cannot find the keys in its cache, since the keys incorporate the objects' addresses, which change on every run; but I no longer get the error.
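The address-in-the-key behaviour is easy to see with plain Python: when the bound method's `self` takes part in key generation, the key embeds the object's default repr, which contains its memory address. A minimal stdlib sketch (key format simplified for illustration, not aiocache's exact builder):

```python
class Scraper:
    """Toy stand-in for the object whose bound method is cached."""

s = Scraper()

# With `self` in the key, str(args) embeds the default repr
# "<...Scraper object at 0x...>"; the address changes every run,
# so keys written by a previous run can never be found again.
key_with_self = f"get_page{(s, 'https://example.com')}"

# With self excluded (what noself=True achieves), the key depends
# only on the URL and is stable across runs.
key_without_self = f"get_page{('https://example.com',)}"

print(key_with_self)     # changes between runs
print(key_without_self)  # stable across runs
```

This is why the cache silently misses on every restart unless `self` is dropped from the key.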
Hey @John-Gee, can you put a small working snippet that reproduces the issue so I can have a look?
Hello, here it is: https://gist.github.com/John-Gee/f93cb05acec1624c9db6df6bbf33effd
I hope it's not still too big; I tried to shorten it as much as I could without losing too much clarity about what I was trying to achieve. A simple
is useful to see whether the error message is in the log or not; it does not show on the terminal by default. I probably should have written this before; versions: In case it matters, all on Linux 64-bit. Thank you, Manuel!
You need to instantiate an aiocache object per process. You cannot share an event loop across a multiprocessing pool. If you move the aiocache creation inside the process code, it will work fine.
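That fix can be sketched with the standard library alone (aiocache itself is omitted; the comment inside `worker` marks where the real cache client would be built): every loop-bound object is created inside the worker process, never inherited from the parent.

```python
import asyncio
import multiprocessing as mp

async def fetch(url):
    # Stand-in for the cached aiohttp call; in the real code the
    # aiocache object would be constructed here, inside this process.
    await asyncio.sleep(0)
    return f"fetched {url}"

def worker(url):
    # Each process builds its own event loop (and its own cache client),
    # instead of inheriting one created in the parent process.
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
        return loop.run_until_complete(fetch(url))
    finally:
        loop.close()

if __name__ == "__main__":
    with mp.Pool(2) as pool:
        print(pool.map(worker, ["https://a.invalid", "https://b.invalid"]))
```

Because nothing loop-bound crosses the process boundary, the "Future attached to a different loop" error cannot occur.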
Yep, that's true. As @crisidev mentions, when decorating a function you have to move the cache creation inside the process code.
Alright, I was wondering if that was the case, but I couldn't figure out how to do it. Thank you! Edit: yup, I tried something quickly and it indeed works. I'm still interested in the question above, though. :) Thanks, guys!
In my wrapper I'm reusing your code as such: (I removed the self part as it's not useful to me, well, for now.) Is this OK with you? It's used in a project under the GNU GPL license, hosted here on GitHub. Thank you!
aiocache just passes the
Yeah, no worries :)
Closing, as the original issue was fixed.
Hello,
this may very well not be an issue but a misconfiguration on my part; I'd appreciate help if that's the case.
I'm using aiocache and aiohttp with Redis, all on the same host.
I have decorated a wrapper around aiohttp.get as such:
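To illustrate the shape of such a wrapper, here is a toy in-memory caching decorator around an async fetch. This is a stand-in only, not the author's code: the real setup used aiocache's @cached decorator with a Redis backend and aiohttp for the request.

```python
import asyncio
import functools

def cached(func):
    # Toy stand-in for aiocache's @cached: memoize results by
    # positional arguments in a plain dict.
    store = {}

    @functools.wraps(func)
    async def wrapper(*args):
        if args not in store:
            store[args] = await func(*args)
        return store[args]

    return wrapper

@cached
async def get_page(url):
    # Stand-in for an aiohttp.ClientSession().get(url) call.
    await asyncio.sleep(0)
    return f"<html for {url}>"

print(asyncio.run(get_page("https://example.com")))
```

The real decorator additionally serializes the value, builds a key from the function name and arguments, and stores it in Redis instead of a dict.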
My problem is that I call this get_page function from different processes in a process pool, each with its own event loop, and either aiocache or Redis seems not to like that, as I get:
2018-11-28 20:03:44,266 aiocache.decorators ERROR Couldn't retrieve get_page('https://www.site.com/')[], unexpected error
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/aiocache/decorators.py", line 124, in get_from_cache
value = await self.cache.get(key)
File "/usr/lib/python3.7/site-packages/aiocache/base.py", line 61, in _enabled
return await func(*args, **kwargs)
File "/usr/lib/python3.7/site-packages/aiocache/base.py", line 44, in _timeout
return await func(self, *args, **kwargs)
File "/usr/lib/python3.7/site-packages/aiocache/base.py", line 75, in _plugins
ret = await func(self, *args, **kwargs)
File "/usr/lib/python3.7/site-packages/aiocache/base.py", line 192, in get
value = loads(await self._get(ns_key, encoding=self.serializer.encoding, _conn=_conn))
File "/usr/lib/python3.7/site-packages/aiocache/backends/redis.py", line 24, in wrapper
return await func(self, *args, _conn=_conn, **kwargs)
File "/usr/lib/python3.7/site-packages/aiocache/backends/redis.py", line 100, in _get
return await _conn.get(key, encoding=encoding)
RuntimeError: Task <Task pending coro=<func() running at file.py:88>> got Future attached to a different loop.
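The final RuntimeError can be reproduced with the standard library alone: create a Future on one event loop and await it from a task running on another. This is exactly what happens when a connection created under the parent's loop is used from a child process's loop.

```python
import asyncio

async def make_future():
    # Create a Future bound to whichever loop is currently running.
    return asyncio.get_running_loop().create_future()

async def await_it(fut):
    return await fut

# A Future created on one loop...
loop_a = asyncio.new_event_loop()
fut = loop_a.run_until_complete(make_future())

# ...and awaited from a task on another loop raises the same error.
loop_b = asyncio.new_event_loop()
try:
    loop_b.run_until_complete(await_it(fut))
except RuntimeError as e:
    print(e)  # "Task ... got Future ... attached to a different loop"
finally:
    loop_a.close()
    loop_b.close()
```

asyncio detects the mismatch when the task's loop compares itself against the Future's loop, which is why the traceback surfaces deep inside the Redis backend's `_get`.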
Here's how I setup each new loop in the sub processes:
Thank you!
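One common pattern for giving each pool worker its own event loop is a per-process initializer. This is a generic stdlib sketch, not necessarily the setup used in this thread:

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor

_loop = None

def init_worker():
    # Runs once in every worker process: each gets a private event loop.
    global _loop
    _loop = asyncio.new_event_loop()
    asyncio.set_event_loop(_loop)

async def fetch(url):
    # Stand-in for the cached aiohttp call.
    await asyncio.sleep(0)
    return url.upper()

def run_fetch(url):
    return _loop.run_until_complete(fetch(url))

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=2, initializer=init_worker) as ex:
        print(list(ex.map(run_fetch, ["a", "b"])))
```

The key point is that the loop (and any cache client) is created after the fork, inside the worker, so no loop-bound state is shared with the parent.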