multiprocessing module issues #206

Closed
77cc33 opened this Issue Mar 29, 2013 · 9 comments


77cc33 commented Mar 29, 2013

I get some weird crashes when trying to read/write manager (proxy) objects
between worker processes and the parent process (http://docs.python.org/2/library/multiprocessing.html#proxy-objects).

Traceback (most recent call last):
  File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "./run_server.py", line 62, in watchdog
    del (shared_banned_ips[:]) # clean array, but not delete it, as it's linked to external list via proxy
  File "<string>", line 2, in __delslice__
  File "/usr/lib/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
RemoteError: 
---------------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/multiprocessing/managers.py", line 242, in serve_client
    obj, exposed, gettypeid = id_to_obj[ident]
KeyError: '1d1f878'

and a lot of such errors, coming non-stop.

It happens only when uWSGI is reloaded; when it is restarted instead, everything works fine.

Do you have any idea?
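For reference, a rough sketch of the kind of setup described above; the names watchdog and shared_banned_ips come from the traceback, everything else is assumed:

    from multiprocessing import Manager, Process
    import time

    def watchdog(shared_banned_ips):
        while True:
            # clear the proxied list in place so it stays linked to the parent's list
            del shared_banned_ips[:]
            shared_banned_ips.extend(["10.0.0.1", "10.0.0.2"])  # placeholder for real updates
            time.sleep(10)

    if __name__ == "__main__":
        manager = Manager()                  # starts a separate manager server process
        shared_banned_ips = manager.list()   # proxy object usable from child processes
        Process(target=watchdog, args=(shared_banned_ips,)).start()

The KeyError inside serve_client suggests the proxy points at an object id that no longer exists in the manager's server process, which would fit the manager being lost across a uWSGI reload.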

Owner

unbit commented Mar 29, 2013

It is not easy to give you an answer, as both multiprocessing and uWSGI abuse fork() features. I can only suggest adding --lazy-apps to avoid forking too early in uWSGI (which can make a mess if it is not taken into account).
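For reference, a minimal ini sketch of that suggestion (module name and worker count are assumptions):

    [uwsgi]
    module = run_server      # assumed WSGI module
    master = true
    processes = 4
    lazy-apps = true         # load the app in each worker after fork(), not once in the master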

77cc33 commented Mar 29, 2013

Unfortunately, that doesn't work for me.

I need a thread inside the uWSGI master process that looks for external data updates and updates local variables for each worker. So I need one process and an easy way to share data between that process and the workers (big list objects, about 5 MB each).

So lazy mode will not work for this.

All I can see now is to start, in a post-fork hook, a thread for each worker that checks uwsgi.cache from time to time for updates and, if the data has changed, updates local variables inside the worker. I would also need a separate process attached with "attach-daemon" that checks for remote updates and writes them to the cache.

But maybe there is an easier solution for exchanging data between threads in the master process and the workers?

P.S.: Also, will attach-daemon restart the daemon if it dies?

Owner

unbit commented Mar 29, 2013

Ah OK, so you were using the multiprocessing model but with a thread pool. Yes, in that case postfork is a good approach (remember to add --enable-threads). The easiest and fastest way to share data is the uWSGI caching framework (I know the name is misleading, but it is basically a thread-safe shared dictionary).
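A rough sketch of that approach, assuming --enable-threads is set, a cache is configured, and the banned IPs live under an assumed key as a comma-separated string:

    import threading
    import time

    import uwsgi
    from uwsgidecorators import postfork

    local_banned_ips = []  # per-worker copy, refreshed from the shared uWSGI cache

    def refresh_loop():
        while True:
            raw = uwsgi.cache_get("banned_ips")  # key name is an assumption
            if raw is not None:
                local_banned_ips[:] = raw.split(",")
            time.sleep(10)

    @postfork
    def start_refresh_thread():
        # runs in every worker right after fork; keep the polling thread as a daemon
        t = threading.Thread(target=refresh_loop)
        t.daemon = True
        t.start()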

77cc33 commented Mar 29, 2013

OK, but I need to access uwsgi.cache from some external script.

Or maybe I can start a thread in the master process? Is that a good idea?

Owner

unbit commented Mar 29, 2013

If those external scripts are in Python, you can attach them as mules; that way they can access the uWSGI API (and so the cache).

By the way, processes attached with the --attach-daemon-* options are automatically respawned if they die.
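For reference, both could be wired up in the ini config along these lines (script names are assumptions):

    [uwsgi]
    # Python script run inside uWSGI; it can import the uwsgi module and use the cache
    mule = update_checker.py
    # external process managed by uWSGI; respawned automatically if it dies
    attach-daemon = python external_updater.py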

77cc33 commented Mar 29, 2013

thanks

Seems the cache will not work either:
ValueError: uWSGI cache items size must be < 65536, requested 13042065 bytes

Looks like I need some other way to exchange data.

Does uWSGI have something for such big volumes of data, or should I use some external tool?

Owner

unbit commented Mar 29, 2013

You can tune it as you want (check the docs), but use the cache from 1.9; it is a lot better (and tunable). From your exception it looks like you are using 1.4.

Contributor

prymitive commented Mar 29, 2013

And "cache from 1.9" means using --cache2 option. See the docs

77cc33 commented Mar 30, 2013

thanks

@77cc33 77cc33 closed this Mar 30, 2013
