disable patcher.monkey_patch #209

Closed
imazor opened this issue Mar 5, 2015 · 8 comments


imazor commented Mar 5, 2015

Is there a way to disable the monkey patch at run time?
For example, I need to use monkey_patch(all=True) for one part of the program,
but I don't want it applied to other parts of my program, since it affects other things in ways I don't need.

I thought that using monkey_patch(all=False), or even specifying the exact module that needs to be "unpatched" with monkey_patch(socket=False), would help, but checking with
patcher.is_monkey_patched('socket') still shows that it's patched.


temoto commented Mar 5, 2015

There is no reliable way to do "not be applied for other parts of my program". It's very rare that you control all points in code and their order of execution when something may import something else. And once things are imported, they tend to stay in sys.modules for good.

If you really want to shoot yourself in the foot, here's how to disable the monkey_patch function:

import eventlet
import eventlet.patcher
noop = lambda *a, **kw: None
eventlet.monkey_patch = eventlet.patcher.monkey_patch = noop

Though most likely, you need socket_original = eventlet.patcher.original('socket').
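For illustration, a minimal sketch of that approach (the host and port below are placeholders, not anything from this issue):

import eventlet
eventlet.monkey_patch()

from eventlet import patcher

# patcher.original() returns the module as it was before monkey-patching,
# so calls made through it block normally instead of yielding to the hub.
socket_original = patcher.original('socket')

sock = socket_original.socket(socket_original.AF_INET, socket_original.SOCK_STREAM)
sock.settimeout(5)
sock.connect(('example.com', 80))  # placeholder host/port
sock.close()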


imazor commented Mar 5, 2015

Well, I definitely do not want to shoot myself in the foot )))
I will try to elaborate on my problem; maybe you will have another idea for me.

I am using code from the following answer:
http://stackoverflow.com/questions/4720735/fastest-way-to-download-3-million-objects-from-a-s3-bucket
in order to download files from an S3 bucket in parallel.
In the answer they use patcher.monkey_patch(all=True).
After I am done downloading the files from S3, I process them in parallel using pathos.multiprocessing, which is an improved version of the original multiprocessing (it uses a more efficient serialization method than pickle).
I am also using manager.Queue() from multiprocessing in order to coordinate the file processing.
At this point my program looks frozen. When I skip the patcher.monkey_patch(all=True), the multiprocessing part works as usual.



temoto commented Mar 5, 2015

That's an eventlet-side problem. There is a not-fully-understood incompatibility between eventlet and multiprocessing. They rarely occur together, so it's not a top priority, sorry. You may try the following:

In any case, minimal code to reproduce the problem is greatly appreciated.


imazor commented Mar 5, 2015

The first solution, thread=False, I already tried.
I am getting this error:

Traceback (most recent call last):
  File "run_adjust_etl.py", line 109, in <module>
    a = AdjustParser(output_dir=PARSED_DATA_OUTPUT_DIR, adjust_log_fp=f, del_old_csv=del_old_csv, print_log_msg=False)
  File "/home/imazor/foodpanda_dwh/adjust_etl/adjust_logs_parser.py", line 46, in __init__
    self.parsed_csv_files = self.__parse_adjust_events()
  File "/home/imazor/foodpanda_dwh/adjust_etl/adjust_logs_parser.py", line 329, in __parse_adjust_events
    queues[os_type] = manager.Queue()
  File "/usr/lib/python2.7/multiprocessing/managers.py", line 667, in temp
    token, exp = self._create(typeid, *args, **kwds)
  File "/usr/lib/python2.7/multiprocessing/managers.py", line 565, in _create
    conn = self._Client(self._address, authkey=self._authkey)
  File "/usr/lib/python2.7/multiprocessing/connection.py", line 175, in Client
    answer_challenge(c, authkey)
  File "/usr/lib/python2.7/multiprocessing/connection.py", line 428, in answer_challenge
    message = connection.recv_bytes(256)         # reject large message
IOError: [Errno 11] Resource temporarily unavailable

I am still using Python 2.7, so the third solution will not help either.
Regarding the second solution, I thought about it already; it is really
something that I don't like, but it looks like I don't have any other choice at
the moment.

BTW, the solution

import eventlet
import eventlet.patcher
noop = lambda *a, **kw: None
eventlet.monkey_patch = eventlet.patcher.monkey_patch = noop

is not helping either (



temoto commented Mar 5, 2015

Yeah, disabling monkey_patch is not supposed to help, because most likely all relevant modules have been imported already. Further foot-shooting options would be to try to revert monkey_patch by cleaning sys.modules, but that's a road to hell, really.

I just tried a trivial example and it works. So your code would help.

import eventlet
eventlet.monkey_patch(thread=False)
import multiprocessing
def fun():
  print('hello')
p = multiprocessing.Process(target=fun)
p.start()
p.join()


imazor commented Mar 5, 2015

OK, following your example, I have tried to create something similar to what I have.

import eventlet
eventlet.monkey_patch(thread=False)
import multiprocessing as mp
from multiprocessing import Manager

def foo(x):
    print x * x * x

pool = mp.Pool(3)
results = pool.map(foo, range(10))
manager = Manager()
q = manager.Queue()

Without the manager.Queue() everything works fine.
However with it, I am getting:
Traceback (most recent call last):
  File "test.py", line 13, in <module>
    q = manager.Queue()
  File "/usr/lib/python2.7/multiprocessing/managers.py", line 667, in temp
    token, exp = self._create(typeid, *args, **kwds)
  File "/usr/lib/python2.7/multiprocessing/managers.py", line 565, in _create
    conn = self._Client(self._address, authkey=self._authkey)
  File "/usr/lib/python2.7/multiprocessing/connection.py", line 175, in Client
    answer_challenge(c, authkey)
  File "/usr/lib/python2.7/multiprocessing/connection.py", line 428, in answer_challenge
    message = connection.recv_bytes(256)         # reject large message
IOError: [Errno 11] Resource temporarily unavailable
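For completeness, a sketch of one blunt way around this kind of clash (not something proposed in this thread, and worker.py is a hypothetical script name): keep the multiprocessing-heavy step in a separate Python interpreter that never imports eventlet, so Pool, Manager, and Queue run on unpatched primitives there.

# main.py -- eventlet-patched side (illustrative sketch only)
import subprocess
import sys

import eventlet
eventlet.monkey_patch()

# ... green-threaded work (e.g. the parallel S3 downloads) happens here ...

# Hand the multiprocessing-heavy step to a fresh interpreter. Since that
# process never imports eventlet, nothing in it is monkey-patched.
ret = subprocess.call([sys.executable, 'worker.py'])  # worker.py is hypothetical
print('worker exited with status %d' % ret)

Data can be passed to such a worker through files or command-line arguments; the price is losing in-process sharing between the two halves.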



temoto commented Mar 5, 2015

Thank you, I've found what's going on and opened a specific issue: #210

I can't provide a working patch right now; I will keep you updated, though.

temoto closed this as completed Mar 5, 2015
@shettyritesh

Hello,
Is there any solution to this? I am facing this issue too. My user story is to listen on RabbitMQ using oslo.messaging, which in turn uses eventlet. Once a message is received, I use Ansible to process the request, which uses multiprocessing. I get the exact same error as shown above.
