
Threaded callbacks? #1087

Closed
jcelerier opened this issue Sep 15, 2017 · 7 comments

Comments

@jcelerier
Contributor

The documentation briefly mentions that threading is possible using "Python threads".

But given a C++ library that has its own threads, is it possible to interoperate with pybind11? For instance, a network library has callbacks that execute in their own threads: how safe is it to pass a Python function in place of such a callback?

@jagerman
Member

The main issue is that any time you go from C++ code back to Python (whether explicitly calling a Python function, or implicit operations such as creating new Python variables), you have to acquire the GIL. So as long as each wrapper function that the external library calls does this (by keeping a gil_scoped_acquire instance alive for the duration of the Python interaction), everything should be fine.
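
A minimal sketch of what such a wrapper can look like (the names here are made up for illustration, not taken from your code):

```cpp
#include <pybind11/pybind11.h>
namespace py = pybind11;

// Called by the external library on one of its own worker threads.
void on_data_from_library_thread(py::object py_callback, int value) {
    py::gil_scoped_acquire gil;  // take the GIL before touching any Python object
    py_callback(value);          // safe: the GIL is held for the whole interaction
}                                // GIL is released when `gil` goes out of scope
```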

@yesint
Contributor

yesint commented Sep 16, 2017

This may make little sense in terms of performance, because at most one piece of Python code can execute in a process at any time due to the GIL anyway.

@jcelerier
Contributor Author

So as long as each wrapper function that the external library calls does this (by keeping a gil_scoped_acquire instance alive for the duration of the Python interaction), everything should be fine.

I've tried adding gil_scoped_acquire at the beginning of my C++ callbacks (which in turn call the Python callbacks), but then I get a deadlock; it seems the lock is acquired again in the Python callback.

This may make little sense in terms of performance, because at most one piece of Python code can execute in a process at any time due to the GIL anyway.

Yes, this is not for the sake of performance but because I have an already-threaded API.

@jagerman
Member

jagerman commented Sep 16, 2017

The deadlock is probably happening because the callback is invoked from code that was itself called from Python via a pybind-registered function: such a function already holds the GIL and needs a gil_scoped_release instance to release it before invoking the threaded code that fires the callback.
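
Roughly, something like this sketch (run_threaded_loop is a made-up stand-in for the blocking, threaded library call):

```cpp
#include <pybind11/pybind11.h>
#include <functional>
namespace py = pybind11;

// Hypothetical threaded C++ API that blocks and fires `cb` from worker threads.
void run_threaded_loop(const std::function<void()>& cb);

void start_and_wait(py::object py_callback) {
    auto cb = [py_callback] {        // copy taken while the caller still holds the GIL
        py::gil_scoped_acquire gil;  // each worker-thread callback re-acquires the GIL
        py_callback();
    };
    py::gil_scoped_release release;  // drop the GIL held by the Python caller
    run_threaded_loop(cb);           // block in C++ without deadlocking the callbacks
}
```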

@yesint
Contributor

yesint commented Sep 17, 2017

@jcelerier it would be useful to see a minimal code example. In my project, running Python code in different C++ threads works nicely with appropriately placed gil_scoped_* guards, but it may be tricky to set them correctly if you have a complex interplay between threads. I also got deadlocks many times before I managed to do this correctly.

@bcumming

bcumming commented Jan 5, 2018

I had deadlocks when C++ threads calling Python code attempted to acquire the GIL using gil_scoped_acquire. As @jagerman mentions, my problem was that the thread was launched by a C++ function called from Python, which still held the GIL. I added the following guard policy to the call: pb::call_guard<pb::gil_scoped_release>(), so that the calling thread releases the GIL, and the deadlock was fixed.
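
A stripped-down sketch of that kind of binding (the names are placeholders, not my actual code):

```cpp
#include <pybind11/pybind11.h>
namespace py = pybind11;

void launch_worker_threads();  // hypothetical C++ entry point that spawns threads

PYBIND11_MODULE(example, m) {
    // call_guard releases the GIL around the bound call, so the threads it
    // spawns can later take it with gil_scoped_acquire without deadlocking.
    m.def("launch", &launch_worker_threads,
          py::call_guard<py::gil_scoped_release>());
}
```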

@YannickJadoul
Collaborator

@jcelerier it would be useful to see a minimal code example. In my project, running Python code in different C++ threads works nicely with appropriately placed gil_scoped_* guards, but it may be tricky to set them correctly if you have a complex interplay between threads.

Agreed. This is hard to judge or debug without a reproducer. Thanks, @yesint! Closing. Please reopen if this is still an issue and you want to discuss it further with some concrete code.
