
Considering acquiring a lock by blocking or nonblocking, call greenlet switch once. #1464

ko-han opened this issue Sep 24, 2019 · 1 comment



@ko-han ko-han commented Sep 24, 2019

  • gevent version: 1.4.0
  • Python version: python3.7 on MacOS
  • Operating System: Darwin Kernel Version 18.7.0


Recently, I found that the following code doesn't run well with gevent's monkey patching.

import gevent.monkey; gevent.monkey.patch_all()

import time
import threading

mutex = threading.Lock()
g = 0

def f():
    global g
    while True:
        if not mutex.acquire(blocking=False):
            # the greenlet never yields here
            if g != 0: return
            else: continue
        try:
            if g != 0: return
            time.sleep(1)
            g += 1
        finally:
            mutex.release()

tasks = []
for i in range(2):
    t = threading.Thread(target=f)
    t.start()
    tasks.append(t)
for t in tasks:
    t.join()

The process never exits. I wondered why this happened and found the answer here:

        if not blocking:
            return False    # do not switch greenlet if not blocking

        success = self._wait(timeout)

When acquiring the lock in non-blocking mode, the function just returns, so the loop never stops.
So I changed the code above to the following, forcing a greenlet switch. Everything works fine.

        if not blocking:
            success = self._wait(0)
            return success

        success = self._wait(timeout)

That solved the problem. So I'm wondering if we could switch greenlets even when blocking=False is passed to the acquire call.
I can open a pull request; the decision is up to you.


@jamadden jamadden commented Dec 5, 2019

I've been thinking about this quite a bit.

The example code is using busy waiting (a spin lock): constantly testing in a tight loop whether it can acquire the lock. Spin locks are useful for avoiding the overhead of context switching when the lock is mostly uncontended and/or the lock is held for very short periods of time (less than the time it takes to context switch).

If the lock is contended, they only work if the holder of the lock can make forward progress and release the lock so that the waiter can acquire it. That's simplest with true native threads running in parallel at the operating system level, but it can be accomplished in other ways (as we'll see).

In CPython 3 (and I think PyPy3), acquiring a lock with blocking=False does not release the GIL. That means that if a thread starts looping, it's going to spend its entire time quantum (switch interval) looping. The thread holding the lock won't get to run until the time quantum elapses. So a spin lock is pretty self-defeating in terms of efficiency, but the interpreter-level "preemption" will eventually allow the lock to be released and everyone to make progress.

On CPython 2 (and I think PyPy2), acquiring a lock with wait=False does release the GIL. In principle, then, the holder could release it and let it immediately be acquired by the waiter. But still, the waiter won't get a chance to run again until the time quantum elapses (check interval) and the GIL is released. And if there are other threads waiting to acquire the GIL, it could be several such intervals before this thread gets to run. So again, pretty self-defeating, but it does work. Actually, it's even more self-defeating than on Python 3 because we still have the overhead of dropping and acquiring the GIL at the operating system level, exactly what a spin lock was intended to avoid.
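The preemption described above can be seen with plain stdlib threads (no monkey-patching): the spinner below burns its time quantum, but the interpreter eventually switches to the holder, which releases the lock and lets the loop terminate. A minimal sketch:

```python
import threading

lock = threading.Lock()
order = []

def holder():
    with lock:
        # Hold the lock briefly; the spinner cannot acquire it yet.
        for _ in range(100_000):
            pass
    order.append("holder")

def spinner():
    # Busy-wait with a non-blocking acquire. Native-thread preemption
    # (the switch interval on Python 3) eventually lets holder() run
    # and release the lock, so this loop terminates.
    while not lock.acquire(blocking=False):
        pass
    lock.release()
    order.append("spinner")

t1 = threading.Thread(target=holder)
t2 = threading.Thread(target=spinner)
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(order))  # ['holder', 'spinner']
```

Replace the `import threading` with a `gevent.monkey.patch_all()` preamble and the spinner loop never yields, which is exactly the hang reported above.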

It looks a bit less silly, especially on Python 3, if the loop does more than just spin tightly. Imagine that it wants to publish results as it computes them, but it can do so in batches and can buffer the results between iterations:

def worker(work_items, result_lock, result_destination):
    results = []
    for item in work_items:
        results.append(process_item(item))
        if result_lock.acquire(blocking=False):
            try:
                result_destination.publish(results)
                del results[:]
            finally:
                result_lock.release()
    # Anything left goes now
    if results:
        with result_lock:
            result_destination.publish(results)

In this way, it gets to put its whole quantum to beneficial use.
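Here is a self-contained toy run of that batching pattern under plain threads, with hypothetical `process_item` and `publish` stubs standing in for real work:

```python
import threading

# Hypothetical stub for the real per-item work.
def process_item(item):
    return item * 2

# Hypothetical stub for the real result sink.
class Destination:
    def __init__(self):
        self.published = []

    def publish(self, batch):
        self.published.extend(batch)

def worker(work_items, result_lock, result_destination):
    results = []
    for item in work_items:
        results.append(process_item(item))
        # Opportunistic flush: publish only if the lock is free right now.
        if result_lock.acquire(blocking=False):
            try:
                result_destination.publish(results)
                del results[:]
            finally:
                result_lock.release()
    # Anything buffered at the end must go out, even if we have to wait.
    if results:
        with result_lock:
            result_destination.publish(results)

dest = Destination()
lock = threading.Lock()
threads = [threading.Thread(target=worker, args=(range(i * 5, (i + 1) * 5), lock, dest))
           for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(dest.published))  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

With native threads every item is published regardless of lock contention; under unpatched-yielding greenlets the opportunistic flushes would simply fail and everything would fall through to the final blocking publish.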

Under (monkey-patched) gevent, as noted here, the tight spin loop spins forever because the holder never has a chance to run to release the lock. The more complex example could easily wind up with one worker in result_destination.publish() holding the lock and all the remaining workers buffering every result and running serially, if publish() yields but process_item() does not. Both outcomes are undesired and unexpected.

One of gevent's benefits (when not monkey-patched) is that without arbitrary preemption (the purely cooperative model) it's easier to reason about code and concurrency. Monkey-patching does give that up to an extent, though.

So with all of that said, I think a good path forward that keeps the best of both worlds is:

  • Leave Semaphore unchanged. This keeps the gevent API and behaviour predictable and compatible.
  • Make gevent.thread.LockType, what's monkey-patched into threading.Lock, invoke gevent.sleep() if it didn't block and didn't acquire the lock. In the worst case scenario (no callbacks to run and the greenlet holding the lock is blocked on IO), that gets us matching the Python 3 behaviour of limiting the wasted spin time to the quantum. In the best case scenario (the greenlet holding the lock is ready to run in the callback queue) that greenlet immediately re-gains control, releases the lock, and then hands control back over to the waiting greenlet.
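The second bullet can be sketched in isolation. This is not gevent's actual source; the semaphore and sleep hooks are injected stand-ins so the yield-on-failure behaviour is visible on its own:

```python
# Sketch of the proposed fix (assumed names, not gevent's real code):
# a lock whose failed non-blocking acquire cooperatively yields once,
# giving the greenlet that holds the lock a chance to run and release it.
class SpinFriendlyLock:
    def __init__(self, semaphore, sleep):
        self._sem = semaphore   # stands in for gevent's underlying Semaphore
        self._sleep = sleep     # stands in for gevent.sleep

    def acquire(self, blocking=True, timeout=None):
        acquired = self._sem.acquire(blocking, timeout)
        if not acquired and not blocking:
            # Failed non-blocking acquire: yield control once so the
            # lock holder (or other ready callbacks) can make progress.
            self._sleep(0)
        return acquired

class _HeldSemaphore:
    """Toy semaphore that is currently held by someone else."""
    def acquire(self, blocking=True, timeout=None):
        return False

yields = []
lock = SpinFriendlyLock(_HeldSemaphore(), yields.append)
assert lock.acquire(blocking=False) is False
print(len(yields))  # 1: we yielded exactly once after the failed attempt
```

The key design point is that only the monkey-patched lock changes: gevent's own Semaphore keeps returning immediately, so code written directly against the gevent API sees no new switch points.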

I'll work on that.

@jamadden jamadden self-assigned this Dec 5, 2019
@jamadden jamadden closed this in 06ede5d Dec 6, 2019
jamadden added a commit that referenced this issue Dec 6, 2019
Fix #1464 and fix #1465 by making LockType sleep on failure to non-blocking acquire the lock.