Threading.current_thread does not change when using greenthreads? #172

Closed · harlowja opened this issue Dec 3, 2014 · 15 comments


harlowja commented Dec 3, 2014

It appears that threading.current_thread() does not change when the current greenthread changes. Was this intentional?

Here is an example that builds a simple latch keyed on current_thread(); because of this behavior it doesn't work when the threading module is monkey patched:

import eventlet
eventlet.monkey_patch()

import threading
import time

class CrappyLatch(object):
    def __init__(self):
        self._wait_for = []
        self._cond = threading.Condition()

    def append(self, identity):
        self._wait_for.append(identity)
        return identity

    def check_in(self, identity):
        with self._cond:
            try:
                self._wait_for.remove(identity)
            except ValueError:
                pass
            self._cond.notify_all()

    def wait(self):
        with self._cond:
            while self._wait_for:
                self._cond.wait()


def thing_to_do(latch, name):
    # With the monkey patch active, every worker prints (and checks in as)
    # the same _MainThread object, so the latch never empties and wait() hangs.
    print("%s: %s" % (name, threading.current_thread()))
    time.sleep(0.5)
    latch.check_in(threading.current_thread())
    latch.wait()



latch = CrappyLatch()
threads = []
for i in range(0, 5):
    # Each worker is registered with the latch under its Thread object, which
    # is what current_thread() should return inside that worker.
    threads.append(latch.append(threading.Thread(target=thing_to_do,
                                                 args=(latch, i))))
for t in threads:
    t.start()
while threads:
    t = threads.pop()
    t.join()

When run with eventlet monkey patching in place, the following (or similar) is output, because current_thread() is not updated per greenthread...

0: <_MainThread(MainThread, started -136735040)>
1: <_MainThread(MainThread, started -136735040)>
2: <_MainThread(MainThread, started -136735040)>
3: <_MainThread(MainThread, started -136735040)>
4: <_MainThread(MainThread, started -136735040)>

This is obviously not right (since those are supposed to be different threads).

If instead you comment out the following two lines then this will work as expected:

#import eventlet
#eventlet.monkey_patch()

harlowja commented Dec 4, 2014

Another example that has a similar issue:

https://review.openstack.org/#/c/138217/8/oslo_concurrency/lockutils.py (the ReaderWriterLock created there).

@vstinner (Contributor)

threading.get_ident() is patched, but not threading.current_thread().

@vstinner (Contributor)

> threading.get_ident() is patched, but not threading.current_thread().

I was wrong: I looked at eventlet/green/thread.py (thread), but I do see a current_thread() function in eventlet/green/threading.py (threading).
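
A minimal sketch of the asymmetry described here, assuming eventlet's monkey patch in the affected versions (not code from this thread): get_ident() is patched per greenthread, while current_thread() was not being updated.

import eventlet
eventlet.monkey_patch()

import threading

def report(name):
    # get_ident() is patched and yields a distinct value per greenthread;
    # current_thread() reportedly returned _MainThread for every worker
    # in the eventlet versions this issue was filed against.
    print(name, threading.get_ident(), threading.current_thread())

workers = [threading.Thread(target=report, args=(i,)) for i in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()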


jaketesler commented Apr 7, 2018

I may have found a super hacky fix to this issue. I want to emphasize that this is THOROUGHLY UNTESTED (and by that I mean it works for me, but no guarantees to anyone else).

The code you have to change is in eventlet/eventlet/green/threading.py in the current_thread() function.

Lines 102-104 read as follows:

try:
    t = active[id(g)]
except KeyError:
...

Replace those lines (102-104) with the following code block:

try:
    if id(g) in active:
        t = active[id(g)]
    else:
        all_threads = patcher.patch_function(__import__('threading').enumerate)()
        all_idents = {th.ident: th for th in all_threads}
        t = all_idents[id(g)]

except KeyError:
...

You'll also need to comment out line 26 to be able to reuse patcher.

The _MainThread issue appears to arise from the active[] list never being populated with new eventlet-spawned threads (monkey-patched threading.Thread objects). Furthermore, I was unable to get current_thread() to return correct results from threading.enumerate() unless the enumerate() function was (a) imported at runtime using the gross __import__() call and (b) hot-patched using patch_function().

AGAIN, I would like to emphasize that this is IN NO WAY a permanent fix, but as far as my testing goes, it seems to be working for my setup.


temoto commented Apr 7, 2018

@jaketesler thank you.


strigazi commented Apr 10, 2018

It works for me; I tested it in an OpenStack project (OpenStack/Magnum) with the kubernetes client, which uses multiprocessing. #147

temoto added a commit that referenced this issue Apr 14, 2018
temoto added a commit that referenced this issue Apr 15, 2018

temoto commented Apr 15, 2018

@jaketesler @strigazi @harlowja Please, try this patch.

pip install https://github.com/eventlet/eventlet/archive/1d6d8924a9da6a0cb839b81e785f99b6ac219a0e.zip
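
One way to sanity-check the patch, as a minimal sketch (not from this thread, assumes the monkey patch is applied): every worker should now report a distinct Thread object instead of _MainThread.

import eventlet
eventlet.monkey_patch()

import threading

seen = set()

def record():
    # With the fix, each green Thread sees itself here instead of _MainThread.
    seen.add(threading.current_thread())

workers = [threading.Thread(target=record) for _ in range(5)]
for w in workers:
    w.start()
for w in workers:
    w.join()

# Expect five distinct thread objects, none of them the caller's thread.
assert len(seen) == 5, seen
assert threading.current_thread() not in seen, seen
print("looks good:", seen)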


strigazi commented Apr 16, 2018

Works for me :) with oslo.service, so I expect other OpenStack services not to be affected.

@strigazi

@temoto Should we take this [0] in? Do you want to do more tests?

[0] 1d6d892


temoto commented Apr 24, 2018

@strigazi Yes, sorry for the delay, I'll merge it.

@strigazi

@temoto You will do a release too? xD


aeon0 commented Apr 27, 2018

This also solved my issues, just to add another confirmation! Way to go fixing a >3-year-old issue ;) Thanks. A release would be cool.


strigazi commented May 4, 2018

@temoto Apologies for pinging again. Eventlet is a very important dependency for OpenStack projects, but not all projects suffer from this bug. The project I'm working on needs this fix, and I want to "sell" the release bump well before the next OpenStack release. Can we have a release? :)


temoto commented May 4, 2018 via email

temoto added this to the 0.23 milestone May 6, 2018

temoto commented May 6, 2018

It's 0.23 on PyPI.

temoto closed this as completed May 6, 2018
ShilpaDevharakar added a commit to ShilpaDevharakar/fasteners-1 that referenced this issue Apr 16, 2019
Issue [1] was resolved/addressed in eventlet 0.23, and OpenStack currently uses eventlet 0.24, so the workaround code is being removed.

[1]: eventlet/eventlet#172
openstack-mirroring pushed a commit to openstack/oslo.concurrency that referenced this issue Sep 6, 2022
The fasteners lib in version 0.15.0 removed the
threading.current_thread workaround for eventlet [1] because eventlet
seemed to have fixed the current_thread issues tracked in [2]. However,
the fix for [2] only solved half of the problem: the
threading.current_thread call works if it is called from a thread
created by eventlet.spawn, but if the thread is created with
eventlet.spawn_n then threading.current_thread is still broken and
returns the native python thread.

fasteners' ReaderWriterLock depends heavily on
threading.current_thread to decide which thread holds a lock and to
allow that thread to re-enter it. This leads to a situation where
multiple threads created from spawn_n could take the same
ReaderWriterLock at the same time.

The fair internal lock in oslo.concurrency uses ReaderWriterLock, and
as a result that lock is broken for threads created with spawn_n.

Note that this issue was raised with eventlet in [3] when the nova team
detected it via direct usage of ReaderWriterLock in the nova test
code. As [3] did not lead to a solution in eventlet, nova implemented a
local fix for its test code in [4].

However, we have now detected that oslo.concurrency is affected by this
issue as well.

This patch adds tests to show the scope of the problem.

Note that the coverage tox target is changed to explicitly enable native
threading; otherwise it runs eventlet-specific tests in a native
environment.

Also note that [5] was opened to reintroduce the workaround [1] in fasteners.

[1] harlowja/fasteners@467ed75
[2] eventlet/eventlet#172
[3] eventlet/eventlet#731
[4] https://review.opendev.org/c/openstack/nova/+/813114
[5] harlowja/fasteners#96

Related-Bug: #1988311
Change-Id: Ibc193c855b49b95b46ebd2aac82ea89e33f885f0
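
A minimal sketch of the spawn vs. spawn_n asymmetry described in this commit message (not part of the commit; assumes an eventlet release with the 0.23 fix and monkey patching applied):

import eventlet
eventlet.monkey_patch()

import threading

def report(label):
    print(label, threading.current_thread())

# Greenthread created via spawn: current_thread() reports a per-thread object.
eventlet.spawn(report, "spawn:").wait()

# Greenthread created via spawn_n: current_thread() reportedly still falls
# back to the native (main) thread.
eventlet.spawn_n(report, "spawn_n:")
eventlet.sleep(0.1)  # give the fire-and-forget greenthread a chance to run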
openstack-mirroring pushed a commit to openstack/openstack that referenced this issue Sep 6, 2022
* Update oslo.concurrency from branch 'master'
  to 796203c94846d44237d869e3b20a6cd8a3602e36
  - Prove that spawn_n with fair lock is broken (same commit message as the oslo.concurrency commit above)
openstack-mirroring pushed a commit to openstack/openstack that referenced this issue Sep 6, 2022
* Update oslo.concurrency from branch 'master'
  to ee3f73a13379a79282325906787aae7da0f6ac27
  - Fix fair internal lock used from eventlet.spawn_n (same commit message as the oslo.concurrency commit below)
openstack-mirroring pushed a commit to openstack/oslo.concurrency that referenced this issue Sep 6, 2022
The fasteners lib in version 0.15.0 removed the
threading.current_thread workaround for eventlet [1] because eventlet
seemed to have fixed the current_thread issues tracked in [2]. However,
the fix for [2] only solved half of the problem: the
threading.current_thread call works if it is called from a thread
created by eventlet.spawn, but if the thread is created with
eventlet.spawn_n then threading.current_thread is still broken and
returns the native python thread.

fasteners' ReaderWriterLock depends heavily on
threading.current_thread to decide which thread holds a lock and to
allow that thread to re-enter it. This leads to a situation where
multiple threads created from spawn_n could take the same
ReaderWriterLock at the same time.

The fair internal lock in oslo.concurrency uses ReaderWriterLock, and
as a result that lock is broken for threads created with spawn_n.

Note that this issue was raised with eventlet in [3] when the nova team
detected it via direct usage of ReaderWriterLock in the nova test
code. As [3] did not lead to a solution in eventlet, nova implemented a
local fix for its test code in [4].

However, we have now detected that oslo.concurrency is affected by this
issue as well.

This patch restores the workaround that was removed by [1].

Note that a fasteners issue [5] was also opened to restore the
workaround [1].

[1] harlowja/fasteners@467ed75
[2] eventlet/eventlet#172
[3] eventlet/eventlet#731
[4] https://review.opendev.org/c/openstack/nova/+/813114
[5] harlowja/fasteners#96

Closes-Bug: #1988311
Change-Id: Ia873bcc6b07121c9bd0b94c593567d537b4c1112
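
A rough sketch of the kind of workaround being restored here (not the actual fasteners code; patcher.is_monkey_patched and greenthread.getcurrent are existing eventlet APIs, while the helper name is hypothetical): use the current greenlet, rather than threading.current_thread(), as the lock-holder identity when eventlet is patched.

import threading

def current_holder_identity():
    # Hypothetical helper: prefer the current greenlet as the identity when
    # eventlet has monkey-patched threading, since current_thread() cannot be
    # trusted for spawn_n greenthreads; fall back to the native thread object.
    try:
        from eventlet import patcher, greenthread
        if patcher.is_monkey_patched('thread'):
            return greenthread.getcurrent()
    except ImportError:
        pass
    return threading.current_thread()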
tanaypf9 pushed a commit to tanaypf9/pf9-requirements that referenced this issue May 20, 2024
In version 0.15.0, the workaround code for issue [1] was removed.

openstack/masakari needs these fixes to lock the progress
details information during host recovery, after the waitall
method of the GreenPool thread is called [2].

[1]: eventlet/eventlet#172
[2]: https://opendev.org/openstack/masakari/src/branch/master/masakari/engine/drivers/taskflow/host_failure.py#L346

Change-Id: Ic6bab5d860d0b44262605bdeb41bc458dbe99dcf