
Lockless implementation of add_callback #1876

Merged
merged 4 commits into tornadoweb:master on Nov 4, 2016

Conversation

3 participants
@pitrou
Contributor

pitrou commented Nov 3, 2016

Fixes #1874.

@pitrou


Contributor

pitrou commented Nov 3, 2016

I've stress-tested this PR with the following script: https://gist.github.com/pitrou/6a89c8563599beccb1763f27951af416
(the number of added and called callbacks at the end should be the same)
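
A minimal sketch of the kind of stress test described above (not the actual gist contents; the thread count, call count, and polling-based shutdown are illustrative): several threads hammer add_callback while the IOLoop runs on the main thread, and the counts of added and called callbacks are compared at the end.

    import threading

    from tornado.ioloop import IOLoop, PeriodicCallback

    N_THREADS = 4
    CALLS_PER_THREAD = 10000
    added = [0] * N_THREADS  # one counter per adder thread
    called = [0]             # only touched on the IOLoop thread

    def on_callback():
        called[0] += 1

    def adder(loop, i):
        for _ in range(CALLS_PER_THREAD):
            loop.add_callback(on_callback)  # thread-safe by design
            added[i] += 1

    loop = IOLoop.current()
    threads = [threading.Thread(target=adder, args=(loop, i))
               for i in range(N_THREADS)]
    for t in threads:
        t.start()

    def maybe_stop():
        # Stop once all adder threads have finished and every callback has run.
        if all(not t.is_alive() for t in threads) and called[0] == sum(added):
            loop.stop()

    PeriodicCallback(maybe_stop, 10).start()
    loop.start()
    print("added:", sum(added), "called:", called[0])  # should be equal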

@pitrou


Contributor

pitrou commented Nov 3, 2016

Also see previous discussion in #1875.

    f.close()
except IOError:
    # Yield to another thread
    time.sleep(1e-3)


@ajdavis

ajdavis Nov 3, 2016

Contributor

Is this long enough to avoid the Beazley Effect in Python 2 on multicore? That is, is there a danger that this same thread will reacquire the GIL without actually letting another thread run?


@pitrou

pitrou Nov 3, 2016

Contributor

Unless many non-Python threads are competing for the CPU, yes. We cannot guarantee this approach will always work, but it reasonably should.

self._callbacks.append(functools.partial(
    stack_context.wrap(callback), *args, **kwargs))
# If we're on the IOLoop's thread, we don't need to wake anyone.
pass


@ajdavis

ajdavis Nov 3, 2016

Contributor

This seems distracting; could you merge this explanation with the previous comment block and delete the "else" branch?


@pitrou

pitrou Nov 3, 2016

Contributor

Will do.

@ajdavis


Contributor

ajdavis commented Nov 3, 2016

I like this a lot and would vote for merging.

@bdarnell


Member

bdarnell commented Nov 4, 2016

LGTM. The main purpose of the lock was to avoid unnecessary calls to the waker, but that predated the introduction of the thread.ident check. This change will make cross-thread add_callbacks slightly slower since every call will need to write to the waker, but that seems like a good tradeoff to get rid of the locking in the single-thread case.
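
A hedged sketch of the pattern this summary describes, pieced together from the review excerpt above; the attribute names (_callbacks, _waker, _thread_ident) follow Tornado's PollIOLoop, and threading.get_ident() stands in for the thread.ident check, but the exact code in the merged commit may differ:

    import functools
    import threading

    from tornado import stack_context  # Tornado 4.x; removed in Tornado 6.0

    class LocklessAddCallbackSketch(object):
        """Illustrative only: not the merged implementation."""

        def add_callback(self, callback, *args, **kwargs):
            # Appending to the callback list is atomic under the GIL,
            # so no lock is needed to publish the callback.
            self._callbacks.append(functools.partial(
                stack_context.wrap(callback), *args, **kwargs))
            if threading.get_ident() != self._thread_ident:
                # Cross-thread call: always write to the waker.  This is the
                # "slightly slower" path noted above, traded for avoiding the
                # lock on the common same-thread path.
                self._waker.wake()
            # If we're on the IOLoop's thread, we don't need to wake anyone.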

bdarnell merged commit 804ae3f into tornadoweb:master on Nov 4, 2016

1 check passed

continuous-integration/travis-ci/pr The Travis CI build passed