Commit 50184ea: Add test case
sourcejedi committed Oct 21, 2020 (1 parent: b9fa0ed)
Showing 1 changed file with 32 additions and 0 deletions.
Lib/test/test_asyncio/test_unix_events.py (32 additions, 0 deletions)
@@ -267,6 +267,38 @@ def test_close_on_finalizing(self, m_signal, m_sys):
        self.assertEqual(len(self.loop._signal_handlers), 0)
        self.assertFalse(m_signal.signal.called)

    def test_wakeup_pipe_overflow(self):
        # On Linux, we find the self-socket can hold 278 bytes

@DStape (Jan 20, 2021):

Could you please elaborate on how you concluded that the self-socket can store up to 278 bytes (on Unix)?

I'm seeing BlockingIOError tracebacks in the logs after enabling asyncio's debug mode, and I believe this is due to scheduling coroutines from a different thread (via run_coroutine_threadsafe). I narrowed the number of coros I needed to schedule down to ~275, so 278 bytes jumps out at me, given that each coro results in a single null byte being written.
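
Something along these lines should reproduce it; this is only a rough, untested sketch (noop and schedule_many are illustrative names, not anything from asyncio). While the loop is blocked it cannot drain its self-socket, each run_coroutine_threadsafe() call writes one wake-up byte, and once the buffer fills the failed sends show up as BlockingIOError tracebacks in the debug logs:

import asyncio
import logging
import threading
import time

# Show asyncio's DEBUG-level messages, where the failed self-socket writes are logged.
logging.basicConfig(level=logging.DEBUG)

async def noop():
    pass

def schedule_many(loop, n=400):
    # Each call wakes the loop by writing one b"\0" to its self-socket.
    for _ in range(n):
        asyncio.run_coroutine_threadsafe(noop(), loop)

async def main():
    loop = asyncio.get_running_loop()
    loop.set_debug(True)
    t = threading.Thread(target=schedule_many, args=(loop,))
    t.start()
    # Deliberately block the loop so it cannot drain the self-socket.
    time.sleep(1)
    t.join()
    # Give the loop a chance to process the backlog afterwards.
    await asyncio.sleep(1)

asyncio.run(main())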

And FWIW, I found this commit via https://bugs.python.org/issue35749.

Thanks for your time,
David

@sourcejedi (Author, Jan 20, 2021):

I found it empirically. The test passes if you change COUNT_TO to 277 or lower.

On current master:

make -j
git fetch https://github.com/sourcejedi/cpython.git 50184ea3b354fd775866d036ccee058ec6734927
git cherry-pick 50184ea3b354fd775866d036ccee058ec6734927
./python -m unittest -v test.test_asyncio.test_unix_events

This isn't for Unix overall btw. The number 278 is specifically for Linux. I only remember bothering to test 64-bit Linux.

I guess the limit is slightly different on 32-bit Linux. https://unix.stackexchange.com/questions/424380/what-values-may-linux-use-for-the-default-unix-socket-buffer-size . Although looking at that link, I dunno why it's quite as low as 278. Shrug.
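
FWIW, the buffer sizes the kernel advertises for a socketpair are far larger than 278 bytes; a quick check (illustrative only, nothing to do with the test itself):

import socket

# On Linux the advertised sizes are typically in the hundreds of KB,
# so the ~278 one-byte-write limit must come from somewhere else
# (presumably per-send bookkeeping overhead, not the advertised size).
a, b = socket.socketpair()
print("SO_SNDBUF:", a.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
print("SO_RCVBUF:", b.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
a.close()
b.close()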

If you want it fixed, feel free to take the issue status "needs patch" at face value :-). Rebase this branch, submit it as a PR, and try to get it reviewed.


I'm a bit down on the asyncio module. This issue should be relatively simple, and bpo-42110 looks possible too. I think the solution for bpo-38323 is to change to SafeChildWatcher for baseline Unix/POSIX platforms, which could break some existing code, so documenting that problem could use a bit more urgency, IMO. PidFdChildWatcher is great, but won't be available on all systems.

@DStape (Jan 20, 2021):

Thank you.

Another simple way I've confirmed 278 on my setup (also 64-bit Linux) is by running:

import socket

a, b = socket.socketpair()
a.setblocking(False)
b.setblocking(False)

for i in range(1, 280):
    try:
        a.send(b"\0")
    except OSError as e:
        print(f"Error sending byte {i}: {e}")

a.close()
b.close()

It might be an interesting side project to determine this value across different platforms; for now, though, I am content knowing that the BlockingIOErrors showing up in the asyncio debug logs in my use case are nothing to be concerned about (although they are still slightly annoying).

Thanks for the other links. I may not get round to submitting PRs for this anytime soon, but I am interested in learning more about the internals of asyncio, and these give me nice entry points into exploring more of the asyncio code.

@sourcejedi (Author, Jan 20, 2021):

Cool. I agree run_coroutine_threadsafe() should work OK, ignoring the warning messages. The issue is missing which signal number occurred. If you're not registering any signal handlers with the event loop, there's nothing to go wrong...

...as long as you're not using asyncio subprocesses with SafeChildWatcher, i.e. using the event loop to listen for SIGCHLD. SafeChildWatcher is only the default in Python versions older than 3.8.
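
For illustration (a minimal sketch of the mechanism, not code from this commit): asyncio's add_signal_handler() points signal.set_wakeup_fd() at the loop's non-blocking self-socket, and the byte written there is the signal number. If the socket buffer is full, that byte is dropped and the loop never finds out which signal fired.

import os
import signal
import socket

# Stand-in for the event loop's self-socket pair.
a, b = socket.socketpair()
a.setblocking(False)

# Install a handler first, otherwise SIGUSR1 would terminate the process.
signal.signal(signal.SIGUSR1, lambda signum, frame: None)

# Ask CPython to write each received signal's number into this fd.
signal.set_wakeup_fd(a.fileno())

os.kill(os.getpid(), signal.SIGUSR1)
print(b.recv(16))  # e.g. b'\n' on Linux: byte value 10 == SIGUSR1

# Restore defaults and clean up.
signal.set_wakeup_fd(-1)
a.close()
b.close()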

@DStape (Jan 20, 2021):

I'm certainly not using any of that stuff at present, but will keep this in mind should that change. Thanks.

        # (when those bytes are written in individual write() calls).
        # But let us test a bit harder than that.
        # Since Linux 2.6.11, pipe buffers default to 64K.
        COUNT_TO = 64*1024

        class SignalCounter:
            def __init__(self):
                self.n = 0

            def handler(self):
                self.n += 1

        usr1 = SignalCounter()
        usr2 = SignalCounter()

        async def test():
            loop = asyncio.events.get_running_loop()
            loop.add_signal_handler(signal.SIGUSR1, usr1.handler)
            loop.add_signal_handler(signal.SIGUSR2, usr2.handler)

            pid = os.getpid()
            for i in range(COUNT_TO):
                os.kill(pid, signal.SIGUSR1)
            os.kill(pid, signal.SIGUSR2)

        asyncio.run(test())

        # Note multiple instances of the same signal may be coalesced into 1.
        self.assertGreater(usr1.n, 0)

        # SIGUSR2 was raised exactly once; it must not be lost even if the
        # wakeup self-socket overflowed.
        self.assertEqual(usr2.n, 1)


    @unittest.skipUnless(hasattr(socket, 'AF_UNIX'),
                         'UNIX Sockets are not supported')
