
Add socket event monitor capability #50

Merged · 14 commits · Jul 1, 2015

Conversation

@claws (Contributor) commented Jun 22, 2015

This pull request adds the ZMQ socket event monitor capability to aiozmq.

I am using this in my system which uses ephemeral ports. I use it to detect client disconnects, then tear down the socket, use a discovery mechanism to resolve the new endpoint address and finally re-establish the socket.

@@ -194,3 +215,36 @@ def msg_received(self, data):

data is the multipart tuple of bytes with at least one item.
"""

MonitorEvents = {zmq.EVENT_ACCEPTED: 'accepted',
Review comment (Member):
Use enum here
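A minimal sketch of what the suggested enum could look like. The class name is hypothetical, and the literal values below mirror libzmq's `ZMQ_EVENT_*` constants; real code would assign pyzmq's `zmq.EVENT_*` names instead:

```python
import enum

class SocketEvent(enum.IntEnum):
    """Hypothetical enum replacing the MonitorEvents dict."""
    # Values mirror libzmq's ZMQ_EVENT_* constants; real code would
    # use zmq.EVENT_CONNECTED etc. rather than literals.
    CONNECTED = 0x0001
    LISTENING = 0x0008
    ACCEPTED = 0x0020
    CLOSED = 0x0080
    DISCONNECTED = 0x0200

# An IntEnum keeps the reverse lookup the dict provided, plus a name:
assert SocketEvent(0x0020) is SocketEvent.ACCEPTED
assert SocketEvent.ACCEPTED.name.lower() == 'accepted'
```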

@asvetlov (Member):
I support the idea in general but propose a slightly different interface.
I suggest adding an event_received() callback to ZmqProtocol.

ZmqTransport should have enable_monitor() and disable_monitor() methods, both returning None. The monitor socket is handled internally by the transport and destroyed on transport close if present.

I believe hiding the monitor socket from the library user keeps the public API simple and clean.

@claws (Contributor, Author) commented Jun 28, 2015

This change hides most of the event monitor details from the user. The event monitor is enabled by calling enable_monitor on a transport and disabled with disable_monitor. The monitor is also shut down when the socket being monitored is closed. Socket events are captured by an internal event monitor protocol that converts the binary event into a useful dict and passes it to the protocol running the monitored socket. This simplifies the developer API: they only need to deal with the ZmqProtocol class to obtain messages or events.
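The callback flow described above can be sketched roughly as follows. The DisconnectAware class and the literal event code are illustrative; the dict keys ('event', 'value', 'endpoint') follow pyzmq's zmq.utils.monitor.parse_monitor_message():

```python
# Hypothetical protocol mixin illustrating the event_received() hook
# described above.  The event dict keys ('event', 'value', 'endpoint')
# mirror zmq.utils.monitor.parse_monitor_message().
class DisconnectAware:
    def __init__(self):
        self.disconnected = False

    def event_received(self, event):
        # 0x0200 is libzmq's ZMQ_EVENT_DISCONNECTED; real code would
        # compare against zmq.EVENT_DISCONNECTED instead of a literal.
        if event['event'] == 0x0200:
            self.disconnected = True

proto = DisconnectAware()
proto.event_received({'event': 0x0200, 'value': 0,
                      'endpoint': b'tcp://127.0.0.1:5555'})
assert proto.disconnected
```

This is the kind of hook a user could use for the ephemeral-port workflow mentioned earlier: detect the disconnect event, tear down the socket, and re-resolve the endpoint.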

This took a while to get going because I encountered a confusing problem obtaining the socket event messages when running within the event loop. Using standard zmq was fine. It turns out I had to connect first and then bind second when using the inproc:// transport within the event loop. Apparently zmq.EVENTS is not reported properly by the libzmq internals in this particular case. I found this problem mentioned here.

@claws (Contributor, Author) commented Jun 28, 2015

Looks like I have a few CI issues to resolve; I'll push a fix soon.

@claws (Contributor, Author) commented Jun 28, 2015

It looks like I have one CI problem remaining: "connection refused". I can't replicate this locally on OS X or Linux (Ubuntu 14.04).

Have you encountered issues that only show up on TravisCI? Can you recommend any debugging strategies for this kind of problem (otherwise, I'm stuck with trial and error)?

@claws (Contributor, Author) commented Jun 29, 2015

It looks like the last CI issue was related to the TravisCI test environment using libzmq3. In my test environments I am using a more recent version, libzmq4. I think this explains why it works in my test environments (OS X and Linux machines at home and at work) but not on TravisCI. The socket monitor capability was only really added in libzmq>=4 and pyzmq>=14.4.

I have added runtime checks so that the library remains compatible when running on libzmq3 (though without the socket monitoring capability) and provides the monitoring capability when running on libzmq4.
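The runtime guard might look roughly like this. The function name is illustrative; in practice the version tuples would come from zmq.zmq_version_info() and zmq.pyzmq_version_info():

```python
def monitoring_supported(zmq_version, pyzmq_version):
    """Return True if socket monitoring can be offered.

    Sketch of the runtime check described above: monitoring needs
    libzmq >= 4 and pyzmq >= 14.4.  In real code the version tuples
    would come from zmq.zmq_version_info() and zmq.pyzmq_version_info().
    """
    return zmq_version >= (4, 0, 0) and pyzmq_version >= (14, 4, 0)

# libzmq4 environments get monitoring; libzmq3 environments degrade
# gracefully instead of failing outright.
assert monitoring_supported((4, 1, 2), (14, 5, 0))
assert not monitoring_supported((3, 2, 5), (14, 5, 0))
```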

It might be time to revise the .travis.yml file to add test environments that also use libzmq4 so that full test coverage can be maintained.

@claws (Contributor, Author) commented Jun 30, 2015

I consider this change complete now and ready for your review.

Unfortunately the latest TravisCI builds indicate failure, but I think this status is erroneous. Two builds succeeded, then I pushed a docs-only change which resulted in a broken build. I subsequently pushed a trivial change to re-trigger a Travis rebuild, and again this produced a failure, but in a different builder. I consider this a TravisCI issue, not a problem with the docs change that was last delivered.

As owner of this repo you could manually re-trigger a TravisCI build to re-check the build status to confirm if this is a TravisCI issue or a real code issue. You can do this by opening the TravisCI page and clicking on the rebuild icon. Details can be found here.

@claws (Contributor, Author) commented Jun 30, 2015

@asvetlov could you manually bump TravisCI to perform another build? See above comment for details.


This method is only called when a socket monitor is enabled.

:param event: A dict containing the event description keys `event`,
Review comment (Member):
I dislike the dict (yes, it's supported by zmq.utils.monitor.parse_monitor_message(), but we may end up repeating those 6 lines of code).
My suggestion is to use a namedtuple instead.
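The suggested replacement might look like this. The type name is illustrative; the three fields are the keys that parse_monitor_message() returns:

```python
from collections import namedtuple

# Hypothetical event type carrying the same fields that
# zmq.utils.monitor.parse_monitor_message() returns as dict keys.
SocketEvent = namedtuple('SocketEvent', ['event', 'value', 'endpoint'])

evt = SocketEvent(event=0x0020, value=0, endpoint=b'tcp://127.0.0.1:5555')
assert evt.event == 0x0020      # attribute access instead of evt['event']
assert evt == (0x0020, 0, b'tcp://127.0.0.1:5555')  # still tuple-like
```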

@claws (Contributor, Author) commented Jun 30, 2015

It looks like TravisCI fell over again running this docs-only update. Perhaps you could manually re-trigger the TravisCI build again?

@asvetlov (Member):

Yeah, Travis has been pretty unstable over the last year.

@asvetlov (Member):

I like the PR, but please write additional tests.
Run make cov and look at the HTML coverage output.


def msg_received(self, data):
if len(data) != 2 or len(data[0]) != 6:
raise RuntimeError(
Review comment (Member):
Not covered by tests.

@@ -693,6 +776,8 @@ def close(self):
def _force_close(self, exc):
if self._conn_lost:
return
if self._monitor:
self.disable_monitor()
Review comment (Member):
Not covered by tests.

@claws (Contributor, Author) commented Jul 1, 2015

I have added test coverage for the monitor additions. While looking at the coverage output I noticed some nearby areas that could also be improved, so I updated those too (e.g. tests for supplying an external zmq context and for get_write_buffer_limits). These were in the vicinity of the areas I was updating anyway.

asvetlov added a commit that referenced this pull request Jul 1, 2015
Add socket event monitor capability
@asvetlov merged commit 4f48376 into aio-libs:master on Jul 1, 2015
@asvetlov (Member) commented Jul 1, 2015

Thanks!

@asvetlov (Member) commented Jul 1, 2015

Would you take a look at adding event monitoring to the stream API?

@claws deleted the add_events branch July 2, 2015 12:01