Call UpdateService for SRV & A/AAAA records as well as TXT #239
Conversation
There is a possibility that my fix here is a breaking change, as UpdateService used to get called on service addition. I feel this was wrong and a mistake.
Two comments here so far (on top of that, it's nice work you're doing):
Yep, the pypy stuff seems to pass sometimes and fail others. I'm gonna install it on my personal machine as a next step so I can look at this more closely. I had initially suspected timing issues, but I'm not so sure now. Any help you can give is appreciated.
If you rebase your branch onto the master branch to get the benefit of #240 we should see some relevant details in the CI logs.
Right, worked it out! Interestingly, the pypy stuff did uncover a couple of issues in the code and the test suite.
Sure, I'll have a look later today. I think it'll be fine once things are squashed.
Use tuple rather than List for handler specification
So I've cleaned up all the various linting errors; a lot of the changes you've seen above are to tidy up the error reporting in my UI (VS Code). However, the move to pytest has thrown up a lot of warnings. Some I can understand, which are the deprecation warnings. However, there are only 3 of those and a further 17 other warnings reported, but I can't see the actual warnings even using verbose mode (`pytest -v`). Any ideas?
Regarding warnings – I think I have an idea how pytest counts those. From https://travis-ci.org/github/jstasiak/python-zeroconf/jobs/674322037, we see three warnings because pytest shows all instances of a specific warning together, so first one:

```
=============================== warnings summary ===============================
zeroconf/test.py: 11 tests with warnings
  /home/travis/build/jstasiak/python-zeroconf/zeroconf/__init__.py:1605: DeprecationWarning: address is deprecated, use addresses instead
    warnings.warn("address is deprecated, use addresses instead", DeprecationWarning)
```

That's 11, then we have 4 more:

```
zeroconf/test.py::test_multiple_addresses
zeroconf/test.py::test_multiple_addresses
zeroconf/test.py::test_multiple_addresses
zeroconf/test.py::test_multiple_addresses
  /home/travis/build/jstasiak/python-zeroconf/zeroconf/__init__.py:1633: DeprecationWarning: ServiceInfo.address is deprecated, use addresses instead
    warnings.warn("ServiceInfo.address is deprecated, use addresses instead", DeprecationWarning)
```

That's 15 total, then two more:

```
zeroconf/test.py::test_multiple_addresses
zeroconf/test.py::test_multiple_addresses
  /home/travis/build/jstasiak/python-zeroconf/zeroconf/__init__.py:1642: DeprecationWarning: ServiceInfo.address is deprecated, use addresses instead
    warnings.warn("ServiceInfo.address is deprecated, use addresses instead", DeprecationWarning)
-- Docs: https://docs.pytest.org/en/latest/warnings.html
```

And this gives us 17. So I think the problem is solved here (as far as understanding where those come from is concerned, anyway). :)
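The grouping behaviour described above can also be seen with the standard library alone; a minimal sketch (the `Info` class below is a hypothetical stand-in for `ServiceInfo` and its deprecated `address` property, not the library's code):

```python
import warnings


class Info:
    """Hypothetical stand-in for ServiceInfo's deprecated `address` property."""

    def __init__(self, addresses):
        self.addresses = addresses

    @property
    def address(self):
        # Same pattern as the library: warn, then delegate to the new API.
        warnings.warn("address is deprecated, use addresses instead", DeprecationWarning)
        return self.addresses[0]


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # record every instance, no deduplication
    info = Info(["127.0.0.1"])
    for _ in range(3):
        _ = info.address

print(len(caught))  # 3 -- one recorded warning per access
```

With the default filters a repeated warning from the same location is shown once; `"always"` is what makes each of the 17 instances countable, which mirrors how pytest tallies them before grouping by source line.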
All in all very nice work.
```python
                zeroconf=zeroconf, service_type=self.type, name=name, state_change=state_change
            )
        )
```
OK, two things here.
Regarding the logic, I had to write myself a little table to make sure I understand things correctly. I added notes in two cases that I believe are worth a second look. The table:
| Existing enqueued state change | New state change to enqueue | State change on the list afterwards | My notes |
|---|---|---|---|
| (none) | Added | Added | |
| (none) | Updated | Updated | |
| (none) | Removed | Removed | |
| Added | Added | Added | |
| Added | Updated | Added | |
| Added | Removed | Added | Shouldn't this be just Removed, since the service is gone? When we enqueue Removed and there's Updated on the list already, we switch to Removed. |
| Updated | Added | Added | |
| Updated | Updated | Updated | |
| Updated | Removed | Removed | |
| Removed | Added | Added | |
| Removed | Updated | Removed | Now I'm not sure when this would happen – why wouldn't it make sense to result in Updated here? |
| Removed | Removed | Removed | |
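The table can be restated as a tiny executable check; a sketch (the `merge` helper is hypothetical, just encoding the rows above, and the enum values are illustrative):

```python
from enum import Enum


class ServiceStateChange(Enum):
    Added = 1
    Removed = 2
    Updated = 3


def merge(existing, new):
    """Result of enqueueing `new` when `existing` is already on the list
    (None means nothing is enqueued yet), per the table above."""
    if existing is None:
        return new
    if existing is ServiceStateChange.Added:
        return ServiceStateChange.Added  # Added always wins
    if existing is ServiceStateChange.Updated:
        return new  # Updated is replaced by whatever comes next
    # existing is Removed: only a new Added displaces it
    if new is ServiceStateChange.Added:
        return ServiceStateChange.Added
    return ServiceStateChange.Removed


print(merge(ServiceStateChange.Removed, ServiceStateChange.Added))  # ServiceStateChange.Added
```

Writing it this way makes the two questionable rows easy to spot: they are the branches where the existing entry wins over the new one.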
Regarding the implementation details, I think `_handlers_to_call` could work better as an `OrderedDict[str, ServiceStateChange]` instead of a `List[Tuple[str, ServiceStateChange]]`. The code below could then look more or less like this (keeping the logic intact – I let myself reverse the conditions and remove the early return(s), as they wouldn't really make sense in this case):

```python
if (
    (state_change is ServiceStateChange.Updated and name not in self._handlers_to_call)
    or (
        state_change is ServiceStateChange.Removed
        and self._handlers_to_call.get(name) is ServiceStateChange.Updated
    )
    or state_change is ServiceStateChange.Added
):
    self._handlers_to_call[name] = state_change
```

This removes all that `count()`, `remove()` and `append()` dance, which is nice I think. Then the popping code would use `OrderedDict.popitem()` instead of `list.pop()`.
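To see the suggestion end to end, a self-contained sketch (the `Browser` class and `enqueue` name are simplified stand-ins for the PR's service browser, not its actual API):

```python
from collections import OrderedDict
from enum import Enum


class ServiceStateChange(Enum):
    Added = 1
    Removed = 2
    Updated = 3


class Browser:
    """Hypothetical slice of the service browser: just the pending-handlers dict."""

    def __init__(self):
        self._handlers_to_call = OrderedDict()  # name -> pending state change

    def enqueue(self, name, state_change):
        # The condition suggested above, verbatim.
        if (
            (state_change is ServiceStateChange.Updated and name not in self._handlers_to_call)
            or (
                state_change is ServiceStateChange.Removed
                and self._handlers_to_call.get(name) is ServiceStateChange.Updated
            )
            or state_change is ServiceStateChange.Added
        ):
            self._handlers_to_call[name] = state_change


b = Browser()
b.enqueue("svc._http._tcp.local.", ServiceStateChange.Updated)
b.enqueue("svc._http._tcp.local.", ServiceStateChange.Removed)  # Removed displaces Updated
print(b._handlers_to_call.popitem(last=False))  # FIFO drain via popitem(last=False)
```

Because the dict is keyed by service name, each service carries at most one pending change, and `popitem(last=False)` preserves the first-enqueued ordering the list gave.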
Thinking about it, the code could just be an unconditional `self._handlers_to_call[name] = state_change`. :)
I'll have a bit more of a think about this one in the coming week.
Your last suggestion won't work because then an update trumps an add. I've used a slightly amended version of your first suggestion; the code you had above didn't update Removed if, for example, the list was empty.

Regarding your two queries about the truth table: yes, there are some combinations that don't make sense and in a well-written responder shouldn't happen. However, I've taken the view that I need to decide on the precedence of the messages, and my view is that Added always wins. So, for example, if we receive an add and a remove message in the same batch, then add is the most likely meaning. I can hypothesise a reason this might occur if you have multiple responders and one goes offline; in that case Added + Removed still means there is a service there to respond. (Note this is a contrived example, as the messages would come in a different batch.)
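To make the "update trumps add" failure concrete, a sketch (service name and plain-string states are illustrative only): with an unconditional write, a listener that has never seen the service would get Updated as its very first event.

```python
from collections import OrderedDict

pending = OrderedDict()
pending["svc._http._tcp.local."] = "Added"    # listener has never seen this service yet
pending["svc._http._tcp.local."] = "Updated"  # unconditional write overwrites Added

# The only event the listener will ever receive for this service is Updated --
# an update for a service it was never told was added.
print(pending.popitem(last=False))  # ('svc._http._tcp.local.', 'Updated')
```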
Fair enough – if it causes issues I expect it'll be reported and we'll sort it out then, having a concrete case to deal with.
Sorry, closed by accident. LGTM, I'll merge this now.
Fix #235. Contains:

* Add lock around handlers list
* Reverse DNSCache order to ensure newest records take precedence

When there are multiple records in the cache, the behaviour was inconsistent. Whilst the DNSCache.get() method returned the newest, any function which iterated over the entire cache suffered from a last-write-wins issue. This change makes the behaviour consistent and allows the removal of an (incorrect) wait from one of the unit tests.
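A minimal model of the inconsistency described above (the structure is simplified; the real `DNSCache` keys entries by name, and the tuples here are illustrative): with duplicate entries stored oldest-first, a newest-aware `get()` and a naive scan disagree.

```python
# Simplified cache bucket holding duplicate entries for one name, as
# (record_type, value, timestamp) tuples appended oldest-first on arrival.
bucket = [("A", "10.0.0.1", 100), ("A", "10.0.0.2", 200)]


def newest(entries):
    """get()-style lookup: return the entry with the highest timestamp."""
    return max(entries, key=lambda e: e[2])


def first_match(entries):
    """Iteration-style lookup: return the first matching entry encountered."""
    return entries[0]


# Oldest-first storage: the two access patterns return different records.
assert newest(bucket) != first_match(bucket)

# Newest-first storage (the reversal in the fix): both patterns agree.
bucket_newest_first = sorted(bucket, key=lambda e: e[2], reverse=True)
assert newest(bucket_newest_first) == first_match(bucket_newest_first)
print(first_match(bucket_newest_first))  # ('A', '10.0.0.2', 200)
```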
Just merged manually (to get rid of merge errors, squash, and produce a nice commit message) as 552a030. Nice job! I'll release this later today.
Shipped in 0.26.0.
Closes #255

Background: #239 adds the lock `_handlers_lock`:

```python
# python-zeroconf/zeroconf/__init__.py
self._handlers_lock = threading.Lock()  # ensure we process a full message in one go
```

Which is used in the engine thread:

```python
def handle_response(self, msg: DNSIncoming) -> None:
    """Deal with incoming response packets.  All answers
    are held in the cache, and listeners are notified."""
    with self._handlers_lock:
```

And also by the service browser when issuing the state change callbacks:

```python
if len(self._handlers_to_call) > 0 and not self.zc.done:
    with self.zc._handlers_lock:
        handler = self._handlers_to_call.popitem(False)
        self._service_state_changed.fire(
            zeroconf=self.zc, service_type=self.type, name=handler[0], state_change=handler[1]
        )
```

Both pychromecast and Home Assistant call `Zeroconf.get_service_info` from the service callbacks, which means the lock may be held for several seconds and will starve the engine thread.
Fixes #235
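A hedged sketch of the fix shape the starvation report suggests (the restructuring itself is an assumption, and `fire` below is a stand-in for `_service_state_changed.fire`): hold the lock only while popping, and run the callback after releasing it, so a slow callback cannot block the engine thread.

```python
import threading
from collections import OrderedDict

handlers_lock = threading.Lock()
handlers_to_call = OrderedDict([("svc._http._tcp.local.", "Added")])


def fire(name, state_change):
    # Stand-in for _service_state_changed.fire(); in practice this may block
    # for seconds (e.g. callbacks that call Zeroconf.get_service_info).
    return (name, state_change)


# Pop under the lock, but fire with the lock released, so handle_response()
# in the engine thread only ever contends for the brief popitem().
with handlers_lock:
    name, state_change = handlers_to_call.popitem(last=False)
result = fire(name, state_change)
print(result)  # ('svc._http._tcp.local.', 'Added')
```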