1.3.2 regression: kraken_two.get_status() returns wrong fan RPM value #87
I can reproduce this, but only while a connection has been held, and only when using the (currently) default. This leads me to believe that the reports are being queued, and that we're seeing stale data (as you mentioned: updates are delayed). I remember thinking a bit about this when checking the HIDAPI sources, since it has a queue and I wasn't sure whether that would cause issues; but apparently I only checked the liquidctl CLI, which doesn't keep the connection open long enough for it to matter, and is thus unaffected. I'll try to fix this ASAP.
As incoming HID reports are queued by the OS, HidapiDevice.read() would return stale data to the drivers. The drivers assume the reads aren't queued because that's not necessary with the currently supported devices.

This didn't affect the liquidctl CLI, as the connection to HIDAPI wouldn't last long enough for too much data to be queued and lost. But for downstream projects like GKraken this was a significant regression from liquidctl 1.1.0.

To keep the behavior consistent between the HIDAPI and Libusb backends, traverse all queued (incoming) reports until the last one is found, and only return it.

Related: #87 ("1.3.2 regression: kraken_two.get_status() returns wrong fan RPM value")
Reported-by: @leinardi
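The "traverse all queued reports and return the last one" strategy can be sketched as follows. This is a minimal illustration, not liquidctl's actual code: `FakeHidDevice` stands in for a hidapi device whose OS-side queue may hold stale reports, and `read_latest` is a hypothetical helper name.

```python
import collections

class FakeHidDevice:
    """Stand-in for a hidapi device whose OS-side queue holds stale reports."""
    def __init__(self, queued_reports):
        self._queue = collections.deque(queued_reports)

    def read(self, length, timeout_ms=0):
        # With a zero timeout the read returns immediately: either the oldest
        # queued report, or an empty result once the queue is drained.
        if self._queue:
            return self._queue.popleft()
        return []

def read_latest(dev, length):
    """Drain the queue with non-blocking reads; keep only the newest report."""
    latest = None
    while True:
        report = dev.read(length, timeout_ms=0)
        if not report:
            return latest
        latest = report

dev = FakeHidDevice([[1, 10], [1, 20], [1, 30]])
print(read_latest(dev, 2))  # → [1, 30]: only the newest report survives
```

A long-lived connection (as in GKraken) then always acts on the most recent data, at the cost of silently discarding every earlier report in the queue.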
Can you give the latest 'master' a try? It passes my tests (using a slightly more aggressive example than yours), but please double-check. Assuming it's indeed fixed, I'll do a 1.3.3 patch release.
Can you also give the latest 'master' a try with your Smart Device V2 on macOS? I'm worried that the fix I just pushed might break devices like the SDV2.
@jonasmalacofilho sure! I'll try to test it soon and report back :)
Just downloaded and tested the master branch. The changes are working correctly, except for one possible issue: for the Smart Device V2 it looks like the
UPDATE: On second thought, that behavior is correct. LED configurations do not change from call to call; only the fan status changes between calls. So it makes sense for
@jonasmalacofilho just tested and 2050d0b seems to work fine!
Thank you both! @CaseySJ, that has been like that for a while already. Moving the LED part of the output to
Commit 2050d0b ("Traverse all hid/hidraw queued reports and return the last one") aggressively started to clear all enqueued reports whenever a read was done on a HID using hidapi. While that seemed to work OK, it could cause important reports to be lost, which might eventually break some drivers. As no contributor has enough devices to test all drivers, it is better to be conservative and only explicitly clear enqueued reports.

Notably, it is wiser to clear the queue before any writes that precede one or more reads. This will clear stale data that has already been queued, but will not risk losing expected responses. And drivers that expect reads to match specific writes should already be prepared to handle what is left.

The objective is not to definitively clear all previous reports, but simply to avoid acting on (or forwarding to the user) stale data. The subtle difference is that _stale_ presumes a delay of more than infinitesimal length.

Related: #87 ("1.3.2 regression: kraken_two.get_status() returns wrong fan RPM value")
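The more conservative approach — clear the queue before a write that precedes reads, rather than draining on every read — can be sketched like this. Again a stand-in device class is used, and `clear_enqueued_reports` is an illustrative name for the idea, not necessarily liquidctl's actual API.

```python
import collections

class FakeHidDevice:
    """Stand-in for a hidapi device with an OS-side report queue."""
    def __init__(self):
        self.queue = collections.deque()

    def read(self, length, timeout_ms=0):
        # Non-blocking read: empty result once the queue is drained.
        return self.queue.popleft() if self.queue else []

    def write(self, data):
        # Pretend the device answers every write with a fresh report.
        self.queue.append([0xff] + list(data))

def clear_enqueued_reports(dev, length):
    """Discard stale reports already queued by the OS, and nothing more."""
    while dev.read(length, timeout_ms=0):
        pass

dev = FakeHidDevice()
dev.queue.extend([[1, 10], [1, 20]])  # stale reports from earlier polls
clear_enqueued_reports(dev, 3)        # clear *before* writing the request
dev.write([0x02])                     # request a status update
print(dev.read(3))                    # fresh response, not stale data
```

Because the queue is only cleared before the request is written, any response the device sends after that point cannot be lost, which is what makes this variant safe for drivers the maintainers cannot test themselves.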
This has been fixed in liquidctl v1.3.3. By the way, I ended up reworking yesterday's fix to be more conservative, because I was still worried about potential problems with some of the drivers I can't test myself.
Are you going to issue a Pull Request to Homebrew-Core for v1.3.3? |
@CaseySJ can you still take care of those (homebrew-core and our tap)? I rather you did it because you can actually test the result on macOS. |
I should be able to submit 1.3.3 to homebrew-core this evening. Will update the tap as well. |
Thank you, 1.3.3 is already up on homebrew-core. |
I'm migrating GKraken from liquidctl 1.1.0 to 1.3.2, and I just noticed that the call to get_status() returns a wrong value for the fan RPM.

The very weird thing is that the values seem to be delayed: if I set the fan to 100% I see no immediate update, but if I wait around 3 minutes I see the RPM go up. I'm polling liquidctl every 3 seconds.
If I downgrade to version 1.2.0 or 1.1.0 everything works fine as before.
Just FYI, I'm lazy loading the driver instance and reusing it every time I poll (so I'm creating and connecting to it only once and then polling it every 3 seconds).
I'm able to reproduce the issue with this simple script:
1.1.0 and 1.2.0 work correctly, 1.3.2 does not.
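The original reproduction script is not shown in the thread, but the delayed-values symptom can be illustrated with a self-contained stub. `FakeKraken` below is hypothetical: it mimics a device that emits one report per poll while reads return the *oldest* queued report, so a long-lived connection sees values lag behind reality.

```python
import collections

class FakeKraken:
    """Stub driver: the device emits a report per poll, but reads return
    the oldest queued report, so a long-lived connection sees stale data."""
    def __init__(self):
        self._queue = collections.deque()
        self._rpm = 500

    def set_fixed_speed(self, channel, duty):
        # The real fan reacts immediately...
        self._rpm = 2000 if duty == 100 else 500

    def _device_emits_report(self):
        self._queue.append(self._rpm)

    def get_status(self):
        # ...but the consumer only sees the head of the queue.
        self._device_emits_report()
        return [('Fan speed', self._queue.popleft(), 'rpm')]

kraken = FakeKraken()
print(kraken.get_status())     # → [('Fan speed', 500, 'rpm')]
kraken._device_emits_report()  # an extra report lingers in the queue
kraken.set_fixed_speed('fan', 100)
print(kraken.get_status())     # → still 500 rpm: a stale queued report
print(kraken.get_status())     # → only now the new 2000 rpm shows up
```

With one extra report queued per poll, the visible lag grows over time, which matches the reported behavior of the RPM only catching up minutes after the fan speed was changed.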