
KOSync: Set sane socket timeouts properly #10835


Merged: 7 commits merged into koreader:master on Aug 22, 2023
Conversation

NiLuJe
Member

@NiLuJe NiLuJe commented Aug 22, 2023

An attempt was made in the original code, but the whole thing was designed in the hope of eventually switching to turbo, so it was super janky without it.
Anyway, we now actually have a sane way to set socket timeouts, so use that, and set them very tight for now.

This is fairly critical right now, because the server is down and the default timeouts are ~30s. That happens to be above the debounce threshold, so you can't even hope for the debounce to save you: right now, you get a 2 * 30s block on resume with auto sync. That's... Very Not Good(TM).

After this, it becomes a single 2s block.

Exercised the timeouts thanks to the main server being down right now, and also tested successfully against a custom sync server (granted, one that's much closer to me than koreader.rocks), so the timings might need some more tinkering later...
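
For context on what "very tight" means in practice, here's a minimal, hypothetical sketch using plain LuaSocket (not the actual KOReader plumbing, and with a made-up endpoint URL) that caps blocking socket operations at 2 seconds:

```lua
-- Hypothetical sketch with plain LuaSocket; NOT the actual KOReader code path.
local http = require("socket.http")
local ltn12 = require("ltn12")

-- Cap every blocking socket operation at 2 seconds instead of the much longer
-- default, so a dead sync server fails fast instead of hanging the UI.
http.TIMEOUT = 2

local body = {}
local ok, code = http.request{
    url = "https://sync.example.org/healthcheck",  -- made-up endpoint
    method = "GET",
    sink = ltn12.sink.table(body),
}
if not ok then
    -- With the server unreachable, this now errors out ("timeout" or similar)
    -- after ~2s per request instead of blocking for ~30s.
    print("sync server unreachable:", code)
end
```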



NiLuJe added 3 commits on August 22, 2023 at 01:00:

- Should prevent double calls on resume. Which hurts when the server is down...
- And make 'em very tight. Meaning we no longer block for upwards of 30s when the server is down. Make it 2s instead. Better have good connectivity, guys ;o).
- This reverts commit ba2dae9. Err, yeah, of course that doesn't work: there's a race on the timestamp value stored in the instance if we update it before the scheduling, because we can fire as many as three of those semi-concurrently... (Resume -> Connected -> reRun from Resume)
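
To illustrate the kind of ordering bug the last commit describes, here's a rough, hypothetical sketch of a timestamp-based debounce (names and threshold are made up; this is not the actual kosync.koplugin code). The point is that the "last sync" timestamp should only advance when the sync actually runs, not when one of the concurrent triggers merely schedules it:

```lua
-- Hypothetical debounce sketch; names and values are illustrative only.
local DEBOUNCE_SECONDS = 25  -- assumed threshold, below the old ~30s timeouts

local Sync = { last_sync = 0 }

-- May be called up to three times in quick succession
-- (Resume -> Connected -> reRun from Resume).
function Sync:onTrigger(now)
    if now - self.last_sync < DEBOUNCE_SECONDS then
        return  -- a sync ran recently enough, skip this trigger
    end
    -- Advance the timestamp only once we are actually committed to syncing.
    -- Bumping it earlier (e.g. when merely scheduling the run) lets the other
    -- semi-concurrent triggers see a "fresh" timestamp and race each other.
    self.last_sync = now
    self:doSync()
end

function Sync:doSync()
    -- network calls with the tight 2s socket timeouts go here
end
```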
@NiLuJe NiLuJe added this to the 2023.08 milestone Aug 22, 2023
NiLuJe added 4 commits on August 22, 2023 at 02:38:

- I don't *think* we have anything else doing stupid things like that, but it's not entirely out of the realm of possibilities...
Member Author

@NiLuJe NiLuJe left a comment

Reviewed 2 of 2 files at r1, all commit messages.
Reviewable status: :shipit: complete! all files reviewed, all discussions resolved (waiting on @NiLuJe)

@NiLuJe NiLuJe merged commit 6fa8e1d into koreader:master Aug 22, 2023
@luutuankiet

luutuankiet commented Nov 23, 2024

Hi @NiLuJe, sorry for reviving this, but I can't really find a confirmation anywhere: does the 2s timeout apply to this case, #10807? Or has the touch-to-cancel mentioned in bc235b3 of that ticket been implemented?

Asking because I'm still experiencing the issue above on my Kobo H2O (self-hosted kosync server, if that helps). Understandable if it's not covered, since that issue carries a can't-fix tag. Thanks!

@NiLuJe
Member Author

NiLuJe commented Nov 23, 2024

What issue, exactly?

The issue you've just commented on has nothing to do with the issue you linked, so I'm confused.

@luutuankiet

luutuankiet commented Nov 24, 2024

My issue is as described in #10807: when I'm out of known wifi range, opening the device leaves it in limbo under the long-running "scanning for network" popup. Except in my case it doesn't seem to time out even after 60s; it hangs for so long that I had to reboot the device to get it operational.

I thought KOSync and progress sync were both related to this issue.

Can you help point me to the right issue to comment on, or should I create a new one? Thanks!

@NiLuJe
Member Author

NiLuJe commented Nov 24, 2024

This should time out after 20s, and it's related to neither of those two issues ;).

@NiLuJe
Member Author

NiLuJe commented Nov 24, 2024

See koreader/lj-wpaclient#12 for some recent discussions about that behavior ;).
