
[Rock-ons] Address timeout issues during Rock-on list refresh cycle #2683

Closed

Hooverdan96 opened this issue Oct 2, 2023 · 4 comments

@Hooverdan96
Member

Thanks to greven and jimla1965 for reporting an issue that occurs during the Rock-on repository update and processing on a Rockstor appliance, leading to timeout errors (the first reported instance was intermittent, the second persistent).

https://forum.rockstor.com/t/another-take-on-unknown-internal-error-doing-a-post-to-api-rockons-update/7739/

and, more recently,

https://forum.rockstor.com/t/unknown-internal-error-doing-a-post-to-api-rockons-update-old-fix-gone/9035/

The current symptom can be observed in the Rockstor WebUI:

Unknown internal error doing a POST to /api/rockons/update

Further details can be gleaned from the two posts listed above.

@Hooverdan96
Member Author

For reference, @phillxnet is working on a promising approach to substantially shorten the retrieval time (and possibly avoid the above timeout-related errors): #2707, with related PR #2708.

@phillxnet
Member

@Hooverdan96 Let's see if we can get some field confirmation on the linked issue/pull-request once we get the next testing channel rpm out. If we have favourable feedback, I propose we close this one against PR #2708, and potentially also #2706, as I think both will help on this front.

The next testing channel rpm, which should contain both of these related fixes, is expected to be 5.0.5-0.

@phillxnet phillxnet added this to the 5.1.X-X Stable release milestone Nov 24, 2023
@phillxnet
Member

@Hooverdan96 Maybe we can close this one now - we have not had explicit confirmation, but we have significantly shortened this whole process and made it far less prone to time-outs, i.e. via single HTTPS session use and more appropriate gunicorn thread settings.
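For illustration only, here is a minimal sketch of the single-HTTPS-session idea, in the spirit of PR #2708 but not its actual code; the base URL, file list, and function name below are assumptions made up for the example:

```python
# Minimal sketch: fetch all Rock-on JSON definitions over one persistent
# HTTPS session instead of opening a fresh connection per file.
# ROCKON_REPO and fetch_definitions() are illustrative, not Rockstor code.
import requests

ROCKON_REPO = "https://example.com/rockon-definitions"  # assumed base URL

def fetch_definitions(file_names, timeout=10):
    """Return {file_name: parsed JSON} for every Rock-on definition file.

    Reusing a single requests.Session keeps the TCP/TLS connection alive
    between requests, so each file avoids a new handshake and the whole
    refresh is far less likely to exceed its timeout.
    """
    definitions = {}
    with requests.Session() as session:
        for name in file_names:
            resp = session.get(f"{ROCKON_REPO}/{name}", timeout=timeout)
            resp.raise_for_status()
            definitions[name] = resp.json()
    return definitions
```

The gunicorn side of the change mentioned above is a deployment setting rather than application code: with a threaded worker class, gunicorn's `threads` option lets one worker serve several requests concurrently, so a long-running Rock-on refresh is less likely to block other WebUI calls.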

@Hooverdan96
Member Author

Yes, I think we can close it for now, based on your assessment and all the recent changes that have been merged into the latest testing channel. If it crops up again in real-world use, we'll create a new issue.
