
Increase collector timeout #391

Merged
merged 1 commit into AnalogJ:master on Nov 6, 2022
Conversation

@adripo (Contributor) commented Nov 5, 2022

This PR increases the HTTP request timeout.
It will fix old issues like #318 #185 #183

The problem occurs when smartctl returns a lot of data for one HDD and the HTTP POST request does not finish within 10 seconds.
The error is: Client.Timeout exceeded while awaiting headers
I increased the timeout to 60 seconds and tested everything. It solves the issue completely.
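
For context, "Client.Timeout exceeded while awaiting headers" is the message Go's net/http client emits when Client.Timeout expires before the response headers arrive, so a large smartctl payload or a slow network can trigger it on upload. Below is a minimal sketch of the kind of change described here, assuming a plain net/http client; the endpoint URL and JSON payload are placeholders, not Scrutiny's actual API.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

// newCollectorClient builds an HTTP client whose Timeout bounds the entire
// request, including the wait for response headers. A 10s timeout aborts
// large or slow POSTs with "Client.Timeout exceeded while awaiting headers";
// 60s gives big smartctl payloads time to finish.
func newCollectorClient() *http.Client {
	return &http.Client{
		Timeout: 60 * time.Second, // previously 10 * time.Second
	}
}

func main() {
	client := newCollectorClient()

	// Placeholder payload and endpoint, purely for illustration.
	payload := bytes.NewBufferString(`{"device":"/dev/sda","smart_data":{}}`)
	resp, err := client.Post("http://localhost:8080/api/devices/register", "application/json", payload)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```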

@AnalogJ (Owner) commented Nov 5, 2022

Thanks for the PR! 🥳

Wow, I never considered POST payloads taking longer than 10 seconds to upload -- the SMART data has been reasonably sized in my experience, but I understand that network issues could cause these timeouts as well.
I'm happy to merge this; however, if you have an example of a payload that failed due to a timeout, I'd love to take a look at it (just send it to jason@thesparktree.com).

Thanks again!

@codecov-commenter commented
Codecov Report

Merging #391 (222b810) into master (a01b8fe) will decrease coverage by 0.12%.
The diff coverage is 28.38%.

@@            Coverage Diff             @@
##           master     #391      +/-   ##
==========================================
- Coverage   32.69%   32.56%   -0.13%     
==========================================
  Files          51       54       +3     
  Lines        2753     3043     +290     
  Branches       61       66       +5     
==========================================
+ Hits          900      991      +91     
- Misses       1821     2016     +195     
- Partials       32       36       +4     
Flag Coverage Δ
unittests 32.56% <28.38%> (-0.13%) ⬇️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
collector/pkg/collector/base.go 0.00% <ø> (ø)
webapp/backend/pkg/config/config.go 0.00% <0.00%> (ø)
webapp/backend/pkg/database/scrutiny_repository.go 12.60% <0.00%> (-0.92%) ⬇️
...end/pkg/database/scrutiny_repository_migrations.go 0.00% <0.00%> (ø)
...ckend/pkg/database/scrutiny_repository_settings.go 0.00% <0.00%> (ø)
webapp/frontend/src/app/layout/layout.component.ts 2.50% <20.00%> (ø)
...end/src/app/core/config/scrutiny-config.service.ts 42.85% <42.85%> (ø)
...bapp/frontend/src/app/shared/device-status.pipe.ts 64.70% <62.50%> (+54.70%) ⬆️
webapp/frontend/src/app/shared/file-size.pipe.ts 78.57% <76.92%> (-6.05%) ⬇️
...mon/dashboard-device/dashboard-device.component.ts 73.07% <80.00%> (+17.52%) ⬆️
... and 11 more


@AnalogJ merged commit 5cc7fb3 into AnalogJ:master on Nov 6, 2022
@AnalogJ (Owner) commented Nov 6, 2022

Thanks for sending over your log files; I'll take a look and see if there's an unexpected cause for the slowness that I can determine.
