
feature: conditional per request timeout setting #110

Closed

Conversation

Overbryd

Conditional timeout

You can set a conditional timeout per request.
Make sure, though, that you still meet your platform's hard limits (e.g. Heroku's 30s timeout) in all cases.
If the block returns nil or false, the timeout is disabled for that request.

# set conditional_timeout to a callable block
# config/initializers/rack_timeout.rb
Rack::Timeout.conditional_timeout = proc { |env| env["PATH_INFO"] =~ /^\/admin/ ? 20 : 5 }

# equivalent for Sinatra/Rack apps
use Rack::Timeout, conditional_timeout: ->(env) { env["PATH_INFO"] =~ /^\/admin/ ? 20 : 5 }

@DJSAMPAT

@kch Please look into this. Would help resolve Heroku timeouts.

@wuputah
Collaborator

wuputah commented Aug 30, 2016

You might want to see #56 (and #9 #64 #67 #107)

@Overbryd
Author

Sad, but protecting developers from their very own mistakes is not an attitude I would encourage. If they want to break their app in weird ways, let them. We are all using Turing-complete languages, and we should all use them with great care. And if somebody builds software for the medical or nuclear sector in Ruby... a whole different category of fail.

I submitted this patch because Rack::Timeout is a valuable tool even outside of Heroku. In my case, allowing long-running requests on the admin tool enables us, without much hassle, to do some heavier processing in the request cycle.

We are luckily not running on Heroku, so this does not cost us much, nor does it break the whole app.

 gem "rack-timeout", github: "Overbryd/rack-timeout"

Cheers, Lukas


@wuputah
Collaborator

wuputah commented Aug 30, 2016

I think this is one of the cleanest implementations of this I've seen submitted. Yes, it is generally a bad idea, but people keep asking for it. I am not sure.

Whether you are on Heroku or not doesn't matter: the longer a request runs, the more resources you tie up. Large Unicorn or Puma deployments help, but if you get too many slow /admin requests your app will start suffering badly.

That said, rack-timeout is really a debugging tool and should not be relied upon to keep your app performing well. Aborting requests is actually quite dangerous and can lead to broken database connections, etc. You really want other things to time out before rack-timeout comes into play, such as statement_timeout for Postgres.
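
As a concrete sketch of that layered approach (assuming Rails with the postgresql adapter; the 5000 ms value is arbitrary), Postgres itself can cancel runaway statements before rack-timeout ever fires:

```yaml
# config/database.yml -- connection variables are passed on to Postgres,
# which cancels any statement running longer than statement_timeout.
production:
  adapter: postgresql
  variables:
    statement_timeout: 5000   # milliseconds
```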

So, in short, this lets people shoot themselves in the foot either a bit more or a bit less, but rack-timeout is all about foot-shooting anyway, so does it really matter?

@DJSAMPAT

DJSAMPAT commented Aug 31, 2016

@Overbryd I think you have done a great job here, and I would recommend that it gets merged: it is simple, and it is something that many have requested (@sathish316, @hamitturkukaya, @ankane, @rpechayr, @zhuochun).

@wuputah If the functionality is added, then it should be up to developers whether they shoot themselves in the foot. If this can allow requests past the Heroku (or any other) timeout, then it should be an option per request, not across the whole app. What we need is an option that applies only when certain criteria are met.

@kch I had a look at #9 #56 #64 #67 #107, and there is mention of other ways to handle long-running reports/background jobs, but no mention in the README, and no mention of a particular fork that would not cause other issues.

@rojosinalma

rojosinalma commented Feb 1, 2017

I don't think this is really about one particular case. The case I'm faced with, at least, is not related to Heroku, payments, or any other argument already mentioned in previously rejected PRs or in comments in this thread.

I don't want to go into my particular case, because I think this feature should be added simply to give developers flexibility. Yes, most of the time rack-timeout will be used with performance and protection in mind, but we can't pretend that all software behaves exactly the same in all cases (or, even worse, force that to be the case).

Adding this feature does not break any apps; if something does break, it would be explicitly the developer's fault and not the gem's logic.

It doesn't open any gaps or introduce any errors; it just gives people flexibility, and that's good.

Please merge :D

@kch
Contributor

kch commented Jun 21, 2017

I'll just say that I'm not against this. Though feet are really important, and I'd personally refrain from shooting them, myself.

@schneems
Member

Last comment was from two years ago. I think you can effectively get this same behavior by writing your own custom middleware that optionally calls rack-timeout, which gives you maximum flexibility.

I say we close this for now. If you want, you could write a gem that does what I said (wraps rack-timeout) and publish it, perhaps as rack-timeout-conditional.
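
The middleware schneems describes could be sketched roughly like this (illustrative only: ConditionalTimeout is a made-up name, and the sketch uses Ruby's stdlib Timeout so it stays self-contained; a real implementation would delegate to Rack::Timeout instead):

```ruby
require "timeout"

# Illustrative sketch of a per-request conditional timeout middleware.
# The block decides the timeout for each request; returning nil or false
# disables it. In production you would wrap Rack::Timeout here rather
# than use stdlib Timeout, which is unsafe for real request handling.
class ConditionalTimeout
  def initialize(app, &chooser)
    @app = app
    @chooser = chooser # env -> seconds, or nil/false to disable
  end

  def call(env)
    seconds = @chooser.call(env)
    return @app.call(env) unless seconds
    Timeout.timeout(seconds) { @app.call(env) }
  end
end
```

In a config.ru it might be wired up as `use(ConditionalTimeout) { |env| env["PATH_INFO"].start_with?("/admin") ? 20 : 5 }`.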

@mhenrixon

Shame that we have to increase ALL our timeouts just because of one endpoint. It would be awesome to be able to configure timeouts per endpoint for those odd cases where it actually does make sense. Yes, a background worker would be better, but adding one for a single endpoint in an otherwise super simple service also adds a dependency on something like Redis that then needs to be operated.

Would have been super to just be able to bump a timeout for that one endpoint that processes some files.

@schneems
Member

Sure, I understand the desire, which is why I would recommend making another library. If it's successful, the need is proven, and the bugs are worked out, then we can reconsider adding it back into this library.

@zombocom zombocom locked as resolved and limited conversation to collaborators Nov 25, 2019
@schneems schneems closed this Dec 11, 2019

7 participants