Unbuffered chunked streaming #324

Open
bharendt opened this issue Jul 25, 2017 · 8 comments
@bharendt

Fabio uses the Go ReverseProxy, which buffers the outgoing response to the client.

It uses a ResponseWriter backed by a buffered writer with a 4 kB buffer, so outgoing data is held until the buffer fills.

As a workaround, an Accept: text/event-stream request header can be set to enable periodic flushing of that buffer. It has to be present on the incoming request, because the flush interval must already be set in the ReverseProxy constructor.
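For reference, Go's net/http/httputil exposes this as the FlushInterval field on ReverseProxy, which has to be chosen when the proxy is built. Here is a minimal sketch of the mechanism (the header check and the interval value are illustrative, not Fabio's actual code):

```go
package proxy

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"
)

// newProxy builds a reverse proxy for target. The flush interval has to be
// decided here, before the request is served; it cannot be changed per
// response once the body is being copied.
func newProxy(target *url.URL, req *http.Request) *httputil.ReverseProxy {
	p := httputil.NewSingleHostReverseProxy(target)
	if req.Header.Get("Accept") == "text/event-stream" {
		p.FlushInterval = 100 * time.Millisecond // illustrative value
	}
	return p
}
```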

Setting the Accept header is not an option for us, because the chunked resource is requested directly from the browser; it streams live data.

In addition, buffering up to 4 kB before forwarding the response increases the latency of every request.

That's why we modified the default ReverseProxy implementation and added a Flush() call after each write to the response writer, which solves both problems for us.
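A minimal sketch of the idea, assuming the underlying response writer implements http.Flusher (the wrapper type below is illustrative, not our actual patch):

```go
package proxy

import "net/http"

// flushWriter flushes after every Write so that each chunk reaches the
// client immediately instead of sitting in the 4 kB buffer.
type flushWriter struct {
	http.ResponseWriter
}

func (w flushWriter) Write(p []byte) (int, error) {
	n, err := w.ResponseWriter.Write(p)
	if f, ok := w.ResponseWriter.(http.Flusher); ok {
		f.Flush()
	}
	return n, err
}
```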

If you think it makes sense to adopt this change, we are happy to open a pull request.

@magiconair
Contributor

That sounds interesting, and there was #129 (comment) which seems related. It was always fairly clear that forking the reverse proxy would eventually be necessary, but I'm glad that for the most part I haven't had to do it.

So my suggestion would be this:

  • Please send a PR, but make using the streaming proxy configurable. If the streaming proxy is a drop-in replacement for the reverse proxy with more features, then it should replace it in the long run. If it has a different feature set, then it must be configurable.

  • Consider pushing a PR towards Go to add this feature there as well.

@tommyvicananza

tommyvicananza commented May 8, 2019

Hello, here's what we did.
Thanks to @bharendt for the approach, but we wanted to avoid unnecessary flushes, so we added a Target option that enables flushing for the desired route.

E.g.:
route add nomad nomad.tommyvicananza.io http://192.168.50.4:4646 opts "flush=-1s"
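Roughly, the option value is parsed as a duration and applied to the proxy for that target. A sketch, assuming the route options are available as a simple map (the helper and its signature below are hypothetical, not the actual patch):

```go
package proxy

import (
	"net/http/httputil"
	"time"
)

// applyFlushOpt reads a per-route "flush" option such as "-1s" and applies
// it to the proxy. The opts map is a hypothetical stand-in for however the
// route options are carried on the target.
func applyFlushOpt(p *httputil.ReverseProxy, opts map[string]string) {
	if v, ok := opts["flush"]; ok {
		if d, err := time.ParseDuration(v); err == nil {
			// In recent Go versions, a negative FlushInterval means
			// flush immediately after each write to the client.
			p.FlushInterval = d
		}
	}
}
```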

@aaronhurt
Member

@tommyvicananza I believe #655 is related and may solve your issue. It does not implement a configurable flush, but it does re-add the Flush method that was lost in a previous change.

@oterrien

Hi

We have the same issue (fabio:1.5.15-go1.15.5).

Our use case is to stream (via SSE) all statuses of a given object. Because the buffer size is never reached, the content is never displayed.

Updating proxy.flushinterval in fabio.properties has no impact.

Is it possible to configure the buffer size that is used while streaming?

@sbrl

sbrl commented Apr 2, 2021

This is a serious problem when putting Fabio in front of Hashicorp Nomad: hashicorp/nomad#5404

If I access my Nomad instance directly, viewing the logs of any running job works fine. But if I try to view them when connecting via Fabio, it doesn't work. Apparently flush=-1s is supposed to help here, but it has no effect at all.

What's the status on this issue please?

@james-masson

I hit the same problem.

@sbrl - flush=-1s is a product of the patch @tommyvicananza made in his custom build - it's not in mainline Fabio as far as I can see.

Per-route config for this would be ideal. Would you be willing to submit this as a PR, @tommyvicananza?

The workaround for now is to set proxy.globalflushinterval="-1s"

In the environment I'm working on, globally disabling buffering should be fine, but for many this will be very inefficient.

Nomad does not trigger the SSE handling, as it uses the content type "application/json".

@sbrl

sbrl commented Jul 18, 2021

Thanks, @james-masson! I've implemented that as a temporary fix. It seems to make things work in some scenarios where they didn't before, but not in others (e.g. when there aren't many logs), so there's still work that needs doing here.

Nomad does not trigger the SSE handling, as it uses content-type "application/json"

Does that mean that even implementing this fix won't completely solve the problem? If so, that would explain the behaviour I'm seeing.

@Himura2la

I tried setting FABIO_PROXY_GLOBALFLUSHINTERVAL = "-1s", and it doesn't help with the Nomad logs issue...
