limit readahead for private objects #2964
I have not (yet) checked all the details, but this has to do with the fact that at some point we changed how uncacheable fetches work: While, in some very old code, we looped over a fixed-size (relatively small) buffer, I think with the introduction of the separate backend threads, we changed this to "pre-fetch" data from the backend and then deliver it to the client asynchronously. I could imagine we could limit the amount of such readahead.
for anything but return (deliver). Fixes varnishcache#2964
Sorry, the references from the bugfix were for #2963
I had overlooked the fact that for a canceled request we might only read parts of the body. Interestingly, this is only exposed on the vtest ARMs. Ref #2964
Bugwash: Not clear how to solve. Move to VIP.
With 32ef5dc, this test also cancels backend requests and thus may prevent the server from finishing writing the response body. Ref varnishcache#2964
Hi, I just checked https://github.com/varnishcache/varnish-cache/wiki/VIP-26:-limit-private-prefetch and it's not clear. If we try to use pipe in vcl_backend_* (to pipe based on a backend response condition, e.g. on Content-Length), it's not allowed (this worked in 3.x but not in 6.3.x). How can we do this now? Best regards,
@anthosz pipe mode is unaffected by this issue, but not recommended because of the lack of control over the backend response.
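For readers unfamiliar with pipe mode: it is selected from vcl_recv and connects client and backend directly, bypassing the fetch/storage machinery (and hence Transient staging) entirely. A minimal sketch; the URL pattern and backend definition are illustrative assumptions, not from this thread:

```vcl
vcl 4.1;

backend default {
    .host = "127.0.0.1";        # illustrative backend
    .port = "8080";
}

sub vcl_recv {
    # Pipe: Varnish shovels bytes between client and backend without
    # buffering the body in storage -- but also without any control
    # over, or visibility into, the backend response.
    if (req.url ~ "^/downloads/") {
        return (pipe);
    }
}
```

The trade-off nigoroll mentions is visible here: the decision must be made in vcl_recv, before any backend header (such as Content-Length) is known.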
Thx. So how can we avoid pass (and avoid the Transient storage issue due to the memory limit) if we cannot use pipe? :X The idea is to avoid pass & Transient storage usage when there is a big request. I cannot find a solution in v6.
You can, from the client side.
@nigoroll but the Content-Length header is provided by the backend (so in vcl_backend_response) :/ The issue is that we cannot pipe in vcl_backend_response or vcl_deliver, and we cannot have the Content-Length value in vcl_recv.
BTW, there is a PoC in #3240
We've had this issue too, and made a workaround in VCL code like below to pipe all requests if the Content-Length header shows that the response exceeds 30GB. Basically, you can restart your request and go into pipe mode by chaining through several VCL methods, if done creatively:

vcl 4.1;
import std;
sub vcl_backend_response {
if (std.integer(beresp.http.content-length, 0) > 32212254720) {
std.log("DEBUG: Fail this error request, as Transient storage might be insufficient.");
set bereq.http.x-error-reason = "oversize";
return (error);
}
}
sub vcl_backend_error {
if (bereq.http.x-error-reason == "oversize") {
set beresp.status = 599;
return (deliver);
}
}
sub vcl_deliver {
if (resp.status == 599) {
std.log("DEBUG: restarting, backend fetch failed due to oversize");
set req.http.x-pipe-request = "1";
return (restart);
}
}
sub vcl_recv {
# If we have a restart from vcl_deliver, then we immediately go into pipe
if (req.http.x-pipe-request && req.restarts > 0) {
return(pipe);
}
}
@mnederlof I do the same thing already, but thanks for sharing.
I'm hitting this bug too and this VCL doesn't work. The best solution I have at the moment is the simple
It won't work if you download something. You can't reset the connection without the browser thinking the download failed. Any fix? Is there a way to detect it already in Apache or nginx and set no-cache?
This can't work: it would RESTART the request, not simply bypass it.
The workaround only works for GET requests; for POST requests, no POST data is provided in the restarted request...
Have you tried https://varnish-cache.org/docs/6.0/reference/vmod_generated.html#func-cache-req-body?
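For reference, std.cache_req_body() buffers the request body so that it survives a restart instead of being consumed by the first fetch attempt. A minimal sketch; the 1MB cap and the 413 response are illustrative choices, not from this thread:

```vcl
vcl 4.1;

import std;

backend default {
    .host = "127.0.0.1";        # illustrative backend
    .port = "8080";
}

sub vcl_recv {
    # Buffer up to 1MB of request body in memory; without this, the
    # body is gone after the first backend fetch and a restarted
    # request would be sent without its POST data.
    if (req.method == "POST" && !std.cache_req_body(1MB)) {
        # Body too large (or unreadable) to buffer.
        return (synth(413));
    }
}
```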
No, I didn't try it, but I came to the conclusion that
I think there's some confusion about what caching means in that context. Also,
I understand what cache_req_body() does, but the problem itself is that we trigger the same So for everything which is not a GET, we should pipe the request anyway. In case of a Or am I wrong? BTW, this has nothing to do with HTTP/2; at least my backend servers don't use HTTP/2.
The responses for POST requests can be cached if the server says so, but yes, care should be taken not to retry a request with side effects. |
What I'm saying is that if you
Time for discussion to move to a different forum, like a mailing list. https://varnish-cache.org/lists/mailman/listinfo/varnish-misc |
Expected Behavior
Varnish should not need additional memory.
Current Behavior
Memory consumption grows until OOM.
Possible Solution
pipe without caching?
Steps to Reproduce (for bugs)
Context
I'm trying to get around caching big files. In Varnish < 4.0 I found code snippets for restarting and piping, but this no longer works in Varnish >= 4.0. As alternatives, I tried two things.
In my opinion, if I use uncacheable = true, Varnish should not allocate memory for the data transferred.
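To make that attempt concrete, the setup meant is roughly the following sketch (the 1GB threshold is an arbitrary example; the point of this issue is precisely that the body still streams through Transient storage despite the flag):

```vcl
vcl 4.1;

import std;

backend default {
    .host = "127.0.0.1";        # illustrative backend
    .port = "8080";
}

sub vcl_backend_response {
    if (std.integer(beresp.http.Content-Length, 0) > 1073741824) {
        # Makes the object hit-for-miss/pass, i.e. it is not inserted
        # into the cache -- yet the body is still staged in Transient
        # storage while being streamed to the client.
        set beresp.uncacheable = true;
    }
}
```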
Your Environment