nuke_limit is not honored #1764
Comments
nuke_limit is not honored anymore. Relevant vtc:
object creation. Fixes #1764
Conflicts:
    bin/varnishd/cache/cache_fetch.c
    bin/varnishd/storage/stevedore.c
    bin/varnishd/storage/storage.h
    bin/varnishd/storage/storage_lru.c
    bin/varnishd/storage/storage_persistent.c
    bin/varnishd/storage/storage_simple.c
Backport review: This was backported by @mbgrydeland (365e605) and is part of 4.1.7-beta1. For some users this fix will change the behavior of Varnish in a significant way, but most people will not notice it.
Hi, after the security bug we updated our Varnish cluster from 4.1.1 to 4.1.8. After a few days we noticed a new issue, which I am fairly sure is due to nuked objects. I then saw the following:
Therefore, I have these questions:
This is one of very few patches after 4.1 that can affect a running Varnish in a negative way, and it is present in all versions from 4.1.7-beta1 onwards.
This is a topic for the misc mailing list, but the short story is that the …
We are also facing the same problem as described by @INCRE: Varnish is truncating the transaction and only sending part of the response body to the client. For now this is happening for large objects. Our cache memory is almost full, so I assume Varnish needs to nuke objects to make space for new ones. Our current nuke_limit is 50. @hermunn mentioned that after reaching nuke_limit we would receive a 503, but this is not happening; instead we receive the error …
@naveen-goswami please take this to the mailing list instead. If you don't get a 503, it means that streaming was enabled (the default) and Varnish started the client delivery in parallel with the backend fetch. This is a trade-off between latency and correctness. The solution is to have two storage backends, one for large files and one for small files; this way you won't run into a situation where large files nuke lots of small files to make space.
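To illustrate the two-storage approach described above, here is a minimal VCL sketch for 4.1, where routing is done with `beresp.storage_hint`. The store names (`main`, `large`), their sizes, the backend address, and the 10 MB threshold (the object size mentioned in the following comment) are illustrative assumptions, not values confirmed in this thread.

```vcl
vcl 4.0;

import std;

# Hypothetical startup options defining two separate stores:
#   varnishd ... -s main=malloc,4G -s large=malloc,512M

# Placeholder backend so the example compiles on its own.
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_backend_response {
    # Send bodies larger than ~10 MB to the "large" store so a single big
    # object does not nuke many small objects out of the default store.
    # Content-Length can be missing on chunked responses; std.integer()
    # then falls back to 0 and the object stays in the default store.
    if (std.integer(beresp.http.Content-Length, 0) > 10485760) {
        set beresp.storage_hint = "large";
    }
}
```

With such a split, eviction pressure from large objects is confined to the `large` store, so the small-object store keeps its hit rate.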
Just to update on the issue: increasing nuke_limit to 500 helped us in removing those errors. Will take future concerns to the mailing list as mentioned. @Dridi, could you provide any guidance on how to divide the storage backends so we can tackle this problem effectively? Our statistics suggest that we would face this problem again when the object size is > 10 MB.
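For completeness, a sketch of how the nuke_limit change mentioned above can be applied; the value 500 is the one from the comment above, while the rest of the command lines are assumptions about a typical setup.

```sh
# Persistent: add the parameter to the existing varnishd startup options.
#   varnishd ... -p nuke_limit=500

# Runtime change on a running instance (not persisted across restarts):
varnishadm param.set nuke_limit 500
varnishadm param.show nuke_limit
```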
Old ticket imported from Trac:
nuke_limit doesn't seem to have any effect anymore. It looks like stv_alloc_obj is called multiple times per object, and only does one allocation, so it never hits nuke_limit.