archive/zip: provide API for resource limits #33036
Zip unpacking can generate outputs thousands of times larger than the input. We should provide a security API that lets callers limit the resources spent unpacking untrusted archives.
It might be enough to limit the output size, if CPU and memory use always scale with it.
(sorry for the drive-by comment, I'm a bit short on time right now)
I haven't thought this through completely, but since it sounds like a general-ish problem, how about having a way to set a quota for how much memory can be allocated on the heap (not live memory, just the sum of allocation requests) by a G? Such a quota would apply to the stack frame where it's set and to any child frames (including frames of Gs spawned while the quota is active).
For every allocation the quota would be checked; if the amount requested goes over the quota, the allocation fails and the G panics.
With such a mechanism any of the parsers/decoders/unmarshalers could estimate an upper bound on the amount of memory allocated to perform a certain (sub)operation, and the runtime would ensure the limit can't be crossed. Users of the parsers/decoders/unmarshalers would be able to use app-specific knowledge to place additional limits (e.g. "I expect images to be no bigger than N bytes once decoded, so allocating more than 1000*N+K bytes would be hard to justify"). Having quotas nestable (although this may complicate the implementation a bit) would allow these limits to compose transparently (nested quotas greater than or equal to an enclosing quota would return an error when set or, most likely, be silently ignored).
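To make the nesting concrete, here is a user-space sketch of the idea: a `Quota` type (hypothetical, not a proposed API) that decoders would charge each allocation against before allocating, with child quotas clamped to their parent's remaining budget (the "silently ignored" option above).

```go
package main

import "fmt"

// Quota tracks a budget of allocation requests. A nil parent
// means a top-level quota. All names here are hypothetical.
type Quota struct {
	parent    *Quota
	remaining int64
}

// NewQuota creates a quota nested under parent. A nested quota
// looser than its parent adds nothing, so it is clamped to the
// parent's remaining budget rather than rejected.
func NewQuota(parent *Quota, limit int64) *Quota {
	if parent != nil && limit > parent.remaining {
		limit = parent.remaining
	}
	return &Quota{parent: parent, remaining: limit}
}

// Charge records an allocation request of n bytes, failing if it
// would exceed this quota or any enclosing one. (The real runtime
// mechanism would panic instead of returning an error.)
func (q *Quota) Charge(n int64) error {
	for c := q; c != nil; c = c.parent {
		if n > c.remaining {
			return fmt.Errorf("allocation of %d bytes exceeds quota (%d remaining)", n, c.remaining)
		}
	}
	for c := q; c != nil; c = c.parent {
		c.remaining -= n
	}
	return nil
}

func main() {
	outer := NewQuota(nil, 1<<20)   // 1 MiB for the whole operation
	inner := NewQuota(outer, 2<<20) // clamped to outer's remaining 1 MiB
	fmt.Println(inner.Charge(512 << 10)) // within budget
	fmt.Println(inner.Charge(1 << 20))   // exceeds what remains
}
```

The key property is that charging the inner quota also draws down every enclosing quota, so an inner limit can never launder allocations past an outer one.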
There is the problem of how to propagate such a quota to less-idiomatic structures like worker pools, but it should be possible to find a workaround for it (off the top of my head, make it possible to explicitly propagate the quota controller - but this could probably be done later).
update: reworded some parts for clarity