[release/3.1] Remove local header compressed/uncompressed length validations #42870
Conversation
src/System.IO.Compression/tests/ZipArchive/zip_InvalidParametersAndStrangeFiles.cs
Please wait to merge - @Anipik will do it when the branch is open and this is signed off/green
Can you please update the assembly version here as well?
Why do we need to update the assembly version?
As part of our versioning scheme we always update the assemblyVersion (unless it affects the assembly redirects). This library is part of the shared framework, so I don't think changing it or keeping it the same would have any effect, but we should do it for consistency. Servicing doc: https://github.com/dotnet/runtime/blob/0515878d1eb4fb544ae0219c436c299746fdb2b4/docs/project/library-servicing.md cc @joperezr
We have not been updating assembly versions for fixes in the shared framework and it has worked fine so far, so I do not see why we should start now. Changes in how we do versioning tend to have non-trivial fallout.
That servicing doc states:
This assembly only ships in the shared framework. It doesn't ship in any OOB package. Can you give an example of a previous servicing event for a shared-fx-only assembly where we updated the assembly version?
Agreed here, we shouldn't need to rev the assembly version for things that are only in the shared framework. It can have side-effects that leak, which is a no-no for things that are centrally serviced (e.g., a serviced machine generates a serialized payload that fails to deserialize on a non-serviced machine).
Any updates on this?
@NotoriousRebel Once a PR is approved for servicing but gets a NO MERGE label, it usually means that the release branches are currently locked due to a release in progress, so we can't merge PRs in. Once the branch is unlocked, this will be ready to go in and ship in the next release.
@joperezr this is a little confusing, in which release will this be? 3.1.4? Hopefully soon, since we are bitten by this bug (dotnet/runtime#1094): we are simply creating a zip from a .NET Framework app and reading it elsewhere with a .NET Core 3.1.2 app.
Yes, this will be available in the 3.1.4 build. It should be released sometime around May.
Port from dotnet/runtime#32149. Fixes dotnet/runtime#1094
Description
We had an issue (dotnet/runtime#27741) reporting that the .NET Core ZipArchive API extracted a tampered zip file without any error, where the uncompressed size had been tampered to be much smaller than the real/extracted size. When decompressing the tampered zip with different tools, some threw an error, while others simply restricted the decompressed output to the given uncompressed size. As this could cause a security issue, we added strict validation of the compressed and uncompressed sizes, throwing if they don't match, and also limited the decompressed output to the uncompressed size.
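The mitigation described above (capping decompressed output at the declared uncompressed size, so an understated size truncates the output rather than letting it grow unbounded) can be sketched in Python with zlib. This is a hypothetical illustration of the general technique, not the actual System.IO.Compression code:

```python
import zlib

def decompress_capped(compressed: bytes, declared_size: int) -> bytes:
    # Inflate raw-deflate data (wbits=-15, the format used inside zip
    # entries), emitting at most declared_size bytes: a tampered header
    # that understates the size truncates the output instead of
    # producing more data than the header promised.
    d = zlib.decompressobj(wbits=-15)
    return d.decompress(compressed, declared_size)

payload = b"A" * 10_000
c = zlib.compressobj(wbits=-15)
compressed = c.compress(payload) + c.flush()

# Honest declared size: the full payload comes back.
assert decompress_capped(compressed, 10_000) == payload
# Understated (tampered) size: output is capped at 100 bytes, no error.
assert len(decompress_capped(compressed, 100)) == 100
```

With this cap in place, a strict size-equality check becomes optional, which is why this PR can remove the throwing validation without reopening the original security issue.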
Customer Impact
Throwing an error for mismatched compressed/uncompressed sizes in the local header vs. the central directory record was a breaking change. It turns out the zip format specification, especially the local file header format, is not clear in some areas and is not strictly followed by different zip-producing tools. Several customers complained that they can no longer read zips they were able to read before 3.0. Since the validation is not mandatory when we restrict the output stream to the uncompressed size, we are removing the validation we added in 3.0.
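For context on the two places a zip stores these sizes, here is a minimal sketch using Python's stdlib. The field offsets (18 and 22 within the fixed 30-byte local file header) come from the zip format; the file name and contents are made up for the example:

```python
import io
import struct
import zipfile

# Build a small in-memory zip with one deflated entry.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
    z.writestr("a.txt", b"hello hello hello hello")
data = buf.getvalue()

# The first local file header starts at offset 0: signature PK\x03\x04,
# then fixed fields; the compressed and uncompressed sizes are the two
# 4-byte little-endian fields at offsets 18 and 22.
(sig,) = struct.unpack_from("<I", data, 0)
assert sig == 0x04034B50
local_csize, local_usize = struct.unpack_from("<II", data, 18)

# The central directory record carries its own copy of both sizes;
# zipfile exposes the parsed values. Here they match the local header,
# but some producers write zeros locally and rely on a data descriptor,
# which is why a strict local-vs-central equality check breaks real zips.
info = zipfile.ZipFile(io.BytesIO(data)).infolist()[0]
assert local_csize == info.compress_size
assert local_usize == info.file_size
```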
Regression
No.
We are removing a restriction and adding nothing new, so no regression should occur.
Testing
Removed the related tests and added a new test for the original bug.
Risk
Customers might read a corrupted zip file whose recorded compressed/uncompressed sizes differ from the real sizes, but since we have appropriate restrictions on reading the compressed stream and on the decompressed output stream, that shouldn't be a big issue.
cc @ericstj