
APNG Decoder incorrectly handles frame offsets and dispose previous with blend over #2708

Closed
SpaceCheetah opened this issue Mar 28, 2024 · 3 comments

Comments

@SpaceCheetah
Contributor

Prerequisites

  • I have written a descriptive issue title
  • I have verified that I am running the latest version of ImageSharp
  • I have verified that the problem exists in both DEBUG and RELEASE mode
  • I have searched open and closed issues to ensure it has not already been reported

ImageSharp version

3.1.3

Other ImageSharp packages and versions

None

Environment (Operating system, version and so on)

Windows 11

.NET Framework version

.NET 8.0

Description

The APNG decoder mishandles several cases in significant ways.
For instance, APNG allows a frame to be smaller than the overall image and placed at an x/y offset, as a compression measure. The decoder currently does not take that offset into account.
Additionally, there appear to be problems with the dispose-previous and blend-over operations.
Both of these issues are reproduced by the test image below.
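
To make the offset part concrete, here is a minimal sketch (not ImageSharp's internal code) of where a sub-frame's pixels belong on the canvas; the class, method, and parameter names are invented for illustration:

using SixLabors.ImageSharp;
using SixLabors.ImageSharp.PixelFormats;

internal static class OffsetSketch
{
    // A sub-frame pixel at (x, y) belongs at (x + xOffset, y + yOffset) on the
    // full canvas, where the offsets come from the frame's fcTL chunk.
    // Blend handling is ignored here; "source" semantics simply replace the pixel.
    public static void CopyWithOffset(Image<Rgba32> canvas, Image<Rgba32> subFrame, int xOffset, int yOffset)
    {
        for (int y = 0; y < subFrame.Height; y++)
        {
            for (int x = 0; x < subFrame.Width; x++)
            {
                canvas[x + xOffset, y + yOffset] = subFrame[x, y];
            }
        }
    }
}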

Steps to Reproduce

using SixLabors.ImageSharp;

// Load the animated PNG and save each frame out as a standalone PNG.
Image image = Image.Load("test.png");
for (int i = 0; i < image.Frames.Count; i++) {
    image.Frames.CloneFrame(i).SaveAsPng($"{i}.png");
}

Images

Original: test.png (animated test image attached to the issue)
Outputted frames: 0.png through 4.png (attached)

@SpaceCheetah
Contributor Author

I'll be looking into fixing the decoder (and probably the encoder) soon, but first, a question.

Should each of the frames from the decoder really be the final, fully rendered frame? If so, getting any given frame, in many cases, requires rendering all previous frames.

It also makes much of the retrieved frame metadata irrelevant to users; after all, blend mode doesn't matter if the frame is already fully rendered. The only frame metadata that would need to be exposed is the frame delay.

Alternatively, it would be much simpler on the decoder side to simply provide the frames as stored, since that wouldn't need any information from previous frames. Given all the necessary metadata, the user could then take care of compositing themselves, or perhaps we could provide a method to render frames outside of the main decoder.
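
For what it's worth, a caller-side compositing loop would look roughly like the sketch below. Everything here is a hedged illustration: StoredFrame and its properties are invented stand-ins for whatever per-frame metadata the decoder would expose, not existing ImageSharp types.

using System.Collections.Generic;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.PixelFormats;
using SixLabors.ImageSharp.Processing;

// Hypothetical container for a frame exactly as stored in the APNG stream.
public sealed record StoredFrame(
    Image<Rgba32> Pixels, int X, int Y,
    bool BlendOver, bool DisposeToPrevious, bool DisposeToBackground);

public static class ApngComposer
{
    // Replays dispose/blend ops to yield one fully rendered frame per stored frame.
    public static IEnumerable<Image<Rgba32>> Compose(int width, int height, IEnumerable<StoredFrame> frames)
    {
        Image<Rgba32> canvas = new(width, height);
        foreach (StoredFrame f in frames)
        {
            // Keep a copy of the canvas if this frame must be reverted after display.
            Image<Rgba32>? saved = f.DisposeToPrevious ? canvas.Clone() : null;

            // Place the stored sub-frame at its offset, honouring its blend op:
            // "over" alpha-composites, "source" replaces the region outright.
            canvas.Mutate(ctx => ctx.DrawImage(
                f.Pixels,
                new Point(f.X, f.Y),
                PixelColorBlendingMode.Normal,
                f.BlendOver ? PixelAlphaCompositionMode.SrcOver : PixelAlphaCompositionMode.Src,
                1f));

            yield return canvas.Clone(); // the fully rendered frame

            // Apply the dispose op before the next frame is drawn.
            if (f.DisposeToPrevious)
            {
                canvas.Dispose();
                canvas = saved!;
            }
            else if (f.DisposeToBackground)
            {
                // APNG background disposal clears only the frame's region to transparent.
                for (int y = f.Y; y < f.Y + f.Pixels.Height; y++)
                {
                    for (int x = f.X; x < f.X + f.Pixels.Width; x++)
                    {
                        canvas[x, y] = default;
                    }
                }
            }
        }

        canvas.Dispose();
    }
}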

@SpaceCheetah
Contributor Author

Fixed a few issues, still working on some related ones before I make a PR.

One thing I noticed is that, in APNG, the default frame isn't necessarily animated: if the first fcTL chunk comes after the default image, the default image is not part of the animated sequence. This currently isn't handled at all. I can fix some of the resulting issues, but part of fixing this would require a change to the public API.
Clearly, the default image still needs to be in the image frames, even if it's not being animated. There needs to be a way for the end user to know to skip it when animating the results.

I'm currently planning to add a bool to the PngMetadata, indicating whether the default frame is part of the animation. Is there any issue I should know about that would make that wrong?
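
For illustration, consumer code could then look something like this; AnimateRootFrame is just a placeholder name for the proposed bool, not an existing property:

using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Formats.Png;

using Image image = Image.Load("test.png");
PngMetadata pngMeta = image.Metadata.GetPngMetadata();

// Hypothetical flag proposed above: when false, frame 0 is the default image
// and is not part of the animated sequence, so playback starts at frame 1.
int start = pngMeta.AnimateRootFrame ? 0 : 1;
for (int i = start; i < image.Frames.Count; i++)
{
    // display image.Frames[i] for its frame delay
}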

@JimBobSquarePants
Member

Hi @SpaceCheetah, thanks for raising this. Yes, we should absolutely be handling the offset. We have a TODO in the encoder which somehow got missed.

> Should each of the frames from the decoder really be the final, fully rendered frame? If so, getting any given frame, in many cases, requires rendering all previous frames.
>
> It also makes much of the retrieved frame metadata irrelevant to users; after all, blend mode doesn't matter if the frame is already fully rendered. The only frame metadata that would need to be exposed is the frame delay.
>
> Alternatively, it would be much simpler on the decoder side to simply provide the frames as stored, since that wouldn't need any information from previous frames. Given all the necessary metadata, the user could then take care of compositing themselves, or perhaps we could provide a method to render frames outside of the main decoder.

I'm sure you've already figured this out now that you've stepped through the code, but yes. We need the completely rebuilt frame to allow processing. If we didn't do that, you would introduce artifacts during operations like resizing, because sampled pixels would include empty data.

We do work during encoding to remove duplicate pixels from the following frames, reintroducing the optimization.
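
A rough sketch of that encoder-side optimisation (purely illustrative, not the actual ImageSharp implementation): find the smallest rectangle that differs between consecutive rendered frames, so only that region, with its x/y offset, needs to be stored.

using SixLabors.ImageSharp;
using SixLabors.ImageSharp.PixelFormats;

internal static class FrameDiffSketch
{
    // Returns the tightest bounding box of pixels that changed between two
    // consecutive rendered frames of the same size.
    public static Rectangle ChangedBounds(Image<Rgba32> previous, Image<Rgba32> current)
    {
        int minX = current.Width, minY = current.Height, maxX = -1, maxY = -1;
        for (int y = 0; y < current.Height; y++)
        {
            for (int x = 0; x < current.Width; x++)
            {
                if (previous[x, y] != current[x, y])
                {
                    if (x < minX) minX = x;
                    if (y < minY) minY = y;
                    if (x > maxX) maxX = x;
                    if (y > maxY) maxY = y;
                }
            }
        }

        // APNG frames cannot be empty, so fall back to a 1x1 region when nothing changed.
        return maxX < 0
            ? new Rectangle(0, 0, 1, 1)
            : new Rectangle(minX, minY, maxX - minX + 1, maxY - minY + 1);
    }
}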

> I'm currently planning to add a bool to the PngMetadata, indicating whether the default frame is part of the animation. Is there any issue I should know about that would make that wrong?

That seems like the correct approach to me.

JimBobSquarePants added a commit that referenced this issue Apr 3, 2024
Fix animated png handling (issue #2708)