
Include failure reason when snapshot comparisons fail #638

Merged · 1 commit · Sep 21, 2022

Conversation

@ejensen (Contributor) commented Sep 21, 2022

A follow-up to #628 (comment)

This PR updates the image comparison methods to report a failure message rather than just a boolean indicating whether the comparison passed. The failure message provides more insight into why a snapshot assertion failed, which is especially helpful when the image differences are subtle, such as the "ghost" differences mentioned in #628 (comment).
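The shape of this change can be sketched as follows. Note that `ComparisonInput`, `compare`, and `matchFraction` are hypothetical names used for illustration, not the library's actual API; the point is the shift from returning a Bool to returning an optional failure message:

```swift
// Hypothetical sketch of the change described above.
// Before: the comparison returned a Bool (pass/fail).
// After: it returns an optional failure message; nil means the snapshots matched.
struct ComparisonInput {
  let matchFraction: Float  // fraction of pixels that matched, in [0, 1]
}

func compare(_ input: ComparisonInput, precision: Float) -> String? {
  // Enough pixels matched: no failure to report.
  guard input.matchFraction < precision else { return nil }
  // Otherwise, explain *why* the comparison failed.
  return "Actual image precision \(input.matchFraction) is less than required \(precision)"
}

print(compare(ComparisonInput(matchFraction: 0.95), precision: 0.9) ?? "match")  // match
print(compare(ComparisonInput(matchFraction: 0.03), precision: 0.9) ?? "match")  // prints the failure message
```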

[Snapshot images: "Fancy logo"]

@stephencelis (Member) left a comment:

Much better!

Comment on lines -80 to +100

-  let deltaFilter = CIFilter(
-    name: "CILabDeltaE",
-    parameters: [
-      kCIInputImageKey: CIImage(cgImage: newCgImage),
-      "inputImage2": CIImage(cgImage: oldCgImage)
-    ]
-  )
-  guard let deltaOutputImage = deltaFilter?.outputImage else { return false }
-  let extent = CGRect(x: 0, y: 0, width: oldCgImage.width, height: oldCgImage.height)
-  guard
-    let thresholdOutputImage = try? ThresholdImageProcessorKernel.apply(
-      withExtent: extent,
-      inputs: [deltaOutputImage],
-      arguments: [ThresholdImageProcessorKernel.inputThresholdKey: (1 - perceptualPrecision) * 100]
-    )
-  else { return false }
-  let averageFilter = CIFilter(
-    name: "CIAreaAverage",
-    parameters: [
-      kCIInputImageKey: thresholdOutputImage,
-      kCIInputExtentKey: extent
-    ]
-  )
-  guard let averageOutputImage = averageFilter?.outputImage else { return false }
-  var averagePixel: Float = 0
-  CIContext(options: [.workingColorSpace: NSNull(), .outputColorSpace: NSNull()]).render(
-    averageOutputImage,
-    toBitmap: &averagePixel,
-    rowBytes: MemoryLayout<Float>.size,
-    bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
-    format: .Rf,
-    colorSpace: nil
-  )
+  return perceptuallyCompare(
+    CIImage(cgImage: oldCgImage),
+    CIImage(cgImage: newCgImage),
+    pixelPrecision: precision,
+    perceptualPrecision: perceptualPrecision
+  )
@ejensen (Contributor, Author) replied:

The CoreImage operations were extracted into a separate method, since they were common to UIImage.swift and NSImage.swift.

@stephencelis stephencelis merged commit 208cb71 into pointfreeco:main Sep 21, 2022
  format: .Rf,
  colorSpace: nil
)
let actualPerceptualPrecision = 1 - maximumDeltaE / 100

@stephencelis (Member) commented:

@ejensen I'm not familiar with this filter's output, but when taking this for a spin in isowords I noticed that it seems to be flipped:

[Image attachment]

Should this just be maximumDeltaE / 100?

@stephencelis (Member) added:

Or maybe not, I was misremembering for this one:

[Image attachment: IMG_3696]

Might just need to get a feel for the change and what the numbers "mean" :)

@ejensen (Contributor, Author) commented Sep 22, 2022:

@stephencelis Delta E has an inverse relationship with precision: a higher Delta E (difference) means the pixels are less similar (less precise).

The wording of the error reporting might need some finessing. Seeing "Actual perceptual precision: 0.56" insinuates that the entire image is significantly different, when in reality the reported perceptual precision is the lowest of all the pixels compared, which could be a significant outlier.
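The inverse relationship follows from the formula visible in the diff (`1 - maximumDeltaE / 100`). A small illustration of how maximum Delta E maps to the reported perceptual precision:

```swift
// Illustration of the inverse relationship between CIE Lab Delta E and the
// reported perceptual precision, using the formula from the diff above.
func perceptualPrecision(maximumDeltaE: Float) -> Float {
  1 - maximumDeltaE / 100
}

print(perceptualPrecision(maximumDeltaE: 0))    // 1.0 (identical pixels)
print(perceptualPrecision(maximumDeltaE: 44))   // ~0.56 (a single very different pixel)
print(perceptualPrecision(maximumDeltaE: 100))  // 0.0 (maximally different)
```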

@simondelphia commented Sep 23, 2022:

So when it says "Actual image precision 0.03083378 is less than required 0.9" in my test logs what does that mean?

@ejensen (Contributor, Author) replied Sep 27, 2022:

> So when it says "Actual image precision 0.03083378 is less than required 0.9" in my test logs what does that mean?

That means the lowest perceptual precision across all of the pixels is 3.08%. You would need to set perceptualPrecision <= 0.0308 if you required every pixel to meet the perceptual threshold (e.g. when setting precision: 1.0).

The failure messages could both better explain the values and provide suggestions. I'm prototyping this in the image-comparison-playground branch. It contains an iOS/macOS-compatible Swift playground where you supply two snapshots and it displays the result of the comparison along with all the intermediaries for inspection.

[Screenshot: playground output for non-matching images]
The output makes suggestions for values to use when images do not match:

❌ Images do not match…
The images would match if either:
  1. precision is lowered to 0.9991
  2. perceptualPrecision is lowered to 0.9939
  3. precision is raised to 1.0 and perceptualPrecision is lowered to 0.9912
ℹ️ 99.91% of pixels match with 99.98% perceptual precision

Matching images also get suggestions when the precision values could be raised:

✅ Images match
The images would still match if:
  1. precision is raised to 0.9998
  2. perceptualPrecision is raised to 0.9939
  3. precision is raised to 1.0 and perceptualPrecision is set to 0.9859
ℹ️ 100.00% of pixels match with 98.00% perceptual precision

There is a tradeoff between detail/precision and speed of execution. It is particularly expensive to calculate the maximum matching perceptualPrecision when precision < 1; it takes a couple of seconds with a full-screen snapshot. The other suggestions are quite fast and are likely reasonable to include in swift-snapshot-testing's failure messages.
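A hedged sketch of how the fast suggestions could be derived from per-pixel Delta E values; this illustrates the idea only and is not the playground's actual implementation (all names here are invented):

```swift
// Sketch: given the Delta E of every pixel pair and the current
// perceptualPrecision, compute values at which the comparison would pass.
func matchingThresholds(
  pixelDeltaEs: [Float],
  perceptualPrecision: Float
) -> (precision: Float, perceptualPrecision: Float) {
  // A pixel "matches" if its Delta E is within the perceptual threshold,
  // mirroring the (1 - perceptualPrecision) * 100 conversion in the diff.
  let threshold = (1 - perceptualPrecision) * 100
  let matching = pixelDeltaEs.filter { $0 <= threshold }.count
  // Lowering `precision` to the fraction of matching pixels lets the
  // comparison pass at the current perceptualPrecision...
  let precision = Float(matching) / Float(pixelDeltaEs.count)
  // ...and lowering `perceptualPrecision` to cover the single worst pixel
  // lets it pass with precision 1.0.
  let worstCase = 1 - (pixelDeltaEs.max() ?? 0) / 100
  return (precision: precision, perceptualPrecision: worstCase)
}

let result = matchingThresholds(
  pixelDeltaEs: [0, 0, 0.5, 3],  // three near-identical pixels, one outlier
  perceptualPrecision: 0.99      // tolerates Delta E up to ~1
)
print(result)  // precision: 0.75, perceptualPrecision: ~0.97
```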


> Actual image precision 0.03083378 is less than required 0.9

Isn't that referring to precision, not perceptualPrecision?

@simondelphia commented Oct 20, 2022:

So does it mean there is at least one pixel in the image that is 97% different from the corresponding pixel in the reference image? Sorry, I still don't get what it means when it says "Actual image precision 0.03083378 is less than required 0.9" while looking at two identical-looking images.

Or does it mean that only 3% of the pixels in the image match the corresponding pixels in the other image closely enough to pass the perceptualPrecision parameter? Why is it so low when there are no perceptible differences? 80% or 90% would be understandable, but 3% I can't explain.

edit:

> When in reality the perceptual precision being reported is the lowest of all the pixels compared, which could be a significant outlier.

So there's just one pixel in the image that has a precision of 3%.

@ejensen (Contributor, Author) replied Nov 1, 2022:

> So does it mean there is at least one pixel in the image that is 97% different from the same pixel on the reference image

Yes, that is what that message means.
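Working the quoted number backwards under that interpretation (illustrative arithmetic only, using the `1 - maximumDeltaE / 100` formula from the diff):

```swift
// The reported value corresponds to the single worst pixel pair:
// precision = 1 - maxDeltaE / 100, so a reported 0.03083378 implies a
// maximum Delta E of roughly 96.9, i.e. one pixel pair that is ~97%
// different even if every other pixel matches exactly.
let reported: Float = 0.03083378
let impliedDeltaE = (1 - reported) * 100
print(impliedDeltaE)  // ~96.92
```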


I guess currently there's nothing built-in that I can use to filter out outliers?

@ejensen ejensen deleted the image-diff-reporting branch December 21, 2022 15:09
niil-qb pushed a commit to quickbit/swift-snapshot-testing that referenced this pull request Feb 23, 2023
OksanaFedorchuk pushed a commit to urlaunched-com/swift-snapshot-testing that referenced this pull request Mar 28, 2024
Muhieddine-El-Kaissi pushed a commit to thumbtack/swift-snapshot-testing that referenced this pull request Aug 8, 2024