Context
When writing tests that compare screenshots using expect(locator).toHaveScreenshot(), the assertion's outcome is partly controlled by configuration values such as maxDiffPixelRatio. These configuration values can produce passing test results that the test author might consider a false positive. There can also be inconsistencies between tests that capture a full-page image at different viewport sizes: because maxDiffPixelRatio is relative to the total pixel count, tests for larger viewports will be less sensitive to subtle changes than tests for smaller viewports.
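For illustration, here is a minimal sketch of where such a threshold lives, assuming a typical playwright.config.ts (the 0.02 value is made up for the example, not a recommendation):

```ts
// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  expect: {
    toHaveScreenshot: {
      // Pass as long as no more than 2% of all pixels differ from the
      // checked-in expectation. Because this is a ratio of the total pixel
      // count, a full-page screenshot at a larger viewport tolerates more
      // absolute pixel changes than the same page at a smaller viewport.
      maxDiffPixelRatio: 0.02,
    },
  },
});
```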
Problem
The test results web UI currently shows the "actual" and "expected" images only when the assertion fails.
Solution
To determine whether the configuration should be adjusted, there's value in being able to see the "actual" and "expected" images for screenshot assertions that have passed. As such, the test results UI should show the "actual" and "expected" images when a passing step is expanded.
@mscottford Are you using the built-in HTML report? We do not save screenshots for passing assertions, because that would be expensive. Not sure how we could make this work without significant performance issues in the average case. Perhaps an opt-in?
Yes, I'm using the built-in HTML report.
I wouldn't mind an opt-in. I'm not concerned about the additional storage; this is primarily for my local development workflow.
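Until an opt-in like that exists, a possible workaround, sketched below, is to attach the actual image manually with testInfo.attach(); the HTML report displays attachments even for passing tests. The URL, selector, and threshold here are placeholders:

```ts
import { test, expect } from '@playwright/test';

test('hero renders correctly', async ({ page }, testInfo) => {
  await page.goto('https://example.com'); // placeholder URL
  const hero = page.locator('#hero');     // placeholder selector

  // Take and attach our own screenshot so the image is visible in the HTML
  // report even when the assertion below passes. Note this is a separate
  // capture from the one toHaveScreenshot() compares internally.
  await testInfo.attach('hero-actual', {
    body: await hero.screenshot(),
    contentType: 'image/png',
  });

  await expect(hero).toHaveScreenshot('hero.png', { maxDiffPixelRatio: 0.02 });
});
```

This does not surface the "expected" image, but that one is already checked into the repository next to the test.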