Store passing screenshots for reftests #22473

Open
gsnedders opened this issue Mar 26, 2020 · 7 comments
@gsnedders (Contributor) commented Mar 26, 2020

During spec development especially, it can be useful to look at passing screenshots for reftests, particularly when there is any doubt about whether the reference itself rendered correctly. I think it is worth revisiting whether we should store passing screenshots as well.

@Hexcles transferred this issue from web-platform-tests/wpt.fyi on Mar 26, 2020

@Hexcles (Member) commented Mar 26, 2020

Transferred this issue to wpt because, on the wpt.fyi side, we are fully capable of storing all screenshots. The change needs to happen in wptrunner; IIRC, the Marionette reftest runner might require a bit of work, as it doesn't return the screen capture when a test passes (I'm not sure). There is also the question of whether the CI platforms (Azure Pipelines and Taskcluster) can store the screenshots temporarily during the run.

@gsnedders (Contributor, Author) commented Mar 26, 2020

We have all the support needed in wptrunner for this; it's just a question of passing the right flags and storing the right screenshots.
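For reference, wptrunner exposes a screenshot-capture setting (the `"unexpected"` default is visible in the executor code quoted below), and the `wpt` CLI surfaces it as a `--reftest-screenshot` option with choices along the lines of `unexpected`, `fail`, and `always`. The exact flag spelling may have changed since this thread; the product and test path here are purely illustrative:

```shell
# Hypothetical invocation: capture a screenshot for every reftest,
# not just the unexpectedly-failing ones, and write a wptreport log.
./wpt run --reftest-screenshot=always \
          --log-wptreport=report.json \
          firefox css/css-flexbox
```

This is the kind of "right flags" a CI configuration would need to set to backfill passing screenshots.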

@Hexcles (Member) commented Mar 26, 2020

Hmm, I guess my memory was about the default behaviour, which can probably be changed here:

def setup(self, screenshot="unexpected"):
    data = {"screenshot": screenshot}
    if self.executor.group_metadata is not None:
        data["urlCount"] = {urljoin(self.executor.server_url(key[0]), key[1]): value
                            for key, value in iteritems(
                                self.executor.group_metadata.get("url_count", {}))
                            if value > 1}
    self.executor.protocol.marionette.set_context(self.executor.protocol.marionette.CONTEXT_CHROME)
    self.executor.protocol.marionette._send_message("reftest:setup", data)

Also, IIRC @jgraham previously expressed concerns about performance regressions (especially in Firefox, where not all screenshots are dumped to the runner by default).

@Hexcles (Member) commented Mar 26, 2020

A few numbers, FWIW: so far we have about 68k images in total, amounting to less than 1 GB for all failing screenshots across all browsers.

@jgraham (Contributor) commented Mar 27, 2020

The support exists here, but we don't have any way to generate only the screenshots we don't already have. That seems tricky in general, because we can't hash a screenshot we haven't generated in order to know whether it has been seen before. So having this on by default (even just for GitHub runs) would end up doing a lot of extra work.

If we do a push to a trigger branch with the flags set to always generate screenshots, that should backfill a bunch of data. Could we start by doing that? Later we could consider setting the flag for PRs but not master runs, doing one run a month with the flag enabled, or only enabling it on beta.
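The deduplication constraint described above (you must capture a screenshot before you can hash it, so hashing saves storage and upload but never capture time) can be sketched with a content-addressed filter. This is a minimal illustration, not wpt.fyi's actual API; `screenshot_key` and `filter_new` are hypothetical helper names:

```python
import hashlib

def screenshot_key(png_bytes):
    # Content-address each capture by its hash, so identical renderings
    # collapse to a single stored image (hypothetical helper).
    return hashlib.sha1(png_bytes).hexdigest()

def filter_new(captures, known_hashes):
    """Return only captures whose hash has not been seen before.

    Note the catch from the comment above: every screenshot in `captures`
    has already been generated, so this dedup avoids redundant storage
    and upload, not the cost of taking the screenshot in the first place.
    """
    new = []
    for png in captures:
        key = screenshot_key(png)
        if key not in known_hashes:
            known_hashes.add(key)
            new.append(png)
    return new
```

A periodic "always capture" run could feed its screenshots through such a filter before upload, keeping storage growth bounded even though the run itself does the full capture work.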

@Hexcles (Member) commented Mar 27, 2020

So your main concern is performance (specifically, run time), right? We should measure how much slower it actually is to generate all screenshots.

@foolip added the infra label on May 13, 2020
@stephenmcgruer (Contributor) commented May 25, 2020

Tentatively marking this as backlog, but feel free to upgrade it to roadmap, @Hexcles, if you plan to work on this anytime soon.
