Rendering regression test #11888

Open
hrydgard opened this issue Mar 14, 2019 · 2 comments

@hrydgard (Owner)

We should collect a large number of GE captures and have a way to automatically render them and compare the results to blessed reference images. Dolphin has something similar that runs in the cloud, but I primarily want this for mobile devices, since they generally have the worst driver bugs and rendering issues. It would run locally on the device but use network sharing to download and run each GE capture, so we don't have to copy files over manually.
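
To make the "download and run" part concrete, here is a minimal sketch of the device-side fetch loop, assuming a hypothetical server that publishes a plain-text manifest of capture URLs. The URL, file layout, and the point where the emulator replays a capture are all made up for illustration:

```python
# Sketch of the device-side fetch loop. The manifest URL and cache layout are
# hypothetical placeholders, not an existing PPSSPP feature.
import os
import urllib.request

MANIFEST_URL = "https://example.org/ge-captures/manifest.txt"  # hypothetical
CACHE_DIR = "ge_captures"

def fetch_manifest():
    """Download the list of capture URLs, one per line."""
    with urllib.request.urlopen(MANIFEST_URL) as resp:
        return [line.strip() for line in resp.read().decode().splitlines() if line.strip()]

def fetch_capture(url):
    """Download a capture into the local cache, skipping ones we already have."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    dest = os.path.join(CACHE_DIR, os.path.basename(url))
    if not os.path.exists(dest):
        urllib.request.urlretrieve(url, dest)
    return dest

for url in fetch_manifest():
    path = fetch_capture(url)
    # Rendering and comparison would happen here, inside the emulator.
    print("ready to run:", path)
```

Caching by filename means each device only downloads a given capture once, which matters on mobile connections.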

@ghost

ghost commented Dec 21, 2021

Any progress on this now?

@unknownbrackets (Collaborator)

I think what we could do is:

  1. Choose a batch of representative GE frame dumps (we have a bunch reported now).

  2. Generate representative screenshots for them. I have a tool that does this for most of them using PSP rendering, which is arguably the best reference. There's an argument for using a specific device as the reference instead, due to higher bit depth, etc.

  3. Problem: set up some form of CI? Maybe llvmpipe or something? I think we need to validate on pull requests, or else we'll break the tests without noticing, which would erode their value. This would be separately useful anyway, but Actions, Travis, etc. don't provide GPUs. Obviously this is just for one (or a couple of) reference passes; as long as those pass, we could rely on users for the rest. (A comparison sketch follows after this list.)

  4. Decide whether we maintain the list in code (seems better, but then we can never remove or change entries, and might need to version it...) or remotely (i.e. a URL that lists them).

  5. Expose a button or similar that runs each test in sequence. Question for this: do we force off hack settings for it? Or warn/block running it when settings are dangeresque?

  6. Optional bonus feature: measure a baseline rendering speed for each frame dump, and provide a device performance score based on how the device's times compare (a rough scoring sketch also follows below).
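
For the blessed-image comparison in steps 2 and 3, a per-pixel diff with a small tolerance (to absorb rounding differences between drivers) might be enough. This is a sketch using Pillow, not an existing tool in the repo; the thresholds are arbitrary placeholders to be tuned against real drivers:

```python
# Per-pixel comparison with a tolerance, as a sketch of the step 2/3 check.
from PIL import Image, ImageChops

def images_match(actual_path, blessed_path, max_channel_diff=4, max_bad_pixels=0):
    actual = Image.open(actual_path).convert("RGB")
    blessed = Image.open(blessed_path).convert("RGB")
    if actual.size != blessed.size:
        return False
    diff = ImageChops.difference(actual, blessed)
    # Count pixels where any channel differs by more than the tolerance.
    bad = sum(1 for px in diff.getdata() if max(px) > max_channel_diff)
    return bad <= max_bad_pixels
```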
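
And for the scoring idea in step 6, a rough sketch: time each dump and express the device's speed relative to stored baseline times. The baseline numbers and the render_dump() hook are invented for illustration; nothing like them exists yet:

```python
# Rough sketch of the step 6 score: average how the device's per-dump render
# times compare against stored baselines. render_dump() stands in for whatever
# hook actually replays a GE dump.
import time

BASELINE_SECONDS = {"gow.ppdmp": 0.016, "crisiscore.ppdmp": 0.021}  # made-up numbers

def score_device(render_dump):
    ratios = []
    for name, baseline in BASELINE_SECONDS.items():
        start = time.perf_counter()
        render_dump(name)
        elapsed = time.perf_counter() - start
        ratios.append(baseline / elapsed)  # >1.0 means faster than baseline
    return sum(ratios) / len(ratios)
```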

I think the biggest problem is CI. It's tempting to ignore it, but I think the feature will fall flat without it.
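
On that CI problem, one GPU-less option might be Mesa's software rasterizer: setting LIBGL_ALWAYS_SOFTWARE=1 (a real Mesa environment variable) routes OpenGL through llvmpipe on a plain runner. The binary name and flags in this sketch are placeholders, not an actual PPSSPP command line:

```python
# Sketch of a CI step: force Mesa's llvmpipe software rasterizer so no GPU is
# needed, replay each dump headlessly, then diff against the blessed image
# (e.g. with images_match() from the earlier sketch).
import os
import subprocess

env = dict(os.environ, LIBGL_ALWAYS_SOFTWARE="1")

def render_in_ci(dump_path, out_png):
    subprocess.run(
        ["./ppsspp_render_dump", dump_path, "--screenshot", out_png],  # hypothetical
        env=env,
        check=True,
    )
```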

-[Unknown]
