GS/OpenGL: Add a shader/program cache #5194
Conversation
The FXAA and external shaders options are not working with this PR.
When trying to load an external shader, the error I get is
which I presume is the error @gilderoylockhart01 got.
There are some rather large frame drops and freezes, with lower FPS than master, on the first run while the cache is being created, but on subsequent runs performance returns to the same level as master with no freezing or stuttering. Tested on an RTX 2060 Super with both the Vulkan Dev 472.85 and Game Ready 497.29 drivers.
I have not tested it, but shaders seem to be written to disk synchronously? This could potentially explain the initial "lag" Jordan encountered.
Writing to disk async is a whole new can of worms though, eugh. Regardless of whether that is even to be considered, I think it makes sense not to worry about it in this PR - I may be able to make writes async on my own separately.
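As a rough illustration of the async idea, one could hand the blob to a worker via std::async so the render thread never blocks on disk I/O. This is only a sketch; QueueBinaryWrite, WriteBinaryToDisk, and m_pending_writes are hypothetical names, not this PR's API.

#include <cstdint>
#include <future>
#include <vector>

// m_pending_writes would be a std::vector<std::future<void>> member,
// drained (futures waited on) before the cache shuts down.
void ShaderCache::QueueBinaryWrite(std::vector<uint8_t> binary)
{
	// std::launch::async forces the write onto another thread so the
	// render thread doesn't stall on disk I/O.
	m_pending_writes.push_back(std::async(std::launch::async,
		[this, data = std::move(binary)] { WriteBinaryToDisk(data); }));
}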
fragment_source_hash_high == key.fragment_source_hash_high && fragment_source_length == key.fragment_source_length);
}

bool ShaderCache::CacheIndexKey::operator!=(const CacheIndexKey& key) const
Protip - defining this operator as return !(*this == key);
allows for much less typing. It doesn't matter now since it's typed out as-is, but it's worth mentioning IMO.
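Spelled out, the suggested form would look like this (assuming the operator== overload already defined in the PR):

// Delegating to operator== keeps the field comparisons in one place:
bool ShaderCache::CacheIndexKey::operator!=(const CacheIndexKey& key) const
{
	return !(*this == key);
}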
I hadn't really noticed shader compilation stutters/hitches on master myself. Is there a game which is susceptible to such issues? (and thus would make a good test case)
If you're using NVIDIA, it has its own internal cache. But not all drivers do.
That's just the logging stuff in wx being garbage - the info stuff is written to stdout, the shader cache messages are written using Console.
Ah, got it. And yeah, on Intel I had noticed this solved some stuttering I had too.
FXAA/externalfx should be fixed. Also got rid of the uniform buffers for those; they're just more allocations which aren't needed.
Probably. At any rate, it seems Intel doesn't have a (decent) shader cache. Also, I can confirm that PCSX2's console is kind of janky - CLR_DEV9 would often get log messages split as well.
FYI, this is now based on #5198 (avoids conflicts).
Can an option be added to disable this? I consider these all junk files, and they will quickly build up (especially if you hop between many games in short bursts). I'd rather have this not be locally cached at all, or just have it done in RAM.
They are shared between games, and a tester here reported the cache as being a little less than 3 MB after 11 games. I don't think that maintaining two ways to load shaders is necessary just for a tiny bit of storage space.
Tested once more on an RTX 2060 Super and Iris Xe 48EU and all seems good and functions as expected.
MD5Digest.{cpp,h} needs a reindent
};

static int m_shader_inst;
static int m_shader_reg;

private:
// Increment this constant whenever shaders change, to invalidate user's program binary cache.
static constexpr u32 SHADER_VERSION = 1;
It looks like you are hashing the actual shader content, but just to make sure: this is just so that old garbage isn't taking up space in the shader cache, right? If I forgot to update this when editing a shader, the worst that could happen is my shader cache gets bigger than usual?
Correct (the cache entry is based on the code hash, not the version).
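A minimal sketch of how such a constant typically gates the whole cache, assuming the version is stored in the index file header (IndexVersionMatches and the header layout are illustrative, not taken from this PR):

#include <cstdint>
#include <cstdio>

static constexpr uint32_t SHADER_VERSION = 1;

// Returns false if the on-disk index was written under a different shader
// version; the caller would then discard and recreate the cache files.
// Individual entries are still keyed by source hash, not by this value.
static bool IndexVersionMatches(std::FILE* fp)
{
	uint32_t file_version = 0;
	return std::fread(&file_version, sizeof(file_version), 1, fp) == 1 &&
	       file_version == SHADER_VERSION;
}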
Description of Changes
More stuff from the Qt branch. Also tossed in a bunch of related GL commits.
Currently, GS will compile every shader as it's used, every time. This is even worse when you consider it happens when you change any GS settings.
This adds a disk cache using the program binaries available in most OpenGL drivers. Shaders aren't preloaded; they're still "compiled" on demand, but instead of using the source code, the program binary is used, which is much faster. Note that some modern drivers have a built-in hash-based cache of their own, but not all do.
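For context, this is the standard GL 4.1 / ARB_get_program_binary flow such a cache builds on (a sketch, not the PR's actual code; a GL loader such as glad is assumed):

#include <vector>

// Save: ask the driver for the linked program's binary blob and its
// driver-specific format token.
std::vector<char> GetProgramBinary(GLuint program, GLenum& format)
{
	GLint length = 0;
	glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH, &length);
	std::vector<char> binary(static_cast<size_t>(length));
	GLsizei out_length = 0;
	glGetProgramBinary(program, length, &out_length, &format, binary.data());
	binary.resize(static_cast<size_t>(out_length));
	return binary;
}

// Load: hand the blob back instead of compiling/linking from source.
// Drivers may reject stale binaries (e.g. after a driver or GPU change),
// so callers must be ready to fall back to a source compile.
bool LoadProgramBinary(GLuint program, GLenum format, const std::vector<char>& binary)
{
	glProgramBinary(program, format, binary.data(), static_cast<GLsizei>(binary.size()));
	GLint status = GL_FALSE;
	glGetProgramiv(program, GL_LINK_STATUS, &status);
	return status == GL_TRUE;
}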
One downside here is that instead of using separate shader objects, we now use single program objects, which means more variants. But they're cached and shared between all games, so in practice this will likely lead to fewer stalls due to shader compiling. It also opens up more opportunity for inter-stage optimization from the drivers, assuming they don't just lower the SSOs into variants anyway. Plus there's the fact that separate shader objects are broken on AMD/Intel drivers.
We should do the same thing for D3D some time. I have a similar class for that, but one thing at a time.
Suggested Testing Steps
Make sure OpenGL rendering performance and stability isn't affected across a range of drivers.