Shader compilation: reading 0 as int instead of float #334
Thanks to the new shader error logging I could trace this.
This is only an issue with older Windows Radeon drivers for cards like the HD 6xxx / HD 7xxx.
It is seemingly fixed by changing the 0 into a 0.0, which makes the shader compiler recognize it as a float.
This fixes true-color rendering on my system. It also seems to speed up 8-bit GPU rendering, so I figure the driver was somehow tripping over it constantly.
The fix is in Shaders\textureSampleFunc.h.
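For illustration, here is a minimal GLSL sketch of the kind of change described above. The function and variable names are hypothetical, not the actual code from Shaders\textureSampleFunc.h:

```glsl
// Hypothetical example of the int-vs-float literal issue.
// 'texel' stands in for a sampled color value.

// Before: the int literal 0 relies on implicit int-to-float conversion,
// which the affected Radeon drivers handle incorrectly:
//   float value = max(0, texel.r);

// After: spelling the literal as 0.0 makes it unambiguously a float,
// so the call resolves to max(float, float) on every compiler:
float clampLow(vec4 texel)
{
    return max(0.0, texel.r);
}
```

Note that implicit int-to-float conversion was only added in GLSL 1.20, so the 0.0 spelling is also the more conformant one for older shader versions, not just a driver workaround.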
Comments

sigh - GLSL compilers strike again. OK, I will post a fix tonight (right now that file has other changes that I have to finish first).

No hurry. The retro WIN32 / XP build that I worked on already has this change. I just thought it would be useful to post the finding here, for whatever it is worth.

Oh, it is definitely useful. My consternation isn't with you, it's with GLSL compiler design (everyone writes their own, and they are all broken in their own special ways). I'm glad you are reporting this, since if one person reports an issue, most likely many people are silently suffering from the same one.

Fixed in the 1.09.52 release.