Use CPU pixel format conversion for AMD/ATI platform only #6039

Closed
wants to merge 3 commits into from

Conversation

raven02
Contributor

raven02 commented May 6, 2014

The Nvidia platform shouldn't need it, and it slows things down a bit there.

I remember this was suggested previously in #1686: detect the vendor and set it automatically.

(There is some underlying code that needs to be removed, but I will clean that up later.)
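
For reference, a minimal sketch of the vendor detection idea from #1686; this is illustrative only (it assumes the standard glGetString(GL_VENDOR) query and made-up function names, not PPSSPP's actual code):

```cpp
// Minimal sketch, not PPSSPP's actual code: query the GL vendor string after
// context creation and flip a flag when it reports AMD/ATI.
#include <cstring>
#include <GL/gl.h>  // assumes a desktop GL header and a current context

static bool IsAMDOrATI() {
    const char *vendor = reinterpret_cast<const char *>(glGetString(GL_VENDOR));
    if (!vendor)
        return false;
    // Drivers report strings such as "ATI Technologies Inc." or "AMD".
    return std::strstr(vendor, "ATI") != nullptr ||
           std::strstr(vendor, "AMD") != nullptr ||
           std::strstr(vendor, "Advanced Micro Devices") != nullptr;
}

// Hypothetical use: only take the CPU conversion path on AMD/ATI.
// bool useCPUPixelConvert = IsAMDOrATI();
```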

@raven02
Contributor Author

raven02 commented May 6, 2014

@solarmystic, can you help test the games that may require Read Framebuffers to Memory mode, to see whether they still work okay with this commit on the AMD platform?

@solarmystic
Contributor

@raven02 Danganronpa, Ys Seven and Trails in the Sky still work as expected on my AMD graphics card with this commit when using FB to Memory mode.

I'm not sure whether it is a good idea to make every other platform use Read Framebuffers to Memory (GPU) by default (if I understand your pull request correctly) without giving them the option to switch to the CPU though. We know for a fact from testing that AMD/ATI cards need the CPU mode to attain accurate functionality in games like Danganronpa (correct object targeting/interaction), but other mobile platforms may need CPU mode as well.

@raven02
Contributor Author

raven02 commented May 6, 2014

CPU pixel conversion is only used in asynchronous PBO mode, and the current code always forces synchronous mode on mobile. Therefore mobile should be unaffected.
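
For context, a rough sketch of the two readback paths being discussed; this is illustrative only, assumes an initialized desktop GL context with a loader such as GLEW, and uses made-up function names rather than the emulator's actual code:

```cpp
#include <GL/glew.h>  // any GL loader works; assumes a current GL 2.1+ context

// Synchronous readback (what mobile is forced to use): glReadPixels stalls
// until the GPU finishes, then the data is immediately available to the CPU.
void ReadbackSync(int w, int h, void *dst) {
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, dst);
}

// Asynchronous PBO readback: glReadPixels targets a bound pixel-pack buffer
// and returns without waiting; the buffer is mapped later, and that is where
// a CPU-side pixel format conversion would be applied.
void ReadbackAsyncStart(GLuint pbo, int w, int h) {
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);  // offset 0 into the PBO
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

const void *ReadbackAsyncFinish(GLuint pbo) {
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    // Convert pixel formats on the CPU from this mapped pointer, then call
    // glUnmapBuffer(GL_PIXEL_PACK_BUFFER) and unbind when done.
    return glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
}
```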

@unknownbrackets
Collaborator

What about Intel, PowerVR on desktop? I guess we probably don't care about Matrox/S3...

-[Unknown]

@hrydgard
Owner

hrydgard commented May 6, 2014

Hm, if we actually do this, we should change the rendering mode settings to have only 3 options instead of 4, as this locks the CPU/GPU choice without telling the user.

Also, we should remove that glGetError call or only call it the first time, as glGetError is very expensive (it forces the CPU to wait for the GPU).
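
A minimal sketch of the "only call it the first time" idea; the function name is hypothetical and this is not the actual PPSSPP change:

```cpp
#include <GL/gl.h>  // assumes a current GL context

// Probe glGetError() once, the first time the readback path runs, and cache
// the result instead of paying the synchronization cost every frame.
static bool AsyncReadbackLooksOkay() {
    static bool checked = false;
    static bool ok = true;
    if (!checked) {
        ok = (glGetError() == GL_NO_ERROR);
        checked = true;
    }
    return ok;
}
```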

@solarmystic
Contributor

I still think that the choice should be given to the end-user for those options, but that's just me.

And like @unknownbrackets mentioned, we still need input from users with other desktop GPU vendors.

@hrydgard
Owner

hrydgard commented May 6, 2014

Yeah, I'm undecided. Maybe we should default to, or make more prominent, the option that's likely to be more suitable for the user's GPU.

@raven02
Contributor Author

raven02 commented May 7, 2014

Yep, probably keep it for the sake of our users. I will remove the glGetError call, by the way.

raven02 closed this May 7, 2014