
Avoid page faults for NV2A memory #9

Closed
PatrickvL opened this issue Oct 16, 2016 · 2 comments

Labels
cpu-emulation (LLE CPU) · enhancement (general improvement of the emu) · graphics (GPU and/or game graphics related) · LLE (Low Level Emulation) · memory (memory manager problem) · needs-developer-discussion (devs must discuss this)

Comments

@PatrickvL
Member

PatrickvL commented Oct 16, 2016

I'll try to add more specifics to this later, but the general idea is that the miniport APIs can be used to move the NV2A command buffer to a pre-allocated memory range.

As far as I remember from my research, all Direct3D code linked into XBEs acknowledges the miniport-supplied memory range. So most (perhaps all) Xbox titles don't access hard-coded memory ranges, but follow whatever the miniport API dictates.

This means that many page faults can be avoided, as long as there's a mechanism in place to handle writes to NV2A memory. One method could be a separate thread that interprets the data written to the command buffer and translates it to a high-level 3D API.

@LukeUsher LukeUsher added the enhancement general improvement of the emu label Dec 16, 2016
@PatrickvL
Member Author

Another approach could be to allocate the memory range ourselves, let it be read and written like normal memory, and handle the GPU in a separate thread.

A separate thread for the GPU will be needed anyway, as its emulation should run independently from the CPU.

There are a few things to watch out for, though:

Timing might become an issue.
If the correct functioning of a piece of software relies on specific timing, that could turn out to be difficult to emulate.

Reads or writes could require specific handling.
If a read from or write to GPU memory requires specific handling during the access itself, we can't treat the hardware address range like normal memory anymore. Instead, we would need address-specific handlers, at least for those addresses.
This in turn requires an efficient mechanism to match an address to a handler. (We're still not sure what the best approach for that would be.)

@PatrickvL PatrickvL added LLE Low Level Emulation graphics GPU and/or game graphics related labels Jan 31, 2017
LukeUsher pushed a commit that referenced this issue Mar 28, 2017
@PatrickvL PatrickvL removed the LTCG label Aug 22, 2017
@PatrickvL PatrickvL added the memory memory manager problem label Nov 14, 2017
@PatrickvL
Member Author

This entire idea is obsolete. Accurate GPU emulation requires handling every read and write that would have an effect on the real GPU too.

Now, there do exist methods to speed up the handling of page faults, but we would have to research the academic papers on the subject first before making any bold statements.
