AI refactorings #71
Conversation
The AI controller has been reworked to be independent of the audio plugin. All it expects is two callbacks (one for setting the sample format, and the other to push samples). A backward-compatibility module implements these two new callbacks using the existing audio plugin in a best-effort manner, as the pure zilmar spec is not fully compatible with the new callbacks. However, by exploiting implementation details of both the core and the audio plugin, we can get a "good enough" implementation of these callbacks. The audio DMA FIFO has also been reworked and its associated data has changed. A compromise has been made in order to not change the savestate format.
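For illustration, the pair of callbacks might look roughly like this; the names and signatures below are a sketch based on this description, not the exact code in the PR:

```cpp
// Sketch only: names and signatures are illustrative, not the PR's exact API.
#include <stddef.h>

struct audio_out_backend
{
    void* user_data;  /* opaque pointer handed back to each callback */

    /* Invoked when the game reprograms the AI: reports the output
     * frequency (Hz) and the sample format (e.g. bits per sample). */
    void (*set_audio_format)(void* user_data,
                             unsigned int frequency,
                             unsigned int bits);

    /* Invoked on audio DMA: delivers a buffer of interleaved samples
     * for the backend to (optionally resample and) play. */
    void (*push_audio_samples)(void* user_data,
                               const void* buffer,
                               size_t size);
};
```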
Thanks @bsmiles32. I'll take a look when I get some time, but just a quick question for clarification: is it your eventual hope to remove the audio plugin altogether, or simply to clean up its interface so that it's easier for (e.g.) mupen64plus-ae to write its own custom audio plugin (rather than wrap mupen64plus-audio-sdl)?
My long-term plan is to remove the audio plugin altogether from the core. From an external point of view, nothing has changed; the audio plugin keeps working. But now, under the hood, frontends that want to bypass the audio plugin could do so with minimal hacking.
I don't maintain a frontend, by the way. On MIPS, I just use the ui-console frontend.
Putting audio output in my frontend might be a good excuse to test out my audio library.
@bsmiles32 I don't have a deep understanding of the core, but I think what you propose here would be fine. In mupen64plus-ae, we currently implement a few audio-related callbacks for SDL (here, here, here, here). They have a similar signature to what you propose, so I think it wouldn't be a big change to use this new API. Plus, we could eliminate the SDL middleman altogether, which would be a big win in my book. If I understand you correctly, however, the front-end will be responsible for audio mixing/resampling if it doesn't use the audio-sdl plugin. Minor note: if the new API defines a startup method, it should probably also define a shutdown method for clients to clean up.
Also, just a general question about a possible move to API 3.0. How do you imagine the interaction between core and front-end will be implemented? For example, I could imagine the front-end dynamically loading the core, locating its API methods, and registering a bunch of callbacks, C-style:
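Something along these lines, say (every symbol here is hypothetical; none of these names exist in the actual mupen64plus API):

```cpp
// Hypothetical sketch of C-style callback registration against a
// dynamically loaded core; all names are illustrative.
#include <dlfcn.h>
#include <stddef.h>
#include <stdio.h>

typedef void (*set_format_fn)(void* self, unsigned int frequency, unsigned int bits);
typedef void (*push_samples_fn)(void* self, const void* buffer, size_t size);
typedef void (*set_audio_callbacks_fn)(void* self, set_format_fn set_format,
                                       push_samples_fn push_samples);

static void my_set_format(void* self, unsigned int frequency, unsigned int bits)
{
    (void) self;
    printf("audio format: %u Hz, %u-bit\n", frequency, bits);
}

static void my_push_samples(void* self, const void* buffer, size_t size)
{
    (void) self; (void) buffer; (void) size;
    /* queue the samples to the frontend's audio device here */
}

int register_audio_callbacks(void* frontend_state)
{
    void* core = dlopen("libmupen64plus.so.2", RTLD_NOW);
    if (!core)
        return -1;

    /* Locate the (hypothetical) registration entry point. */
    set_audio_callbacks_fn set_cb =
        (set_audio_callbacks_fn) dlsym(core, "CoreSetAudioCallbacks");
    if (!set_cb)
        return -1;

    set_cb(frontend_state, my_set_format, my_push_samples);
    return 0;
}
```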
Or, a little more elegantly with C++:
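An equally hypothetical C++ flavor, where the frontend implements an abstract listener and hands it to the core once:

```cpp
// Hypothetical C++ flavor; the class and the registration call are
// illustrative, not part of any shipped API.
#include <stddef.h>

class AudioListener
{
public:
    virtual ~AudioListener() {}
    virtual void setAudioFormat(unsigned int frequency, unsigned int bits) = 0;
    virtual void pushAudioSamples(const void* buffer, size_t size) = 0;
};

// The core would then expose a single registration call, e.g.:
//   void CoreSetAudioListener(AudioListener* listener);
```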
Or did you have something different in mind?
@bsmiles32 - Thanks for including me in the mix and thanks for your refactors so far; I like a lot of what has been done. Unfortunately I still have to work through backporting some of your PRs, so I can't immediately comment on these AI refactorings. I am done with the cenification patches by now; however, the 'SI refactoring' patches posed some problems for me because nothing with regards to mempack/flashram/EEPROM saving is supposed to be file-I/O-based for the libretro port, so I'm struggling with backporting some of these commits since they assume that files have to be written to/read from. So I am already having to make divergences there from upstream (exactly what we're trying to prevent this time, I think). Can we start addressing this before I am again forced to arrive at a stage where the libretro port is never going to be upstreamable? I'd like to eventually arrive at the stage where I can just shallow fork, so I'd like to prevent this as much as possible this time around. Is it possible for you to come to #mupen64plus on Freenode IRC so we can swap some ideas there? Real-time interaction through IRC is far preferable to GitHub; it gets stuff done much quicker than a mailing list or GitHub.
@twinaphex : It's unfortunate that the SI refactorings cause trouble for your fork :(
@littleguy77 : Frontends will have to manage the creation and destruction of the audio device on their own (for instance, creation before starting r4300 emulation, destruction after the core returns from r4300 emulation). @Nebuleon : I wasn't sure which frontend you used for the MIPS port... Still, you might have good points to make here :) @wareya : I wasn't aware of your frontend. But now I am, so welcome to the discussion :) [1] https://github.com/mupen64plus/mupen64plus-core/blob/master/src/main/main.c#L959-L961
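A minimal sketch of that ordering, with hypothetical frontend-side helpers (only the sequencing is the point):

```cpp
// All three functions are hypothetical frontend helpers; the sketch just
// shows the lifecycle: create device, run emulation, tear down.
int  open_audio_device(void);
void run_r4300_emulation(void);
void close_audio_device(void);

int run_game(void)
{
    if (open_audio_device() != 0)   // create the device before emulation
        return -1;

    run_r4300_emulation();          // core runs, pushing samples through
                                    // the registered callbacks

    close_audio_device();           // destroy the device after the core
                                    // returns from r4300 emulation
    return 0;
}
```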
@bsmiles32 Sounds good. I was mainly trying to visualize your long-range goals, without actually understanding all the details. I think what you suggest is natural and intuitive. Audio/video device creation/teardown seems a natural front-end responsibility, and this division is much clearer than the current audio plugin API.
Ping @dh4 |
Ping @Cronikeys |
Sorry for the late response. My Pi port uses the mupen64plus/ui-console front-end; however, I have an audio-omx plugin which gives a 5% performance increase, so I would like to continue using OMX. I can provide the OMX code to mupen64plus/ui-console when the makefile VC variable is set, but I feel it would pollute the code a little; otherwise I have to maintain a fork. It also raises the question of how users would choose the audio output method when there are multiple options, e.g. OMX or SDL.
@ricrpi : Don't worry for now; nothing has changed from an external point of view. Plugins still work and are still the main way of outputting sound. (If they don't work, this is a BUG and should be reported.)
I think we will try to do option 1 first because that's less effort for everyone and doesn't force a dependency on ui-console, but option 3 is kind of nice too if we want to remove the extra indirection for ported backends. Hope these options will address your concerns :)
bsmiles, from your original plan description, I think the way forward with this will be to:
That looks like a good plan to me. |
@richard42 : OK, I've started some work to allow the transition like what you've described. I have, however, a couple of questions:
@bsmiles32 thanks for the ping. A lot of major changes happened since m64p 2.0 and a lot of Bizhawk's API callbacks were/are custom, so I've been in the process of making a maintainable fork (rather than trying to back-port major milestones like CountPerOp). I'll let you know specific callback suggestions after I'm done playing catch-up. More abstraction sounds nice, as does moving away from the plugin system (at least for AI).
@Cronikeys : thanks for the reply.
These are very general remarks on forking, especially in the case of a "moving project", but the important point is: please talk to us :)
@bsmiles32 : I doubt that developers actually read the README file, or that a warning about the costs of forking would dissuade anyone. :) Regarding struct audio_backend, it would be a good idea to put a version number in the struct so that the core knows what to expect when the frontend calls its CoreSetAudioInterfaceBackend function. You should also bump up the minor number of the FRONTEND_API_VERSION returned by the core's PluginGetVersion call, so that the frontend knows that the core supports this interface. I would like to release a new version by the beginning of April, and I would prefer that these API changes not be part of this new release.
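For instance, the struct might carry its version up front. In the sketch below, only CoreSetAudioInterfaceBackend is named in this thread; the version constant and the other fields are illustrative:

```cpp
// Sketch of a versioned backend struct per the suggestion above;
// AUDIO_BACKEND_VERSION and the field names are hypothetical.
#include <stddef.h>

#define AUDIO_BACKEND_VERSION 0x010000

struct audio_backend
{
    unsigned int version;  /* set by the frontend, checked by the core in
                              CoreSetAudioInterfaceBackend() */
    void* user_data;
    void (*set_audio_format)(void* user_data, unsigned int frequency,
                             unsigned int bits);
    void (*push_audio_samples)(void* user_data, const void* buffer,
                               size_t size);
};
```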
With this PR I propose small refactorings (the first 3 commits), and a more ambitious change which isolates the AI subsystem from the audio plugin.
The AI subsystem now expects two callbacks (set_audio_format, push_audio_samples) in order to play audio.
A simple implementation of these two callbacks has been done using the audio plugin, in order to keep backward compatibility with existing audio plugins.
@littleguy77 , @twinaphex, @gen2brain, @ricrpi, @Nebuleon :
I'd like your opinions on the 2 new possible callbacks for audio output.
Would it be easy for your frontends to provide their own implementation of these callbacks (in case we deprecate the audio plugin, or if you want to get rid of the audio plugin in your frontends)?
Basically, set_audio_format allows you to set up the audio device (frequency + sample format), and in push_audio_samples you should push (and resample if necessary) the given samples to your audio device.
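As a rough example, an SDL2-based frontend might implement the pair as follows; the function names are up to the frontend, and error handling and resampling are omitted:

```cpp
// Hypothetical frontend-side implementation using SDL2's audio queue;
// real code would check errors and resample when rates differ.
#include <SDL.h>
#include <stddef.h>

static SDL_AudioDeviceID g_dev;

static void my_set_audio_format(void* self, unsigned int frequency,
                                unsigned int bits)
{
    (void) self;
    SDL_AudioSpec want;
    SDL_zero(want);
    want.freq     = (int) frequency;
    want.format   = (bits == 16) ? AUDIO_S16SYS : AUDIO_S8;
    want.channels = 2;
    want.samples  = 1024;

    if (g_dev)
        SDL_CloseAudioDevice(g_dev);    // reopen on format change
    g_dev = SDL_OpenAudioDevice(NULL, 0, &want, NULL, 0);
    SDL_PauseAudioDevice(g_dev, 0);     // start playback
}

static void my_push_audio_samples(void* self, const void* buffer, size_t size)
{
    (void) self;
    // Resample here if the device rate differs from the game's rate;
    // this sketch just queues the raw samples.
    SDL_QueueAudio(g_dev, buffer, (Uint32) size);
}
```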
Edit: Just to make it clear, we don't intend to remove/deprecate the audio plugin now.
This is just preliminary work to better isolate core components.