
AI refactorings #71

Merged: 4 commits into mupen64plus:master on Feb 14, 2015

Conversation

bsmiles32
Member

With this PR I propose some small refactorings (the first 3 commits) and a more ambitious change which isolates the AI subsystem from the audio plugin.

The AI subsystem now expects two callbacks (set_audio_format, push_audio_samples) in order to play audio.
A simple implementation of these two callbacks, built on top of the audio plugin, keeps backwards compatibility with existing audio plugins.

@littleguy77 , @twinaphex, @gen2brain, @ricrpi, @Nebuleon :
I'd like your opinions on the 2 new possible callbacks for audio output.
Would it be easy for your frontends to provide their own implementation of these callbacks (in case we deprecate the audio plugin, or if you want to get rid of the audio plugin in your frontends)?
Basically, set_audio_format allows you to set up the audio device (frequency + sample format), and in push_audio_samples you should push (and, if necessary, resample) the given samples to your audio device.
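For illustration, a frontend implementation of these two callbacks could look roughly like the following SDL2 sketch (the exact signatures the core will expect are an assumption here, and SDL_QueueAudio needs SDL >= 2.0.4):

    #include <SDL.h>
    #include <stddef.h>

    /* Assumes SDL_Init(SDL_INIT_AUDIO) has already been called. */
    static SDL_AudioDeviceID device;

    static void my_set_audio_format(unsigned int frequency, unsigned int bits)
    {
        SDL_AudioSpec want, have;

        if (device != 0)
            SDL_CloseAudioDevice(device); /* format changed: reopen the device */

        SDL_zero(want);
        want.freq = (int)frequency;
        want.format = (bits == 16) ? AUDIO_S16SYS : AUDIO_S8;
        want.channels = 2;
        want.samples = 1024;
        device = SDL_OpenAudioDevice(NULL, 0, &want, &have, 0);
        SDL_PauseAudioDevice(device, 0); /* start playback */
    }

    static void my_push_audio_samples(const void* buffer, size_t size)
    {
        /* Push the given samples; resampling would go here if the device
         * rate differs from the rate requested by the core. */
        SDL_QueueAudio(device, buffer, (Uint32)size);
    }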

Edit: Just to make it clear, we don't intend to remove/deprecate the audio plugin now.
This is just preliminary work to better isolate core components.

The AI controller has been reworked to be independent of the audio
plugin. All it expects is two callbacks (one for setting the sample
format, the other for pushing samples).

A backwards-compatibility module implements these two new callbacks
using the existing audio plugin on a best-effort basis, as the pure
zilmar spec is not fully compatible with the new callbacks. However,
by exploiting implementation details of both the core and the audio
plugin, we can get a "good enough" implementation of these callbacks.

The audio DMA FIFO has also been reworked and its associated data has
changed. A compromise was made in order not to change the savestate
format.
@littleguy77
Member

Thanks @bsmiles32, I'll take a look when I get some time, but just a quick question for clarification: is it your eventual hope to remove the audio plugin altogether, or simply to clean up its interface so that it's easier for (e.g.) mupen64plus-ae to write its own custom audio plugin (rather than wrapping mupen64plus-audio-sdl)?

@bsmiles32
Member Author

My long-term plan is to remove the audio plugin from the core altogether.
But this cannot happen until I am sure all frontends can adapt to these changes.
So for now, I'm only doing internal refactorings.

From an external point of view, nothing has changed; the audio plugin keeps working. But now, under the hood, frontends that want to bypass the audio plugin can do so with minimal hacking.

@Nebuleon
Contributor

Nebuleon commented Feb 9, 2015

I don't maintain a frontend, by the way. On MIPS, I just use the ui-console frontend.

@wareya

wareya commented Feb 9, 2015

Putting audio output in my frontend might be a good excuse to test out my audio library.

@littleguy77
Member

@bsmiles32 I don't have a deep understanding of the core, but I think what you propose here would be fine. In mupen64plus-ae, we currently implement a few audio-related callbacks for SDL (here, here, here, here). They have a similar signature to what you propose, so I think it wouldn't be a big change to use this new API. Plus, we could eliminate the SDL middleman altogether, which would be a big win in my book. If I understand you correctly, however, the front-end will be responsible for audio mixing/resampling if it doesn't use the audio-sdl plugin.

Minor note: if the new API defines a startup method, it should probably also define a shutdown method for clients to clean up.

@littleguy77
Member

Also, just a general question about a possible move to API 3.0. How do you imagine the interaction between core and front-end will be implemented? For example, I could imagine the front-end dynamically loading the core, locating its API methods, and registering a bunch of callbacks, C-style:

   void* core = dlopen("libmupen64plus-core.so", RTLD_NOW);
   // dlsym calls to locate core registration api
   // e.g. coreRegisterAudioInit = dlsym(......)

   coreRegisterAudioInit(myAudioInitFcn);
   coreRegisterAudioWrite(myAudioWriteFcn);
   coreRegisterAudioQuit(myAudioQuitFcn);
   coreRegisterFileRead(myFileReadFcn);
   coreRegisterFileWrite(myFileWriteFcn);
   coreRegisterController(myControllerFcn);
   // etc.

Or, a little more elegantly with C++

   void* core = dlopen("libmupen64plus-core.so", RTLD_NOW);
   // dlsym calls to locate core registration api

   MyAudioHandler ah;
   MyFileHandler fh;
   MyInputHandler ih;
   coreRegisterAudioHandler(&ah);
   coreRegisterFileHandler(&fh);
   coreRegisterInputHandler(&ih);

where MyAudioHandler etc. derive from abstract classes defined in a core header file.

Or did you have something different in mind?

@inactive123
Contributor

@bsmiles32 - Thanks for including me in the mix, and thanks for your refactors so far; I like a lot of what has been done.

Unfortunately I still have to work through backporting some of your PRs, so I can't immediately comment on these AI refactorings. I am done with the cenification patches by now; however, the 'SI refactoring' patches posed some problems for me, because nothing related to mempack/flashram/EEPROM saving is supposed to be file-I/O-based in the libretro port, so I'm struggling to backport some of these commits since they assume that files have to be written to and read from. So I am already having to diverge from upstream there (exactly what we're trying to prevent this time, I think).

Can we start addressing this before I am again forced into a situation where the libretro port is never going to be upstreamable? I'd like to eventually reach the point where I can just maintain a shallow fork, so I'd like to prevent divergence as much as possible this time around.

Is it possible you could come to #mupen64plus on Freenode IRC so we can swap some ideas there? Real-time interaction through IRC is far preferable to GitHub; it gets things done much quicker than a mailing list or GitHub.

@bsmiles32
Member Author

@twinaphex : It's unfortunate that the SI refactorings cause trouble for your fork :(
Because when I refactored, I explicitly had non-file-based accesses in mind.
Handling of save chip content (fla, sra, mpk, eep) is very simple:

  • You register the "save" callback (+user_data), which will be called whenever something gets written to the chip (this allows you to flush the saved content to disk).
  • You set the "data" pointer to the buffer that holds your saved content.

And that's it. See [1] for an example. For m64p we implement this "save" callback using files, and all this logic is isolated in the {eep,mpk,sra,fla}_file modules in the main dir. For your fork you can provide an alternative, non-file-based implementation of these (a minimal sketch follows below). Hope it will help you get closer to upstream.
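To make it concrete, a non-file-based implementation could look roughly like this (the struct and field names here are illustrative, not the actual core API):

    #include <stdint.h>

    struct my_save_state
    {
        uint8_t eeprom[0x800]; /* frontend-owned buffer holding the save content */
        int     dirty;         /* set when the game writes to the save chip */
    };

    /* Registered as the "save" callback; called by the core whenever
     * something gets written to the save chip. */
    static void my_save_callback(void* user_data)
    {
        struct my_save_state* s = (struct my_save_state*)user_data;
        /* Instead of flushing to a file, just mark the buffer dirty and
         * let the host (e.g. a libretro frontend) persist it later. */
        s->dirty = 1;
    }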

@littleguy77 : Frontends will have to manage the creation and destruction of the audio device on their own (for instance, creation before starting r4300 emulation, destruction after the core returns from r4300 emulation).
Regarding the m64p 3.0 API, we're not there yet.
Furthermore, I think it would be simpler to just leverage the existing core/frontend communication API and do something similar to M64CMD_SET_FRAME_CALLBACK? But yeah, not there yet.

@Nebuleon : I wasn't sure what frontend you used for the MIPS port... Still you might have good points to make here :)

@wareya : I wasn't aware of your frontend. But now I am, so welcome to the discussion :)

[1] https://github.com/mupen64plus/mupen64plus-core/blob/master/src/main/main.c#L959-L961

@littleguy77
Member

@bsmiles32 Sounds good. I was mainly trying to visualize your long-range goals, without actually understanding all the details. I think what you suggest is natural and intuitive. Audio/video device creation/teardown seems a natural front-end responsibility, and this division is much clearer than the current audio plugin API.

@tony971
Contributor

tony971 commented Feb 11, 2015

Ping @dh4

@bsmiles32
Member Author

Ping @Cronikeys
I just saw your fork for TAS purposes. If you have comments about the new audio or input callbacks, please join the discussion.

richard42 added a commit that referenced this pull request Feb 14, 2015
@richard42 richard42 merged commit 52c9449 into mupen64plus:master Feb 14, 2015
@ricrpi
Contributor

ricrpi commented Feb 14, 2015

Sorry for the late response. My Pi port uses the mupen64plus/ui-console front-end; however, I have an audio-omx plugin which gives a 5% performance increase, so I would like to continue using OMX.

I can provide the OMX code to mupen64plus/ui-console when the makefile VC variable is set, but I feel it would pollute the code a little; otherwise I have to maintain a fork.

It also raises the question of how users would choose the audio output method when there are multiple options, e.g. OMX or SDL.

@bsmiles32
Member Author

@ricrpi : Don't worry; for now, nothing has changed from an external point of view. Plugins still work and are still the main way of outputting sound. (If they don't work, this is a BUG and should be reported.)
That being said, in a (distant?) future, I'd like to move the choice of {set_audio_format, push_audio_samples} to frontends and extend the core API to do so.
For the mupen64plus-ui-console frontend, we basically have 3 options:

  • option 1: move the audio plugin loading/unloading logic + the emulate_speaker_via_audio_plugin bridge there. This would let us avoid breaking existing frontends/audio plugins while still easily providing a choice of audio backend.
  • option 2: do away with audio plugins by porting them into the ui-console as "audio backends".
    Instead of being dynamically loaded, they would be statically compiled into the frontend.
    Each audio backend could be included in or excluded from the set of usable backends via a compile-time macro (letting frontend maintainers choose the set of backends applicable to their platform). Plus, we would provide a way for the user to choose among the compiled-in backends via a config value (see the sketch after this list).
  • option 3: a mix of the first two, providing 3+ backends in the ui-console {emulation_via_audio_plugin, sdl, omx, ...}.
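To sketch how option 2's selection could work (the macros, backend symbols and config key below are hypothetical):

    #include <string.h>
    #include <stddef.h>

    struct audio_backend; /* defined by the core API */

    #ifdef USE_AUDIO_BACKEND_SDL
    extern struct audio_backend sdl_audio_backend;
    #endif
    #ifdef USE_AUDIO_BACKEND_OMX
    extern struct audio_backend omx_audio_backend;
    #endif

    /* Map the user's config value (e.g. AudioBackend = "sdl") to a
     * compiled-in backend; NULL means "fall back to the audio plugin". */
    static struct audio_backend* select_audio_backend(const char* name)
    {
    #ifdef USE_AUDIO_BACKEND_SDL
        if (strcmp(name, "sdl") == 0) return &sdl_audio_backend;
    #endif
    #ifdef USE_AUDIO_BACKEND_OMX
        if (strcmp(name, "omx") == 0) return &omx_audio_backend;
    #endif
        return NULL;
    }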

I think we will try option 1 first because it's less effort for everyone and doesn't force a dependency on ui-console, but option 3 is kind of nice too if we want to remove the extra indirection for ported backends.

Hope these options will address your concerns :)

@bsmiles32 bsmiles32 deleted the ai_refac branch February 14, 2015 18:26
@richard42
Member

bsmiles, from your original plan description, I think the way forward with this will be to:

  1. In ui-console, leave the existing plugin/core attachment code to use audio plugins via the core, but also add the new frontend audio API and SDL/OMX code to drive the backend.
  2. Simultaneously, add code to the core to use the front-end audio API if available, or else fall back to the audio plugin.

Then, later:

  3. Remove the fallback code from the ui-console and core, and bump the major version number of the APIs.

@wareya

wareya commented Feb 14, 2015

That looks like a good plan to me.

@bsmiles32
Member Author

@richard42 : OK, I've started some work to enable a transition like the one you've described
(or at least what I've understood of your plan :)
For now, I've added to the core a new function, "CoreSetAudioInterfaceBackend", callable by the UI, which sets the audio callbacks. If the UI doesn't call it, the core defaults to the "Audio Backend Compat" module, which implements the audio backend interface using the audio plugin.
The next step is to "port" the SDL (and OMX?) plugin into the ui-console, allow this new backend to be selected on the command line, and in that case call CoreSetAudioInterfaceBackend.

I have however a couple of questions:

  • CoreSetAudioInterfaceBackend takes as a parameter a pointer to a

        struct audio_backend
        {
            void* user_data;
            m64p_error (*set_audio_format)(void*, unsigned int, unsigned int);
            m64p_error (*push_audio_samples)(void*, const void*, size_t);
        };

    (a usage sketch follows after this list)

  • Should I "version" this structure to allow future changes?
    (This can't be ruled out, as AI emulation is still not fully accurate and might still evolve in ways needing small additions.)

  • When do you intend to release a new version? And do you prefer this work to be part of it or not?
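For reference, a frontend could fill in and register this struct roughly as follows (CoreSetAudioInterfaceBackend is the new function described above; everything frontend-side is illustrative, and the struct audio_backend definition quoted above is assumed to be in scope):

    #include <stddef.h>
    #include "m64p_types.h" /* m64p_error, M64ERR_SUCCESS */

    static m64p_error my_set_audio_format(void* user_data,
                                          unsigned int frequency,
                                          unsigned int bits)
    {
        /* (Re)open or reconfigure the host audio device here. */
        return M64ERR_SUCCESS;
    }

    static m64p_error my_push_audio_samples(void* user_data,
                                            const void* buffer, size_t size)
    {
        /* Enqueue (and, if needed, resample) the given samples. */
        return M64ERR_SUCCESS;
    }

    static struct audio_backend my_backend = {
        NULL,                 /* user_data */
        my_set_audio_format,
        my_push_audio_samples
    };

    /* Call once before starting r4300 emulation:
     *     CoreSetAudioInterfaceBackend(&my_backend);            */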

@Cronikeys

@bsmiles32 thanks for the ping.

A lot of major changes have happened since m64p 2.0, and a lot of BizHawk's API callbacks were/are custom, so I've been in the process of making a maintainable fork (rather than trying to back-port major milestones like CountPerOp). I'll let you know specific callback suggestions after I'm done playing catch-up.

More abstraction sounds nice, as does moving away from the plugin system (at least for AI).

@bsmiles32
Member Author

@Cronikeys : thanks for the reply.
Yeah, a lot of work is happening lately (and I think it will continue that way for some time).
Since you forked this project, I have to warn you that it can be very time-consuming to keep in sync with upstream, so if applicable you should try to minimize differences with upstream.
When in doubt about how to do feature X that we don't have or do badly, please ask or file a bug so that we can talk about it and see how to proceed:

  • if feature X is good for everyone -> get it upstream so everyone can benefit from it, with no additional burden for your fork
  • if feature X is too specific -> we can still make suggestions on how to implement it in a way that makes it more maintainable for you in the future

These are very general remarks on forking, especially in the case of a "moving" project, but the important point is: please talk to us :)
@richard42 : maybe we should add a section in the README or something about the cost of forking this project, and advise potential forkers to talk to us before forking?

@richard42
Member

@bsmiles32 : I doubt that developers actually read the README file, or that a warning about the costs of forking would dissuade anyone. :)

Regarding struct audio_backend, it would be a good idea to put a version number in the struct so that the core knows what to expect when the frontend calls its CoreSetAudioInterfaceBackend function. You should also bump up the minor number of the FRONTEND_API_VERSION returned by the core's PluginGetVersion call, so that the frontend knows that the Core supports this interface.
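One possible shape for such a versioned struct (the field layout here is an assumption, not a final API):

    #define AUDIO_BACKEND_API_VERSION 1

    struct audio_backend
    {
        unsigned int version; /* frontend sets AUDIO_BACKEND_API_VERSION */
        void* user_data;
        m64p_error (*set_audio_format)(void*, unsigned int, unsigned int);
        m64p_error (*push_audio_samples)(void*, const void*, size_t);
    };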

I would like to release a new version by the beginning of April. I would prefer that these API changes not be part of this new release.
