
Realtime MIDI channel and fx mapping while musicians are playing. #669

Closed
jjceresa opened this issue Sep 2, 2020 · 49 comments
@jjceresa
Collaborator

jjceresa commented Sep 2, 2020

A proposal to take full advantage of the mixer and fx unit capabilities.

1) Currently in fluidsynth, the mixer offers the potential to map distinct MIDI instruments to distinct stereo buffers.
2) Similarly, distinct MIDI instruments can be mapped to distinct fx unit inputs.

1) When a musician thinks about a MIDI instrument mapping configuration, he decides on 3 mappings at the MIDI level:
- MIDI chan x to dry buf i (currently i = x % synth.audio-groups)
- MIDI chan x to fx unit j (currently j = x % synth.effects-groups)
- fx j to dry buf k (currently k = j % synth.audio-groups)

Currently these mappings are rigid and cannot be changed in real time while MIDI instruments are being played.
1.1) A new simple API should offer mapping flexibility in real-time situations, allowing:

  • mapping of any MIDI channel to any dry buffer.
  • mapping of any MIDI channel to any fx unit input.
  • mapping of any fx output to any dry buffer.

For example, this allows:

  • MIDI instrument i1 mapped to dry1 buffer.

  • MIDI instrument i1 mapped to fx1 input, and fx1 output mapped to dry1 buffer.

  • MIDI instrument i2 mapped to dry2 buffer.

  • MIDI instrument i2 mapped to fx2 input, and fx2 output mapped to dry2 buffer.

While the MIDI instruments are playing, it should be possible to change the mapping
of the fx1 output to the dry1 buffer, so that we can hear instrument i1's fx1 leave
dry i1 and become mixed with the fx2 of instrument i2.

Another real-time feature is the ability for a musician to temporarily play only fx (dry is silent)
or only dry (fx is silent).

2) As distinct MIDI instruments can be mapped to distinct fx unit inputs, we could now expect to have distinct parameters for distinct fx units. For example, MIDI instrument i1 could have a reverb room-size different from the reverb room-size of instrument i2. Currently there is only one API, which sets the same parameter value on all fx units.
2.1) Now we need a new API that allows changing a particular fx unit's parameter.

I will propose a PR for these new APIs (1.1) and (2.1).

@jjceresa
Collaborator Author

jjceresa commented Sep 2, 2020

While the MIDI instruments are playing, it should be possible to change the mapping
of the fx1 output to the dry1 buffer.

A typo: dry1 should be replaced by dry2, so one must read:

While the MIDI instruments are playing, it should be possible to change the mapping
of the fx1 output to the dry2 buffer.

@jjceresa
Collaborator Author

jjceresa commented Sep 5, 2020

2.1) Now we need a new API that allows changing a particular fx unit's parameter.

These new API functions will look like the existing ones but with an additional parameter. For example:
- (a) The existing fluid_synth_set_reverb_roomsize(fluid_synth_t *synth, double roomsize) changes the roomsize of all fx units.
- (b) The new fluid_synth_set_fx_reverb_roomsize(fluid_synth_t *synth, int fx, double roomsize) changes only the
roomsize of fx unit fx (if fx >= 0). With fx set to -1, this new function behaves like the existing API (i.e. roomsize is applied to all fx units).

Doing it this way maintains backward API compatibility, but new applications should only use the new API functions.
At a later time it will probably be necessary to deprecate the existing API, which becomes redundant with the new one.

What do you think ?

@derselbst
Member

The API you suggest would be ok for me. I have a slight preference for calling it fluid_synth_set_reverb_roomsize2() rather than fluid_synth_set_fx_reverb_roomsize(). Not sure.

However, I am struggling in general with whether we need that overall flexibility. Perhaps it would be helpful to propose the new APIs for 1.1 and 2.1 on the mailing list.

@jjceresa
Collaborator Author

jjceresa commented Sep 6, 2020

However, I am struggling in general with whether we need that overall flexibility.

This overall flexibility is appreciated when the same synth instance is used by more than one musician simultaneously, each playing his own MIDI instrument. API 1.1 helps mix the instruments to the appropriate loudspeakers or headphones.
Some musicians have difficulty synchronizing their playing when all instruments are heard on a single audio output. For example, during a rehearsal with 4 musicians (bassist, pianist (playing melody), guitarist, percussionist), if the bassist has more difficulty than the other musicians, both bass and piano audio can temporarily be mixed to the same output. This helps the bassist learn to synchronize with the piano. Of course, using API 1.1 requires a multi-channel capable audio driver.

Perhaps it would be helpful to propose the new APIs for 1.1 and 2.1 on the mailing list.

Yes, I will propose this.

@mawe42
Member

mawe42 commented Sep 6, 2020

@jjceresa Is that a use-case that you have encountered yourself? Or do you know anybody who has expressed interest in that use-case?

Edit: I mean the use-case "when the same synth instance is used by more than one musician simultaneously, each playing its own MIDI instrument". Is that something you need? Or know someone who needs this? And if so... why? :-)

@jjceresa
Collaborator Author

jjceresa commented Sep 6, 2020

I mean the use-case "when the same synth instance is used by more than one musician simultaneously, each playing its own MIDI instrument". Is that something you need?

Yes, having only one software synth instance able to play multiple MIDI inputs is something I need. In a home studio, this allows grouping connections with existing external MIDI hardware (for example, 2 keyboards).

@derselbst
Member

API 1.1 helps mix the instruments to the appropriate loudspeakers or headphones.
Some musicians have difficulty synchronizing their playing when all instruments are heard on a single audio output. For example, during a rehearsal with 4 musicians (bassist, pianist (playing melody), guitarist, percussionist), if the bassist has more difficulty than the other musicians, both bass and piano audio can temporarily be mixed to the same output.

To me, that sounds like the "custom audio processing before audio is sent to audio driver" use-case as provided by new_fluid_audio_driver2(). It's a little hard for me to understand at which level you intend to place this new API. But perhaps I should just wait once you're ready.

@jjceresa
Collaborator Author

jjceresa commented Sep 6, 2020

It's a little hard for me to understand at which level you intend to place this new API. But perhaps I should just wait once you're ready

As drafted in point 1 of the first comment, this mapping API is intended to be placed at the MIDI channel level. Here are some details:

/**
 * Set the mixer's mapping of MIDI channels to audio buffers.
 * These mappings allow:
 *  (1) Any MIDI channel mapped to any audio dry buffer.
 *  (2) Any MIDI channel mapped to any fx unit input.
 *  (3) Any fx unit output mapped to any audio dry buffer.
 *
 * The function allows setting mappings (1) and/or (2) and/or (3),
 * simultaneously or not:
 *
 * 1) Mapping between MIDI channel chan_to_out and the audio dry output at
 *    index out_from_chan. If chan_to_out is -1 this mapping is ignored,
 *    otherwise the mapping is done with the following special case:
 *    if out_from_chan is -1, dry audio is disabled for this MIDI channel.
 *    This allows playing only fx (with dry muted temporarily).
 *
 * @param synth FluidSynth instance.
 * @param chan_to_out MIDI channel to which out_from_chan must be mapped.
 *  Must be in the range (-1 to MIDI channel count - 1).
 * @param out_from_chan audio output index to map to chan_to_out.
 *  Must be in the range (-1 to synth->audio_groups - 1).
 *
 * 2) Mapping between MIDI channel chan_to_fx and the fx unit input at
 *    index fx_from_chan. If chan_to_fx is -1 this mapping is ignored,
 *    otherwise the mapping is done with the following special case:
 *    if fx_from_chan is -1, fx audio is disabled for this MIDI channel.
 *    This allows playing only dry (with fx muted temporarily).
 *
 * @param chan_to_fx MIDI channel to which fx_from_chan must be mapped.
 *  Must be in the range (-1 to MIDI channel count - 1).
 * @param fx_from_chan fx unit input index to map to chan_to_fx.
 *  Must be in the range (-1 to synth->effects_groups - 1).
 *
 * 3) Mapping between the fx unit output (which is mapped to chanfx_to_out)
 *    and the audio dry output at index out_from_fx. If chanfx_to_out is -1,
 *    this mapping is ignored.
 *
 * @param chanfx_to_out indicates the fx unit (currently mapped to
 *  chanfx_to_out) whose output must be mapped to out_from_fx.
 *  Must be in the range (-1 to MIDI channel count - 1).
 * @param out_from_fx audio output index that must be mapped to the fx unit
 *  output (currently mapped to chanfx_to_out).
 *  Must be in the range (0 to synth->audio_groups - 1).
 *
 * @return #FLUID_OK on success, #FLUID_FAILED otherwise
 */

@jjceresa
Collaborator Author

jjceresa commented Sep 6, 2020

The 3 mappings described in the API (see previous comment) are represented in the master branch in fluidsynth\doc\FluidMixer.pdf, please see:

  • (1) map MIDI chan-to-buf (MIDI channel x to dry buffer i).
  • (2) map MIDI chan-to-fx (MIDI channel x to fx unit input j).
  • (3) map fx-to-buf (fx output j to dry buffer k).

All 3 mapping types are set by the API in real time:

  • Once mappings (1) (chan-to-buf) or (2) (chan-to-fx) are set, these mappings are taken into account on the next
    noteon played on the MIDI channel concerned (in fluid_voice_init()). These mappings are then used by the mixer during
    the note's voice lifetime.
  • Once mapping (3) (fx-to-buf) is set, this mapping is taken into account immediately by fluid_rvoice_mixer_process_fx().

@mawe42
Member

mawe42 commented Sep 9, 2020

I must admit I'm still wondering if the use-case justifies the additional public API functions and new features. I imagine that most people who want or need this kind of flexibility are already using either multi-channel output with something like jack, or use multiple fluidsynth VST or DSSI instances in a plugin host. In both cases, channel routing and different external effects per channel are already very easy to configure.

The first part (1) of the proposal has its own merit, I guess. We already have a limited ability to change the channel routing, and 1.1 makes this more explicit and flexible. I think if we implemented that, then we should also change the fluidsynth command-line options and rework the audio-groups / audio-channels settings.

But (2) sounds a little too much like going down the route of implementing more and more things that jack and plugin hosts already do very well. I feel like it would broaden the scope of fluidsynth too much.

But I also don't want to be the guy who always rejects new and larger extensions to the codebase. Maybe I'm too cautious here... so please don't take this as a downvote. It's more me thinking out loud about the scope of fluidsynth.

@jjceresa
Collaborator Author

I imagine that most people who want or need this kind of flexibility are already using either multi-channel output with something like jack, or use multiple fluidsynth VST or DSSI instances in a plugin host. In both cases, channel routing and different external effects per channel are already very easy to configure

I am not using jack, nor VST, nor DSSI. I am just using a multi-channel capable audio driver directly (i.e. using a multi-channel audio device card), please see #667.
This allows routing/mixing the instruments to separate loudspeakers.

But (2) sounds a little too much like going down the route of implementing more and more things that jack and plugin hosts already do very well. I feel like it would broaden the scope of fluidsynth too much.

Point 2 is about internal fx unit parameters. When 2 distinct MIDI instruments are connected to 2 distinct internal reverb units, I would expect these 2 reverb units to have different parameters. Currently all internal fx units have the same parameters, which is a shortcoming, particularly when these 2 instruments are routed to distinct stereo speakers. As the issue is about an internal fx unit shortcoming, it falls within the scope of fluidsynth. Please note also that the API implementation (PRs 672, 673) requires only a small amount of code.

@derselbst
Member

I didn't want to comment first to avoid biasing Marcus. However, I do share his concerns.

Let me go one step back to your use-case (the practical part of it, not the initial theoretical one):

For example, during a rehearsal with 4 musicians (bassist, pianist (playing melody), guitarist, percussionist), if the bassist has more difficulty than the other musicians, both bass and piano audio can temporarily be mixed to the same output

Ok, every musician plays his instrument on one MIDI channel. So we need 4 stereo channels, i.e. synth.audio-groups=4. [And probably 4 effects units as well, to give each instrument its own reverb.]

API 1.1 helps mix the instruments to the appropriate loudspeakers or headphones.

Understood. But let's replace "instruments" with "MIDI channels".

This use-case you have is absolutely valid. However, you're providing a bottom-up solution for it by adding more complexity into rvoice_mixer. I don't think this is the correct way, because I don't see a reason for it.

Instead, I would have voted for a top-down solution:

  1. Write a little client program that somehow administrates the buffer mappings.
  2. Create an audio driver by using new_fluid_audio_driver2(), providing a custom audio processing function.
  3. In that audio processing function: assign the correct buffers to the fluid_synth_process() call based on the previous mapping.
  4. Done.

It is up to you whether the client program in 1. is your own demo program or fluidsynth's command shell. And of course, 2. requires the WaveOut and dsound drivers to learn to support new_fluid_audio_driver2().

Now coming back to your use-case: While the mapping of MIDI channels to audio channels is indeed rigid in rvoice_mixer, it does not mean that it's rigid when calling fluid_synth_process(). That's because the stereo buffers provided to fluid_synth_process() can alias each other, i.e. they don't have to be four distinct stereo buffers. You could simply pretend to fluid_synth_process() that you have four stereo buffers, while you only provide three distinct stereo buffers. That is, if the first and second stereo buffers alias each other, the bass and piano will be mixed with each other. Likewise, you can decide where to map the effects, because you can control which buffers will be written to under the hood.

The only drawback of my solution I see is that it would also affect voices that are already playing. Whereas your solution only applies the new mapping to new voices. But does this issue really justify the added complexity of this PR? Esp. since we are talking about "temporary mappings", as far as I understand.

The drawback of your proposal is that it duplicates functionality (i.e. flexibility of buffer mapping) that is already provided by fluid_synth_process(). A functionality that is not only internal, but also exposed via the public API.

So, in summary, I'm sorry to say, but given this API proposal, I don't see any "new features" that can't already be achieved with fluid_synth_process().


But (2) sounds a little too much like going down the route of implementing more and more things that jack and plugin hosts already do very well. I feel like it would broaden the scope of fluidsynth too much.

Point 2 is about internal fx unit parameters. When 2 distinct MIDI instruments are connected to 2 distinct internal reverb units, I would expect these 2 reverb units to have different parameters. Currently all internal fx units have the same parameters, which is a shortcoming, particularly when these 2 instruments are routed to distinct stereo speakers. As the issue is about an internal fx unit shortcoming, it falls within the scope of fluidsynth. Please note also that the API implementation (PRs 672, 673) requires only a small amount of code.

The changes proposed in #673 are actually ok for me, but let's talk about this later separately.

@jjceresa
Collaborator Author

Now coming back to your use-case: While the mapping of MIDI channels to audio channels is indeed rigid in rvoice_mixer, it does not mean that it's rigid when calling fluid_synth_process().

Yes, fluid_synth_process() exposes a powerful mapping/mixing feature for audio buffers. In fact, this fluid_synth_process() functionality is not the same as the MIDI channel to audio channel mapping exposed by rvoice_mixer in this PR. The rvoice_mixer MIDI channel mapping is the only one naturally synchronous with the MIDI notes played by the musician. This makes a real-time MIDI channel mapping change possible during the song (without audio artifacts), regardless of whether the mapping change is requested by the musician while he is playing or by another person devoted to the recording.

The only drawback of my solution I see is that it would also affect voices that are already playing. Whereas your solution only applies the new mapping to new voices. But does this issue really justify the added complexity of this PR? Esp. since we are talking about "temporary mappings", as far as I understand.

Yes, the rvoice_mixer real-time MIDI channel mapping solution makes easy direct recording possible (i.e. without artifacts) while the musicians are playing.

So, in summary, I'm sorry to say, but given this API proposal, I don't see any "new features" that can't already be achieved with fluid_synth_process()

Please be aware that I am sensitive to and aware of the power of fluid_synth_process(), but I don't think that the real-time MIDI channel mapping proposed by this PR is a duplicate of functionality that can easily be achieved using fluid_synth_process().
For the same reason, I also think that the current mix mode of rvoice_mixer is a powerful feature that should stay inside rvoice_mixer.

Now coming back to the application you proposed above (a client that administrates the buffer mappings and creates the custom audio driver).
This kind of application could be useful at the audio mixing stage for recording a song on distinct tracks. It could be used by the recording team, which prepares a fixed buffer mapping configuration for audio track separation and post-recording processing purposes. Once the track configuration is prepared beforehand, the song played by the musicians can start, and the song will be dispatched to the tracks. It seems that fluid_synth_process() is appropriate for this client application.

I didn't want to comment first to avoid biasing Marcus. However, I do share his concerns.

In summary:

  • point 1.1 (MIDI channel mapping API): Marcus is not against it. Tom thinks it is a complex duplicate of functionality
    that can be achieved using fluid_synth_process().
  • point 2.1 (fx unit API): Marcus thinks this sounds a little too much. Tom is ok with it.

@derselbst
Member

The rvoice_mixer MIDI channel mapping is the only one naturally synchronous with the MIDI notes played by the musician. This makes a real-time MIDI channel mapping change possible during the song

Ok, but I don't understand why this is so important. Is it only to avoid potential audio artifacts? If so, wouldn't a simple fade in / fade out easily solve this?

Also, there is another point that I still don't get: Assuming you have a pianist playing on MIDI chan 0 and a bassist on MIDI chan 1. Each musician wears headphones. In the beginning, the headphones of the bassist play the bass only, those of the pianist the piano only. Now, imagine you map the piano onto the bass, right? Then the bassist will hear both instruments, but the pianist will hear only silence, won't he?

So, I really think what you need is the Jack audio server. It's solving exactly this issue. It should be compilable for Windows as well, have you had a look at it?

@jjceresa
Collaborator Author

This makes a real-time MIDI channel mapping change possible during the song
Ok, but I don't understand why this is so important?

For example, when a musician plays a phrase, the mapping allows him to temporarily play the dry audio "solo" (the fx is muted) or the fx audio "solo" (the dry audio is muted) and then come back to both (dry + fx) during the same phrase (using a foot or key switch).

Also, there is another point that I still don't get: Assuming you have a pianist playing on MIDI chan 0 and a bassist on MIDI chan1. Each musician wears a headphone.

When musicians play together they never use headphones; they use loudspeakers to be able to hear each other, because they need mutual learning. So mapping 2 instruments to the same output only makes sense if this output is connected to
loudspeakers. Some musicians prefer to use headphones physically connected to the same output rather than loudspeakers, because they simply don't want to be bothered by the room acoustics.

So, I really think what you need is the Jack audio server. It's solving exactly this issue.
have you had a look at it?

When a musician plays an instrument he is busy with this instrument and doesn't want to be disturbed by the use of a GUI application. Jack is well suited for predefined offline I/O audio connection settings, but not adapted for a musician playing in real time.

@mawe42
Member

mawe42 commented Sep 11, 2020

JJC, excuse me for being so persistent here, but I would really like to understand whether we are talking about a concrete need you have or if this is more like "it would probably be nice for other people".

So for me to understand where you are coming from:

  • Do you yourself play together with other musicians on a single Fluidsynth instance, either via loudspeakers or headphones?
  • And have you yourself experienced the need to change channel mappings and switch off fx in real-time during such a music session?
  • Do you actually use the Fluidsynth API when playing in this context? Or do you use fluidsynth from the command line or via some other frontend?

@jjceresa
Collaborator Author

jjceresa commented Sep 11, 2020

JJC, excuse me for being so persistent here, but I would really like to understand if we are talking about a concrete need you have.

Yes, Tom and Marcus, this is a real concrete need I have. I am a keyboardist (not professional), and my main concern is the ability a musician has (using at most 10 fingers and 2 feet) to achieve what he needs.

Do you yourself play together with other musicians on a single Fluidsynth instance, either via loudspeakers

I play with other musicians and would like to do so during training lessons using only one instance and only one multi-channel audio card connected to loudspeakers. I would also like to have as little hardware/software complexity as possible when using this alone (at home).

And have you yourself experienced the need to change channel mappings and switch off fx in real-time during such a music session?

Yes. For example, playing 2 instruments (i.e. a flute [+ a bit of reverb] accompanied by a piano [+ a bit of reverb]) gives the dimension (illusion) of 4 instruments being present. This is also possible when the 2 instruments are played by only one musician on only one MIDI keyboard.

Do you actually use the Fluidsynth API when playing in this context?

Yes, for the "solo fx/dry on/off" experiment I used a tiny hand-crafted application that intercepts MIDI events coming from the MIDI driver and then calls the fluidsynth API. Doing this kind of "solo" experiment using the fluidsynth console command line doesn't provide the expected real-time feedback.

@derselbst
Member

my main concern is the ability a musician has (using at most 10 fingers and 2 feet) to achieve what he needs.

So, given your 10 fingers and 2 feet, how exactly do you intend to change the channel mapping while playing the piano? You talked about a footswitch. That's a little too vague. I would like some more details for a better understanding: what does the footswitch trigger? A shell command? Or an API call? And does it trigger a simple pre-defined mapping, or does it somehow dynamically react to your situation? (Sorry, I really have no clue.)

@jjceresa
Collaborator Author

So, given your 10 fingers and 2 feet, how exactly do you intend to change the channel mapping while playing the piano? You talked about a footswitch. That's a little too vague.

While the instrument's notes are played with the hands, the footswitch (or hand push-button switch) triggers a pre-defined logical mapping that uses the MIDI channel of the current note. As you say, the reaction is dynamic to the playing situation (i.e. based on the current MIDI channel of the instrument played by the hand). The mapping is executed via an API call.
Before playing the song, if the keyboard is split into 2 instruments, piano and flute (i.e. 2 key ranges, each assigned its own MIDI channel, 1 and 2), then during the playing, if a mapping is triggered it will act on the piano or the flute.

@derselbst
Member

the footswitch triggers a pre-defined logical mapping that uses the MIDI channel of the current note. [...] The mapping is executed via an API call.

Ok, how about using Jack's API for manipulating ports to rearrange the mapping?

And if you wanted to temporarily disable fx, you could add a default modulator whose secondary source is a switch that pulls CC91 and CC93 to zero.

Sorry for being so nit-picky here, but I really think that this channel mapping use-case should be implemented on a high level. Not by adding more complexity into rvoice_mixer and exposing it to the user. I'm afraid that this will become a burden as soon as we need to change an implementation detail deep down in the mixer and we find that for some reason we cannot do this because it would break the API usage.

@jjceresa
Collaborator Author

Ok, how about using Jack's API for manipulating ports to rearrange the mapping?

I'm afraid that this will become a burden as soon as we need to change an implementation detail deep down in the mixer and we find that for some reason we cannot do this because it would break the API usage.

Ok, I understand your point of view as a maintainer. Unfortunately, Jack covers only a small part of the real needs of musicians using MIDI. Also, please be aware that the current rvoice_mixer fixed channel mapping cannot be worked around by changing the MIDI channel of the MIDI controller that sends MIDI messages, nor by substituting the channel value somewhere between the MIDI driver and the fluidsynth instance.

Sorry for being so nit-picky here, but I really think that this channel mapping use-case should be implemented on a high level. Not by adding more complexity into rvoice_mixer and exposing it to the user.

Just a note: I still don't understand why you think this PR adds "more complexity into rvoice_mixer". This PR does a simple straightforward substitution of the expression (channel % z) by the value of a variable set by an API.

So, we can close this PR. I will continue with a custom version of fluidsynth.

@derselbst
Member

I still don't understand why you think this PR adds "more complexity into rvoice_mixer". This PR does a simple straightforward substitution of the expression (channel % z) by the value of a variable set by an API.

Currently, there are no constraints on what and how we map things in rvoice_mixer. The user doesn't need to know / doesn't need to care about that. Thus we can use a simple fixed mapping. Now you want to make this mapping variable and expose it to the user. Hence we will get a bigger API and constrain ourselves to rvoice_mixer's current implementation.

If there were no other options to achieve your use-case, I would buy it. However, given the number of alternative approaches on a higher level (new_fluid_audio_driver2(), Jack, or as Marcus said VST, DSSI), I'm cautious with this step.

So, we can close this PR. I will continue with a custom version of fluidsynth.

Seems like we need a third (or fourth) opinion. @mawe42 What do you think? Are you "still wondering if the use-case justifies the additional public API functions and new features."? Should we discuss that feature on the mailing list? You can also tell me I'm mistaken, then I will give up my reservations on that topic.

@mawe42
Member

mawe42 commented Sep 12, 2020

What do you think? Are you "still wondering if the use-case justifies the additional public API functions and new features."?

I'm really in two minds about this. On the one hand I think that the MIDI channel to buffer mapping should be limited to two modes of operation:

  1. all MIDI channels on a single stereo output with effects mixed in, or
  2. all MIDI channels and fx on separate outputs.

Option 1 is probably what 90% of users will ever need. Option 2 is for the small number of people who want to use fluidsynth for advanced things like real-time multi-channel live performance. For those advanced use-cases, there are really good tools available (Carla, jack + friends, Ableton Live, ...) that can simply take fluidsynth multi-channel output (or multiple fluidsynth instances) and offer very flexible and user-friendly real-time control for live performance. And if you miss functionality (for example to switch an instrument to a different output, controlled via a MIDI foot pedal), you can either search for a plugin that does what you want, or quickly write a plugin yourself.

Using those real-time performance hosts also has the advantage that you can add any effect to the outputs, and set them up so that you can control the effects via MIDI foot pedals or other controllers as well.

So... when I follow this train of thought, I would argue against these changes and would instead propose to rip out existing functionality. Get rid of the audio-groups modulo stuff, even get rid of the whole LADSPA subsystem.

(Side note: I proposed a rewrite of the LADSPA system because I wanted to use additional effects with fluidsynth in my embedded application. It served quite well until recently... but I now have some additional requirements that mean I need more flexibility. So I will switch over to jack and multi-channel output instead. Which is something I should have done from the beginning, I think.)

But I said I'm in two minds about this. So the other way to think about this is: we already have (most of) the features that JJC wants, so let's expose them to the user in the most useful way possible.

We already offer limited control over the mapping via the audio-groups setting. But that is quite restrictive and complicated to understand for the user, I think. Giving users more explicit control over the routing sounds good. But we should take the existing interfaces (e.g the audio-groups setting) into account as well and clean that up at the same time.

And we already have the separate fx units, so adding new API functions to control their parameters separately makes sense. Here I would like to ask: if we allow individual fx unit control via the API and shell, should we also expose this via the settings?

And if we want to actively support real-time manipulation, to support the use of standalone fluidsynth in live performances, then we need to provide a way for non-API users to access the functionality as well, I think. And no, the shell does not really count. :-)

So maybe we need to implement an OSC handler for real-time live performance control?

@derselbst
Copy link
Member

So the other way to think about this is: we already have (most of) the features that JJC wants, so let's expose them to the user in the most useful way possible. Giving users more explicit control over the routing sounds good.

Ok, I'll buy it. I'll review #672 in a more detailed manner tomorrow.

But we should take the existing interfaces (e.g the audio-groups setting) into account as well and clean that up at the same time.

Cleaning it up... do you have something specific in mind?

if we allow individual fx unit control via the API and shell, should we also expose this via the settings?

IMO, no. I see the settings more like a basic initialization of the synth, that should be easy to understand and use. If one needs to set details, one should use the synth API. Esp. since you might want to manipulate those parameters in real-time. (I never liked these "real-time" settings. They only make synth API calls under the hood. I prefer direct API usage.)

So maybe we need to implement an OSC handler for real-time live performance control?

Open Sound Control - sounds interesting. I never really had a close look into it, so I don't know. But I think we should keep this kind of real-time manipulation at a minimum. As you said initially, there is already a bunch of software out there for that purpose.

@jjceresa
Copy link
Collaborator Author

Ok, I'll buy it. I'll review #672 in a more detailed manner tomorrow.

Thanks. There's no need to hurry.

@jjceresa
Copy link
Collaborator Author

Thank you both for your useful feedback and your time.

Currently, there are no constraints on what and how we map things in rvoice_mixer. The user doesn't need to know / doesn't need to care about that. Thus we can use a simple fixed mapping.

That is true for the default settings values of audio-groups (1) and effects-groups (1). As soon as the user increases these settings to fit his needs, he is faced with the fixed mapping (the modulo stuff), which is fairly straightforward when audio-groups and effects-groups have the same value. When the two settings differ, things become difficult to understand, and the user realizes he is seriously constrained by this default fixed mapping.

If we want to make this mapping variable and expose it to the user, we will get a bigger API and constrain ourselves to rvoice_mixer's current implementation.

Right. I am aware that this new mapping API will move the constraint from the user side to the developer side, which is now bound to the current rvoice_mixer implementation. I am also 100% aware of any fear about the risk of breaking the mapping API if we change some details in rvoice_mixer. Actually, the rvoice_mixer behaviour is fully and only defined by the semantics of the audio-groups and effects-groups settings, and the new mapping API respects those semantics (i.e. the new API depends only on audio-groups and effects-groups). The only thing I can see that could break the API would be removing one of these settings. So the 2 questions I wonder about are: 1) do we intend to remove the multiple-stereo-buffer feature of rvoice_mixer? 2) do we intend to remove support for more than one internal fx unit?

But we should take the existing interfaces (e.g. the audio-groups setting) into account as well and clean that up at the same time.

Maybe you made a typo and you are talking about audio-channels? In this case, please have a look at #663

@jjceresa
Copy link
Collaborator Author

IMO, no. I see the settings more like a basic initialization of the synth, that should be easy to understand and use.

These settings initialize all fx units with the same values. Individual fx unit initialization does not seem necessary. (Please note that the settings API (key, value) accepts only one value per named key.)

@mawe42
Copy link
Member

mawe42 commented Sep 13, 2020

Maybe you made a typo and you are talking about audio-channels? In this case, please have a look at #663

No, I really mean audio-groups... and really also effects-groups. Above you write:

That is true for the default settings values of audio-groups (1) and effects-groups (1). [...] When the two settings differ, things become difficult to understand, and the user realizes he is seriously constrained by this default fixed mapping.

That's what I mean. The relationships between audio-channels, audio-groups and effects-groups are quite hard to understand, in my opinion, and even harder to explain. And my feeling is that the only reason they are implemented this way is that using a single number and doing modulo on the channel number was simple to implement.

So in my opinion, if we have shell commands to give users explicit control over the MIDI channel to audio channel, MIDI channel to fx unit, and fx unit to audio channel mappings, then that should be the one and only way to configure different channel mappings. Completely remove the audio-groups and effects-groups settings. Only keep audio-channels and add a new effects-units setting. By default they only create additional output channels and effects units, but they are unused. To use them, the user has to create a configuration file with shell commands that change the default from "all on the first output and first effects unit" to something else.

So in essence I think we should design this feature from the user perspective. What do users need, how can they achieve what they want. And then provide one and only one way to configure it for each interface (fluidsynth exec, API).

And then maybe think about adding OSC or MIDI SysEx commands for the real-time control that you wanted.

@mawe42
Copy link
Member

mawe42 commented Sep 13, 2020

Thinking about this some more. Instead of simply implementing what somebody currently needs, I would rather design this feature. So think about what usage scenarios we want to support, then decide on the best way to support them and (more importantly) how to actually configure fluidsynth for those scenarios.

I can think of the following:

  1. Default: All MIDI channels on single stereo output with effects mixed in.
  2. MIDI channels on any number of internal dry and fx sub-groups, so that LADSPA effects can be added to those individual sub-groups. Then render all sub groups together on a single stereo output.
  3. MIDI channels on any number of internal dry and fx sub-groups. Maybe add LADSPA effects to those individual sub-groups. Then render sub groups on any number of stereo outputs.
  4. All MIDI channels on individual output channels, fx units on dedicated stereo output channel. No LADSPA.

(Edit: added fourth option)

Are there any more?

@derselbst
Copy link
Member

how to actually configure fluidsynth for those scenarios. I can think of the following:

The scenarios you describe can already be achieved with fluid_synth_process(), as far as I know.

I would rather design this feature.

Ok, sounds good. But in this case, we should continue this discussion on the mailing list. I don't think that we three can reach a consensus here that covers all use-cases, while it's still easier to understand and use than the current implementation.

@mawe42
Copy link
Member

mawe42 commented Sep 13, 2020

The scenarios you describe can already be achieved with fluid_synth_process(), as far as I know.

That might be, but it's not what I was thinking about. I tried to look from the user perspective. So try to imagine what people need. Then decide which of those use-cases we want to support. Then decide how the user interface should work. And only then look at the existing implementation and decide what's possible and how.

I don't know... it could also be that I'm thinking too big here.

Ok, sounds good. But in this case, we should continue this discussion on the mailing list. I don't think that we three can reach a consensus here that covers all use-cases, while it's still easier to understand and use than the current implementation.

Good idea.

@derselbst
Copy link
Member

I don't know... it could also be that I'm thinking too big here.

We will see once it's brought to the mailing list. Perhaps you, Marcus, could/should start the discussion. I'm probably too biased.

@jjceresa
Copy link
Collaborator Author

Completely remove the audio-groups and effects-groups settings. Only keep audio-channels and add a new effects-units setting.

I am not sure I understand. You probably mean:

  • keeping the audio-groups functionality but renaming this setting to audio-channels.
  • keeping the effects-groups functionality but renaming this setting to effects-units.
  • completely removing the current audio-channels setting.
    Is that right?

@jjceresa
Copy link
Collaborator Author

So think about what usage scenarios we want to support.

I see only 2 types of scenarios:

  • default (numbered 1 in the previous comment), which is the one adopted at synth instance initialization: all MIDI channels to buf 0, all MIDI channels to fx 0, all fx units to buf 0 (regardless of buffer count and fx unit count).
    I see this default mapping as the only predictable and comprehensible one, because it is independent of the buffer count and fx unit count.
  • custom. This custom mapping is chosen by the user and will of course fall into one of the scenarios you described in the previous comment (2, 3 or 4).

Then decide what the best way would be to support and (more importantly) how to actually configure fluidsynth for those scenarios.

At this stage, for the user I see only the basic ways (i.e. the mapping API and its companion shell commands). IMO this should be sufficient for now. Later we could add a high-level interface (i.e. through SysEx or OSC), but only when it becomes necessary. The real need for such a high-level interface will probably emerge progressively from user experience with the basics (API, shell commands), and this will take a long time. In the short term we should provide only the basic ways, then let users do what they want with them and wait.

Default: All MIDI channels on single stereo output with effects mixed in.

@jjceresa
Copy link
Collaborator Author

Default: All MIDI channels on single stereo output with effects mixed in.

Please ignore this sentence.

@mawe42
Copy link
Member

mawe42 commented Oct 31, 2020

I've just tried to write a post for fluid-dev to start the discussion about this feature, but I'm having a really hard time in trying to come up with a good explanation and (more importantly) with a good question to ask the community.

For starters, I was unsure which audio drivers actually support multi-channel output at the moment. Because I couldn't find any documentation about this, I created the following wiki page: https://github.com/FluidSynth/fluidsynth/wiki/Audio-Drivers

Please have a look at that page and let me know if it is correct. I think such a page would be useful for our users, so I would like to link it into the main documentation in the wiki. Any objections to that?

So, coming back to the discussion... I'm really unsure what to ask the community. JJC has said he has a real use-case for real-time channel-mapping. And we seem to agree that most of what we need to support this is already available in FS, so it would be a good change to make.

The open question is how the API for this feature should look and behave. And here I'm really unsure what JJCs plans for the real-time control is. @jjceresa can you elaborate a little more on how you intend this real-time control to work? I mean the actual practical usage of the feature? Would you create a little script that listens to MIDI events and send text commands to the fluid shell via telnet? Or would you write a wrapper program that uses fluidsynth via the API and implement your own MIDI event handler to control the channel mapping?

And there is the "big question" I came up with: instead of patching another layer on top of the current multi-channel logic, shouldn't we design the multi-channel output from the ground up instead? Which also means revising the audio-channels, audio-groups, effects-channels and effects-groups settings, which are really hard to understand for normal users, in my opinion. When trying to write the post starting the discussion, I also attempted to explain how multi-channel currently works. And that turned out to be really hard to understand, so I dropped that idea.

I'm unsure on how to proceed here. I feel quite strongly that adding the extra layer of complexity in this PR on an already hard to understand feature is problematic. So I really think we should come up with a new and unified way to control the MIDI channel to output channel and MIDI channel to effects-unit mapping. Once that is clean, we should add the real-time controls to change the mapping.

@jjceresa
Copy link
Collaborator Author

I think such a page would be useful for our users, so I would like to link it into the main documentation in the wiki.

Good Idea, thanks for this page.

Please have a look at that page and let me know if it is correct.

waveout (like dsound) supports multi-channel too.

A lot of unix-like drivers (alsa, ...) do not yet support multi-channel, but this could be added. For example, for alsa I looked at how to add this support but never proposed a PR because I cannot test it locally here (please see #665).

@mawe42
Copy link
Member

mawe42 commented Oct 31, 2020

waveout (like dsound) supports multi-channel too.

Thanks, I've updated the page.

A lot of unix-like drivers (alsa, ...) do not yet support multi-channel, but this could be added.

Sure, I just wanted to document the current state.

@jjceresa
Copy link
Collaborator Author

So, coming back to the discussion... I'm really unsure what to ask the community.

audio-channels, audio-groups, effects-channels and effects-groups settings, which are really hard to understand for normal users.

I think the actual fx/mixer behaviour should first be documented a bit in plain text. I can do that. The document should show where the mixer sits in the overall audio path. It must also explain that "multi-channel" in fluidsynth simply means multiple stereo outputs, nothing else. This document is required to understand what these settings represent inside the mixer. (A small presentation of the mixer should be added to the wiki, pointing to this document.)

Then later it will be possible to present the API proposal to the mailing list, based on this document.

@jjceresa
Copy link
Collaborator Author

@jjceresa can you elaborate a little more on how you intend this real-time control to work? I mean the actual practical usage of the feature?

  1. Simple case: let 2 musicians play with their MIDI controllers (EWI, keyboard) connected to the same synth instance, with an audio driver offering 2 stereo outputs. The fluid instance must appear as if each musician had his own synthesizer and his own stereo output. The synth instance is configured before playing the song using shell commands.
  • MIDI messages from the EWI controller are received by the synth on MIDI channels 0 to 15 and mapped to stereo output 0; same for the fx unit.
  • MIDI messages from the keyboard controller are received by the synth on MIDI channels 16 to 31 and mapped to stereo output 1; same for the fx units.
  2. More elaborate case involving reverb mapping to an output, or reverb mute/solo during the song: this is relevant to only one musician (i.e. the EWI player) and triggered by that musician. In this case a MIDI event handler is implemented to intercept MIDI messages coming from the MIDI driver and call the API to control the EWI MIDI channel mapping.

@mawe42
Copy link
Member

mawe42 commented Oct 31, 2020

I must admit I still don't understand where your use-case is coming from. I mean I understand what you want to achieve, but I don't understand why you want to achieve it in this way. But maybe that isn't really important... Rewriting the multi-channel configuration interface has merit in itself, and your use-case would naturally benefit from that, I think.

I think I have an idea how to bring this to the mailing-list now. I will write a proposal what I think would be a really nice and clean way to configure the channel and buffer mapping.

@derselbst
Copy link
Member

And I must admit that I never saw a use-case for multiple synth instances. However, the use-cases JJC has just described seem like a very suitable case for creating two synth instances. The purpose of squashing this functionality into a single instance is not quite clear to me.

@jjceresa
Copy link
Collaborator Author

the use-cases JJC has just described seem like a very suitable case for creating two synth instances.

Right, and squashing these 2 synth instances into one requires only one audio driver driving a single audio card (having at least 2 stereo outputs, of course). Otherwise, with 2 synth instances we are forced to create 2 audio drivers, which requires 2 distinct audio cards, and we also lose any possibility of mapping/mixing any MIDI channel to any audio output.
Side note: using a single multi-device-capable MIDI driver, use case 1 can be extended to more than 2 USB MIDI input devices without requiring an external MIDI merge box. All this also simplifies the hardware requirements.

derselbst pushed a commit that referenced this issue Nov 22, 2020
This PR addresses #669 point 2.1.
It proposes set/get API functions to change/read fx unit parameters.
The deprecated shell commands are updated. Now the command lines have 2 parameters:
- first parameter is the fx unit index.
- second parameter is the value to apply to the fx unit.
CartoonFan pushed a commit to CartoonFan/fluidsynth that referenced this issue Dec 20, 2020
* Properly handle overlapping notes when using fluid_event_note() (FluidSynth#637)

* Fix regression introduced in a893994

Mentioned commit broke fluid_synth_start() when using a DLS soundfont.

* Fix an uninitialized memory access

that could possibly trigger an FPE trap for instruments that use the exclusive class generator

* Update API docs

* Bump to 2.1.4

* Update Doxyfile

* Turn incompatible-pointer-types warning into error

* Fix passing arguments from incompatible pointer type

* Fix a NULL deref in jack driver

* Fix a possible race condition during midi autoconnect

* Fix printf format warnings

* Update Android Asset loader to new callback API

* Update Travis CI (FluidSynth#658)

* update to Ubuntu Focal
* use clang10
* avoid unintentional fallbacks to  default `/usr/bin/c++` compiler
* fix related compiler warnings

* fix NULL permitted for out and fx pointer buffer

Closes FluidSynth#659

* CMakeLists.txt: fix build with gcc 4.8 (FluidSynth#661)

-Werror=incompatible-pointer-types is unconditionally used since version
2.1.4 and 137a14e. This will raise a
build failure when checking for threads on gcc 4.8:

/home/buildroot/autobuild/run/instance-3/output-1/host/bin/arm-none-linux-gnueabi-gcc --sysroot=/home/buildroot/autobuild/run/instance-3/output-1/host/arm-buildroot-linux-gnueabi/sysroot -DTESTKEYWORD=inline  -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -Os -Wall -W -Wpointer-arith -Wcast-qual -Wstrict-prototypes -Wno-unused-parameter -Wdeclaration-after-statement -Werror=implicit-function-declaration -Werror=incompatible-pointer-types -Wbad-function-cast -Wcast-align   -DNDEBUG -fPIE   -o CMakeFiles/cmTC_98946.dir/CheckIncludeFile.c.o   -c /home/buildroot/autobuild/run/instance-3/output-1/build/fluidsynth-2.1.4/CMakeFiles/CMakeTmp/CheckIncludeFile.c
cc1: error: -Werror=incompatible-pointer-types: no option -Wincompatible-pointer-types

Fixes:
 - http://autobuild.buildroot.org/results/13cbba871db56ef8657a3d13c6ac8e1b4da0d244

Signed-off-by: Fabrice Fontaine <fontaine.fabrice@gmail.com>

* TravisCI: add a build for GCC 4.8 (FluidSynth#662)

* Remove unused member variable

* Limiting audio-channels to audio-groups (FluidSynth#663)

* Use a runtime check to detect version of libinstpatch (FluidSynth#666)

It could be that during runtime an older version of libinstpatch is used than the one fluidsynth was compiled against. In this case, libinstpatch will fail to load DLS fonts, because libinstpatch's initialization semantics don't match those compiled into fluidsynth.

* Add a chart about voice mixing and rendering

* Mapping of fx unit output to dry buffers in mix mode. (FluidSynth#668)

Currently, all fx unit output (in mix mode) are mapped to the `first buffer`.
This is not appropriate for synth.audio-groups > 1

This PR allows the mapping of fx output based on `fx unit index` and `synth.audio-groups` value.
This allows us to get the `fx output `mixed to the respective  `buffer` on which a `MIDI channel` is mapped.
For example: with `synth.audio-groups = 3` and  `synth.effect-groups = 3`:
- MIDI chan 0 (dry + fx0) is mapped to buf 0
- MIDI chan 1 (dry + fx1) is mapped to buf 1
- MIDI chan 2 (dry + fx2) is mapped to buf 2

* Add multi channels support for audio driver. (FluidSynth#667)

This PR addresses FluidSynth#665.

1) Add new functions for multi channels support: `fluid_synth_write_float_channels()`, `fluid_synth_write_s16_channels()`
2) `dsound` and `waveout` driver make use of this support. tested on 2 audio devices: 
    - creative SB Live! (6 channels).
    - Realtek: ALC889A (8 channels).

* Bump to 2.1.5

* Add SonarQube static code analysis (FluidSynth#671)

* Add SonarQube and LGTM badges to README

* Remove fluid_event_any_control_change() from public API (FluidSynth#674)

Originally, I have only marked it deprecated. But since we have an SOVERSION bump next release and because this function was only meant for internal usage, I think it's safe to remove it right now.

* Remove dead code

* Fix an impossible NULL deref

* Fix a NULL dereference

Access to field 'zone' results in a dereference of a null pointer (loaded from variable 'prev_preset'), if `size` is negative. Problem is: Parameter `size` is `chunk.size` and should be unsigned.

* Fix another NULL dereference

Access to field 'zone' results in a dereference of a null pointer (loaded from variable 'pr'), if size is negative. However, size should be unsigned.

* Remove a FIXME

I don't see any problem calling fluid_channel_init() from within synth context

* Remove a FIXME

I don't see any 'allocation' of preset. And ALL public synth functions have a mutex lock which might potentially block when called from synth context, but only then if the client app pessimizes this situation by extensively calling the synth from outside the synth context.

* Remove a FIXME

I have no clue what it refers to or what it's meant by that.

* Add comment into empty block

* Remove a FIXME

Not aware of any problems caused by the old glib thread API. It will be removed sooner or later anyway.

* Remove a FIXME

* Set the systemd unit target to default.target

fluidsynth.service.in:
The [Install] section [1] in systemd unit declares in which target the
service will be started.
The `multi-user.target` [2] - managed by the systemd _system_ service
manager - is used in the `fluidsynth.service`.
However, as it is a _user_ unit it needs to be pulled in by the
`default.target` [3] instead, which is the main target for the user
session (as started by `user@.service` [4]).

[1] https://www.freedesktop.org/software/systemd/man/systemd.unit.html#%5BInstall%5D%20Section%20Options
[2] https://www.freedesktop.org/software/systemd/man/systemd.special.html#multi-user.target
[3] https://www.freedesktop.org/software/systemd/man/systemd.special.html#default.target1
[4] https://www.freedesktop.org/software/systemd/man/user@.service.html

* Define FLUIDSYNTH_API on OS/2

Previously, CMake on OS/2 exported all the symbols unconditionally. Now
it exports necessary symbols only. As a result, it's necessary to
define FLUIDSYNTH_API correctly.

Addresses FluidSynth#678

* Make winmidi driver multi devices capable. (FluidSynth#677)

* Fix minor bug in windows audio driver (FluidSynth#680)

* Improve error reporting in Waveout and DSound drivers

* Fix Windows build

* Add proper unicode support to Windows error reporting

* Fix build on Windows 9x/ME

Addresses FluidSynth#679

* Promote Controller/Pressure/Bend event functions to 32bits (FluidSynth#670)

* Elaborate on synth.cpu-cores

* Add FluidMixer chart to API docs

* Ensure WaveOut compatibility with Win9X/NT (FluidSynth#687)

* Update and rename README.Android.md to README.md

* Update Android CircleCI build to use latest orb, Android API, Oboe and Cerbero (FluidSynth#690)

This fixes the currently-broken CircleCI build for Android-useable .so files.

Currently the Cerbero build is based off https://github.com/falrm/cerbero until https://gitlab.freedesktop.org/gstreamer/cerbero/-/merge_requests/641 is merged and deployed to the GitHub cerbero mirror.

Here is a successful build with the updated CircleCI workflow: https://app.circleci.com/pipelines/github/falrm/fluidsynth-android/31/workflows/0ad3186a-394c-4736-984b-96496b608053/jobs/32

Fixes FluidSynth#688

* Replace FreeBSD 13.0 with 11.4 (FluidSynth#692)

13.0 hasn't been released yet and the CI build keeps failing for long.

* Remove unused variable

* Fix possible uninitialized use of dry_idx variable

* avoid an unlikely race condition

* Add hint message when compiled without getopt support (FluidSynth#697)

* Add getopt support to CMake summary

* Add public API to pin and unpin presets to the sample cache (FluidSynth#698)

Following the discussion about an API to pin and unpin preset samples in the sample cache here:
https://lists.nongnu.org/archive/html/fluid-dev/2020-10/msg00016.html

Short explanation of the change:

Only the default loader currently supports dynamic sample loading, so I thought it might be a good idea to keep the changes for this feature mostly contained in the default loader as well. I've added two new preset notify flags (FLUID_PRESET_PIN and FLUID_PRESET_UNPIN) that are handled by the preset->notify callback and trigger the loading and possibly unloading of the samples.

* Revert "remove VintageDreamsWaves-v2.sf3"

This reverts commit a36c06c. We've got
explicit permission from Ian Wilson to convert it to SF3.

Addresses FluidSynth#701.

* Updated XSL / styling for fluidsettings.xml

* Cleanup section label markup and rendering

* Use (empty string) for empty default values of str settings

* shell.port is an int setting, not num

* Update periods and period-size with current values from source

* Consistently format all floats

* Better explain currently unused effects-channels

* Update effects-groups description to avoid the word "unit"

* Update ladspa.active description

Use 1 (TRUE) for consistency and mention LADSPA documentation

* As gs is default for midi-bank-select, list it as first option for clarity

* Options seems to be more widely used, so use that instead of Choices

* Remove FLUIDSYNTH_API and FLUID_DEPRECATED macros from documentation

* Remove "References" and "Referenced by" links from doc

The auto-generated links are quite long on some functions, making
the documentation harder to read.

* Enable navigation sidebar

* Make larger enums easier to read

* Move doxygen customizations into separate directory

* Restructure devdocs into separate pages

* Change files into groups / modules

* Some additional subgrouping

* Use xsltproc to include settings in API documentation

* Replace all links to fluidsettings.xml with proper \ref's

* Command Shell group for all shell related commands

With subgroups for command handler, shell and server.

* Audio output group

With subgroups Audio Driver and File Renderer

* Logging interface

* MIDI input group

Contains MIDI Driver, MIDI Router, MIDI Player and MIDI Events

* MIDI Sequencer documentation

* Settings documentation

* Miscellaneous group

* SoundFont API

Includes Generators, Modulators, Loader etc

* Add version defines and functions to misc group

* Rename setting reference page name to lowercase, for consistency

* Structure the large synth header into subgroups

Also include version.h and ladspa.h in the Synthesizer group.

* Consistent capitalization of usage guide section names

* Some more brief message abbreviation hints

* Custom doxygen layout to rename modules to API Reference

* Sort groups/modules, briefs and members

* Updated documentation styling

* Remove footer, as it takes away valuable vertical space

* Make sure libxslt is only searched if doxygen is available as well

* Also update the styling of the deprecated list

* Mark settings with callbacks as realtime and output this in the generated docs

* Separate new_* and delete_* functions from the rest

* Sync the static Doxyfile with Doxyfile.cmake

Still missing is the integration of the generated fluidsettings.txt,
as that requires a build script currently not available on the
server generating the public API docs.

* Split doxygen INPUT into separate lines, for easier readability

* Move recent changes into separate file

* Move usage guide pages into separate files in doc/usage

* Move examples into doc/examples directory

* Split HTML_EXTRA_FILEs into separate lines

* Use \image for images and improve quality of FluidMixer image

* Use custom \setting{} alias to link to fluid settings

* Smaller cleanup and reformatting of long lines.

* Add generated fluidsettings.txt for fluidsynth.org API doc build

Probably not the final solution, but works for now.

* Hide nav sync toggle button

* Style improvements for small screens
- hide side nav
- hide search box
- make content full height

* Improve styling of field tables (enum values)

* Document how to revert the styling and layout changes

* Add documentation hints to style guide

* Make top links black on hover, not white

* Add missing group brief descriptions

* Remove debug leftover

* Remove obsolete doxygen config options

* Add intro text to deprecated list

* Use SVG for fluid mixer image

* Workaround for doxygen bug with linebreaks in ALIASES

Using \_linebr is not ideal, as it's an internal command. But that
seems to be the most compatible way to specify line breaks in ALIASES
across different doxygen versions at the moment.

* GitHub Action to build the API docs from master branch (FluidSynth#704)

Uploads the complete HTML API docs as an artifact called api_docs.zip

* Remove unused command alias and sync Doxyfile.cmake and Doxyfile

* Settings reference style more consistent with rest of reference pages

* Update generated fluidsettings.txt for API doc build on fluidsynth.org

* Fx unit api (FluidSynth#673)

This PR addresses FluidSynth#669 point 2.1.
It proposes set/get API functions to change/read fx unit parameters.
The deprecated shell commands are updated. Now the command lines have 2 parameters:
- first parameter is the fx unit index.
- second parameter is the value to apply to the fx unit.

* Update owner of the SoundFont registered trademark. (FluidSynth#706)

As of the time of this PR, the SoundFont registered trademark is owned by Creative Technology Ltd.
http://tmsearch.uspto.gov/bin/showfield?f=doc&state=4803:rj74xq.2.1
http://assignments.uspto.gov/assignments/q?db=tm&qt=sno&reel=&frame=&sno=74325965

* Handle GS SysEx messages for setting whether a channel is used for rhythm part. (FluidSynth#708)

Some MIDI files that use the GS standard use channels other than channel 10 as a percussion channel. Currently FluidSynth ignores the messages setting that up, causing notes meant to be played with a drum instrument to be played with a melodic instrument, or vice versa. This patch will partially fix the issue.

Currently the implementation in this patch doesn't cover a specific "quirk" in Roland GS Modules: they seem to remember the last used instrument in the selected mode. This patch simply sets the instrument number to 0 instead.

A test file is attached. If played correctly (with `-o synth.device-id=16`) no out of place drum or piano sounds should be heard.

[wikipedia_MIDI_sample_gstest.mid.gz](https://github.com/FluidSynth/fluidsynth/files/5610727/wikipedia_MIDI_sample_gstest.mid.gz)

* Fix Windows CI

Remove fake pkg-config

* Re-enable unit tests with mingw

and allow them to fail to ensure build artifacts are being published

* Update API doc build to upload to GH pages

* Fix build path in API doc publish step

* Clean existing files in API doc on GH pages

* Fix commit message for deploying API doc

* Also set commit name and email for api doc build commits

* Commit to test API doc build

Will be removed with next commit again.

* Revert "Commit to test API doc build"

This reverts commit fd39f6e.

* Make some strings const (FluidSynth#716)

* Replace g_ascii_strtoll() with FLUID_STRTOL() (FluidSynth#717)

* Elaborate on synth.device-id

* Breaking unit tests for WindowsXP should be fatal

* Update Issue templates to point to GitHub discussion

Co-authored-by: Tom M <tom.mbrt@googlemail.com>
Co-authored-by: jjceresa <jjc_fluid@orange.fr>
Co-authored-by: Fabrice Fontaine <fontaine.fabrice@gmail.com>
Co-authored-by: jjceresa <32781294+jjceresa@users.noreply.github.com>
Co-authored-by: David Runge <dave@sleepmap.de>
Co-authored-by: KO Myung-Hun <komh@chollian.net>
Co-authored-by: Jon Latané <jonlatane@gmail.com>
Co-authored-by: Marcus Weseloh <marcus@weseloh.cc>
Co-authored-by: Nathan Umali <some1namednate@gmail.com>
Co-authored-by: Chris Xiong <chirs241097@gmail.com>
Co-authored-by: Carlo Bramini <30959007+carlo-bramini@users.noreply.github.com>
@derselbst
Member

I must admit that I still have some reservations regarding this feature. However, I have found a potential use-case and would like to hear whether you think it would fit in here:

Think of MIDI files: usually they are built as follows: you have one MIDI track that only plays the piano, and another track that plays only strings. Now you assign the piano track to MIDI channel 0 and the string track to channel 1. Simple and straightforward, great.

Now, I found that the developers of Mario Artist Paint Studio complicate things here: they cut those two tracks into many individual pieces and then randomly assign those tiny tracks to either channel 0 or channel 1. That way, the piano sometimes plays on channel 0 and sometimes on channel 1, while the strings play on some other channel. And they do this with all 16 channels in a completely time-random way! (probably for copy protection reasons)

In order to obtain a nicely rendered multichannel piece of audio, where each instrument really plays on its dedicated stereo channel, one could

  • either reorder buffer assignments before calling fluid_synth_process(), or
  • use this API, JJC is suggesting.

Any thoughts?

@jjceresa
Collaborator Author

And they do this with all 16 channels in a completely time-random way! (probably for copy protection reasons)

I think they do that to simulate instruments moving around, but of course it would be preferable to ask the developers directly.

The buffer reordering before calling fluid_synth_process() that you describe can be driven by the mapping set through this API, which resides in the mixer.
For example, when some user code is about to call fluid_synth_process(), it could call the getter functions exposed by the synth's mixer to obtain the mapping information (dry and fx) and do the corresponding buffer assignment.

I don't see any incompatibility between the suggested API and the fact that the mapping it sets could be exploited outside of the mixer.

@derselbst
Member

I don't see any incompatibility between the suggested API and the fact that the mapping it sets could be exploited outside of the mixer.

I'm still struggling with the redundancy: one could simply reorder the buffers provided to fluid_synth_process(), or one could use this new API. I know there is a small difference between the two approaches: your API nicely supports realtime mapping, which may be useful when e.g. notes are still playing in their release phase. But I'm still not sure whether this justifies this kind of redundancy.

@mawe42 Do you have any preference, comment or thought about my comment above? If not, no problem. Then I would try to implement that kind of "channel unscattering" for Mario Artist Paint Studio by
a) using JJC's proposed API, and
b) reordering buffers directly.
(But this would probably take a few weeks/months...)

@mawe42
Member

mawe42 commented Feb 22, 2021

Sorry for the late reply! My initial reaction to your Mario use-case was: that sounds like a perfect use-case for a more elaborate MIDI router. Something stateful, so that you can store values from previous messages and use them as replacements in following messages. It might be overkill... but it would probably be a fun project :-)

Thinking about it some more, it sounds like a job for a short Python script, reading the original MIDI data and spitting out a cleaned-up version with each instrument on its own track. Why would you want to convert it on the fly in FluidSynth?

@derselbst
Member

There surely are various approaches to solve my problem. I was just trying to find a possible use-case for this API. But I'm still not convinced, sorry :/

@mawe42
Member

mawe42 commented Feb 22, 2021

Same here. Of course it could be a(nother?) use-case for this API, but it feels a little bit like looking for a problem to fit the solution.
