Realtime MIDI channel and fx mapping while musicians are playing. #669
A typo correction: while a MIDI instrument is being played, it should be possible to change the mapping of the fx1 output.
These new API functions will look like the current ones but with an additional parameter. For example: Doing it this way maintains backward API compatibility, but new applications should only make use of the new functions. What do you think?
The API you suggest would be ok for me. I have a slight preference for calling it However, I am struggling in general with whether we need that overall flexibility. Perhaps it would be helpful to propose the new APIs for 1.1 and 2.1 on the mailing list.
This overall flexibility is appreciated when the same synth instance is used by more than one musician simultaneously, each playing their own MIDI instrument. API 1.1 helps mix instruments to the appropriate loudspeakers or headphones.
Yes, I will propose this.
@jjceresa Is that a use-case that you have encountered yourself? Or do you know anybody who has expressed interest in that use-case? Edit: I mean the use-case "when the same synth instance is used by more than one musician simultaneously, each playing its own MIDI instrument". Is that something you need? Or know someone who needs this? And if so... why? :-)
Yes, having only one software synth instance able to play multiple MIDI inputs is something I need. In a home studio, this allows grouping connections with existing external MIDI hardware (for example, 2 keyboards).
To me, that sounds like the "custom audio processing before audio is sent to audio driver" use-case as provided by
As drafted in point 1 of the first comment, this mapping API is intended to be placed at

/**
 * Set the mixer mapping of MIDI channels to audio buffers.
 * This mapping allows:
 * (1) any `MIDI channel` to be mapped to any audio `dry buffer`.
 * (2) any `MIDI channel` to be mapped to any `fx unit input`.
 * (3) any `fx unit output` to be mapped to any audio `dry buffer`.
 *
 * The function allows setting mappings (1), (2) and (3) together or
 * independently:
 *
 * 1) Mapping between MIDI channel `chan_to_out` and the audio dry output at
 * index `out_from_chan`. If `chan_to_out` is -1 this mapping is ignored;
 * otherwise the mapping is done, with the following special case:
 * if `out_from_chan` is -1, this disables dry audio for this MIDI channel.
 * This allows playing fx only (with dry muted temporarily).
 *
 * @param synth FluidSynth instance.
 * @param chan_to_out MIDI channel to which `out_from_chan` must be mapped.
 * Must be in the range (-1 to MIDI channel count - 1).
 * @param out_from_chan audio output index to map to `chan_to_out`.
 * Must be in the range (-1 to synth->audio_groups - 1).
 *
 * 2) Mapping between MIDI channel `chan_to_fx` and the fx unit input at
 * index `fx_from_chan`. If `chan_to_fx` is -1 this mapping is ignored;
 * otherwise the mapping is done, with the following special case:
 * if `fx_from_chan` is -1, this disables fx audio for this MIDI channel.
 * This allows playing dry only (with fx muted temporarily).
 *
 * @param chan_to_fx MIDI channel to which `fx_from_chan` must be mapped.
 * Must be in the range (-1 to MIDI channel count - 1).
 * @param fx_from_chan fx unit input index to map to `chan_to_fx`.
 * Must be in the range (-1 to synth->effects_groups - 1).
 *
 * 3) Mapping between the `fx unit output` (which is mapped to `chanfx_to_out`)
 * and the audio dry output at index `out_from_fx`. If `chanfx_to_out` is -1,
 * this mapping is ignored.
 *
 * @param chanfx_to_out indicates the fx unit (which is currently mapped
 * to `chanfx_to_out`) whose output must be mapped to `out_from_fx`.
 * Must be in the range (-1 to MIDI channel count - 1).
 * @param out_from_fx audio output index that must be mapped to the fx unit
 * output (which is currently mapped to `chanfx_to_out`).
 * Must be in the range (0 to synth->audio_groups - 1).
 *
 * @return #FLUID_OK on success, #FLUID_FAILED otherwise
 */
The 3 mappings described in the API (see previous comment) are represented in branch master
All 3 mapping types are set by the API in realtime:
I must admit I'm still wondering if the use-case justifies the additional public API functions and new features. I imagine that most people who want or need this kind of flexibility are already using either multi-channel output with something like jack, or use multiple fluidsynth VST or DSSI instances in a plugin host. In both cases, channel routing and different external effects per channel are already very easy to configure. The first part (1) of the proposal has its own merit, I guess. We already have a limited ability to change the channel routing, and 1.1 makes this more explicit and flexible. I think if we implemented that, then we should also change the fluidsynth command-line options and rework the audio-groups / audio-channels settings. But (2) sounds a little too much like going down the route of implementing more and more things that jack and plugin hosts already do very well. I feel like it would broaden the scope of fluidsynth too much. But I also don't want to be the guy who always rejects new and larger extensions to the codebase. Maybe I'm too cautious here... so please don't take this as a downvote. It's more me thinking out loud about the scope of fluidsynth.
I am not using jack, VST, or DSSI. I am just directly using a multi-channel-capable audio driver (i.e. using a multi-channel audio device card). Please see #667
Point 2 is about internal fx unit parameters. When 2 distinct MIDI instruments are connected to 2 distinct internal reverb units, I would expect these 2 reverb units to have different parameters. Currently, all internal fx units have the same parameters, which is a limitation, particularly when these 2 instruments are routed to distinct stereo speakers. As the issue is about this internal fx unit limitation, it falls within the scope of fluidsynth. Please also note that the API implementation (PRs 672, 673) requires only a small amount of code.
I didn't want to comment first to avoid biasing Marcus. However, I do share his concerns. Let me go one step back to your use-case (the practical part of it, not the initial theoretical one):
Ok, every musician plays his instrument on one MIDI channel. So we need 4 stereo channels, i.e.
Understood. But let's replace "instruments" with "MIDI channels". This use-case you have is absolutely valid. However, you're providing a bottom-up solution for it by adding more complexity into Instead, I would have voted for a top-down solution:
It is up to you whether the client program in 1. is your own demo program or fluidsynth's command shell. And ofc. 2. requires WaveOut and dsound drivers to learn to support Now coming back to your use-case: While the mapping of MIDI channels to audio channels is indeed rigid in The only drawback of my solution I see is that it would also affect voices that are already playing. Whereas your solution only applies the new mapping to new voices. But does this issue really justify the added complexity of this PR? Esp. since we are talking about "temporary mappings", as far as I understand. The drawback of your proposal is that it duplicates functionality (i.e. flexibility of buffer mapping) that is already provided by So, in summary, I'm sorry to say, but given this API proposal, I don't see any "new features" that can't be already achieved with
The changes proposed in #673 are actually ok for me, but let's talk about this later separately.
Yes,
Yes, the rvoice_mixer realtime MIDI channel mapping solution allows easy
Please be aware that I am sensitive to and aware of the power of Now coming back to the application you proposed above (a client that administrates the
In summary,
Ok, but I don't understand why this is so important? Is it only to avoid potential audio artifacts? If so, wouldn't a simple fade in / fade out easily solve this? Also, there is another point that I still don't get: Assuming you have a pianist playing on So, I really think what you need is the Jack audio server. It's solving exactly this issue. It should be compilable for Windows as well, have you had a look at it?
For example, when a musician plays a phrase, the mapping allows him to temporarily play dry audio "solo" (the fx is muted) or fx audio "solo" (the dry audio is muted) and then come back to both (dry + fx) during the same phrase (using a foot or key switch).
When musicians play together they never use headphones; they use loudspeakers to be able to hear each other, because they need mutual listening. So mapping 2 instruments to the same output only makes sense if this output is connected to
When a musician plays an instrument he is busy with this instrument and doesn't want to be disturbed by the use of a GUI application. Jack is well suited for predefined offline audio I/O connection settings, but not adapted to a musician playing in realtime.
JJC, excuse me for being so persistent here, but I would really like to understand if we are talking about a concrete need you have or if this is more like "it would probably be nice for other people". So for me to understand where you are coming from:
Yes
I play with other musicians and would like to do so during training lessons using only one instance and only one multi-channel audio card connected to loudspeakers. Also, I would like to keep the hardware/software complexity to a minimum when using this alone (at home).
Yes, for example, when playing 2 instruments (i.e. a flute [+ a bit of reverb] accompanied by a piano [+ a bit of reverb]) this gives the dimension (illusion) of 4 instruments being present. This is also possible when the 2 instruments are played by only one musician on only one MIDI keyboard.
Yes, for the "solo fx/dry on/off" experiment I used a tiny handcrafted application that intercepts MIDI events coming from the MIDI driver and then calls the fluidsynth API. Doing this kind of "solo" experiment using the fluidsynth console command line doesn't give the expected real-time feedback.
So, given your 10 fingers and 2 feet, how exactly do you intend to change the channel mapping while playing the piano? You talked about a footswitch. That's a little too vague. I would like to get some more details for a better understanding: What does the footswitch trigger? A shell command? Or an API call? And does it trigger a simple pre-defined mapping, or does it somehow dynamically react to your situation? (Sorry, I really have no clue.)
While the instrument's notes are being played with the hands, the foot-switch (or hand push-button switch) triggers a pre-defined logical mapping that uses the MIDI channel of the current note. As you say, the reaction is dynamic to the playing situation (i.e. based on the current MIDI channel of the instrument played by the hand). The mapping is executed via an API call.
Ok, how about using Jack's API for manipulating ports to rearrange the mapping? And if you wanted to temporarily disable fx, you could add a default modulator whose secondary source is a switch that pulls CC91 and CC93 to zero. Sorry for being so nit-picky here, but I really think that this channel mapping use-case should be implemented on a high level. Not by adding more complexity into rvoice_mixer and exposing it to the user. I'm afraid that this will become a burden as soon as we need to change an implementation detail deep down in the mixer and we find that for some reason we cannot do this because it would break the API usage.
Ok, I understand your point of view as a maintainer. Unfortunately, Jack covers only a small part of the real needs of musicians using MIDI. Also, please be aware that actually
Just a note: I still don't understand why you think this PR is adding "more complexity into rvoice_mixer". This PR does a simple, straightforward substitution of the expression So, we can close this PR. I will continue with a custom version of fluidsynth.
Currently, there are no constraints of what and how we map things in rvoice_mixer. The user doesn't need to know / doesn't need to care about that. Thus we can use a simple fixed mapping. Now you want to make this mapping variable and expose it to the user. Hence we will get a bigger API and constrain ourselves to rvoice_mixer's current implementation. If there were no other options to achieve your use-case, I would buy it. However, given the number of alternative approaches on a higher level (
Seems like we need a third (or fourth) opinion. @mawe42 What do you think? Are you "still wondering if the use-case justifies the additional public API functions and new features."? Should we discuss that feature on the mailing list? You can also tell me I'm mistaken, then I will give up my reservations on that topic. |
I'm really in two minds about this. On the one hand I think that the MIDI channel to buffer mapping should be limited to two modes of operation:
Option 1 is probably what 90% of users will ever need. Option 2 is for the small number of people who want to use fluidsynth for advanced things like real-time multi-channel live performance. For those advanced use-cases, there are really good tools available (Carla, jack + friends, Ableton Live, ...) that can simply take fluidsynth multi-channel output (or multiple fluidsynth instances) and offer very flexible and user-friendly real-time control for live performance. And if you miss functionality (for example to switch an instrument to a different output, controlled via a MIDI foot pedal), you can either search for a plugin that does what you want, or quickly write a plugin yourself. Using those real-time performance hosts also has the advantage that you can add any effect to the outputs, and set them up so that you can control the effects via MIDI foot pedals or other controllers as well. So... when I follow this train of thought, I would argue against these changes and would instead propose to rip out existing functionality. Get rid of the (Side note: I proposed a rewrite of the LADSPA system because I wanted to use additional effects with fluidsynth in my embedded application. It served quite well until recently... but I now have some additional requirements that mean I need more flexibility. So I will switch over to jack and multi-channel output instead. Which is something I should have done from the beginning, I think.) But I said I'm in two minds about this. So the other way to think about this is: we already have (most of) the features that JJC wants, so let's expose them to the user in the most useful way possible. We already offer limited control over the mapping via the And we already have the separate fx units, so adding new API functions to control their parameters separately makes sense. Here I would like to ask: if we allow individual fx unit control via the API and shell, should we also expose this via the settings?
And if we want to actively support real-time manipulation, to support the use of standalone fluidsynth in live performances, then we need to provide a way for non-API users to access the functionality as well, I think. And no, the shell does not really count. :-) So maybe we need to implement an OSC handler for real-time live performance control? |
Ok, I'll buy it. I'll review #672 in a more detailed manner tomorrow.
Cleaning it up... do you have something specific in mind?
IMO, no. I see the settings more like a basic initialization of the synth, that should be easy to understand and use. If one needs to set details, one should use the synth API. Esp. since you might want to manipulate those parameters in real-time. (I never liked these "real-time" settings. They only make synth API calls under the hood. I prefer direct API usage.)
Open Sound Control - sounds interesting. I never really had a close look into it, so I don't know. But I think we should keep this kind of real-time manipulation at a minimum. As you said initially, there is already a bunch of software out there for that purpose.
Thanks. Please, no need to hurry.
Thank you both for your useful feedback and your time.
That is right for the default settings value of
Right. I am aware that this new mapping API will move the constraint from the user toward the developer side, which is now constrained to the current rvoice_mixer implementation. I am also 100% aware of the fear about the risk of breaking the mapping API if we change some details in rvoice_mixer. Actually, the rvoice_mixer implementation behaviour is fully and only defined by the semantics of the settings
Maybe you made a typo and you are talking about
These settings initialize all fx units with the same values. Individual fx unit initialization seems unnecessary. (Please note that the settings API (key, value) accepts only one
No, I really mean
That's what I mean. The relationship between So in my opinion, if we have shell commands to give users explicit control over the MIDI channel to audio channel, MIDI channel to fx unit, and fx unit to audio channel mappings, then that should be the one and only way to configure different channel mappings. Completely remove the So in essence I think we should design this feature from the user perspective. What do users need, how can they achieve what they want. And then provide one and only one way to configure it for each interface (fluidsynth exec, API). And then maybe think about adding OSC or MIDI SysEx commands for the real-time control that you wanted. |
Thinking about this some more. Instead of simply implementing what somebody currently needs, I would rather design this feature. So think about what usage scenarios we want to support. Then decide what the best way would be to support and (more importantly) how to actually configure fluidsynth for those scenarios. I can think of the following:
(Edit: added fourth option) Are there any more? |
The scenarios you describe can already be achieved with
Ok, sounds good. But in this case, we should continue this discussion on the mailing list. I don't think that we three can reach a consensus here that covers all use-cases while still being easier to understand and use than the current implementation.
That might be, but it's not what I was thinking about. I tried to look from the user perspective. So try to imagine what people need. Then decide which of those use-cases we want to support. Then decide how the user interface should work. And only then look at the existing implementation and decide what's possible and how. I don't know... it could also be that I'm thinking too big here.
Good idea.
We will see once brought to the mailing list. Perhaps you Marcus could / should start a discussion. I'm probably too biased. |
I am not sure I understand. You probably mean:
> So think about what usage scenarios we want to support.

I see only 2 types of scenarios,
At this stage, for the user I see only the default: All MIDI channels on a single stereo output with effects mixed in.
Please ignore this sentence. |
I've just tried to write a post for fluid-dev to start the discussion about this feature, but I'm having a really hard time trying to come up with a good explanation and (more importantly) with a good question to ask the community. For starters, I was unsure which audio drivers actually support multi-channel output at the moment. Because I couldn't find any documentation about this, I created the following wiki page: https://github.com/FluidSynth/fluidsynth/wiki/Audio-Drivers Please have a look at that page and let me know if it is correct. I think such a page would be useful for our users, so I would like to link it into the main documentation in the wiki. Any objections to that? So, coming back to the discussion... I'm really unsure what to ask the community. JJC has said he has a real use-case for real-time channel mapping. And we seem to agree that most of what we need to support this is already available in FS, so it would be a good change to make. The open question is how the API for this feature should look and behave. And here I'm really unsure what JJC's plans for the real-time control are. @jjceresa can you elaborate a little more on how you intend this real-time control to work? I mean the actual practical usage of the feature? Would you create a little script that listens to MIDI events and sends text commands to the fluid shell via telnet? Or would you write a wrapper program that uses fluidsynth via the API and implements your own MIDI event handler to control the channel mapping? And there is the "big question" I came up with: instead of patching another layer on top of the current multi-channel logic, shouldn't we design the multi-channel output from the ground up instead? Which also means revising the I'm unsure on how to proceed here. I feel quite strongly that adding the extra layer of complexity in this PR on an already hard to understand feature is problematic.
So I really think we should come up with a new and unified way to control the MIDI channel to output channel and MIDI channel to effects-unit mapping. Once that is clean, we should add the real-time controls to change the mapping. |
Good idea, thanks for this page.
waveout (like dsound) supports multi-channel too. A lot of unix-like drivers (alsa, ...) do not yet support multi-channel, but this could be done. For example, for alsa, I looked at how to add this support but never proposed a PR because I cannot test it locally here (please see #665).
Thanks, I've updated the page.
Sure, I just wanted to document the current state. |
I think first the actual fx mixer behaviour should be documented a bit in plain text. I can do that. This should show where the mixer sits in the overall audio path. Also, this document must explain that "multi-channels" in fluidsynth is simply Then later it will be possible to expose the API proposal, based on this document, to the mailing list.
I must admit I still don't understand where your use-case is coming from. I mean I understand what you want to achieve, but I don't understand why you want to achieve it in this way. But maybe that isn't really important... Rewriting the multi-channel configuration interface has merit in itself, and your use-case would naturally benefit from that, I think. I think I have an idea how to bring this to the mailing-list now. I will write a proposal what I think would be a really nice and clean way to configure the channel and buffer mapping. |
And I must admit that I never saw a use-case for multiple synth instances. However, the use-cases JJC has just described seem like a very suitable case for creating two synth instances. The purpose of squashing this functionality into a single instance is not quite clear to me. |
Right, and squashing these 2 synth instances into one requires only one audio driver driving only one audio card (having at least 2 stereo outputs, of course). Otherwise, with 2 synth instances we are forced to create 2 audio drivers, which requires 2 distinct audio cards, and we also lose any possibility of mapping/mixing any MIDI channel to any audio output.
This PR addresses #669 point 2.1. It proposes set/get API functions to change/read fx unit parameters. The deprecated shell commands are updated. The commands now take 2 parameters: the first parameter is the fx unit index; the second parameter is the value to apply to that fx unit.
But that seems to be the most compatible way to specify line breaks in ALIASES accross different doxygen versions at the moment. * GitHub Action to build the API docs from master branch (FluidSynth#704) Uploads the complete HTML API docs as an artifact called api_docs.zip * Remove unused command alias and sync Doxyfile.cmake and Doxyfile * Settings reference style more consistent with rest of reference pages * Update generated fluidsettings.txt for API doc build on fluidsynth.org * Fx unit api (FluidSynth#673) This PR addresses FluidSynth#669 point 2.1. It proposes set/get API functions to change/read fx unit parameters. The deprecated shell commands are updated. Now the commands line have 2 parameters: - first parameter is the fx unit index. - second parameter is the value to apply to the fx unit. * Update owner of the SoundFont registered trademark. (FluidSynth#706) As of the time of this PR, the SoundFont registered trademark is owned by Creative Technology Ltd. http://tmsearch.uspto.gov/bin/showfield?f=doc&state=4803:rj74xq.2.1 http://assignments.uspto.gov/assignments/q?db=tm&qt=sno&reel=&frame=&sno=74325965 * Handle GS SysEx messages for setting whether a channel is used for rhythm part. (FluidSynth#708) Some MIDI files that uses the GS standard uses channels other than channel 10 as percussion channel. Currently FluidSynth ignores the messages setting that up, causing notes meant to be played with a drum instrument played with a melodic instrument or vice versa. This patch will partially fix the issue. Currently the implementation in this patch doesn't cover a specific "quirk" in Roland GS Modules: they seem to remember the last used instrument in the selected mode. This patch simply sets the instrument number to 0 instead. A test file is attached. If played correctly (with `-o synth.device-id=16`) no out of place drum or piano sounds should be heard. 
[wikipedia_MIDI_sample_gstest.mid.gz](https://github.com/FluidSynth/fluidsynth/files/5610727/wikipedia_MIDI_sample_gstest.mid.gz) * Fix Windows CI Remove fake pkg-config * Re-enable unit tests with mingw and allow them to fail to ensure build artifacts are being published * Update API doc build to upload to GH pages * Fix build path in API doc publish step * Clean existing files in API doc on GH pages * Fix commit message for deploying API doc * Also set commit name and email for api doc build commits * Commit to test API doc build Will be removed with next commit again. * Revert "Commit to test API doc build" This reverts commit fd39f6e. * Make some strings const (FluidSynth#716) * Replace g_ascii_strtoll() with FLUID_STRTOL() (FluidSynth#717) * Elaborate on synth.device-id * Breaking unit tests for WindowsXP should be fatal * Update Issue templates to point to GitHub discussion Co-authored-by: Tom M <tom.mbrt@googlemail.com> Co-authored-by: jjceresa <jjc_fluid@orange.fr> Co-authored-by: Fabrice Fontaine <fontaine.fabrice@gmail.com> Co-authored-by: jjceresa <32781294+jjceresa@users.noreply.github.com> Co-authored-by: David Runge <dave@sleepmap.de> Co-authored-by: KO Myung-Hun <komh@chollian.net> Co-authored-by: Jon Latané <jonlatane@gmail.com> Co-authored-by: Marcus Weseloh <marcus@weseloh.cc> Co-authored-by: Nathan Umali <some1namednate@gmail.com> Co-authored-by: Chris Xiong <chirs241097@gmail.com> Co-authored-by: Carlo Bramini <30959007+carlo-bramini@users.noreply.github.com>
|
I must admit that I still have some reservations regarding this feature. However, I have found a potential use-case and would like to hear whether you think it would fit in here: Think of MIDI files: usually, they are built as follows: you have one MIDI track that only plays the piano. You have another track that plays only strings. Now you assign the piano track to MIDI channel 0, and the string track to channel 1. Simple and straightforward, great. Now, I found that the developers of Mario Artist Paint Studio complicate things here: they cut those two tracks into many individual pieces. And then they randomly assign those tiny tracks to either channel 0 or channel 1. That way, the piano sometimes plays on channel 0 and sometimes on channel 1, meanwhile the strings play on some other channel. And they do this with all 16 channels in a completely time-random way! (probably for copy protection reasons) In order to obtain a nicely rendered multichannel piece of audio, where each instrument really plays on its dedicated stereo channel, one could reorder the buffer assignments before calling `fluid_synth_process()`.
Any thoughts? |
I think they do that to simulate the moving of instruments, but of course it would be preferable to ask the developers directly. Reordering the buffer assignments before calling `fluid_synth_process()` to obtain the rendering you described can be controlled. I don't see any incompatibility between the suggested API and the fact that the mapping set by this API could be exploited outside of the mixer. |
I'm still struggling with the redundancy: one could simply reorder the buffers provided to `fluid_synth_process()`. @mawe42 Do you have any preference, comment or thought about my comment above? If not, no problem. Then I would try to implement that kind of "channel unscattering" for Mario Artist Paint Studio by reordering the buffers passed to `fluid_synth_process()`. |
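The buffer-reordering idea can be sketched without the mapping API at all. In the sketch below, `mock_render()` is a self-contained stand-in for `fluid_synth_process()` (assumed here to render audio group g into `out[2*g]`/`out[2*g+1]`); the host remaps groups to physical buffers simply by permuting the pointer array it passes in. All names are illustrative:

```c
#include <assert.h>

#define GROUPS 2
#define FRAMES 4

/* Stand-in for fluid_synth_process(): writes group g's audio into
   out[2*g] (left) and out[2*g+1] (right). Group 0 emits 1.0, group 1 emits 2.0. */
static void mock_render(float *out[])
{
    for (int g = 0; g < GROUPS; g++)
        for (int i = 0; i < FRAMES; i++)
        {
            out[2 * g][i]     = (float)(g + 1); /* left  */
            out[2 * g + 1][i] = (float)(g + 1); /* right */
        }
}

/* Physical stereo buffers: bufs[b][0] = left, bufs[b][1] = right */
static float bufs[GROUPS][2][FRAMES];

static void render_swapped(void)
{
    /* Route group 0 into physical buffer 1 and vice versa,
       purely by permuting the pointers handed to the renderer. */
    float *out[GROUPS * 2] =
    {
        bufs[1][0], bufs[1][1], /* group 0 -> physical buffer 1 */
        bufs[0][0], bufs[0][1], /* group 1 -> physical buffer 0 */
    };
    mock_render(out);
}
```

This is the "redundancy" in question: the same routing effect is achievable host-side, with no synth-side state.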
|
Sorry for the late reply! My initial reaction to your Mario use-case was: that sounds like a perfect use-case for a more elaborate MIDI router. Something stateful, so that you can store values from previous messages and use them as replacements in following messages. It might be overkill... but would probably be a fun project :-) Thinking about it some more, it sounds like a job for a short Python script, reading the original MIDI data and spitting out a cleaned-up version of it with each instrument on its own track. Why would you want to convert it on the fly in FluidSynth? |
|
There surely are various approaches to solve my problem. I was just trying to find a possible use-case for this API. But I'm still not convinced, sorry :/ |
|
Same here. Of course it could be a(nother?) use-case for this API, but it feels a little bit like looking for a problem to fit the solution. |
A proposal to take advantage of the mixer and fx unit capabilities.

1) Currently in FluidSynth, the `mixer` offers the potential to map a distinct `MIDI instrument` to a distinct stereo `buffer`.

2) Similarly, a distinct `MIDI instrument` can be mapped to a distinct `fx unit input`.

1) When the musician thinks about a MIDI instrument mapping configuration, he decides on 3 mappings at the MIDI level:
- MIDI chan x to dry buf i (currently i = x % `synth.audio-groups`)
- MIDI chan x to fx unit j (currently j = x % `synth.effects-groups`)
- fx j to dry buf k (currently k = j % `synth.audio-groups`)
Currently these mappings are fixed and cannot be changed in real time while MIDI instruments are playing.
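The three fixed modulo rules above can be sketched as follows (a minimal illustration only, not FluidSynth code; `audio_groups` and `effects_groups` stand for the `synth.audio-groups` and `synth.effects-groups` settings):

```c
#include <assert.h>

/* MIDI chan x -> dry buf i */
static int chan_to_dry(int chan, int audio_groups)
{
    return chan % audio_groups;
}

/* MIDI chan x -> fx unit j */
static int chan_to_fx(int chan, int effects_groups)
{
    return chan % effects_groups;
}

/* fx j -> dry buf k */
static int fx_to_dry(int fx, int audio_groups)
{
    return fx % audio_groups;
}
```

With e.g. `synth.audio-groups=2`, every odd channel is forced onto dry buffer 1 and every even channel onto dry buffer 0, with no way to override individual channels.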
1.1) A new simple API should offer mapping flexibility in real-time situations. For example, this allows:
MIDI instrument i1 mapped to dry1 buffer.
MIDI instrument i1 mapped to fx1 input, and fx1 output mapped to dry1 buffer.
MIDI instrument i2 mapped to dry2 buffer.
MIDI instrument i2 mapped to fx2 input, and fx2 output mapped to dry2 buffer.
While the MIDI instruments are playing, it should be possible to change the mapping of the fx1 output
to the dry2 buffer, so that we can hear the fx1 of instrument i1 leave dry1 and become mixed
with the fx2 of instrument i2.
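One way such a remapping API could look is sketched below. The fixed modulo rules become per-channel tables that can be changed while playing; all names and the table-based design are illustrative assumptions, not the actual proposal:

```c
#include <assert.h>

enum { MAX_CHANNELS = 16, AUDIO_GROUPS = 2, EFFECTS_GROUPS = 2 };

static int chan_dry_map[MAX_CHANNELS]; /* MIDI chan -> dry buffer */
static int chan_fx_map[MAX_CHANNELS];  /* MIDI chan -> fx unit    */
static int fx_dry_map[EFFECTS_GROUPS]; /* fx output -> dry buffer */

static int set_chan_dry_map(int chan, int buf)
{
    if (chan < 0 || chan >= MAX_CHANNELS || buf < 0 || buf >= AUDIO_GROUPS)
        return -1; /* analogous to FLUID_FAILED */
    chan_dry_map[chan] = buf;
    return 0;
}

static int set_chan_fx_map(int chan, int fx)
{
    if (chan < 0 || chan >= MAX_CHANNELS || fx < 0 || fx >= EFFECTS_GROUPS)
        return -1;
    chan_fx_map[chan] = fx;
    return 0;
}

static int set_fx_dry_map(int fx, int buf)
{
    if (fx < 0 || fx >= EFFECTS_GROUPS || buf < 0 || buf >= AUDIO_GROUPS)
        return -1;
    fx_dry_map[fx] = buf;
    return 0;
}
```

Remapping the fx1 output from dry1 to dry2 while playing would then be a single call, e.g. `set_fx_dry_map(0, 1)` (using 0-based indices).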
Another real-time feature is the ability for a musician to temporarily play only fx (dry is silent)
or only dry (fx is silent).
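The dry-only / fx-only feature amounts to two mute flags applied when the dry and fx signals of a channel are mixed. A minimal sketch (names and design are illustrative, not the proposed API):

```c
#include <assert.h>

static int dry_muted = 0; /* 1: play only fx  */
static int fx_muted  = 0; /* 1: play only dry */

/* Mix one sample of a channel's dry and fx signals, honoring the mutes. */
static float mix_sample(float dry, float fx)
{
    return (dry_muted ? 0.0f : dry) + (fx_muted ? 0.0f : fx);
}
```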
2.1) Now we need a new API that allows changing a particular fx unit parameter.
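A per-fx-unit parameter setter/getter pair could look like this sketch (function and parameter names are illustrative assumptions; `roomsize` is just one example parameter):

```c
#include <assert.h>

#define FX_UNITS 2

static double fx_roomsize[FX_UNITS];

/* Set the roomsize of one fx unit; rejects out-of-range unit or value. */
static int fx_set_roomsize(int fx_unit, double value)
{
    if (fx_unit < 0 || fx_unit >= FX_UNITS || value < 0.0 || value > 1.0)
        return -1; /* analogous to FLUID_FAILED */
    fx_roomsize[fx_unit] = value;
    return 0;
}

static double fx_get_roomsize(int fx_unit)
{
    return fx_roomsize[fx_unit];
}
```

This matches the updated shell-command shape described above: one fx unit index plus one value per command.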
I will propose a PR for these new APIs (1.1) and (2.1).