Better rawMIDI parser and support for Virtual RawMIDI port #1204
Conversation
This change is great for me, namely the ability to target a "Virtual Port". I often use a text-based MIDI sequencer I wrote and have been really interested in running one directly from my norns to feed MIDI into my scripts. The virtual port introduced here does exactly what I need. I tested
There's no need to re-open PRs if you'd like to squash or edit commits. One way to do that is to rebase your branch on the upstream, squashing commits as necessary. See the following link for an example of rebasing: https://github.com/servo/servo/wiki/Beginner%27s-guide-to-rebasing-and-squashing (so we can keep the discussion continuous). As to your new PR, consider the above comments. I also have a follow-up question to #1202 (comment): how easy will it be for a norns user to configure rtpmidi and/or connect other programs to matron?
Ok... I'm failing at collapsing all the commits into one after they've been pushed. I apologize for that.
How do I add
i think having an rtmidi endpoint is a great use case. but agree that there needs to be some way for end users (not developers) to achieve this. (and of course - this can be left for future work.) re: fluidsynth code. we unfortunately can't do this. fluidsynth is GPL2. norns is GPL3 which is more permissive, it's not "backwards compatible." we would have to adopt the older license for norns, or ask the fluidsynth authors for a "GPL2+" license. i'd prefer to lose the parsing code if possible. we are already doing parsing in Lua. if we need to extend the lua parser or pass more bytes to it, let's just do that.
add it to the source list here
07dc6aa to 64ea80b
Isn't parsing in lua at this level extremely slow? (I've never used lua before, but I'm guessing that since it's a scripting language there could be a serious performance hit.)
The AUTHORS documentation points to @derselbst as the contact.
i guess it depends what you mean by "extremely" - it is slower for sure. i doubt it's significant. the main performance hits in lua come from memory management (e.g. string processing.) parsing midi messages in lua happens here: this table of 3 bytes is just copied to the stack. but in any case - events generated from your new code end up hitting the lua parsing code anyway, so it's a totally moot point. the only way around that would be to have a separate lua glue function for each midi event type. as you've pointed out, there are cases that we are missing, like double-precision CCs (i think?).
[ed] ok, this seems to indicate that there are plenty of opportunities for e.g. clock bytes to show up in other control sequences: can anyone confirm that this breaks our current system? if it seems best to handle everything in C and add many, many more cases to the event processing loop (beyond generic MIDI_EVENT) then that is fine with me, but let's not pretend we're speeding things up by adding a second parsing layer. paging @derselbst - the question here is whether it would be OK to include some of fluidsynth's MIDI stream parsing code in a GPL3 project. (fluidsynth being GPL2.) none of us are lawyers, but it seems like it is "technically" not allowed without specific permission.
Please don't get me wrong, I'm not pretending that a second parse is faster. It's for sure more expensive than the previous implementation. I'm not familiar with Lua at all, so I thought resolving this at a lower level would be faster than doing the same inside an interpreter. My only experience with interpreted languages is through Python, which is surprisingly slow. The reasoning for bringing the FluidSynth parser to the table is mostly that it's a mature project that already resolves a lot of common and edge cases.
oh indeed, understood. sorry if that sounded aggressive. the rest of my comment is more relevant maybe: if we are gonna do all stream parsing in C, then the C->lua interface should deal with parsed data and not raw bytes. this of course is way less convenient when it comes to creating that interface. (or i dunno, maybe i'm wrong and there is some benefit to having the C side catch edge cases like interleaved messages, dropped bytes etc, and have the lua interface handle "cleaned-up" 3- or 4-byte packets.) in any case, i don't think performance is the main issue - certainly would need profiling to support any decision around that.
It has been a very interesting journey of discoveries : ) (I'm really new to audio/midi, I have more experience in the CG world). I noticed that only USB devices send 3-byte packages. They look like this:
While a program like
These packages come in pairs, in groups of 3, or even in 5 bytes, depending on whether the latest event type is different and whether other bytes can be packed into the same package. It took me a while to understand that the previous parser reading every 3 bytes was the reason I was getting events only when channels changed, and why the order of the parameters was usually weird. As I said, I'm new to audio/midi processing, so very probably this is totally obvious. I guess I'm sharing my own learning process : ) In any case I leave it to the experts : )
wooooo ok gotcha. i had no idea that the format was totally different and that ALSA would omit status bytes like that. how horrible! (obviously i am no ALSA midi expert either, most of my MIDI experience is with hardware.) much becomes clear. |
If I remember correctly the USB encapsulation for MIDI disallows "running status", which is why the messages are always 3 bytes (in reality I believe the full USB encapsulation is 4 bytes, which includes the endpoint information). It looks like the virtual raw MIDI might be using running status, which is disappointing. (apologies for the open/close on the ticket, I was adding comments via a phone)
thanks @ngwese. "running status" was the key concept that i had totally forgotten. but if that is the only edge case, it seems like it would be really simple to handle in the parser: check the top bit. in terms of updating the existing system to handle this: it seems like it would be sufficient to (A) include a flag for "running status" in the midi event, or even (B) just pass the 2 bytes to lua and let the lua parser check for the status bit on the first byte.
Glad that you guys care :) I'm not a lawyer either, but note that fluidsynth is LGPL-2.1 licensed. And according to the license compatibility matrix: "LGPLv2.1 gives you permission to relicense the code under any version of the GPL since GPLv2." So, I would say, just take whatever code you need, keep the "FluidSynth - Peter Hanappe and others" comment (so that people know where this is coming from) and everything will be fine :) |
Thank you @derselbst |
i think the present consensus is, this needs a little more work:
if we do end up with edge cases like corrupted or split MIDI packets, then we could revisit adding a more robust parsing mechanism. oh, BTW: you mention above that you have seen 5-byte packets as well. that seems strange; or at least, it doesn't fit with the running-status behavior. can you capture an example of this so we can determine what's going on and how best to handle it?
The 5 bytes were a header, 2 pairs, and another 2 pairs, but they all resolve to the same event. About your points: is there someone willing to take on that work? My original goal wasn't to deal with byte parsing but to get Virtual RawMIDI working; it just happens that the parser was the issue. I found the original parsing a bit confusing to read and deal with, so I went for the most efficient, low-level OSS implementation I could find. I'm happy to pass the torch, or to break this PR into the Virtual port stuff and the parser issue, so someone else can follow the consensus.
I'm mildly curious about unlocking
Thanks! |
Progress update. I have a first pass at reworking the lower level midi input handling such that it reads data into a buffer instead of by individual bytes, and the changes include running status support. The next step is to adapt the virtual device support from @patriciogonzalezvivo to use this new input handling scheme.
@ngwese Exciting! Thanks for picking this up! If you need help with RTPMIDI or my code, feel free to DM me through twitter
I was able to get the virtual midi device change and rtpmidi setup working without any significant problems. I have a bit of cleanup I'd like to do, including trying to find a way to name the virtual midi device at the alsa level, much like the snd_seq interface appears to do. After that I'll put something up for review.
That's fantastic progress! Did you see how network devices appear under the RTPMidi port? Do you think in the future we could monitor those and expose them as well?
I did see how rtpmidi sources appear under the rtpmidi device. Overall I found rtpmidi itself to be fairly fragile. Hosts coming and going weren’t handled consistently and 2 of the 3 sessions I connected for testing randomly dropped the network connection without warning. The virtual port stuff seems fine but there looks to be a good amount of testing and experimentation needed before rtpmidi itself would be a candidate for direct support out of the box. |
Closing this out as a result of merging #1206. @patriciogonzalezvivo if you pull down
Context for this PR can be found in:
This PR introduces:

- Virtual RawMIDI initialization. This is mostly useful for devices where you have access through `ssh` and you are willing to send MIDI events to matron from other programs on the same device, like `aplaymidi`, or through the network with `rtpmidi`.
- A reimplementation of the MIDI parser. Why? Non-USB midi devices don't constrain themselves to the 3-byte packages for CC, NOTE_ON, NOTE_OFF, etc. For example, they don't send the event type if it is the same as the previous one (unless the channel changes, which effectively changes the event-type `unsigned char` value). They also pack as many events as possible into a single call: for example, you can get one type byte and two pairs of parameters in a single 5-byte package. All this deviates from the previous implementation, where parsing happened every other byte or every 3 bytes, which is a problem for all non-USB midi devices. This PR solves that and opens the doors for better MIDI support. Much of this reimplementation repurposes code from FluidSynth, which can be found here: https://github.com/FluidSynth/fluidsynth/blob/master/src/midi/fluid_midi.c. The reasoning is that FluidSynth is a mature open source project that has already accounted for common and edge cases, mitigating the risk of introducing regressions into the code.

I tested this with:

- `aplaymidi`

The most exciting use case I found is the use of RTPMIDI. This allows MIDI over the network using a protocol compatible with iOS/OSX devices. There is no need for Bluetooth! Right now, with the code as is, this requires some wiring through an `ssh` session (which obviously is not user friendly). I think in further PRs and through collaboration, we could work on simplifying this process through the nice UI norns has : )