Backpressure exposure for asynchronous send() #158
This does sound like a pretty good fit for a writable stream... In fact MIDIOutput looks pretty WritableStream-like in general. It is specifically designed to take care of this queuing and backpressure-signaling concern for you, so that other specs can build on top of it and reuse the architecture. I'm not sure about MIDIInput, as I'm not familiar enough with MIDI. The choice would be between the current spec's no-backpressure, event-based model (where if you aren't subscribed you miss the event) and a readable stream model where the data is buffered for future reading, with potential backpressure when it isn't read. Unfortunately the spec's current setup, where MIDIInput and MIDIOutput both derive from MIDIPort, doesn't seem very easy to fit with the stream API. I'm not sure how we'd do this without just creating a parallel WritableMIDIPort type (and maybe ReadableMIDIPort). There's also the issue that writable streams need some spec love, but if this is an urgent use case I can turn my attention to them and get that straightened out quickly.
The event-based model for MIDIInput is fine, since MIDI is essentially a multicast system with no backpressure possible at the protocol level; MIDI doesn't guarantee reliable delivery. Can WritableStream handle framing? We parse the bytes and require that they are valid MIDI packets (1, 2, or 3 bytes in the common case, or arbitrary length in the case of sysex). Is this implementable as chunks in the Streams standard? Can a stream reject an invalid chunk?
Yes, for sure. The underlying sink's write hook (and the optional writev hook, once we get that working) can process the data in arbitrary ways, returning a promise for when the processing is finished, which can be rejected to indicate that the input was invalid.
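As a concrete illustration of that point, here is a minimal sketch in today's Streams API terms (this discussion predates the final design). The framing rules are simplified, and `sendToDevice()` is a hypothetical helper assumed to resolve once the bytes are handed to the device:

```js
// Underlying sink whose write() validates each chunk and rejects invalid input.
const midiSink = {
  write(chunk) {
    const bytes = chunk.data;
    // Simplified framing check: 1-3 bytes, or a complete sysex (0xF0 ... 0xF7).
    const isSysex = bytes[0] === 0xF0 && bytes[bytes.length - 1] === 0xF7;
    if (!isSysex && (bytes.length < 1 || bytes.length > 3)) {
      // Rejecting here errors the stream and signals invalid input to the writer.
      return Promise.reject(new TypeError("not a valid MIDI message"));
    }
    return sendToDevice(bytes, chunk.timestamp); // hypothetical helper
  },
};

const midiStream = new WritableStream(midiSink, new CountQueuingStrategy({ highWaterMark: 16 }));
```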
👍 for using streams from me. We could add a new method to `MIDIOutput` for this. However, how do we solve sending timed messages?
I'm somewhat reluctant to add streams; it seems like mainly adding clutter to the Web MIDI API and implementation, while providing very little new functionality. Can't we just add a function …
That's really bad. In that case, the send() function should either block or throw an exception of some sort (which would need to be defined in the spec).
I'm REALLY uncomfortable rebasing on top of Streams, since it's highly …
This would just be duplicating stream functionality, except in a more awkward and less interoperable way. I can see plenty of use cases for MIDI messages in Streams. For example: …
I agree that such a design is just duplicating streams, and will need to reinvent the queuing mechanisms and so forth that streams are specifically designed to let other specs reuse. The "current available output message size" is further evidence of that kind of duplication (of the …
I'd assume each chunk would be of the form `{ data, timestamp }`.
That sounds reasonable.
Can someone who understands streams put together a sketch of an API based on it, and examples of simple and complex usage?
Maybe something like this:

```
interface MIDIOutput {
  // ...
  Promise<WritableStream> openAsStream({ size, highWaterMark });
}

interface MIDIInput {
  // ...
  Promise<ReadableStream> openAsStream({ size, highWaterMark });
}
```

Here are two usage examples. The first is just a dumb non-synced sequencer writing to the first available … EDIT: Note that the advanced example uses …
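The two usage examples referenced above were lost in formatting. As a stand-in, here is a minimal sketch of what the simple, non-synced case might look like under this proposed (non-standard) `openAsStream()` shape, assuming `{ data, timestamp }` chunks and a `MIDIAccess` object named `access`:

```js
async function playChord(access) {
  const output = access.outputs.values().next().value; // first available output port
  const stream = await output.openAsStream({ highWaterMark: 64 }); // proposed, not in the spec
  const writer = stream.getWriter();
  for (const note of [60, 64, 67]) {
    await writer.write({ data: [0x90, note, 0x7f] });                                     // note on
    await writer.write({ data: [0x80, note, 0x40], timestamp: performance.now() + 500 }); // note off
  }
  await writer.close();
}
```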
Replying to Adam's first description: maybe using Streams would be the right approach, but I feel it's a little complicated, as Chris said. Also, I feel Web MIDI should be aligned with other similar modern APIs; for instance, Web Bluetooth and WebUSB return a Promise. So my preference is bome's … Here is my proposal.
The sketch in #158 (comment) is pretty reasonable, although we'd update it to use the writer design. I'll add some explicit demonstrations of backpressure support:

```js
destination.openAsStream().then(s => s.getWriter()).then(async (writer) => {
  console.log(writer.desiredSize); // how many bytes (or other units) we "should" write
  // Note: 0 means "please stop sending stuff", not "the buffer is full and
  // we will start discarding data". So, desiredSize can go negative.

  writer.write({ data: data1, timestamp: ts1 }); // disregard backpressure

  // Wait for successful processing before writing more:
  await writer.write({ data: data2, timestamp: ts2 });

  await writer.waitForDesiredSize(100); // wait until desiredSize goes >= 100
  writer.write({ data: oneHundredTwentyBytes, timestamp: ts3 });

  await writer.waitForDesiredSize(); // wait until desiredSize goes >= high water mark

  // Also, maybe default timestamp to performance.now()?
});
```

I might also collapse @jussi-kalliokoski's …
After thinking about incremental writing of a large sysex, I noticed that it would allow a malicious attacker to lock all output ports exclusively; e.g. just sending a "sysex start" byte would lock the port forever. So we should keep the restriction that a user cannot send an incomplete or fragmented message. That means that even if we have backpressure, the sysex size will be limited to the maximum size of an ArrayBuffer.
@domenic Have you ever talked with the Web Bluetooth and WebUSB folks before?
In an offline thread @cwilso mentioned that @jussi-kalliokoski's examples are too complex and he'd like a three-liner. Here you go (slightly changed from the above, since I am not sure why @jussi-kalliokoski made stream acquisition async):

```js
const writer = midiOutput.asStream().getWriter();
await writer.write({ data: d1, timestamp: ts1 });
await writer.write({ data: d2, timestamp: ts2 });
```
@toyoshim you can intersperse short messages during an ongoing sysex message, and the implementation could also use a timeout to abandon stalled sysex messages after a while. Silently dropping MIDI messages is always bad; maybe the current … I agree that …
I previously talked with Jeffrey about Web ... Bluetooth? ... and the conclusion was that since there was no backpressure support, it wasn't urgent and we could wait on adding streams until later.
Stream acquisition is async since you could block for an arbitrary amount of time with an OS-level open.
That's fine; that just means that the first …
Is there a way to force the open to complete?
I don't understand what "force the open to complete" would mean. You can see an example implementation here if it helps: https://streams.spec.whatwg.org/#example-ws-backpressure (unfortunately the usage example still does not have writers and uses the coarse-grained …)
So, we currently have the notion of … It's good to know when the MIDI port is fully open, since we can signal in a UI that "preflight" is complete and all devices are fully ready to send/receive.
@bome The Web MIDI backend needs to multiplex outgoing messages from multiple MIDIAccess instances, and once one of them contains a "sysex start" byte and it has been sent to the actual device, we cannot abandon the stalled sysex in any way, right? Making … The answer to the second question, about allowing only one request at a time, is that we need to ensure message ordering in cases where an in-flight message fails partway through. Imagine a case where the second request fails asynchronously, but the user has already sent a third request that might succeed; that would cause something the user does not expect.
Let me introduce a classic MIDI use case, and folks can weigh in on various ideas for making it work. An SMF file contains a stream of timed MIDI events. Right now, Web MIDI clients have to schedule themselves to submit some number of events with timestamps, and use … If we want to get fancy, allow for user-supplied tempo changes which take effect immediately while the SMF is streaming.
@domenic Hmm... it probably makes sense that Web Bluetooth does not need Streams at this point. Bluetooth defines a write ack at the protocol level, and OS-level APIs seem to expose this model directly, so mapping write and write-ack to a write method that returns a Promise sounds like a straightforward, reasonable solution. But I believe WebUSB will need Streams more than Web MIDI does.
Here is my answer on async vs. sync … Probably we should make MIDIOutput explicitly require …
For SMF playback, I'd prefer to use requestAnimationFrame(timestamp => { ... }) even though my task isn't related to graphics. We can calculate the delta time in the callback, and send the next messages that won't make it before the next callback cycle, as estimated from the calculated delta.
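As a rough illustration of that approach (not from the original comment): a requestAnimationFrame-driven lookahead loop that, on each frame, hands the implementation every event due before the next expected frame, using the existing `send(data, timestamp)` signature. The `events` array of absolute-time `{ time, data }` entries and the lookahead value are placeholders:

```js
const LOOKAHEAD_MS = 20; // roughly one frame at 60 Hz, plus a little margin

function playEvents(output, events, startTime) {
  let next = 0;
  function onFrame(now) {
    const horizon = now + LOOKAHEAD_MS;
    // Send everything that falls due before the next callback is expected to run;
    // the timestamp argument lets the implementation handle precise timing.
    while (next < events.length && startTime + events[next].time <= horizon) {
      output.send(events[next].data, startTime + events[next].time);
      next++;
    }
    if (next < events.length) requestAnimationFrame(onFrame);
  }
  requestAnimationFrame(onFrame);
}
```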
How about the close operation? I mean, what is the relation between …
When we started this discussion, the Streams API was not mature yet. We would prefer to keep the existing send() method alongside it, for compatibility and simple uses; that sounds reasonable to me. @cwilso Any thoughts? You were sceptical about using Streams in the past, but WDYT today? I'm happy to implement it in Chrome if the spec is updated.
What is the current status of this issue?
Discussed at TPAC, and agreed that using Streams is the best approach here, but we still need to clarify the detailed API surface.
I built a Web MIDI API sequencer that can play back, "play up", and record "simultaneously", well, more or less. But the playup lags because I use timeouts; I would like to change the timeout solution to a buffered playup. Will that remove the playup lag? There are very few examples showing how buffered playup works.
Hey @JonasTH, I wrote an article a long time ago about Web Audio clocks that also applies to Web MIDI - in particular, why using setTimeout is not a good idea: https://www.html5rocks.com/en/tutorials/audio/scheduling/. The issue you're having is unrelated to this issue (this issue is essentially about sending massive amounts of data on a slow MIDI link).
But I've heard Web MIDI players that I at least "perceive" as in sync, not slowing down passing notes. Sure enough, there is no bar to measure whether they lag or not, so I really don't know, but you claim it is Web MIDI itself that creates the bottleneck for passing notes?
Thanks for your kind reply, seldom do I get any.
Jonas
```js
function STARTPLAY() {
  console.log("STARTPLAY()");
  copyEv = playEv.slice();
  last = copyEv[copyEv.length - 1];
  copyEv[copyEv.length] = last;
  var waittime = copyEv.shift();
  console.log("waittime=" + waittime);
  midEvent = 0;
  myTime = waittime;
  //console.log(SF2PLAY);
  if (SF2PLAY == true) { stopJump = setTimeout(SF2Playup, waittime); }
  else { stopJump = setTimeout(doPlayup, waittime); }
}

// WHILE MESSAGES TO PLAY, DO PLAYUP
function doPlayup() {
  if (keepGoing) {
    if (copyEv.length) {
      if (track[playtrack].midiMess[midEvent].data0 < 192 || track[playtrack].midiMess[midEvent].data0 > 207) {
        noteMessage = [track[playtrack].midiMess[midEvent].data0,
                       track[playtrack].midiMess[midEvent].data1,
                       track[playtrack].midiMess[midEvent].data2];
      } else {
        noteMessage = [track[playtrack].midiMess[midEvent].data0,
                       track[playtrack].midiMess[midEvent].data1];
      }
      if (mode == "Play") {
        pianoKeypressOut();
        scrollPianoOut();
      }
      outportarr[outportindex].send(noteMessage);
      waittime = copyEv.shift();
      midEvent++;
      setTimeout(doPlayup, waittime);
    } else {
      stopPLAY();
    }
  }
}
```
This is the playup function
What is the status of this issue? Without any way to detect backpressure, I have to artificially slow down sending bursts of messages to account for older MIDI devices that don't use USB. For what it's worth, I agree with @bome:
A method like his proposed …
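For context, here is a sketch of the kind of workaround described above (and what a buffer-state query would replace): pacing complete messages with a fixed, conservatively chosen delay, since the real buffer occupancy cannot be observed. The delay value and `messages` array are placeholders:

```js
// Pace complete MIDI messages with a fixed inter-message delay, sized for slow
// (e.g. 31.25 kbit/s DIN) links; each sysex must still be sent as one whole message.
async function sendPaced(output, messages, delayMs = 20) {
  for (const msg of messages) {
    output.send(msg);
    await new Promise(resolve => setTimeout(resolve, delayMs));
  }
}
// With a (hypothetical) way to query buffered bytes or desiredSize, the fixed
// delay could instead become "wait until the buffer has drained enough".
```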
I second the other requests for providing a solution for this issue. It is important, for many of the reasons already discussed in this thread. First, I would like to commend all the work done on the Web MIDI API so far... it is truly exciting (to me) what browser-based MIDI means! At the same time, and at this stage of the development of the API, it is quite surprising that this foreseeable and basic reality (the need for SysEx throttling) has not yet been accommodated by the API (and is the subject of so much debate and "needed explanation").

The need for throttling is simply a reality of the mature MIDI ecosystem and its legacy, much of which is just as relevant today as it ever was. Pointing fingers at old or new devices, drivers, etc. (or wherever the need for throttling may arise in a particular system) does very little to advance all the good that the Web MIDI API otherwise brings. Every piece of MIDI software of consequence created in the last few decades that deals with SysEx has recognized that providing an ability to throttle SysEx transmission is a necessity, either to accommodate limitations of the receiving device or of the transmitting system (OS, drivers, interfaces, etc. -- and now potentially even arbitrary internal browser limitations!). Indeed, the computer, with its infinite flexibility for adaptation and reprogramming, is in the best possible position to accommodate "harder" limitations found elsewhere in a MIDI system (within "set-in-stone" old hardware, or "hard-to-update" firmware, etc.). As a component of that flexibility, the Web MIDI API very clearly has a role to play here, IMO.

The Web MIDI API should provide (or at least enable) a solution for throttling SysEx transmission, period. If it does not, whole classes of "imperfect", yet still valuable and viable, MIDI devices and software (synthesizers, interfaces, and even apps, etc.) will be needlessly cut out of the benefits of browser-based MIDI. An arguably important "modern" case in point: many of the cheap "commodity" USB MIDI cables available today (which usually have full-speed SysEx transmission bugs) could benefit (read: be made to "actually work") via a simple adaptation of the software running on the computer (to throttle SysEx). Wouldn't it actually further the goals of the Web MIDI API to enable as many MIDI devices (and users!) as possible, such as these commodity interfaces, despite their flaws?

The Web MIDI API's current lack of any means for SysEx throttling, as well as its insistence that whole SysEx messages be provided for transmission, are, IMO, arbitrary and harmful limitations of the API, and prohibit certain valid and creative uses of MIDI. For example, why can't my browser-based MIDI app "stream" SysEx (generating it over time), or "open" a bidirectional SysEx connection using a single 0xF0 on each side, and then communicate asynchronously entirely within an "exclusive" protocol of 7-bit messages? Both things are trivially possible with MIDI itself, but currently not with the Web MIDI API. (It should be pointed out that some of the earliest -- and perfectly valid -- SysEx implementations in Casio and Roland devices are effectively asynchronous 7-bit protocols, reliant on intra-SysEx handshaking, something the Web MIDI API is currently incompatible with, for no good reason, IMO.)
MIDI has grown to its popularity and ubiquity today, yes, based on the efforts of many to adhere to a robust, simple, and dare we say "perfect" common specification, but also in no small part due to the creative (and arguably "necessarily obvious") accommodations that have been made, especially in software, aimed at making sure MIDI works as well as possible even for some of the most "imperfect" members of the ecosystem, and that unforeseen innovations are not precluded by assumed "valid uses" of features like System Exclusive. It would be awesome if the Web MIDI API wholeheartedly recognized and continued in this tradition.
As a separate note, I would like to suggest that the … Adding these two basic abilities... asynchrony and divisibility... is enough to "re-enable" everything that is natively possible with SysEx over MIDI itself, and would provide the means for the Web MIDI API to fully accommodate past and future innovations, and other vital use-cases that have been with us since MIDI's birth (see below). A suitable … It should also be pointed out that SysEx throttling isn't the only issue of importance here... the ability for apps to freely interject MIDI realtime messages (such as …) is also affected. I applaud any and all principled efforts to keep the Web MIDI API simple and easy-to-use (achieving this for any API is no small feat), but for all the reasons above, a synchronous, atomic …
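To make the asynchrony-and-divisibility request concrete, here is one purely hypothetical shape (not in the spec, and not proposed verbatim in this thread): a promise-returning send that accepts sysex fragments and resolves once the bytes have actually been transmitted, giving the caller natural backpressure.

```js
// Hypothetical: output.sendAsync(bytes) accepts raw fragments (including
// partial sysex) and resolves when they have gone out on the wire.
async function sendLargeSysex(output, payload, chunkSize = 256) {
  await output.sendAsync([0xF0]);                            // sysex start
  for (let i = 0; i < payload.length; i += chunkSize) {
    await output.sendAsync(payload.slice(i, i + chunkSize)); // 7-bit data bytes
  }
  await output.sendAsync([0xF7]);                            // sysex end
}
```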
2023 TPAC Audio WG Discussion: |
I will schedule this for CR / V1 until we have a chance to discuss it in the Audio Working Group, but we will likely push it to future work.
Audio Working Group 2023-10-05 meeting conclusions: …
Thanks @mjwilson-google, looking forward to seeing progress on this issue!
TPAC 2024 notes: This is actually somewhat covered by MIDI 2.0, which is out of scope for version 1 of the Web MIDI specification. I propose moving this to future work, and not fixing it in version 1.0 of the specification.
Does MIDI 2.0 contain some kind of flow control within the protocol? If so, that is different from this proposal, which is meant to reflect the backpressure that the existing MIDI 1.0 APIs already provide (by blocking at the OS level).
MIDI 2.0 UMP does not contain low-level flow control. There are some flow-control mechanisms within MIDI-CI.
This shouldn't be related to MIDI 2.0; this was a problem reported by real-world use cases (when sending sysex, typically).
I did not check the details in the MIDI 2.0 spec. I am seeing strong objections to pushing this out, so we'll keep it scheduled for the current CR milestone. But I still don't have a good idea of how to specify this yet, or how to conduct a valid survey. I think I will try to fix the other CR issues first; if this becomes the last blocker it will be easier to focus on. I am aware it may take some time to resolve this issue, which is motivation for trying to get things moving sooner, but it also seems like it's possible to make changes during CR review, and it should be easier to get more eyes on the spec during that process. We may end up drafting a change that we have low confidence in, and use the wide review to help verify whether it's sufficient. Thanks for the quick responses, and more feedback is always welcome.
Happy to brainstorm a solution with you, @mjwilson-google
We have so far been able to use synchronous `send`, but this provides no mechanism for backpressure. The current implementation in Chrome uses large internal buffers to hide this issue, but this is wasteful and results in the browser silently dropping messages when the buffer is overrun. This condition is not common with normal MIDI messages, but sysex messages can be of arbitrary length and can be slow to send, so Chrome puts limits on those as well.

In order to allow implementations to avoid these buffers and arbitrary limits, we need a mechanism for non-blocking `send`.