The spec says that AudioBufferSourceNode "is useful for playing short audio assets which require a high degree of scheduling flexibility". For longer assets it seems to encourage using MediaElementAudioSourceNode instead, but then you lose the ability to schedule playback precisely, which is unfortunate. I started looking into writing a multitrack recorder using the Web Audio API, and this looks like a showstopper: there doesn't appear to be a way to precisely schedule the playing of longer samples.
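To be concrete about what I mean by precise scheduling: with AudioBufferSourceNode (the node the spec discourages for long assets) you can anchor every track to one reading of the audio clock, roughly like this sketch (`scheduleClips` and `leadIn` are names I made up for illustration):

```javascript
// Schedule several already-decoded AudioBuffers against a shared
// reference time, so multiple tracks start sample-accurately in sync.
function scheduleClips(ctx, buffers, startOffsets, leadIn = 0.1) {
  // One shared clock reading for all tracks; the small lead-in gives
  // the audio thread time to start the first source on schedule.
  const t0 = ctx.currentTime + leadIn;
  return buffers.map((buffer, i) => {
    const src = ctx.createBufferSource();
    src.buffer = buffer;              // an AudioBuffer from decodeAudioData()
    src.connect(ctx.destination);
    src.start(t0 + startOffsets[i]);  // precise start time in seconds
    return src;
  });
}
```

With a MediaElementAudioSourceNode there is no equivalent: the underlying media element's play() takes no `when` argument, so you are left guessing at latency yourself.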