
Shorten recording length after recording has completed #105

Open · kmturley opened this issue Feb 11, 2015 · 18 comments

Comments

@kmturley

I am using this awesome library to record loops on my site. To do this I have to overcome two big problems.

  1. Latency: due to buffer/hardware latency, I have to let the user offset the recording time so it starts at the same point no matter which device/browser they are on. It would be great to build this offset functionality into Recorder.js.

  2. Recording length: because of the buffer length and the sample rate, the recording stops either just before or just after 3 seconds, never exactly on 3 seconds. This is because the buffer size doesn't divide evenly into the target sample count (see the sketch below). The solution was suggested in the answer to my question here:
    http://stackoverflow.com/questions/28424111/recorder-js-calculate-and-offset-recording-for-latency/
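
A minimal sketch of the arithmetic (assuming the default 4096-sample buffer; the numbers are illustrative):

var sampleRate = 44100;          // typical context.sampleRate
var bufferLen = 4096;            // Recorder.js default buffer size
var target = 3 * sampleRate;     // 132300 samples wanted

// recording can only stop on a buffer boundary, so it overshoots:
var buffersNeeded = Math.ceil(target / bufferLen); // 33 buffers
var actual = buffersNeeded * bufferLen;            // 135168 samples
var excess = actual - target;                      // 2868 samples to chop off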

I have created an example page where the sound is recorded for longer than needed and then chopped back to fit the desired length. What I'm wondering is whether we can add this functionality to Recorder.js as well. Here is my version:
http://kmturley.github.io/Recorderjs/

My suggestion would be to allow you to set the recording length after the recording has occurred:

recorder.js
this.setLength = function (max) {
    worker.postMessage({ command: 'setLength', max: max });
};

recorderWorker.js
case 'setLength':
  setLength(e.data.max);
  break;

However, this throws errors when you start exporting WAVs or buffers :(

@kmturley
Author

Managed to make this work by doing the following:

recorderWorker.js

function setLength(max){
    maxLength = max;
}

function getBuffer(){
    var buffers = [];
    for (var channel = 0; channel < numChannels; channel++){
        if (maxLength) {
            // trim each merged channel down to exactly maxLength samples
            buffers.push(mergeBuffers(recBuffers[channel], recLength).subarray(0, maxLength));
        } else {
            buffers.push(mergeBuffers(recBuffers[channel], recLength));
        }
    }
    this.postMessage(buffers);
}
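
For reference, here is a hypothetical usage sketch of the patched API (assuming the setLength changes above are applied to a local copy of Recorder.js, with context as the AudioContext):

var seconds = 3;
recorder.setLength(Math.round(context.sampleRate * seconds)); // samples, not seconds
recorder.record();
window.setTimeout(function () {
    recorder.stop();
    recorder.getBuffer(function (buffers) {
        console.log(buffers[0].length); // exactly sampleRate * seconds samples
    });
}, seconds * 1000 + 500); // record slightly long; the worker trims the excess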

Frame/time accurate recordings!! woop

@ghost

ghost commented Feb 16, 2015

I raised an issue very similar to this last week. I had some sample code that did manual processing of the WAV file. I actually think RecorderJS should not include this functionality, as it seems to pollute the basic idea of what the lib actually does.

I do, however, wish the export WAV function was a little more generic. I had to process the header on my own, which required a lot of know-how, and I didn't want to pollute my local copy of the lib by chopping and extending its functionality, especially since this lib uses web workers, which IMO are not exactly production safe.

Rather, I think an editing tool built on top of this lib makes more sense; it could be bundled with Recorder or maintained as a linked project. You are most likely going to want to customize the processing of these files, and having the worker do that manipulation seems somewhat unsafe and not really in the spirit of how this tool is built.

@ghost

ghost commented Feb 16, 2015

And by editing tools I mean things such as downsampling, chopping, concatenating, etc.

@kmturley
Author

I would imagine it's more efficient for the worker to chop the length of the array before it does any further processing. It also already has a loop which creates the buffers, so it could save processing time from the start.

An editing library on top of Recorder.js would be great for more advanced functions. Maybe there could be a beforeProcessing hook which lets you modify the raw recording data before Recorder.js continues processing, along these lines:
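
(A sketch of the idea; the hook name and signature are hypothetical, nothing like this exists in Recorder.js today:)

// hypothetical hook, invoked by the lib before merging/exporting
recorder.beforeProcessing = function (buffers, length) {
    // e.g. trim to an exact sample count before the worker processes further
    var max = Math.round(context.sampleRate * 3);
    return Math.min(length, max); // the worker would use this as the new recLength
};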

@ghost

ghost commented Feb 16, 2015

You're right, it probably is more efficient. It's a balance though, since thread safety with web workers is definitely a question. IMO, if the workers were more stable then I'd agree. For now, I'd try to avoid interacting with the worker as much as possible, since I think there are serious questions regarding their stability/safety. Maybe I'm overly paranoid?

I might be interested in putting together a library of editing tools. Similarly, I think there needs to be some kind of playback state machine as well, which I am already working on as part of my job. I'll see what I can do about open-sourcing some of those things.

@kmturley
Author

Yeah, agreed! Playback would be awesome. I have had to write it all from scratch, which has been long and frustrating. I needed the ability to play back backing and vocal loops in SYNC, which is massively complex. There are also things like WebKit audio being muted until a touchstart interaction!!

I think I have achieved it with this (feel free to reuse this code for your open source library):

    /**
     * @method init
     */
    init: function (options) {
        var me = this;
        this.options = options || {};
        try {
            window.AudioContext = window.AudioContext || window.webkitAudioContext || window.mozAudioContext || window.msAudioContext;
            navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia;
            window.URL = window.URL || window.webkitURL || window.mozURL || window.msURL;
            this.context = new window.AudioContext();
        } catch (e) {
            window.alert('Your browser does not support WebAudio, try Google Chrome');
        }

        // unlock iOS safari webaudio after first interaction
        window.addEventListener('touchstart', function () {
            var buffer = me.context.createBuffer(1, 1, 22050),
                source = me.context.createBufferSource();
            source.buffer = buffer;
            source.connect(me.context.destination);
            source.noteOn(0);
        }, false);
    },
    /**
     * @method load
     */
    load: function (left, right, callback) {
        var me = this;
        //console.log('load', left, right);
        me.clearTimers();
        if (left && right && left !== me.left) {
            me.left = left;
            me.cue(left.ItemUrl.S, function (file1) {
                if (me.left === left) {
                    me.right = right;
                    if (right.ItemUrl.S === 'record') {
                        me.stopAll();
                        me.backing.push(me.play(file1));
                    } else {
                        me.cue(right.ItemUrl.S, function (file2) {
                            if (me.right === right) {
                                me.stopAll();
                                me.backing.push(me.play(file1));
                                me.vocals.push(me.play(file2));
                                if (callback) { callback(); }
                            }
                        });
                    }
                }
            });
        } else {
            me.right = right;
            if (right.ItemUrl.S === 'record') {
                me.stopSync(me.vocals[0], me.backing[0]);
            } else {
                me.cue(right.ItemUrl.S, function (file2) {
                    if (me.right === right) {
                        me.stopSync(me.vocals[0], me.backing[0]);
                        me.vocals.push(me.playSync(file2, me.backing[0]));
                        if (callback) { callback(); }
                    }
                });
            }
        }
    },
    /**
     * @method getPermissions
     */
    getPermissions: function () {
        var me = this;
        if (!navigator.getUserMedia) {
            navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia;
        }
        if (navigator.getUserMedia) {
            navigator.getUserMedia({audio: true}, function (stream) {
                var input = me.context.createMediaStreamSource(stream);
                me.recorder = new Recorder(input);
            }, function (e) {
                window.alert('Please enable your microphone to begin recording');
            });
        } else {
            window.alert('Your browser does not support recording, try Google Chrome');
        }
    },
    /**
     * @method cue
     */
    cue: function (url, callback) {
        //console.log('cue', url);
        var me = this;
        if (this.request) {
            this.request.abort();
        } else {
            this.request = new XMLHttpRequest();
        }
        this.request.open('GET', url, true);
        this.request.responseType = 'arraybuffer';
        this.request.onload = function () {
            me.context.decodeAudioData(me.request.response, function (buffer) {
                callback(buffer);
            });
        };
        this.request.send();
    },
    /**
     * @method play
     */
    play: function (data, startTime) {
        if (!this.context.createGain) { this.context.createGain = this.context.createGainNode; }
        var me = this,
            source = this.context.createBufferSource(),
            gainNode = this.context.createGain();
        if (!source.start) { source.start = source.noteOn; }
        if (!source.stop) { source.stop = source.noteOff; }
        source.connect(gainNode);
        gainNode.connect(this.context.destination);
        source.buffer = data;
        if (window.location.search === '?mute=true') {
            gainNode.gain.value = 0;
        }
        source.loop = true;
        source.startTime = this.context.currentTime;
        if (startTime) {
            //console.log('play', startTime);
            source.start(startTime);
            if (this.playTimer) {
                window.clearTimeout(this.playTimer);
            }
            this.playTimer = window.setTimeout(function () {
                //console.log('playTimer', me.context.currentTime);
            }, (startTime - this.context.currentTime) * 1000);
        } else {
            source.start(0);
        }
        return source;
    },
    /**
     * @method playSync
     */
    playSync: function (source, target) {
        if (target) {
            var offset = (this.context.currentTime - target.startTime) % target.buffer.duration,
                time = this.context.currentTime + target.buffer.duration - offset;
            //console.log('playSync', time);
            return this.play(source, time);
        } else if (source) {
            return this.play(source);
        }
    },
    /**
     * @method stop
     */
    stop: function (source, stopTime) {
        if (source) {
            if (stopTime) {
                //console.log('stop', source, stopTime);
                source.stop(stopTime);
            } else {
                source.stop(0);
            }
        }
    },
    /**
     * @method stopAll
     */
    stopAll: function () {
        var i = 0;
        this.clearTimers();
        for (i = 0; i < this.backing.length; i += 1) {
            this.stop(this.backing[i]);
        }
        for (i = 0; i < this.vocals.length; i += 1) {
            this.stop(this.vocals[i]);
        }
        this.backing = [];
        this.vocals = [];
    },
    /**
     * @method stopSync
     */
    stopSync: function (source, target) {
        if (target) {
            var offset = (this.context.currentTime - target.startTime) % target.buffer.duration,
                time = this.context.currentTime + target.buffer.duration - offset;
            //console.log('stopSync', time);
            this.stop(source, time);
        } else if (source) {
            this.stop(source);
        }
    },
    /**
     * @method record
     */
    record: function (name, stopTime) {
        var me = this;
        // custom Recorder.js code here to limit the recording length
        me.recorder.setLength(Math.round(me.context.sampleRate * stopTime));
        me.recorder.record();
        //console.log('record', name, stopTime, Math.round(me.context.sampleRate * stopTime), this.context.currentTime);
        window.setTimeout(function () {
            //console.log('record end', me.context.currentTime);
            me.recorder.stop();
            me.recorder.getBuffer(function (buffers) {
                var buffer = me.context.createBuffer(2, buffers[0].length, me.context.sampleRate);
                buffer.getChannelData(0).set(buffers[0]);
                buffer.getChannelData(1).set(buffers[1]);
                // is it better to stop both, and play together?
                me.stop(me.backing[0]);
                me.stop(me.vocals[0]);
                me.backing.push(me.play(me.backing[0].buffer));
                me.vocals.push(me.play(buffer));
                me.el.className = 'player save';
                // or to sync to the next loop?
                //this.vocals = this.playSync(buffer);
                me.recorder.exportWAV(function (blob) {
                    if (me.options.onRecord) {
                        me.options.onRecord(name, blob);
                    }
                    me.recorder.clear();
                });
            });
        }, (stopTime * 1000) + 500);
    },
    /**
     * @method recordSync
     */
    recordSync: function (name, target) {
        if (name && target) {
            var latency = Number(window.localStorage.getItem('offset')) || -150,
                offset = (this.context.currentTime - target.startTime) % target.buffer.duration,
                time = (target.buffer.duration - offset) - (latency / 1000),
                me = this;
            if (this.firstMic === true) {
                this.firstMic = false;
                this.getPermissions();
                return false;
            }
            if (this.recordTimer) {
                window.clearTimeout(this.recordTimer);
            }
            //console.log('recordSync', this.context.currentTime + time, latency);
            this.recordTimer = window.setTimeout(function () {
                me.record(name, target.buffer.duration);
            }, time * 1000);
        } else if (name) {
            this.record(name);
        }
    },
    /**
     * @method clearTimers
     */
    clearTimers: function () {
        if (this.playTimer) {
            window.clearTimeout(this.playTimer);
        }
        if (this.recordTimer) {
            window.clearTimeout(this.recordTimer);
        }
    }

@ghost

ghost commented Feb 16, 2015

Have a look at Howler, which is an OSS JS library for dealing with exactly this kind of multiple playback.

@ghost

ghost commented Feb 16, 2015

I'll see if I can scratch together a player state machine in the next few days and get back to you. Nice work btw.

@kmturley
Author

Thanks, it's not perfect but seems to work. Here is my list of requirements for a Recorder.js toolkit:

General

  • cross browser support for context object
  • fix for webkit audio muted on iOS devices
  • preload a file by URL and cue it as a buffer, ready for instant playback

Record

  • record instantly
  • record in the future (e.g. start in 2.6 seconds)
  • record in sync with a playing buffer (save the buffer's start time and offset vs. the loop length) and stop at the end of the buffer (a longer recording needs to be made, then chopped to the loop length)

Play

  • play instantly
  • play in the future (e.g. start in 2.6 seconds)
  • play in sync with another playing buffer (save buffer start time and offset vs length of loop)

Stop

  • stop instantly
  • stop in the future (e.g. stop in 2.6 seconds)
  • stop in sync with another playing buffer (save buffer start time and offset vs length of loop; see the scheduling sketch below)
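
The "in sync" items all boil down to the same scheduling formula used in playSync/stopSync above: work out how far into its current loop iteration the target buffer is, then schedule the action at the next loop boundary.

// 'target' started at target.startTime and loops every target.buffer.duration seconds;
// 'source' is a prepared AudioBufferSourceNode
var now = context.currentTime,
    offset = (now - target.startTime) % target.buffer.duration, // position within the loop
    boundary = now + target.buffer.duration - offset;           // next loop start
source.start(boundary); // or source.stop(boundary), or kick off recording then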

@ghost

ghost commented Feb 16, 2015

Sounds like this would be like some kind of loop pedal?

@kmturley
Author

Yeah, that's basically what I'm making. Even better would be if you could change the timing after the recording, because it's so hard to get the offset/latency correct: you'd record, then adjust the offset afterwards in time with the other loop, and when you hit export it would chop the WAV to match that offset and the correct length. I was going to try to make it myself, but it made my head hurt :)

At the moment users have to set their latency offset before recording (a value like -150ms means nothing to them!), and if the offset turns out to be wrong they have to discard the recording and record it again. Such a bad user experience!

@ghost

ghost commented Feb 16, 2015

https://github.com/goldfire/howler.js/

Look into that, it might be helpful. I don't know how much, but it's a starting point.

The other problem, in-time recording/looping, is tricky, and the latency issues will be problematic. Also, when you're cutting audio, you need to make sure you pad the WAV file with an extra byte if the payload is odd.
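
(On the padding point: RIFF chunks are word-aligned, so if a chunk's data payload is an odd number of bytes, one zero pad byte follows it, while the chunk size field still records the odd length. A rough sketch:)

// pad an odd-length chunk payload out to an even byte count
function padChunk(bytes) {
    if (bytes.length % 2 === 0) { return bytes; }
    var padded = new Uint8Array(bytes.length + 1);
    padded.set(bytes); // the trailing pad byte stays 0
    return padded;
}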

@kmturley
Author

kmturley commented Mar 4, 2015

@b-d-b I've put together the requirements I think will create a better user experience for recording here:
http://stackoverflow.com/questions/28867006/record-audio-sync-to-loop-offset-latency-and-export-portion

Next step is to actually start writing the code!

@kmturley
Author

kmturley commented Mar 8, 2015

Using this example I was able to find out how to insert blank space into the recording:
http://mdn.github.io/audio-buffer/

I've now managed to almost replicate the functionality I need; however, the white noise seems off. Is there a miscalculation somewhere?
http://kmturley.github.io/Recorderjs/loop.html

@kmturley
Author

kmturley commented Mar 9, 2015

@b-d-b I managed to solve this by filling in the blank space at the start and end of the recording with silence, so the recording length always matches the original loop length (or a multiple of that loop length).

This means that when they are played together they always loop in time! Here's the logic for working out the blank spaces at the start and end:

diff = track2.startTime - track1.startTime
before = Math.round((diff % track1.duration) * 44100)
after = Math.round((track1.duration - ((diff + track2.duration) % track1.duration)) * 44100)
newAudio = [before data] + [recording data] + [after data]

And in JavaScript it looks like this:

var i = 0,
    channel = 0,
    channelTotal = 2,
    num = 0,
    vocalsRecording = this.createBuffer(vocalsBuffers, channelTotal),
    diff = this.recorder.startTime - backingInstance.startTime + (offset / 1000),
    before = Math.round((diff % backingInstance.buffer.duration) * this.context.sampleRate),
    after = Math.round((backingInstance.buffer.duration - ((diff + vocalsRecording.duration) % backingInstance.buffer.duration)) * this.context.sampleRate),
    audioBuffer = this.context.createBuffer(channelTotal, before + vocalsBuffers[0].length + after, this.context.sampleRate),
    buffer = null;

// loop through the audio left, right channels
for (channel = 0; channel < channelTotal; channel += 1) {
    num = 0; // reset the write index for each channel's Float32Array
    buffer = audioBuffer.getChannelData(channel);
    // fill the empty space before the recording
    for (i = 0; i < before; i += 1) {
        buffer[num] = 0;
        num += 1;
    }
    // add the recording data
    for (i = 0; i < vocalsBuffers[channel].length; i += 1) {
        buffer[num] = vocalsBuffers[channel][i];
        num += 1;
    }
    // fill the empty space at the end of the recording
    for (i = 0; i < after; i += 1) {
        buffer[num] = 0;
        num += 1;
    }
}
// now return the new audio which should be the exact same length
return audioBuffer;

I made a full working example here:
http://kmturley.github.io/Recorderjs/loop.html

@sanaali110

Hello kmturley,
I really like the Recorder.js work on your GitHub profile, but I don't understand how you implemented the audio controls for playing the sound (the controls that appear after you have finished recording). I have downloaded your files but can't find this feature.

How did you get this feature? Can you help?
What should I do if I want to implement the same thing?

@kmturley
Author

kmturley commented Jul 7, 2015

@sanaali110 thanks!
The sound player is this code:

this.context = new window.AudioContext();
var source = this.context.createBufferSource(),
    gainNode = this.context.createGain();
source.connect(gainNode);
gainNode.connect(this.context.destination);
source.buffer = data; // 'data' is a decoded AudioBuffer
source.start(0);

and data is the decoded audio data, which is loaded via an AJAX request.
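
For completeness, a minimal sketch of that loading step (the same approach as the cue method earlier in this thread; 'loop.wav' is a placeholder URL):

var context = new window.AudioContext();
var request = new XMLHttpRequest();
request.open('GET', 'loop.wav', true);
request.responseType = 'arraybuffer';
request.onload = function () {
    // decodeAudioData yields the decoded 'data' AudioBuffer used by the player code above
    context.decodeAudioData(request.response, function (data) {
        // hand 'data' to the player: source.buffer = data; source.start(0);
    });
};
request.send();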

@sanaali110

thanks a lot.. really appreciate it
