
Continuous decoding #146

Open · agiliator opened this issue Oct 27, 2015 · 19 comments

Comments

@agiliator
Hi,

I'm experimenting with decoding and playing a radio stream using aurora.js. Basically, I get chunks of MP3/AAC (via XHR, possibly chunks of an HLS stream) which I need to decode before playing. However, I hit some issues along the way:

  1. It seems that I need to emit the 'end' event before decoding starts. Is there a way to get the decoding process running without 'end'?
  2. (Maybe related) I don't seem to be able to emit several pieces of audio (or actually the emit goes through, but the second chunk does not get decoded).
  3. It seems that the decoder does not find the frame sync on its own, so do I need to locate the frame boundary myself before passing data to the decoder? (See the sync-scan sketch after the code below.)

My code for testing Aurora's feasibility for this project is as follows.

Any help is greatly appreciated!

        var MySource = AV.EventEmitter.extend({
            start: function() {
                var source = this;
                var url = './media/audio.mp3';
                console.log(url);
                var request = new XMLHttpRequest();
                request.open('GET', url, true);
                request.responseType = 'arraybuffer';
                request.onload = function() {
                    var raw = new Uint8Array(request.response);
                    source.emit('data', new AV.Buffer(raw)); // Consecutive ones ignored, frame sync required.
                    source.emit('end'); // Required? (Note: 'end' is its own event, not a 'data' payload.)
                };
                request.send();
            },
            pause: function() {},
            reset: function() {}
        });

        var src = new MySource();
        var asset = new AV.Asset(src);

        // Alt 1. Get the decoded audio data.
        asset.on('data', function(buffer) {
            console.log("Decoded audio, len=" + buffer.length);
        });
        asset.on('error', function(err) {
            console.log("Error in decoding: " + err);
        });
        asset.start();

        // Alt 2. Play the audio directly (for test purposes.)
        // var player = new AV.Player(asset);
        // player.play();
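
On question 3, here is a minimal sketch (not an Aurora API) of scanning for the 11-bit MPEG frame sync word so that emitted chunks start on a frame boundary. It assumes raw is a Uint8Array of the incoming chunk; a robust version would also validate the rest of the header:

    // MPEG audio frames start with 11 set bits: 0xFF, then the top three
    // bits of the next byte. Return the index of the first candidate header.
    function findFrameSync(raw, from) {
        for (var i = from || 0; i + 1 < raw.length; i++) {
            if (raw[i] === 0xff && (raw[i + 1] & 0xe0) === 0xe0) {
                return i;
            }
        }
        return -1; // no sync word in this chunk
    }

    // Usage: drop leading bytes before the first frame boundary.
    var offset = findFrameSync(raw);
    if (offset >= 0) {
        source.emit('data', new AV.Buffer(raw.subarray(offset)));
    }
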
@crackofdusk
Member

> I think any new issues in this project won't get any attention.

New issues do get attention, but it takes a while as the maintainers have to do it in their free time.

@iggyZiggy

I understand the "in their free time" thing, but can we get an update on this?
I'm also interested in this issue.
@agiliator did you find any workaround?

@DeusExLibris

I had a similar issue and am happy to share my solution. First, I was not able to use XHR because, in order to get chunks of the stream, the endpoint needs to support range-based requests. In my case the source is live, so this was not an option. Consequently, I switched to using websockets. To accomplish this, I integrated the aurora-websocket library as a source.

This delivered the content quickly and as expected, but I had the same result as @agiliator in that it would begin to play and then stop. It turns out this occurs because of the way Player#refill handles an underflow condition (at least it did in my case). I addressed it by giving Queue the ability to switch back to a "buffering" state. The player constructor was updated as follows:

    @asset.on 'decodeStart', =>
        @queue = new Queue(@asset)
        @queue.once 'ready', @startPlaying
        @queue.on 'buffering', =>
            @device.stop()
            @queue.once 'ready', =>
                @device.start()

However, this was not the only issue. As I was decoding MP3 audio, I discovered a bug in the mp3.js decoder. Essentially, for any given batch of data (consisting of one or more MP3 frames), it would drop the last frame if that was the last frame currently available (recall that my audio is live, so the data arrived in batches of 3-4 frames every 100 ms or so, but the decoder worked much faster than that). The fix is more involved than I can post here, but I am putting together a pull request as I have some spare time.

I can post my changes to Queue if anyone is interested.
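
For reference, here is a hedged JavaScript sketch of the idea described above (the actual patch is CoffeeScript inside Aurora's Queue; the names, callbacks, and thresholds here are illustrative, not the fork's API):

    // Illustrative only: a queue that reports underflow as 'buffering'
    // instead of ending, and 'ready' again once refilled past a threshold.
    function BufferingQueue(onReady, onBuffering) {
        this.buffers = [];
        this.buffering = true;
        this.onReady = onReady;         // player calls device.start()
        this.onBuffering = onBuffering; // player calls device.stop() and waits
    }

    BufferingQueue.prototype.write = function(buffer) {
        this.buffers.push(buffer);
        if (this.buffering && this.buffers.length >= 4) { // refill threshold
            this.buffering = false;
            this.onReady();
        }
    };

    BufferingQueue.prototype.read = function() {
        var buf = this.buffers.shift();
        if (!this.buffering && this.buffers.length === 0) {
            this.buffering = true;      // back to buffering instead of ending
            this.onBuffering();
        }
        return buf;
    };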

@fabslab
Contributor

fabslab commented Feb 4, 2016

Awesome work @DeusExLibris!

@DeusExLibris

I have made all my updates available in my fork - Aurora with WS. This includes the items above as well as integration of aurora-websocket as a native source and a version of aurora-websocket that handles the websockets in a web worker.

@agiliator
Author

Thanks a lot @DeusExLibris! Trying out your changes is the first thing on our list when the player project continues on our side (XHR and WS). Your fork still works with XHR too, right? I think our setup has that side covered.

@DeusExLibris

@agiliator - it should. Outside of changes needed to integrate websockets, I only made minor changes. I will add a note to the repository that describes them and the reasoning behind each.

One other note: websockets require methods for exchanging messages with the backend, and those messages are somewhat arbitrary. For example, I use {file:"/path/to/file/on/server"} to identify the asset you want sent. Because of this, the backend will need to handle those messages, or you will need to modify websocket.coffee to use whatever convention your backend supports.
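
As a hedged illustration of that kind of handshake using the standard WebSocket API (the message shape is arbitrary and must match the backend; source is an Aurora source instance like the MySource earlier in this thread):

    // Illustrative only: request an asset by name over a websocket and
    // forward binary frames to an Aurora source as they arrive.
    var ws = new WebSocket('ws://example.com:9080');
    ws.binaryType = 'arraybuffer';

    ws.onopen = function() {
        ws.send(JSON.stringify({ file: '/path/to/file/on/server' }));
    };

    ws.onmessage = function(event) {
        source.emit('data', new AV.Buffer(new Uint8Array(event.data)));
    };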

@agiliator
Author

I tried your fork, @DeusExLibris. I'm assuming I should emit 'end' after each emit of data, as without 'end' the decoding process does not fire? So basically data/end pairs to get continuous audio? Does the data need to contain whole, healthy MP3 frames (not a problem if yes, just need to know)?

I also get "TypeError: Cannot read property '0' of undefined" at asset.on('error', ...), even though I think I emit whole and healthy MP3 frames, and asset.on('data', ...) gets called first. However, the second batch of data does not get decoded after that error.

Quite a few questions in the same comment here... sorry about that.

@DeusExLibris

You shouldn't have to emit 'end' with each batch of data. The decoder is engaged when you either call Player#preload or Player#play. The first few packets need to be whole MP3 frames, but I believe the decoder works fine if subsequent frames are not complete. However, my code only sends on whole frame boundaries, so I could be wrong about that.


@agiliator
Author

Thanks for the quick response, that clarifies a lot. Is there any other way to engage the decoder than using the player? Basically I'd like to play the audio myself via the Web Audio API, so the preferable way for me would be to collect pieces of PCM (asset.on('data', ...), etc.) and feed them to my own player (which alters the audio somewhat). By the way, do you know if the audio received this way is compatible with the Web Audio API?

@DeusExLibris

I think if you call Asset#start, that will begin the decoding, and then you could use Asset#decodeToBuffer to capture the data. All this is speculation, though, as it is a bit outside my use case. Note that Aurora already uses the Web Audio API (specifically, a ScriptProcessorNode) to play the audio. Look at devices/webaudio.coffee. It might be easier to simply add your code to modify the output inside this file rather than trying to pull the PCM out of Asset. For this, you should look at WebAudioDevice#refill, as this is the onaudioprocess handler for the ScriptProcessorNode.
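
For the collect-the-PCM-yourself route, here is a speculative sketch along those lines, assuming asset 'data' events deliver interleaved Float32 samples, stereo, at the AudioContext's sample rate (no resampling or channel-count handling shown):

    var context = new AudioContext();
    var pcmQueue = [];   // decoded Float32Array chunks, interleaved L/R
    var readOffset = 0;

    asset.on('data', function(buffer) {
        pcmQueue.push(buffer);
    });

    var node = context.createScriptProcessor(4096, 0, 2);
    node.onaudioprocess = function(e) {
        var left = e.outputBuffer.getChannelData(0);
        var right = e.outputBuffer.getChannelData(1);
        for (var i = 0; i < left.length; i++) {
            var chunk = pcmQueue[0];
            if (chunk && readOffset + 1 < chunk.length) {
                left[i] = chunk[readOffset++];  // alter samples here if desired
                right[i] = chunk[readOffset++];
            } else if (chunk) {
                pcmQueue.shift();               // chunk exhausted, try the next
                readOffset = 0;
                i--;
            } else {
                left[i] = right[i] = 0;         // underflow: play silence
            }
        }
    };
    node.connect(context.destination);
    asset.start();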


@agiliator
Author

Thanks, that makes perfect sense.

@agiliator
Author

Had another try. This time the chunks are full frames, and they are fed to Aurora as below, calling add() consecutively.

    function AuroraPlayTest() {

        var MySource = AV.EventEmitter.extend({
            start: function() { console.log("MySource.start()") },
            pause: function() { console.log("MySource.pause()") },
            reset: function() { console.log("MySource.reset()") }
        });
        this.src = new MySource();

        this.add = function(mpframes) {
            if (!this.asset) {
                this.asset = new AV.Asset(this.src);
                this.asset.on('data', function(buffer) { console.log("asset.on('data'), length: " + buffer.length); });
                this.asset.on('error', function(err) { console.log("asset.on('error'), message: " + err); });
                // this.asset.start(); // Decoding only - alternative
                (this.player = new AV.Player(this.asset)).play();
            }
            this.src.emit('data', new AV.Buffer(mpframes));
            // this.asset.decodeToBuffer(); // Decoding only - alternative
        }
    }

First impression was that the first chunk didn't trigger decoding/play. That's not an issue, but just mentioning it in case it helps. Once subsequent frames are added, only the first batch seems to get decoded, followed by the error 'bad main_data_begin pointer'.

I wonder if this is related to the reservoir bits, as I get exactly the same error for the first batch (once the second is added) if I start from the middle of the MP3 stream instead (still at a frame boundary, though).

@DeusExLibris

Yes - the "bad main_data_begin pointer" message occurs if you start in the middle of an MP3 stream that uses reservoir bits. I changed my encoding for live streams to not use them for this reason.
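
(For reference, with FFmpeg's libmp3lame encoder the bit reservoir can be switched off with the -reservoir option, as in the ffmpeg command later in this thread; the file names here are placeholders:)

    ffmpeg -i input.wav -codec:a libmp3lame -b:a 128k -reservoir 0 output.mp3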


@agiliator
Author

Seems to be the case, yes: removing the reservoir makes the issue disappear. AAC does not appear to suffer from this issue, at least with the files/streams I tested.

@sassyn

sassyn commented May 25, 2016

Hi @DeusExLibris & @agiliator,

I was trying to use the aurora-websocket library written by @fabienbrooke, without any success.

The situation is as follows: I managed to run a Node.js websocket server and to stream live MP3 using FFmpeg via stdout.

My command looks something like this:

ffmpeg -y -i "rtmp://xxx.comt/app/streamname live=11" -vn -codec:a libmp3lame -b:a 128k -f mp3 -content_type audio/mpeg -reservoir 0 - | node aurora-ws-server.js

The webpage HTML looks like this:

    <script type="text/javascript" src="aurora.js"></script>
    <script type="text/javascript" src="aurora-websocket.js"></script>
    <script type="text/javascript" src="mp3.js"></script>
    <script type="text/javascript" src="flac.js"></script>
    <script type="text/javascript" src="aac.js"></script>
    <script>
        var player = AV.Player.fromWebSocket('ws://x.x.x.x:9080', 'x.mp3');
        player.play();
    </script>

Note: x.mp3 is not relevant here

When opening the web page I can hear the audio in Firefox, Chrome, and even Safari, but after a few seconds the audio stops. Refreshing the same page gives nothing (I have to restart the FFmpeg command).

Looking at the websocket frames, I see the client sending a pause command.

When playing the same stream from a file, it works well. I dug in more and found you are doing some magic to solve this, but I never managed to get your code working.

I'm not a Node.js programmer, but your websocketWorker.js looks broken. Is it only an example?

Could you describe in your WS_NOTES.md how to use the code?

FYI, the audiocogs.js and mp3.js files were taken from the main repo.

Would welcome feedback from you.

Thank you.

@DeusExLibris

@sassyn - in order to stream the audio live, you need to use my fork of the aurora.js code found here. As explained in the WS_NOTES.md, there are several changes that I needed to make to aurora in order to support live streaming. This is especially true for the mp3.js decoder as it expects multiple MPEG frames to be available when it initially starts decoding.

Also, if you modify the standard aurora.js using the websocket code from @fabienbrooke, this would only support the websocket code running in the main browser thread. You need to add the code for the wsWebWorker as well. This is also part of my fork of aurora.

I originally submitted this code as a pull request to the Aurora team, but closed it after reviewing the discussion that followed @fabienbrooke's PR 32.

@stas-zozulja

stas-zozulja commented Apr 11, 2017

Hi,

I'm trying to play an Opus live stream using aurora.js and the opus.js decoder. The stream comes from a Firebase location (in base64-encoded chunks). I believe the flow is pretty similar to the WebSocket example from @DeusExLibris, because I successfully adapted the server and client sides to play streams (files from the server) in base64 format: sending the data as strings, decoding to a Uint8Array in the browser, and playing the audio.

But when I tried to use a Firebase location as the stream source, I ran into this error:

error: A demuxer for this container was not found.

My code is the following:
    // Firebase db config
    var config = {
        /** config */
    };
    var listenLocation = 'some/location';

    // decoding to buffer function
    function _base64ToBuffer(stringData) {
        var raw = window.atob(stringData);
        var rawLength = raw.length;

        var array = new Uint8Array(rawLength);
        for (var i = 0; i < rawLength; i++) {
            array[i] = raw.charCodeAt(i) & 0xff;
        }
        return array;
    }

    firebase.initializeApp(config);
    firebase.auth().signInAnonymously().catch(function(error) {
        console.log({error: error.code, message: error.message});
    });

    var database = firebase.database();
    var MySource = AV.EventEmitter.extend({
        start: function() {
            var source = this;
            source.audioChannel = database.ref(listenLocation);
            source.audioChannel.on('value', function(data) {
                var buffer = new AV.Buffer(_base64ToBuffer(data.val()));
                source.emit('data', buffer);
            });
        },
        pause: function() {},
        reset: function() {}
    });

    // create a source, asset and player
    var source = new MySource();
    var asset = new AV.Asset(source);
    var player = new AV.Player(asset);

    player.play();

    player.on('error', function (e) {
        console.log('error: ' + e);
    });
I checked the data flow before audio decoding, and the data is the same. I also tried the @DeusExLibris fork of the library, but with the same result. Any help would be very much appreciated!

@ycii

ycii commented Oct 22, 2017

@agiliator According to the code, it seems to work, but the result is all zeros (len=1152). Any help would be appreciated~
