Network stats function #17

Merged
merged 4 commits into from
Oct 4, 2018
101 changes: 80 additions & 21 deletions README.md
@@ -326,10 +326,68 @@ framesMonitor.on('error', err => {
# Video Quality Info

`video-quality-tools` ships with functions that help determine live stream info based on the set of frames
collected from `FramesMonitor`:
- `processFrames.networkStats`
- `processFrames.encoderStats`


## `processFrames.networkStats(frames, durationInMsec)`

Receives an array of `frames` collected over a given time interval `durationInMsec` (in milliseconds).

This method doesn't analyze the GOP structure and doesn't depend on GOP completeness between runs. It reports only
the frame rate and bitrate of the received audio and video streams. In contrast to `processFrames.encoderStats`,
this method lets you monitor the quality of the network link between the sender and the receiver (for example, an RTMP server).

> Remember that this module must run close to the receiver server (the one under analysis). If the link
between the receiver and this module affects the delivery of RTMP packets, the module reports incorrect values,
so run it as close to the receiver as possible.

```javascript
const {processFrames} = require('video-quality-tools');

const INTERVAL_TO_ANALYZE_FRAMES = 5000; // in milliseconds

let frames = [];

framesMonitor.on('frame', frame => {
frames.push(frame);
});

setInterval(() => {
try {
const info = processFrames.networkStats(frames, INTERVAL_TO_ANALYZE_FRAMES);

console.log(info);

frames = [];
} catch(err) {
// only if arguments are invalid
console.log(err);
process.exit(1);
}
}, INTERVAL_TO_ANALYZE_FRAMES);
```

The example above produces output like this:

```
{
videoFrameRate: 29,
audioFrameRate: 50,
videoBitrate: 1403.5421875,
audioBitrate: 39.846875
}
```
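These figures come from straightforward arithmetic over the collected frames. Below is a minimal sketch of that arithmetic (the helper names are illustrative, and the byte-to-kilobit conversion is an assumption — the library's `toKbs` may define the conversion differently); it assumes each frame carries the `media_type` and `pkt_size` fields that ffprobe reports:

```javascript
// Sketch only: sum packet sizes per media type over the interval,
// then turn bytes/sec into kbit/s.
function sumPktSize(frames) {
    return frames.reduce((total, frame) => total + frame.pkt_size, 0);
}

function toKbs(bytesPerSec) {
    return (bytesPerSec * 8) / 1024; // assumed conversion
}

function sketchNetworkStats(frames, durationInMsec) {
    const durationInSec = durationInMsec / 1000;
    const videoFrames = frames.filter(frame => frame.media_type === 'video');
    const audioFrames = frames.filter(frame => frame.media_type === 'audio');

    return {
        videoFrameRate: videoFrames.length / durationInSec,
        audioFrameRate: audioFrames.length / durationInSec,
        videoBitrate: toKbs(sumPktSize(videoFrames) / durationInSec),
        audioBitrate: toKbs(sumPktSize(audioFrames) / durationInSec)
    };
}
```

Because the method divides by the wall-clock interval rather than by frame timestamps, an interval in which delivery stalls shows up directly as a drop in the reported frame rate and bitrate.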

See [examples/networkStats.js](examples/networkStats.js) for complete example code.


## `processFrames.encoderStats(frames)`

It relies on the [GOP structure](https://en.wikipedia.org/wiki/Group_of_pictures) of the stream.

The following example shows how to gather frames and pass them to the function that analyzes encoder statistics.

```javascript
const {processFrames} = require('video-quality-tools');
@@ -346,7 +404,7 @@ framesMonitor.on('frame', frame => {
}

try {
const info = processFrames.encoderStats(frames);
frames = info.remainedFrames;

console.log(info.payload);
@@ -381,29 +439,30 @@ There is an output for the example above:
}
```

In the given example, frames are collected in the `frames` array, and `processFrames.encoderStats` is called for
sets of 300 frames (`AMOUNT_OF_FRAMES_TO_GATHER`). The function searches for
[key frames](https://en.wikipedia.org/wiki/Video_compression_picture_types#Intra-coded_(I)_frames/slices_(key_frames))
and measures the distance between them.
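The key-frame search can be sketched as follows — an illustrative reimplementation rather than the library's internal `identifyGops`, assuming each frame exposes the `key_frame` flag that ffprobe reports (1 for a key frame, 0 otherwise):

```javascript
// A GOP starts at every key frame and runs until the next one; frames
// before the first key frame are dropped, and the trailing (possibly
// unfinished) GOP is handed back to the caller.
function sketchIdentifyGops(videoFrames) {
    const gops = [];
    let currentGop = null;

    for (const frame of videoFrames) {
        if (frame.key_frame === 1) {
            if (currentGop !== null) {
                gops.push(currentGop); // previous GOP is complete
            }
            currentGop = {frames: [frame]};
        } else if (currentGop !== null) {
            currentGop.frames.push(frame);
        }
    }

    const remainedFrames = currentGop !== null ? currentGop.frames : [];

    return {gops, remainedFrames};
}
```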

It's impossible to detect the GOP structure for a set of frames with only one key frame, so `processFrames.encoderStats`
returns all passed frames back as an array in the `remainedFrames` field.

If there are more than 2 key frames, `processFrames.encoderStats` uses the full GOPs to track fps and bitrate and
returns the frames of the last, unfinished GOP back. It's important to keep the `remainedFrames` output and push
new frames into the `remainedFrames` array as they arrive.

For the full GOPs, `processFrames.encoderStats` calculates min/max/mean values of the bitrate (in kbit/s), framerate
and GOP duration (in seconds) and returns them in the `payload` field. The result of checking whether all collected
GOPs have the same structure is returned in the `areAllGopsIdentical` field. The `width`, `height`
and `displayAspectRatio` fields are taken from the first frame of the first collected GOP. The `hasAudioStream` value
reflects the presence of audio frames.

To calculate the display aspect ratio, the `processFrames::calculateDisplayAspectRatio` method matches the
width-to-height ratio of the frames against a list of
[video aspect ratio standards](https://en.wikipedia.org/wiki/Aspect_ratio_(image)) with a small approximation error.
If the ratio can't be found among the known standards, even within the tolerance, the
[GCD algorithm](https://en.wikipedia.org/wiki/Greatest_common_divisor) is used to simplify the returned value.
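The lookup-then-fallback approach can be sketched like this (the table is trimmed to a few illustrative entries, and the 0.01 tolerance mirrors the `AR_CALCULATION_PRECISION` constant in the source; the library's full list of standards may differ):

```javascript
// Known display-aspect-ratio standards (illustrative subset).
const KNOWN_RATIOS = [
    {coefficient: 1.78, ratio: '16:9'},
    {coefficient: 2,    ratio: '18:9'},
    {coefficient: 2.33, ratio: '21:9'}
];
const PRECISION = 0.01;

function findGcd(a, b) {
    return b === 0 ? a : findGcd(b, a % b);
}

function sketchDisplayAspectRatio(width, height) {
    const coefficient = width / height;

    // try to match a known standard within the tolerance first
    for (const standard of KNOWN_RATIOS) {
        if (Math.abs(standard.coefficient - coefficient) <= PRECISION) {
            return standard.ratio;
        }
    }

    // no standard matched, even in the delta neighbourhood: reduce by GCD
    const gcd = findGcd(width, height);
    return `${width / gcd}:${height / gcd}`;
}
```

A 1920×1080 frame snaps to the `16:9` standard even though 1920/1080 isn't exactly 1.78, while an unlisted ratio falls through to the reduced fraction.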

`processFrames.encoderStats` may throw `Errors.GopNotFoundError`.

You can also extend the metrics; check `src/processFrames.js` for the common helper functions.
58 changes: 58 additions & 0 deletions examples/realtimeStats.js
@@ -0,0 +1,58 @@
const {FramesMonitor, processFrames} = require('../index');
// or
// const {processFrames} = require('video-quality-tools');
// if you use it outside this repo

const INTERVAL_TO_ANALYZE_FRAMES = 5000; // in milliseconds
const STREAM_URI = 'rtmp://host:port/path';

const framesMonitorOptions = {
ffprobePath: '/usr/local/bin/ffprobe',
timeoutInSec: 5,
bufferMaxLengthInBytes: 100000,
errorLevel: 'error',
exitProcessGuardTimeoutInMs: 1000
};

const framesMonitor = new FramesMonitor(framesMonitorOptions, STREAM_URI);

let frames = [];

function firstVideoFrameListener(frame) {
if (frame.media_type === 'video') {
framesMonitor.removeListener('frame', firstVideoFrameListener);
framesMonitor.on('frame', frameListener);
startAnalysis();
}
}

function frameListener(frame) {
frames.push(frame);
}

function startAnalysis() {
setInterval(() => {
try {
const info = processFrames.networkStats(frames, INTERVAL_TO_ANALYZE_FRAMES);

console.log(info);

frames = [];
} catch (err) {
// only if arguments are invalid
console.log(err);
process.exit(1);
}
}, INTERVAL_TO_ANALYZE_FRAMES);
}

// Wait for the first video frame before starting processing. This avoids incorrect
// stats from the first run of the networkStats function after the first interval.
framesMonitor.on('frame', firstVideoFrameListener);

framesMonitor.on('exit', reason => {
console.log('EXIT', reason);
process.exit();
});

framesMonitor.listen();
94 changes: 62 additions & 32 deletions src/processFrames.js
@@ -4,6 +4,8 @@ const _ = require('lodash');

const Errors = require('./Errors');

const MSECS_IN_SEC = 1000;

const AR_CALCULATION_PRECISION = 0.01;

const SQUARE_AR_COEFFICIENT = 1;
@@ -21,31 +23,31 @@ const UNIVISIUM_AR = '18:9';
const WIDESCREEN_AR_COEFFICIENT = 2.33;
const WIDESCREEN_AR = '21:9';

function encoderStats(frames) {
if (!Array.isArray(frames)) {
throw new TypeError('Method accepts only an array of frames');
}

const videoFrames = filterVideoFrames(frames);
const {gops, remainedFrames} = identifyGops(videoFrames);

if (_.isEmpty(gops)) {
throw new Errors.GopNotFoundError('Can not find any gop for these frames', {frames});
}

let areAllGopsIdentical = true;
const hasAudioStream = hasAudioFrames(frames);
const baseGopSize = gops[0].frames.length;
const bitrates = [];
const fpsList = [];
const gopDurations = [];

gops.forEach(gop => {
areAllGopsIdentical = areAllGopsIdentical && baseGopSize === gop.frames.length;
const calculatedPktSize = calculatePktSize(gop.frames);
const gopDuration = gopDurationInSec(gop);

const gopBitrate = toKbs(calculatedPktSize / gopDuration);
bitrates.push(gopBitrate);

const gopFps = gop.frames.length / gopDuration;
@@ -91,20 +93,27 @@ function processFrames(frames) {
};
}

function networkStats(frames, durationInMsec) {
if (!Array.isArray(frames)) {
throw new TypeError('Method accepts only an array of frames');
}

if (!_.isInteger(durationInMsec) || durationInMsec <= 0) {
throw new TypeError('Method accepts only a positive integer as duration');
}

const videoFrames = filterVideoFrames(frames);
const audioFrames = filterAudioFrames(frames);

const durationInSec = durationInMsec / MSECS_IN_SEC;

return {
videoFrameRate: videoFrames.length / durationInSec,
audioFrameRate: audioFrames.length / durationInSec,
videoBitrate: toKbs(calculatePktSize(videoFrames) / durationInSec),
audioBitrate: toKbs(calculatePktSize(audioFrames) / durationInSec),
};
}

function identifyGops(frames) {
const GOP_TEMPLATE = {
@@ -160,10 +169,10 @@ function calculateBitrate(gops) {
let bitrates = [];

gops.forEach(gop => {
const calculatedPktSize = calculatePktSize(gop.frames);
const durationInSec = gopDurationInSec(gop);

const gopBitrate = toKbs(calculatedPktSize / durationInSec);

bitrates.push(gopBitrate);
});
@@ -175,8 +184,8 @@ function accumulatePktSize(gop) {
};
}

function calculatePktSize(frames) {
const accumulatedPktSize = frames.reduce((accumulator, frame) => {
if (!_.isNumber(frame.pkt_size)) {
throw new Errors.FrameInvalidData(
`frame's pkt_size field has invalid type ${Object.prototype.toString.call(frame.pkt_size)}`,
@@ -237,8 +246,8 @@ function calculateFps(gops) {
let fps = [];

gops.forEach(gop => {
const durationInSec = gopDurationInSec(gop);
const gopFps = gop.frames.length / durationInSec;

fps.push(gopFps);
});
@@ -254,9 +263,9 @@ function calculateGopDuration(gops) {
const gopsDurations = [];

gops.forEach(gop => {
const durationInSec = gopDurationInSec(gop);

gopsDurations.push(durationInSec);
});

return {
@@ -310,6 +319,10 @@ function filterVideoFrames(frames) {
return frames.filter(frame => frame.media_type === 'video');
}

function filterAudioFrames(frames) {
return frames.filter(frame => frame.media_type === 'audio');
}

function hasAudioFrames(frames) {
return frames.some(frame => frame.media_type === 'audio');
}
@@ -329,3 +342,20 @@ function findGcd(a, b) {

return findGcd(b, a % b);
}

module.exports = {
encoderStats,
networkStats,
identifyGops,
calculateBitrate,
calculateFps,
calculateGopDuration,
filterVideoFrames,
hasAudioFrames,
gopDurationInSec,
toKbs,
calculatePktSize,
areAllGopsIdentical,
findGcd,
calculateDisplayAspectRatio
};
8 changes: 4 additions & 4 deletions tests/Functional/processFrames.test.js
@@ -86,14 +86,14 @@ describe('processFrames functional tests', () => {
{key_frame: 0, pict_type: 'P'}
];

const {payload, remainedFrames} = processFrames.encoderStats(frames);

assert.deepEqual(payload, {
areAllGopsIdentical: true,
fps: {mean: expectedMeanFps, min: expectedMinFps, max: expectedMaxFps},
bitrate: {mean: expectedMeanBitrate, min: expectedMinBitrate, max: expectedMaxBitrate},
gopDuration: {mean: expectedMeanGop, min: expectedMinGop, max: expectedMaxGop},
displayAspectRatio: expectedAspectRatio,
height: expectedHeight,
width: expectedWidth,
hasAudioStream: expectAudio
@@ -161,14 +161,14 @@ describe('processFrames functional tests', () => {
{key_frame: 0, pict_type: 'P'}
];

const {payload, remainedFrames} = processFrames.encoderStats(frames);

assert.deepEqual(payload, {
areAllGopsIdentical: false,
fps: {mean: expectedMeanFps, min: expectedMinFps, max: expectedMaxFps},
bitrate: {mean: expectedMeanBitrate, min: expectedMinBitrate, max: expectedMaxBitrate},
gopDuration: {mean: expectedMeanGop, min: expectedMinGop, max: expectedMaxGop},
displayAspectRatio: expectedAspectRatio,
width: expectedWidth,
height: expectedHeight,
hasAudioStream: expectAudio