Small fine tunings for release.
vbence committed Sep 28, 2014
1 parent 5007c6c commit e13cc8d
Showing 4 changed files with 80 additions and 42 deletions.
90 changes: 54 additions & 36 deletions README.md
@@ -20,75 +20,59 @@ Putting the stream URL in a `<video>` tag should work in all WebM-capable platforms.

### <a name="format-h264"></a> H.264

**H.264 with AAC** is the second addition. Current video publishing solutions (like the [Open Broadcaster Software](https://obsproject.com/)) were created to be compatible with Adobe's *Flash Media Server*. To take advantage of these tools it was necessary for Stream-m to *"speak Flash"*; therefore a minimal **RTMP** server implementation was written to handle incoming streams.

Current RTMP support is limited to receiving streams from the publishing software (like *OBS*). Consuming (playing) streams through RTMP is not supported at the moment.

Playing H.264 live streams has been tested and works in Google Chrome so far (including the Android version). Directly putting the stream's URL into a `<video>` tag won't work with this format. It will, however, work through the [MediaSource API](http://www.w3.org/TR/media-source/) (currently used by YouTube's HTML5 video player).

A demo player is included, which will play the stream with the name `first` (the default name in the sample configuration file). The demo can be accessed (by default) on the following URL:


http://localhost:8080/player-demo/player.html


## RUNNING THE SERVER

java -jar stream-m.jar <configfile>

Before running the server you should edit the sample config file (changing the password and choosing a stream name). You will end up with something like:

java -jar stream-m.jar server.properties
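
For reference, a minimal config might look like the sketch below. The key names follow the bundled `server.properties.sample`; `first` and `secret` are placeholders, and the exact stream-entry format is an assumption based on that file:

    # minimal sketch; "first" and "secret" are placeholders
    # (stream entries assumed to follow the sample file's true/1 convention)
    streams.first = true
    streams.first.password = secret

    # maximum number of simultaneous clients for this stream
    streams.first.limit = 100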


## HTTP INTERFACE

Streams are identified by a unique name. The program refers to this as StreamID or stream name. Note that the `<` and `>` characters below just indicate substitution; they must not be included in the resulting URL.

> **Note:** Many parts of this section are only relevant for streams using the WebM format. For peculiarities of H.264 streams please see the [specific section](#format-h264) above.

The default port for the built-in HTTP server is **8080**.

The name and password of each stream are defined in the config file. A stream must be sent with the POST or PUT method to the following URL to start a broadcast:

/publish/<streamname>?password=<streampass>

A stream can be accessed (watched) at the following URL. You may want to insert this URL into an HTML5 `<video>` tag:

/consume/<streamname>
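
For example, a stream named `first` on the default port could be embedded like this (host, port and stream name are the sample defaults, not fixed values):

    <video src="http://localhost:8080/consume/first" autoplay controls></video>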

To support playback with the *MediaSource API* (requesting fragments through *XMLHttpRequest*), additional GET parameters can influence playback. Currently supported:

name             | default | effect
-----------------|---------|--------------------------
sendHeader       | true    | Sets whether to send the stream header.
singleFragment   | false   | If enabled, only a single fragment is sent.
fragmentSequence | *n/a*   | Output only starts when a fragment with the given sequence number is available.
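
As a sketch of how these parameters combine with the *MediaSource API*: the stream name `first`, the `<video>` element lookup and the WebM codec string below are assumptions for illustration, not part of the server's interface.

    // Hypothetical fragment-by-fragment playback loop.
    var video = document.querySelector('video');
    var ms = new MediaSource();
    video.src = URL.createObjectURL(ms);

    ms.addEventListener('sourceopen', function () {
        var sb = ms.addSourceBuffer('video/webm; codecs="vp8,vorbis"');
        // First request includes the stream header; later ones skip it.
        var url = '/consume/first?singleFragment=true';

        sb.addEventListener('updateend', load); // fetch the next fragment
        load();

        function load() {
            var xhr = new XMLHttpRequest();
            xhr.open('GET', url);
            xhr.responseType = 'arraybuffer';
            xhr.onload = function () {
                sb.appendBuffer(new Uint8Array(xhr.response));
                url = '/consume/first?sendHeader=false&singleFragment=true';
            };
            xhr.send();
        }
    });

A real player would track `fragmentSequence` to avoid appending the same fragment twice; the bundled demo player shows the complete approach.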


A snapshot (the first key frame of the last completed fragment) can be downloaded in WebP format at the following URL:

/snapshot/<streamname>
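
For example, with the sample defaults (host and stream name assumed):

    curl -o snapshot.webp http://localhost:8080/snapshot/first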


Real-time information can be acquired through an AJAX-based console (enter the name and password of the chosen stream on the UI):

/console/client.html


## RTMP INTERFACE
@@ -99,7 +83,24 @@ The default port for the built-in RTMP server is **8081**.

The endpoint of the publishing software needs to have the following format:

/publish/<streamname>?<streampass>
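
Publishing clients that split the target into a server URL and a stream key (like *OBS*) would presumably be configured as below; this mapping is an assumption based on common RTMP client behavior, not something the server documents:

    Server:     rtmp://example.com:8081/publish
    Stream key: first?secret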


## FRAGMENTS

Live streams consist of fragments (self-contained units of frames which do not reference any frame outside the fragment). A fragment always starts with a key frame (intra frame); therefore it is important that the encoder inserts key frames regularly, as this determines the size of the fragments.

The ideal fragment size is around 200 kBytes (or 1600 kbits). The key frame interval can be calculated with this formula:

1600k / <bitrate> * <framerate>

*e.g.* if you are publishing a 500 kbit/s stream at 16 fps, then:
`1600 / 500 * 16 = 51.2`
(or `1600000 / 500000 * 16 = 51.2`), so every 52nd video frame should be a key frame.

The server splits fragments when it seems necessary. A soft minimum for fragment size is currently 100k (no new fragment is started if a new key frame arrives within 100 kBytes of the previous key frame).

The hard maximum for a fragment is 2048 kBytes. This is twice the size needed to hold 2 seconds of a 4096 kbit/sec HD stream.


## PUBLISHING WEBM
@@ -134,12 +135,29 @@ So we are going to use them both together: VLC will access the audio, compress …

### On Linux Systems

*FFmpeg* can be used with the following command line (see assumptions above):

ffmpeg -f video4linux2 -s 320x240 -r 16 -i /dev/video0 -f oss -i /dev/dsp \
    -g 52 -acodec libvorbis -ab 64k -vcodec libvpx -vb 448k \
    -f webm http://example.com:8080/publish/first?password=secret


## PUBLISHING H.264

### On Windows Systems

For the peculiarities of FFmpeg on Windows please see the section above (about WebM). I recommend using OBS or a similar product over FFmpeg.

### On Linux Systems

The following command will stream media from the system's webcam and microphone. Note that the *High* profile with *level 4.0* is used. The *Main* profile with *level 3.1* has also been tested and works.

ffmpeg -f video4linux2 -s 320x240 -r 16 -i /dev/video0 -f oss -i /dev/dsp \
-g 52 -strict experimental -acodec aac -ab 56k -vcodec libx264 -vb 452k \
-profile:v high -level 40 -r 16 \
-f flv "rtmp://example.com:8081/publish/first?secret"
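
(The `-strict experimental` flag is needed because FFmpeg's native `aac` encoder was still marked experimental at the time; newer FFmpeg builds no longer require it.)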


## TESTING THE INSTALLATION

You can test the installation with the downloadable sample video, *univac.webm*. The file is encoded at an average of 512 kbps. *FFmpeg* can send the stream in real time (at the real bit rate) to the server with the following command:
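
The command itself is collapsed in this diff view; a plausible reconstruction, following the publish URL convention above, would be the sketch below (`-re` makes FFmpeg read the file at its native rate; the exact flags are an assumption):

    ffmpeg -re -i univac.webm -acodec copy -vcodec copy \
        -f webm http://localhost:8080/publish/first?password=secret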
@@ -149,4 +167,4 @@

You can watch it by pointing your (WebM-capable) browser at the following address:

http://localhost:8080/consume/first
2 changes: 1 addition & 1 deletion player-demo/player.html
@@ -1,6 +1,6 @@
<html>
<body>
<h1>Stream-m JavaScript Player Demo</h1>
<p>This page will play the stream called 'first' using the MediaSource API.</p>
<div id="playerContainer"></div>

12 changes: 9 additions & 3 deletions server.properties.sample
@@ -32,15 +32,15 @@ streams.first.limit = 100

# static.<resourcename>
# if the value is "true" or "1" then the resource will be created
#static.index = true

# static.<resourcename>.file
# determines the path and name of the static file to share
#static.index.file = index.html

# static.<resourcename>.url
# determines the virtual (web) path on which this resource will be served
#static.index.url = /index.html


#
@@ -60,3 +60,9 @@ zip.console.file = console.zip
# ZIP archive will be exposed
zip.console.url = /console


# Player Demo
# Access it on the URL: /player-demo/player.html
zip.player-demo = true
zip.player-demo.file = player-demo.zip
zip.player-demo.url = /player-demo
18 changes: 16 additions & 2 deletions src/org/czentral/incubator/streamm/PublisherAppInstance.java
@@ -88,7 +88,7 @@ public void onConnect(ApplicationContext context) {

@Override
public void invokeCommand(MessageInfo mi, RTMPCommand command) {
// System.err.println("cmd: " + command.getName() + " " + command.getArguments());

if (command.getName().equals("connect")) {
cmdConnect(mi, command);
@@ -97,7 +97,8 @@ public void invokeCommand(MessageInfo mi, RTMPCommand command) {
sendSuccess(mi, command);

} else if (command.getName().equals("FCUnpublish")) {
//sendSuccess(mi, command);
context.terminate();

} else if (command.getName().equals("createStream")) {
cmdCreateStream(mi, command);
@@ -108,6 +109,9 @@ public void invokeCommand(MessageInfo mi, RTMPCommand command) {
} else if (command.getName().equals("publish")) {
cmdPublish(mi, command);

} else if (command.getName().equals("deleteStream")) {
cmdDeleteStream(mi, command);

} else {
System.err.println("Unknown command: " + command.getName());
sendError(mi, command, null, null);
@@ -137,6 +141,16 @@ protected void sendSuccess(MessageInfo mi, RTMPCommand command) {
context.writeCommand(mi.chunkStreamID, response);
}

protected void cmdDeleteStream(MessageInfo mi, RTMPCommand command) {
    // acknowledge the deleteStream command with an empty _result
    AMFPacket response = new AMFPacket();
    response.writeString("_result");
    response.writeNumber(command.getTxid());
    response.writeMixed(null);
    response.writeMixed(null);
    context.writeCommand(mi.chunkStreamID, response);
    // the publisher has finished with this stream; close the connection
    context.terminate();
}

protected void cmdCreateStream(MessageInfo mi, RTMPCommand command) {
AMFPacket response = new AMFPacket();
response.writeString("_result");
