27 changes: 14 additions & 13 deletions broadcasting/08_RTSP_Architecture.md
@@ -1,17 +1,15 @@
# Architecture

Now let's discuss what the architecture of our solution will look like.
It will be a little different from the RTMP to HLS architecture.
The main component will be the pipeline, which will ingest RTP stream and convert it to HLS. Beyond that we will also need a Connection Manager, which will be responsible for establishing an RTSP connection with the server.
It will be a little different from the RTMP to HLS architecture. In most cases communication with a RTSP server is split into two phases:

![image](assets/rtsp_architecture.drawio.png)
- Negotiation of the stream parameters over RTSP.
- Receiving RTP stream(s) that the client and server have agreed upon.

When initializing, the pipeline will start a Connection Manager which starts an RTSP connection with the server. Once the connection is fully established, the pipeline will be notified.
Both of these phases are handled by RTSP Source. Let's take a closer look at how each of them plays out:

Let's take a closer look on each of those components:

## Connection Manager
The role of the connection manager is to initialize RTSP session and start playing the stream.
## Establishing the connection
When establishing a connection, the source will act as a connection manager, initializing the RTSP session and starting the stream playback.
It communicates with the server using the [RTSP requests](https://antmedia.io/rtsp-explained-what-is-rtsp-how-it-works/#RTSP_requests). In fact, we won't need many requests to start playing the stream - take a look at the desired message flow:

![image](assets/connection_manager.drawio.png)
@@ -20,18 +18,21 @@ First we want to get the details of the video we will be playing, by sending the `DESCRIBE` method.
Then we call the `SETUP` method, defining the transport protocol (RTP) and client port used for receiving the stream.
Now we can start the stream using the `PLAY` method.
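To make the message flow concrete, here is a minimal sketch (not part of the demo - the URL, CSeq values, session ID, and port are made up) of what these three requests look like as plain text on the wire:

```elixir
# Illustrative only: renders the raw text of the three RTSP requests
# used in the flow above. A real client also parses the responses.
defmodule RTSPRequests do
  def describe(url, cseq) do
    request("DESCRIBE", url, cseq, ["Accept: application/sdp"])
  end

  def setup(url, cseq, rtp_port) do
    request("SETUP", url, cseq, [
      "Transport: RTP/AVP;unicast;client_port=#{rtp_port}-#{rtp_port + 1}"
    ])
  end

  def play(url, cseq, session) do
    request("PLAY", url, cseq, ["Session: #{session}"])
  end

  # Every request shares the same shape: request line, CSeq, headers, blank line.
  defp request(method, url, cseq, headers) do
    Enum.join(["#{method} #{url} RTSP/1.0", "CSeq: #{cseq}" | headers], "\r\n") <> "\r\n\r\n"
  end
end

IO.puts(RTSPRequests.describe("rtsp://example.com/stream", 1))
IO.puts(RTSPRequests.setup("rtsp://example.com/stream/trackID=0", 2, 20_000))
IO.puts(RTSPRequests.play("rtsp://example.com/stream", 3, "12345678"))
```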

## Pipeline
## Receiving the stream

The pipeline consists of a couple elements, each of them performing a specific media processing task. You can definitely notice some similarities to the pipeline described in the [RTMP architecture](02_RTMP_Introduction.md). However, we will only be processing video so only the video processing elements will be necessary.
The source is a bin containing a few elements, each of them performing a specific media processing task. You can definitely notice some similarities to the pipeline described in the [RTMP architecture](03_RTMP_Architecture.md). However, we will only be processing video so only the video processing elements will be necessary.

![image](assets/rtsp_pipeline.drawio.png)

We have already used the, `H264 Parser`, `MP4 H264 Payloader`, `CMAF Muxer` and `HLS Sink` elements in the RTMP pipeline, take a look at the [RTMP to HLS architecture](03_RTMP_SystemArchitecture.md) chapter for details of the purpose of those elements.
We have already used the `H264 Parser` and `HLS Sink Bin` elements in the RTMP pipeline; take a look at the [RTMP to HLS architecture](03_RTMP_Architecture.md) chapter for details on the purpose of those elements.

Let us briefly describe the purpose of the other components:

### UDP Source
This element is quite simple - it receives UDP packets from the network and sends their payloads to the next element.
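Conceptually it does no more than the following standalone sketch, which sends a datagram over the loopback interface and receives its payload (the socket setup here is illustrative, not the demo's actual code):

```elixir
# Open a receiving socket on an OS-assigned port, in passive mode so we
# can pull datagrams explicitly with recv/3.
{:ok, receiver} = :gen_udp.open(0, [:binary, active: false])
{:ok, port} = :inet.port(receiver)

# Send one datagram to ourselves over loopback.
{:ok, sender} = :gen_udp.open(0, [:binary])
:ok = :gen_udp.send(sender, {127, 0, 0, 1}, port, "hello")

# The payload is what a UDP Source would forward to the next element.
{:ok, {_addr, _port, payload}} = :gen_udp.recv(receiver, 0, 1_000)
IO.inspect(payload)

:gen_udp.close(sender)
:gen_udp.close(receiver)
```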

### RTP SessionBin
RTP SessionBin is a Membrane's Bin, which is a Membrane's container used for creating reusable groups of elements. In our case the Bin handles the RTP session with the server, which has been set up by the Connection Manager.
### RTP Demuxer
This element is responsible for getting media packets out of the RTP packets they were transported in and routing them according to their [SSRC](https://datatracker.ietf.org/doc/html/rfc3550#section-3). In our case we only receive a single video stream, so only one output will be used.
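To make the SSRC-based routing concrete, here is a hypothetical parser (not the demuxer's real implementation) that reads the fixed 12-byte RTP header with Elixir binary pattern matching and exposes the SSRC alongside the payload:

```elixir
defmodule RTPHeader do
  # Minimal fixed-header parse: requires version 2 and ignores CSRC lists,
  # header extensions, and padding, which a real demuxer must handle.
  def parse(
        <<2::2, _padding::1, _extension::1, _csrc_count::4, _marker::1, pt::7, seq::16, ts::32,
          ssrc::32, payload::binary>>
      ) do
    {:ok, %{payload_type: pt, sequence_number: seq, timestamp: ts, ssrc: ssrc, payload: payload}}
  end

  def parse(_other), do: {:error, :malformed}
end

# A fabricated packet: payload type 96 (dynamic), SSRC 0xDEADBEEF.
packet = <<2::2, 0::1, 0::1, 0::4, 0::1, 96::7, 7::16, 1000::32, 0xDEADBEEF::32, "nal">>
IO.inspect(RTPHeader.parse(packet))
```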

### RTP H264 Depayloader
When transporting H264 streams over RTP they need to be split into chunks and have some additional metadata included. This element's role is to unpack the RTP packets it receives from the Demuxer into a pure H264 stream that can be processed further.
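The simplest depayloading case can be sketched as follows. This is an illustration only: it handles single-NAL-unit packets (payload types 1-23) by prepending an Annex-B start code, while a real depayloader must also reassemble fragmented (FU-A) and aggregated (STAP-A) packets:

```elixir
defmodule SimpleH264Depayloader do
  @annexb_prefix <<0, 0, 0, 1>>

  # NAL unit types 1..23 mean the RTP payload is one complete NAL unit,
  # so producing an Annex-B stream is just a matter of adding the start code.
  def depayload(<<_f::1, _nri::2, type::5, _rest::binary>> = payload) when type in 1..23 do
    {:ok, @annexb_prefix <> payload}
  end

  # Types 24..29 (STAP/MTAP/FU) need reassembly logic that is omitted here.
  def depayload(_other), do: {:error, :unsupported_packetization}
end

# 0x65 is a NAL header with type 5 (IDR slice); the rest is fabricated.
IO.inspect(SimpleH264Depayloader.depayload(<<0x65, 0x88, 0x84>>))
```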
@@ -1,53 +1,43 @@
In this tutorial we won't explain how to implement the solution from the ground up - instead, we will run the existing code from [Membrane demos](https://github.com/membraneframework/membrane_demo).

To run the RTSP to HLS converter first clone the demos repo:
```console
```bash
git clone https://github.com/membraneframework/membrane_demo.git
```

```console
```bash
cd membrane_demo/rtsp_to_hls
```

Install the dependencies
```console
```bash
mix deps.get
```

Make sure you have these libraries installed as well:
- gcc
- libc-dev
- ffmpeg

On Ubuntu:
```console
apt-get install gcc libc-dev ffmpeg
```

Take a look inside the `lib/application.ex` file. It's responsible for starting the pipeline.
We need to give a few arguments to the pipeline:
```elixir
@rtsp_stream_url "rtsp://rtsp.membrane.work:554/testsrc.264"
@output_path "hls_output"
@rtp_port 20000
rtsp_stream_url = "rtsp://localhost:30001"
output_path = "hls_output"
rtp_port = 20000
```

The `@output_path` attribute defines the storage directory for hls files and the `@rtp_port` defines on which port we will be expecting the rtp stream, once the RTSP connection is established.
The `output_path` variable defines the storage directory for the HLS files, and `rtp_port` defines the port on which we will expect the RTP stream once the RTSP connection is established.

The `@rtsp_stream_url` attribute contains the address of the stream, which we will be converting. It is a sample stream prepared for the purpose of the demo.
The `rtsp_stream_url` variable contains the address of the stream which we will be converting. If you want to receive a stream from an accessible RTSP server, you can pass its URL here. In this demo we'll run our own simple server:

```bash
mix run server.exs
```

Now we can start the application:
```console
```bash
mix run --no-halt
```

The pipeline will start playing, after a couple of seconds the HLS files should appear in the `@output_path` directory. In order to play the stream we need to first serve them. We can do it using simple python server.

```console
python3 -m http.server 8000
```
The pipeline will start playing; after a couple of seconds the HLS files should appear in the `output_path` directory.

Then we can play the stream using `ffplay` (part of [ffmpeg](https://ffmpeg.org/)) by pointing it at the location of the manifest file:
```console
```bash
ffplay http://YOUR_MACHINE_IP:8000/rtsp_to_hls/hls_output/index.m3u8
```
71 changes: 0 additions & 71 deletions broadcasting/10_ConnectionManager.md

This file was deleted.

57 changes: 57 additions & 0 deletions broadcasting/10_RTSP_Pipeline.md
@@ -0,0 +1,57 @@
As explained in the [Architecture chapter](08_RTSP_Architecture.md), the pipeline will consist of an RTSP Source and an HLS Sink.

The initial pipeline will consist of the `RTSP Source`, which will start establishing the connection with the RTSP server, and the `HLS Sink Bin`. For now we won't connect these elements in any way, since we don't have information about what tracks we'll receive from the RTSP server we're connecting to.

##### lib/pipeline.ex
```elixir
@impl true
def handle_init(_context, options) do
spec = [
child(:source, %Membrane.RTSP.Source{
transport: {:udp, options.port, options.port + 5},
allowed_media_types: [:video, :audio],
stream_uri: options.stream_url,
on_connection_closed: :send_eos
}),
child(:hls, %Membrane.HTTPAdaptiveStream.SinkBin{
target_window_duration: Membrane.Time.seconds(120),
manifest_module: Membrane.HTTPAdaptiveStream.HLS,
storage: %Membrane.HTTPAdaptiveStream.Storages.FileStorage{
directory: options.output_path
}
})
]

{[spec: spec], %{parent_pid: options.parent_pid}}
end
```

Once we receive the `{:set_up_tracks, tracks}` notification from the source, we know which tracks have been set up during connection establishment and what we should expect. First we filter these tracks so that we have at most one video track and one audio track. Then we can create specs that will connect the output pads of the source with the input pads of the sink appropriately - audio to audio and video to video.

##### lib/pipeline.ex
```elixir
@impl true
def handle_child_notification({:set_up_tracks, tracks}, :source, _ctx, state) do
track_specs =
Enum.uniq_by(tracks, & &1.type)
|> Enum.filter(&(&1.type in [:audio, :video]))
|> Enum.map(fn track ->
encoding =
case track do
%{type: :audio} -> :AAC
%{type: :video} -> :H264
end

get_child(:source)
|> via_out(Pad.ref(:output, track.control_path))
|> via_in(:input,
options: [encoding: encoding, segment_duration: Membrane.Time.seconds(4)]
)
|> get_child(:hls)
end)

{[spec: track_specs], state}
end
```

By doing this we are prepared to receive the streams when a `PLAY` request is eventually sent by the source and the server starts streaming.
89 changes: 0 additions & 89 deletions broadcasting/11_RTSP_Pipeline.md

This file was deleted.

File renamed without changes.
File renamed without changes.
Binary file modified broadcasting/assets/rtsp_pipeline.drawio.png
13 changes: 6 additions & 7 deletions broadcasting/index.md
@@ -9,14 +9,13 @@ part: 7
| ------ | -------------------------------- | -------------------------------- |
| 1 | General introduction | 01_General_Introduction.md |
| 2 | RTMP introduction | 02_RTMP_Introduction.md |
| 3 | RTMP to HLS system architecture | 03_RTMP_SystemArchitecture.md |
| 4 | Running the RTMP to HLS demo | 04_RTMP_RunningTheDemo.md |
| 3 | RTMP to HLS system architecture | 03_RTMP_Architecture.md |
| 4 | Running the RTMP to HLS demo | 04_RTMP_Running_The_Demo.md |
| 5 | RTMP to HLS - pipeline | 05_RTMP_Pipeline.md |
| 6 | Web player | 06_WebPlayer.md |
| 7 | RTSP to HLS introduction | 07_RTSP_Introduction.md |
| 8 | RTSP to HLS system architecture | 08_RTSP_Architecture.md |
| 9 | Running the RTSP to HLS demo | 09_RTSP_RunningDemo.md |
| 10 | Connection manager | 10_ConnectionManager.md |
| 11 | RTSP to HLS - pipeline | 11_RTSP_Pipeline.md |
| 12 | Summary | 12_Summary.md |
| 13 | (Suplement) H264 codec | 13_H264_codec.md |
| 9 | Running the RTSP to HLS demo | 09_RTSP_Running_The_Demo.md |
| 10 | RTSP to HLS - pipeline | 10_RTSP_Pipeline.md |
| 11 | Summary | 11_Summary.md |
| 12     | (Supplement) H264 codec          | 12_H264_codec.md |