Reproduction package for Section 4.3 "Horizontal scaling latency analysis"

Reproduction package for Section 4.3 of the PhD thesis "Scalability and Quality of Experience of WebRTC media servers for Large-Scale, Low-Latency Streaming". This description contains detailed steps to reproduce the results in that section.

The complete reproduction package can be found on Zenodo (DOI) and contains the following files:

.
├── experiment-results.zip          # Raw data from the thesis experiments, ready for analysis, including recorded videos, stats and OCR analysis (~32.5 GB uncompressed)
├── mediasoup-LLLS-experiments.zip  # Code implementing the experiment architecture, scripts for running the experiments and analysis tools
├── mediafiles.zip                  # Media files used for the client (~5 GB uncompressed)
└── README.md                       # This file

Index

  • Reproducing the experiments
    • Step 1. Setup
    • Step 2. Running the experiments
    • Step 3. Analysis

Reproducing the experiments

Step 1. Setup

The experiments run on AWS EC2 instances, so an AWS account is needed, with permissions to create and terminate EC2 instances and to create AMIs.

The OS used for the experimentation was Ubuntu 22.04.5 with the following software installed:

  • Node.js 20
  • pnpm
  • jq
  • AWS CLI
  • Xvfb, x11vnc, and fvwm
  • ffmpeg
  • Chrome

To install all prerequisite software, you can use the following commands:

# Install Node.js 20, jq, Xvfb, x11vnc, fvwm and ffmpeg
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt-get update
sudo apt-get install -y nodejs jq xvfb x11vnc fvwm ffmpeg

# Install pnpm
corepack enable pnpm

# Install Chrome
wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
sudo apt install -y ./google-chrome-stable_current_amd64.deb

# Install AWS CLI
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
rm awscliv2.zip
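
Once installed, the AWS CLI needs credentials for the account described above. A minimal sketch (values are entered interactively):

aws configure                  # prompts for access key, secret key, default region and output format
aws sts get-caller-identity    # sanity check: prints the account and identity the CLI will use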

Unzip the mediasoup-LLLS-experiments.zip file.

The client expects a video file and an audio file in the client/media directory, with the following names and formats:

  • fakevideo.y4m
  • fakeaudio.wav

Both files can be found in the reproduction package, in mediafiles.zip (see the sketch below).
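
A minimal way to put the files in place, assuming both files sit at the root of the archive (the internal layout of mediafiles.zip is an assumption here):

# Extract the media files into the directory the client expects
unzip mediafiles.zip -d client/media
ls client/media    # should list fakevideo.y4m and fakeaudio.wav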

You also need to create an AWS EC2 AMI with the master and worker ready to run. First, you will need to create at least the following resources (an illustrative AWS CLI sketch follows the list):

  • A key pair for SSH access to the instances (save the name and file location for the next step)
  • An IAM instance profile (save the ARN for the next step)
  • A subnet in a VPC for the workers to be in the same network
  • The following security groups with the needed rules:
    • Worker security group:
      • Inbound rules: port 22 TCP, port 5000 TCP, port range 10000-10100 TCP & UDP from source 0.0.0.0/0
      • Outbound rules: All traffic
    • Master security group (save the ID, it will be needed for the next step):
      • Inbound rules: port 22 TCP, port 4000 TCP from source 0.0.0.0/0
      • Outbound rules: All traffic
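
As a non-authoritative sketch, the key pair and the worker security group could be created with the AWS CLI as follows. The resource names and the VPC ID are placeholders, and the master security group is created analogously with ports 22 and 4000:

# Key pair for SSH access (save the key name and llls-key.pem for the next step)
aws ec2 create-key-pair --key-name llls-key \
    --query 'KeyMaterial' --output text > llls-key.pem
chmod 400 llls-key.pem

# Worker security group (vpc-xxxxxxxx is a placeholder for your VPC ID)
WORKER_SG=$(aws ec2 create-security-group --group-name llls-worker-sg \
    --description "mediasoup LLLS worker" --vpc-id vpc-xxxxxxxx \
    --query 'GroupId' --output text)
aws ec2 authorize-security-group-ingress --group-id "$WORKER_SG" \
    --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id "$WORKER_SG" \
    --protocol tcp --port 5000 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id "$WORKER_SG" \
    --protocol tcp --port 10000-10100 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id "$WORKER_SG" \
    --protocol udp --port 10000-10100 --cidr 0.0.0.0/0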

You can then use the following command to see the available options for generating the AMIs with the create_amis.sh script:

./aws/create_amis.sh -h
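
For illustration only, an invocation might pass the resources saved above. The flag names below are hypothetical; the real option names are given by the -h output:

# Hypothetical flags -- check ./aws/create_amis.sh -h for the actual option names
./aws/create_amis.sh \
    --key-name llls-key --key-file ~/llls-key.pem \
    --instance-profile-arn arn:aws:iam::123456789012:instance-profile/llls \
    --subnet-id subnet-xxxxxxxx --master-sg sg-xxxxxxxx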

Step 2. Running the experiments

The launch-experiment.sh script launches the experiments. It automatically creates EC2 instances for the master (and workers), starts the client, and repeats the same configuration the number of times indicated by an argument. To see the arguments needed to run the script, run the following command (an illustrative invocation follows it). In the experiments run for the thesis, the duration was 300 s and the number of tries was 5.

./launch-experiment.sh -h
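
For illustration, a run matching the thesis settings (300 s duration, 5 tries) might look like the following; the flag names are hypothetical and the real ones are given by -h:

# Hypothetical flags -- check ./launch-experiment.sh -h for the actual option names
./launch-experiment.sh --duration 300 --tries 5 --output experiment_results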

The results will be downloaded from the instances and saved in the experiment_results directory (or in the path specified).

Step 3. Analysis

Prerequisites (an Ubuntu install sketch follows the list):

  • Python 3
  • pip3
  • Tesseract OCR
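
On Ubuntu, these can be installed with apt (python3, python3-pip and tesseract-ocr are the standard Ubuntu package names):

# Install the analysis prerequisites
sudo apt-get install -y python3 python3-pip tesseract-ocr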

Frame difference can be calculated by analyzing the fullscreen.mp4 video that results from running a try of an experiment, using the following Python script:

pip3 install -r qoe_scripts/requirements.py
python3 qoe_scripts/rtt_analyzer.py --video fullscreen.mp4

Change the --video argument to the path of the fullscreen result video you want to analyze.
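
To process every try in a results directory in one go, a loop like the following may help (a sketch assuming one fullscreen.mp4 per try directory under experiment_results):

# Analyze every fullscreen.mp4 found under experiment_results/
find experiment_results -name fullscreen.mp4 | while read -r video; do
    python3 qoe_scripts/rtt_analyzer.py --video "$video"
done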

This produces a text file with the OCR reading pairings between publisher and subscriber, which can then be used by the analysis/analysis.ipynb Jupyter Notebook to analyze the results.

The OCR readings can sometimes be wrong; the qoe_scripts/ocr_fixer.py script helps detect possible issues and correct them manually:

pip3 install -r qoe_scripts/requirements.py
python3 qoe_scripts/ocr_fixer.py --help
