Server machine: IP 100.71.102.33 or 192.168.100.33
Client machine: IP 100.71.102.37 or 192.168.100.37
Use the 192.168 channel for client-server communication. Use the 100.71 channel for the X2Go connection.
You may have to run the individual experiments outside the main script to see which network interface is being used, and then update the interface name in the main script accordingly.
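
A quick way to check is to ask the kernel's routing table which local address it would pick for a given peer. A minimal Python sketch (the peer addresses below are just the client IPs from this setup; connecting a UDP socket sends no packets):

.. code-block:: python

    import socket

    def source_ip_for(peer_ip: str, port: int = 9) -> str:
        """Return the local source IP the kernel would use to reach peer_ip.

        Connecting a UDP socket sends no packets; it only consults the
        routing table, so this is safe to run at any time.
        """
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.connect((peer_ip, port))
            return s.getsockname()[0]

    if __name__ == "__main__":
        # 192.168.100.37 / 100.71.102.37 are the client addresses above.
        print(source_ip_for("192.168.100.37"))
        print(source_ip_for("100.71.102.37"))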

Note:

1. scream_receiver.cpp, line 108: rtcpFbInterval_ntp = screamRx->getRtcpFbInterval();
2. scream_receiver.cpp, lines 40-41: ackDiff and nReportedRtpPackets

To install the L4S-supported Linux kernel on the server machine:

.. code-block:: console

    $ git clone https://github.com/L4STeam/linux.git
    $ cd linux
    $ sudo apt install libelf-dev
    $ cp "/boot/config-$(uname -r)" .config
    $ vim .config   # set CONFIG_SYSTEM_TRUSTED_KEYS="" and CONFIG_DEBUG_INFO_BTF=n
    $ make olddefconfig
    $ scripts/config -m TCP_CONG_PRAGUE
    $ scripts/config -m NET_SCH_DUALPI2
    $ scripts/config -m TCP_CONG_DCTCP
    $ scripts/config -m TCP_CONG_BBR2
    $ make -j$(nproc) LOCALVERSION=-prague-1
    $ sudo make modules_install
    $ sudo make install
    $ sudo update-grub

How to purge a kernel when the installation is messed up:

.. code-block:: console

    $ dpkg --list | egrep -i --color 'linux-image|linux-headers' | grep prague
    $ sudo apt-get purge linux-image-5.10.31-3cc3851880a1-prague-37
    $ sudo apt purge linux-headers-5.10.31-3cc3851880a1-prague-37

Download the video from http://ftp.nluug.nl/pub/graphics/blender/demo/movies/ToS/ or https://download.blender.org/demo/movies/BBB/

Set up the desktop:

.. code-block:: console

    $ lsb_release -a   # check the Ubuntu version; it should be 20.04
    $ sudo apt update && sudo apt upgrade
    $ python3 --version
    $ sudo apt install wireshark-gtk python3-pip
    $ sudo apt install git cscope cmake build-essential chromium-browser vlc ffmpeg ubuntu-restricted-extras libavcodec-dev iperf
    $ git clone https://github.com/ArghyaB118/webrtc-demo.git
    $ sudo apt-cache search libavcodec
    $ pip3 install numpy matplotlib pandas

Notes:

[line 1] https://stackoverflow.com/questions/56972903/how-to-read-mkv-bytes-as-video#:~:text=import%20imageio%20%23%20Get%20bytes%20of%20MKV%20video,first%20few%20bytes%20of%20content%20look%20like%20this%3A
[line 2] https://stackoverflow.com/questions/63195747/how-to-specify-start-and-end-frames-when-making-a-video-with-ffmpeg
[line 4] Check the fps from the metadata.
[line 6] Save frames: https://stackoverflow.com/questions/10957412/fastest-way-to-extract-frames-using-ffmpeg#:~:text=If%20the%20JPEG%20encoding%20step%20is%20too%20performance,quality%20loss%20through%20quantization%20by%20transcoding%20to%20JPEG.
[line 7] Break the video into a number of frames based on the metadata fps:
https://stackoverflow.com/questions/10957412/fastest-way-to-extract-frames-using-ffmpeg#:~:text=If%20the%20JPEG%20encoding%20step%20is%20too%20performance,quality%20loss%20through%20quantization%20by%20transcoding%20to%20JPEG.
https://stackoverflow.com/questions/23342658/ffmpeg-converting-a-video-to-8-bit-bmp-frames
https://stackoverflow.com/questions/28806816/use-ffmpeg-to-resize-image
[line 9] This is about 20 times faster: use fast seeking to go to the desired time index and extract a frame, calling ffmpeg separately for every time index.
[line 10] Output one image every minute, named img001.jpg, img002.jpg, img003.jpg, etc. The %03d means the ordinal number of each output image is formatted with 3 digits. Change fps=1/60 to fps=1/30 to capture an image every 30 seconds; similarly, change fps=1/60 to fps=1/5 to capture an image every 5 seconds.
[line 11] https://superuser.com/questions/1009969/how-to-extract-a-frame-out-of-a-video-using-ffmpeg

Extra:

https://www.programmerall.com/article/98572237830/#:~:text=FFMPEG%20Use%20Start_Number%20and%20Frames%3A%20v%20to%20specify,-framerate%2025%20-i%20k%253d.png%20-c%20copy%20-y%20OUTPUT.mp4
https://stackoverflow.com/questions/2401764/can-ffmpeg-be-used-as-a-library-instead-of-a-standalone-program
https://stackoverflow.com/questions/10127470/ffmpeg-bitrate-change-dynamically

.. code-block:: console

    $ ffmpeg -i sample-5s.mp4 -start_number 10 -frames:v 30 -c:a copy -c:v vp9 -b:v 1M output.mkv
    $ pip3 install imageio
    ## with open('output.mkv', 'rb') as file: content = file.read()
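
Tying the notes above together, here is a small Python sketch that reads the raw MKV bytes and then shells out to ffmpeg to dump one frame per minute with the fps=1/60 filter (file names are placeholders; change the fraction for other intervals):

.. code-block:: python

    import subprocess

    # Read the raw container bytes (see the imageio note above).
    with open("output.mkv", "rb") as f:
        content = f.read()
    print(f"output.mkv is {len(content)} bytes")

    # Dump one frame per minute as img001.jpg, img002.jpg, ...
    # fps=1/60 -> one image every 60 s; use fps=1/30 or fps=1/5 instead
    # for one image every 30 s or 5 s.
    subprocess.run(
        ["ffmpeg", "-i", "output.mkv", "-vf", "fps=1/60", "img%03d.jpg"],
        check=True,
    )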

How to run the client on the same computer (line 1 and line 3):

.. code-block:: console

    $ sudo ./main_google.sh
    $ cd samples && npm start
    $ chromium-browser --disable-webrtc-encryption http://100.71.102.33:8080/src/content/capture/video-contenthint/

1. Get the scream repository.

.. code-block:: console

    $ git clone https://github.com/EricssonResearch/scream.git
    $ cd scream
    $ cmake .
    $ make

2. Generate a network BW profile. The Python script network_profile_generator.py does that and saves the profile in profile.txt.

.. code-block:: console

    $ python3 network_profile_generator.py
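
network_profile_generator.py is part of this repository, so the authoritative profile format is whatever main.sh expects. Purely as an illustration, a generator of this kind typically writes (time, bandwidth) pairs; a sketch under that assumption (the step pattern, rates, and file layout here are invented):

.. code-block:: python

    import random

    # Hypothetical profile format: one "<seconds> <rate_mbit>" pair per line.
    # The real profile.txt must match whatever main.sh / the BW simulator reads.
    STEP_S = 10          # change the rate every 10 seconds
    DURATION_S = 300     # total length of the profile
    RATES_MBIT = [2, 5, 10, 20, 30]

    with open("profile.txt", "w") as f:
        for t in range(0, DURATION_S, STEP_S):
            f.write(f"{t} {random.choice(RATES_MBIT)}\n")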

3. Finally, run main.sh with sudo access on the server computer (1st line). main.sh internally invokes the server together with the network BW simulator; the 2nd line shows the underlying command. Note: my server is 192.168.100.33 and my client is 192.168.100.37; the port used is 8080. Immediately invoke the client on the client machine with sudo access (3rd line).

.. code-block:: console

    $ sudo ./main.sh
    $ scream/bin/scream_bw_test_tx -ect 1 -log scream/test.txt 192.168.100.37 8080
    $ sudo bin/scream_bw_test_rx 192.168.100.33 8080
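
The -log option writes a trace to scream/test.txt. Its column layout is defined by the SCReAM tools, so check the SCReAM documentation before reading anything into a plot; assuming a whitespace-separated numeric table with time in the first column, the pandas/matplotlib packages installed earlier can give a quick first look:

.. code-block:: python

    import pandas as pd
    import matplotlib.pyplot as plt

    # Assumption: scream/test.txt is a whitespace-separated numeric table
    # with time in the first column; verify against the SCReAM docs.
    log = pd.read_csv("scream/test.txt", sep=r"\s+", header=None, comment="#")
    print(log.head())          # inspect the columns first

    # Example: plot the second column against time once its meaning is known.
    plt.plot(log.iloc[:, 0], log.iloc[:, 1])
    plt.xlabel("time")
    plt.savefig("scream_log.png")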

On the client machine 192.168.18.123, run line 1. On the server machine 192.168.18.34, run the rest.

Ref: https://iperf.fr/iperf-doc.php
https://www.ibm.com/cloud/blog/using-iperf-to-troubleshoot-speed-and-throughput-issues#:~:text=You%20can%20also%20do%20UDP%20tests%20using%20iPerf,The%20UDP%20bandwidth%20would%20be%20sent%20at%20bits%2Fsec.

.. code-block:: console

    $ iperf -s -u
    $ sudo tc qdisc del dev eno1 root
    $ sudo tc qdisc add dev eno1 root handle 1:0 htb
    $ sudo tc class add dev eno1 parent 1:0 classid 1:1 htb rate 30Mbit burst 30Mbit ceil 30Mbit
    $ sudo tc filter add dev eno1 parent 1:0 protocol ip prio 1 u32 match ip dst 192.168.18.123/32 flowid 1:1
    $ sudo tc class add dev eno1 parent 1:1 classid 1:10 dualpi2 limit 100 target 20 tupdate 16000 alpha 0.3125 beta 3.125 l4s_ect coupling_factor 1 drop_on_overload step_thresh 1ms drop_dequeue split_gso classic_protection 10
    [Error: Qdisc "dualpi2" is classless, so "tc class add" fails here; attach it under the HTB class with "tc qdisc add ... parent 1:1" instead.]
    $ iperf -i 1 -t 10 -p 5001 -c 192.168.18.123 -b 50M -u
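
If you prefer to script this shaping setup, here is a Python sketch wrapping the same tc commands with subprocess. The dualpi2 attachment at the end is an assumed correction for the classless-qdisc error noted above (attach it as a child qdisc of the HTB class); it requires an iproute2 build with dualpi2 support, e.g. from the L4STeam repositories:

.. code-block:: python

    import subprocess

    IFACE = "eno1"
    CLIENT = "192.168.18.123"

    def tc(*args):
        """Run a tc command with sudo, echoing it first."""
        cmd = ["sudo", "tc", *args]
        print(" ".join(cmd))
        subprocess.run(cmd, check=True)

    # Clear any existing root qdisc (this may fail harmlessly if none is set).
    subprocess.run(["sudo", "tc", "qdisc", "del", "dev", IFACE, "root"])

    # HTB root with a 30 Mbit class and a filter steering the client's traffic into it.
    tc("qdisc", "add", "dev", IFACE, "root", "handle", "1:0", "htb")
    tc("class", "add", "dev", IFACE, "parent", "1:0", "classid", "1:1",
       "htb", "rate", "30Mbit", "burst", "30Mbit", "ceil", "30Mbit")
    tc("filter", "add", "dev", IFACE, "parent", "1:0", "protocol", "ip",
       "prio", "1", "u32", "match", "ip", "dst", f"{CLIENT}/32", "flowid", "1:1")

    # dualpi2 is classless, so attach it as a child qdisc of class 1:1
    # (assumed fix; parameters copied from the command above).
    tc("qdisc", "add", "dev", IFACE, "parent", "1:1", "handle", "10:",
       "dualpi2", "limit", "100", "target", "20", "tupdate", "16000",
       "alpha", "0.3125", "beta", "3.125", "l4s_ect", "coupling_factor", "1",
       "drop_on_overload", "step_thresh", "1ms", "drop_dequeue", "split_gso",
       "classic_protection", "10")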

On the server machine 192.168.18.34, run sudo ./test-ftp.sh. On the client machine 192.168.18.123, run:

.. code-block:: console

    $ python3 ftploop.py arghya Chang3me! 192.168.18.34 /home/arghya/webrtc-demo/ToS-4k-1920.mov
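
ftploop.py itself lives in the repository and is not reproduced here; as a rough sketch of what a script with that argument order (user, password, host, remote path) might do, assuming it simply re-downloads the file over FTP in a loop to keep traffic flowing:

.. code-block:: python

    import sys
    from ftplib import FTP

    def main():
        # Argument order matches the invocation above: user, password, host, remote path.
        user, password, host, remote_path = sys.argv[1:5]
        while True:
            with FTP(host) as ftp:
                ftp.login(user=user, passwd=password)
                # Discard the data; the point is only to generate traffic.
                ftp.retrbinary(f"RETR {remote_path}", lambda chunk: None)
            print("download finished, starting again")

    if __name__ == "__main__":
        main()
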
This example illustrates how to read frames from a webcam and send them to a browser.
First install the required packages:

.. code-block:: console

    $ pip install aiohttp aiortc

When you start the example, it will create an HTTP server which you can connect to from your browser:

.. code-block:: console

    $ python webcam.py
    $ chromium-browser --disable-webrtc-encryption

Ref: https://peter.sh/experiments/chromium-command-line-switches/#disable-webrtc-encryption

You can then open the page served by webcam.py in your browser.
Once you click Start, the server will send video from its webcam to the browser.
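
For orientation, here is a heavily condensed Python sketch of what the server side of such an example does with aiortc and aiohttp; the real webcam.py in the aiortc examples handles codec preferences, platform differences, and cleanup, and the /dev/video0 device and 8080 port below are assumptions:

.. code-block:: python

    import json

    from aiohttp import web
    from aiortc import RTCPeerConnection, RTCSessionDescription
    from aiortc.contrib.media import MediaPlayer

    pcs = set()  # keep references so connections are not garbage collected

    async def offer(request):
        # The browser POSTs its SDP offer; reply with an answer carrying webcam video.
        params = await request.json()
        remote = RTCSessionDescription(sdp=params["sdp"], type=params["type"])

        pc = RTCPeerConnection()
        pcs.add(pc)

        # Assumption: a V4L2 webcam at /dev/video0 (other platforms need other formats).
        player = MediaPlayer("/dev/video0", format="v4l2",
                             options={"video_size": "640x480"})
        pc.addTrack(player.video)

        await pc.setRemoteDescription(remote)
        answer = await pc.createAnswer()
        await pc.setLocalDescription(answer)

        return web.Response(
            content_type="application/json",
            text=json.dumps({"sdp": pc.localDescription.sdp,
                             "type": pc.localDescription.type}),
        )

    app = web.Application()
    app.router.add_post("/offer", offer)
    web.run_app(app, host="127.0.0.1", port=8080)
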
If you want to play a media file instead of using the webcam, run:

.. code-block:: console

    $ python webcam.py --play-from video.mp4
    $ python3 webcam.py --play-from ../../../server/sample-5s.mp4 --play-without-decoding --audio-codec audio/opus --video-codec video/H264 --verbose --host 127.0.0.1 --port 8080

If you want to play an OGG file containing Opus audio without decoding the frames, run:

.. code-block:: console

    $ python webcam.py --play-from audio.ogg --play-without-decoding --audio-codec audio/opus

You can generate an example of such a file using:
$ ffmpeg -f lavfi -i "sine=frequency=1000:duration=20" -codec:a libopus -f ogg audio.ogg
If you want to play an MPEGTS file containing H.264 video without decoding the frames, run:

.. code-block:: console

    $ python webcam.py --play-from video.ts --play-without-decoding --video-codec video/H264

You can generate an example of such a file using:

.. code-block:: console

    $ ffmpeg -f lavfi -i testsrc=duration=20:size=640x480:rate=30 -pix_fmt yuv420p -codec:v libx264 -profile:v baseline -level 31 -f mpegts video.ts

If you want to play a WebM file containing VP8 video without decoding the frames, run:

.. code-block:: console

    $ python webcam.py --play-from video.webm --play-without-decoding --video-codec video/VP8

You can generate an example of such a file using:

.. code-block:: console

    $ ffmpeg -f lavfi -i testsrc=duration=20:size=640x480:rate=30 -codec:v vp8 -f webm video.webm

The original idea for the example was from Marios Balamatsias.
Support for playback without decoding was based on an example by Renan Prata.