PCC-RL

Reinforcement learning resources for the Performance-oriented Congestion Control project.

Overview

This repo contains the gym environment used to train reinforcement learning models for the PCC project, along with the Python module required to run those models in the PCC UDT codebase found at github.com/PCCProject/PCC-Uspace.
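
As a rough orientation, the environment can be driven like any other gym environment. The sketch below is illustrative only: the module name network_sim and the environment id "PccNs-v0" are assumptions, so check ./src/gym/ for the actual registration.

```python
# Minimal sketch of exercising the PCC gym environment with random actions.
# NOTE: the module name "network_sim" and the env id "PccNs-v0" are assumptions;
# consult ./src/gym/ for the names this repo actually registers.
import gym
import network_sim  # assumed module that registers the PCC environment with gym

env = gym.make("PccNs-v0")
obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # random sending-rate adjustment, just to step the env
    obs, reward, done, info = env.step(action)
env.close()
```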

Training

To run training only, go to ./src/gym/, install any missing requirements for stable_solve.py, and run that script (see the sketch below). By default, this should replicate the model presented in A Reinforcement Learning Perspective on Internet Congestion Control, ICML 2019 [TODO: Hyperlink].
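
For orientation, the script name suggests a standard stable-baselines training setup along the lines of the hedged sketch below. This is not the script itself: the env id, the choice of PPO1 with an MLP policy, the timestep budget, and the save path are all assumptions; consult ./src/gym/stable_solve.py for the real configuration.

```python
# Hedged sketch of the kind of training loop stable_solve.py performs.
# All names and hyperparameters here are assumptions for illustration.
import gym
import network_sim  # assumed module that registers the PCC environment
from stable_baselines import PPO1
from stable_baselines.common.policies import MlpPolicy

env = gym.make("PccNs-v0")                 # assumed env id
model = PPO1(MlpPolicy, env, verbose=1)    # policy choice is an assumption
model.learn(total_timesteps=1_000_000)     # arbitrary budget for illustration
model.save("pcc_model")                    # hypothetical output path
```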

Testing Models

To test models in the real world (i.e., sending real packets into the Linux kernel and out onto a real or emulated network), download and install the PCC UDT code from github.com/PCCProject/PCC-Uspace. Follow the instructions in that repo for using congestion control algorithms with Python modules, and see ./src/gym/online/README.md for detailed instructions on how to load trained models.
