Aurora Evaluation

Overview

This repository contains scripts to evaluate Aurora and Vivace PCC on Mininet.

Requirements

Follow the instructions in each of the respective repositories (Aurora PCC, Aurora-RL, and Vivace PCC) to build them from source. Train the RL models in Aurora-RL.

Other requirements:

iperf3 should be compiled from source and installed, since a recent build provides RTT information that may not be available in earlier versions.

Description and usage

The API consists of just two things: command and FileData. A command encapsulates a Mininet command to run on hosts; command/command.py contains the commands that run Aurora/Vivace/iperf3 with the appropriate switches and logging. FileData parses a log file generated by Aurora/Vivace/iperf3 into a container for its data.
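
As a rough sketch of how the two pieces fit together (the call shapes below are assumptions; this README only names get_mininet_client_cmd, its expt_tag argument, and FileData):

# Sketch only: argument lists and the FileData import path are guesses.
from command.command import get_mininet_client_cmd

cmd = get_mininet_client_cmd(expt_tag='expt:demo')  # build the client command
# ... run `cmd` on a Mininet host; it writes a log file ...
# data = FileData('./testing_logs/<logfile>')       # parse the log into a container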

As an overview, try:

python ./run_figure2.py -au /path/to/aurora-pcc -rl /path/to/aurora-rl -m /path/to/trained/aurora/model -vi /path/to/vivace/pcc [-l /where/to/store/logs]

Logs are by default stored in ./testing_logs.

In the log directory, you should see log files whose filenames consist of "key:value" fields separated by "--". The expt tag is the primary key that groups all log files generated by the same experiment, and you may find it useful to add your own tags, e.g. by embedding your own key1:value1--...--keyN:valueN in the string passed as the expt_tag argument to get_mininet_client_cmd (see run_figure2.py). This is a horrible hack, but it should suffice for now. Also note that the colons in the filenames will likely cause problems on Windows systems.
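
A filename following this convention can be unpacked with a few lines of Python (a sketch assuming every field obeys the key:value convention; the filename below is made up):

import os

def parse_log_name(path):
    # Split "key1:value1--key2:value2--..." into a dict.
    stem = os.path.splitext(os.path.basename(path))[0]
    return dict(field.split(':', 1) for field in stem.split('--'))

tags = parse_log_name('testing_logs/expt:figure2--algo:aurora.log')
print(tags['expt'])  # -> figure2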

Then run:

python plot_rate_rtt.py -d ./testing_logs

This parses all log files in the directory and plots throughput and RTT against time. Pass -t <tag> to limit the scope to a single experiment, or -f <file> to plot a single file. The output is written to <expt>.pdf in the log directory.

These two scripts illustrate the usage of the API; the moving parts are additional scripts like run_figure2.py, which generates log data, and plot_rate_rtt.py, which consumes and graphs it.

Be careful when changing the Aurora command in command/command.py: running with wrong switches or values will fail silently and just give bad data!
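
Because of this silent-failure mode, a cheap sanity check before plotting is to make sure no log file came out empty (a sketch of my own, not part of the repo; it only checks file sizes, not contents):

import glob, os

for path in glob.glob('testing_logs/*'):
    if os.path.isfile(path) and os.path.getsize(path) == 0:
        print('warning: empty log, rerun its experiment:', path)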

Specific scripts

Figures 2 and 3

For Figures 2 and 3, just run:

python run_figure2.py <same-as-above> -l testing_logs
python run_figure3.py <same-as-above> -l testing_logs

Plot their results using:

python plot_rate_rtt.py -d testing_logs -o <output-dir>

The reports figure2.pdf and figure3.pdf will appear in <output-dir>.

Figure 6

For Figure 6, run:

python run_figure6.py <same-as-above> -l testing_logs --all

This runs the experiments for Figures 6(a) through 6(d). Use --all-bw, --all-delay, --all-queue, or --all-loss instead of --all to run just (a), (b), (c), or (d), respectively.

Some experiments may fail silently, in which case you can rerun just one set of parameters to recover the data. Instead of the --all-* switches, use --bw BANDWIDTH --delay DELAY --queue QUEUE --loss LOSS, omitting any option whose default you want to keep. The data is tagged as expt:figure6-single-..., so you can overwrite the corrupted file corresponding to the failed experiment and continue with the plotting script.
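
To locate the recovery logs by their tag (a sketch assuming the tag appears verbatim in the filenames, per the naming convention above):

import glob

for path in sorted(glob.glob('testing_logs/*expt:figure6-single-*')):
    print(path)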

To plot the figure, run:

python plot_figure6.py -d testing_logs -o <output-dir>

Check the warnings to see whether any files are corrupted from failed experiments. This should output figure6.pdf in the output directory.

Figure 7

For Figure 7, run:

python run_figure7.py <same-as-above> -l testing_logs

To plot the figure, run:

python plot_figure7.py -d testing_logs -o <output-dir>

This should output figure7.pdf in the output directory.

Fairness

There are two experiments. The first tests inter-fairness: we pit every algorithm against every algorithm in a two-client network sharing a bottleneck link. The second tests intra-fairness: we put N clients running the same algorithm on a shared bottleneck link, for different choices of N.
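
The all-pairs structure is just the Cartesian square of the algorithm set (a sketch; the actual algorithm list lives in run_fairness_all_pairs.py, and the one below is a guess):

from itertools import product

algos = ['aurora', 'vivace', 'cubic']  # hypothetical set
for a, b in product(algos, repeat=2):
    print('pair:', a, 'vs', b)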

For the first experiment:

python run_fairness_all_pairs.py <same-as-above> -l testing_logs

Then plot the results using:

python plot_rate_rtt.py -d testing_logs -o <output-dir>
python merge_fairness_reports.py -d testing_logs -o <outfile.pdf>

Check that all 18 figures are present, in case some experiment failed to run.
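
A quick way to count them (a sketch of my own; it assumes the per-pair reports land as PDFs in the output directory):

import glob

pdfs = glob.glob('output-dir/*.pdf')  # substitute your actual output directory
print(len(pdfs), 'figures found, expected 18')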

For the second experiment:

python run_intra_fairness.py <same-as-above> -l testing_logs

Then plot using:

python plot_intrafairness_time_series.py -d testing_logs -o <output-dir>

This should dump intrafairness.pdf in the output directory.
