
Commit

rename examples to code;
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
vsoch committed Nov 23, 2021
1 parent 6ee5035 commit f7f3e09
Showing 7 changed files with 12 additions and 13 deletions.
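
For reference, the same summary can be reproduced from a local clone of the repository:

```bash
# Show the commit message and per-file addition/deletion counts
$ git show --stat f7f3e09
```
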
1 change: 0 additions & 1 deletion association-analysis/examples

This file was deleted.

10 changes: 5 additions & 5 deletions association-analysis/hill-climb/README.md
@@ -12,7 +12,7 @@ $ source env/bin/activate
$ pip install -r ../requirements.txt
```

-Make sure examples are cloned one directory up!
+Make sure examples (code) are cloned one directory up!

### Generate Flags
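
Before generating flags, note the layout that the relocated paths assume: the example programs must be cloned one directory up as `../code`. A minimal sketch of that setup (the repository URL is a hypothetical placeholder):

```bash
# Clone the example programs one directory up (URL is a placeholder)
$ git clone https://github.com/<org>/<examples-repo> ../code
$ ls ../code
Aliases/  'sizeof Operator'/  ...
```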

@@ -29,26 +29,26 @@ $ python hill-climb.py run ../../data/gpp_flags_filtered.json Prog.cpp
for a specific example:

```bash
-$ python hill-climb.py run ../../data/gpp_flags_filtered.json ../examples/Aliases/Prog.cpp
+$ python hill-climb.py run ../../data/gpp_flags_filtered.json ../code/Aliases/Prog.cpp
```

In practice, I found that using GNU parallel made more sense (the Python script does not implement its own workers).
Here is how to test a single script:

```bash
-$ python hill-climb.py run ../../data/gpp_flags.json "../examples/sizeof Operator/Prog.cpp" --outdir-num 1
+$ python hill-climb.py run ../../data/gpp_flags.json "../code/sizeof Operator/Prog.cpp" --outdir-num 1
```

And then to run using parallel (`apt-get install -y parallel`):

```bash
-$ find ../examples -name "*Prog.cpp" | parallel -I% --max-args 1 python hill-climb.py run ../../data/gpp_flags.json "%" --outdir-num 1
+$ find ../code -name "*Prog.cpp" | parallel -I% --max-args 1 python hill-climb.py run ../../data/gpp_flags.json "%" --outdir-num 1
```
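
Before launching real runs, GNU parallel's `--dry-run` flag will print each generated command without executing it; a minimal sanity check:

```bash
# Preview the exact commands parallel would execute
$ find ../code -name "*Prog.cpp" | parallel --dry-run -I% --max-args 1 python hill-climb.py run ../../data/gpp_flags.json "%" --outdir-num 1
```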

There is a [run.sh](run.sh) script that I used, ultimately running over the range 0 to 29 (to generate 30 runs of the same predictions, 100 iterations each). Finally, to run on a SLURM cluster:

```bash
-for iter in {11..30}; do
+for iter in {0..1}; do
sbatch run_slurm.sh $iter
done
```
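
Once the jobs are submitted, standard SLURM tooling can confirm they are queued or running:

```bash
# List this user's queued and running jobs
$ squeue -u $USER
```
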
2 changes: 1 addition & 1 deletion association-analysis/hill-climb/run.sh
@@ -1,4 +1,4 @@
#!/bin/bash
for iter in {0..9}; do
-find ../examples -name "*Prog.cpp" | parallel -I% --max-args 1 python hill-climb.py run ../../data/gpp_flags.json "%" --outdir-num $iter
+find ../code -name "*Prog.cpp" | parallel -I% --max-args 1 python hill-climb.py run ../../data/gpp_flags.json "%" --outdir-num $iter
done
2 changes: 1 addition & 1 deletion association-analysis/hill-climb/run_slurm.sh
@@ -1,2 +1,2 @@
#!/bin/bash
-find ../examples -name "*Prog.cpp" | parallel -I% --max-args 1 python hill-climb.py run ../../data/gpp_flags.json "%" --outdir-num $1
+find ../code -name "*Prog.cpp" | parallel -I% --max-args 1 python hill-climb.py run ../../data/gpp_flags.json "%" --outdir-num $1
6 changes: 3 additions & 3 deletions association-analysis/montecarlo/README.md
@@ -21,7 +21,7 @@ $ source env/bin/activate
$ pip install -r requirements.txt
```

-Make sure examples are cloned one directory up!
+Make sure examples (code) are cloned one directory up!

### Generate Flags

@@ -40,13 +40,13 @@ In practice, I found that using GNU parallel made more sense (the Python script does not implement its own workers).
Here is how to test a single script:

```bash
-$ python montecarlo-parallel.py run ../../data/gpp_flags.json "../examples/sizeof Operator/Prog.cpp" --outdir-num 1 --num-iter 2000
+$ python montecarlo-parallel.py run ../../data/gpp_flags.json "../code/sizeof Operator/Prog.cpp" --outdir-num 1 --num-iter 2000
```

And then to run using parallel (`apt-get install -y parallel`):

```bash
-$ find ../examples -name "*Prog.cpp" | parallel -I% --max-args 1 python montecarlo-parallel.py run ../../data/gpp_flags.json "%" --outdir-num 1 --num-iter 2000
+$ find ../code -name "*Prog.cpp" | parallel -I% --max-args 1 python montecarlo-parallel.py run ../../data/gpp_flags.json "%" --outdir-num 1 --num-iter 2000
```
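
To check how many programs the pipeline will hand to parallel before committing to a long run:

```bash
# Count the Prog.cpp files that will be processed
$ find ../code -name "*Prog.cpp" | wc -l
```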

There is a [run.sh](run.sh) script that I used, ultimately running over the range 0 to 29 (to generate 30 runs of the same predictions, 100 iterations each). Finally, to run on a SLURM cluster:
2 changes: 1 addition & 1 deletion association-analysis/montecarlo/run.sh
@@ -1,4 +1,4 @@
#!/bin/bash
for iter in {0..9}; do
-find ../examples -name "*Prog.cpp" | parallel -I% --max-args 1 python montecarlo-parallel.py run ../../data/gpp_flags.json "%" --outdir-num $iter --num-iter 2000
+find ../code -name "*Prog.cpp" | parallel -I% --max-args 1 python montecarlo-parallel.py run ../../data/gpp_flags.json "%" --outdir-num $iter --num-iter 2000
done
2 changes: 1 addition & 1 deletion association-analysis/montecarlo/run_slurm.sh
@@ -1,2 +1,2 @@
#!/bin/bash
-find ../examples -name "*Prog.cpp" | parallel -I% --max-args 1 python montecarlo-parallel.py run ../../data/gpp_flags.json "%" --outdir-num $1 --num-iter 5000
+find ../code -name "*Prog.cpp" | parallel -I% --max-args 1 python montecarlo-parallel.py run ../../data/gpp_flags.json "%" --outdir-num $1 --num-iter 5000
