Added raw-accuracies/ directory and code for printing raw accuracies to .txt files.
rgeirhos committed Jun 13, 2017
1 parent 49be6fa commit e897b33
Showing 8 changed files with 125 additions and 6 deletions.
12 changes: 8 additions & 4 deletions README.md
@@ -4,10 +4,10 @@ This repository contains information, data and materials from the paper "Compari

Please don't hesitate to contact me at robert.geirhos@uni-tuebingen.de or open an issue in case there is any question!

This README is structured according to the repo's structure: one header per subdirectory.
This README is structured according to the repo's structure: one section per subdirectory.

## code
This subdirectory contains all image manipulation code used in our experiments (conversion to grayscale, adding noise, eidolon distortions, ..). The main method of `image-manipulation.py` walks you through the various degradations. Note that the eidolon manipulation that we use in one of our experiments is based on the [Eidolon github repository](https://github.com/gestaltrevision/Eidolon), which you will need to download / clone if you would like to use it. Note that we found and fixed a bug in the Python version of the toolbox, for which we created a pull request in August 2016 which has not (yet?) been merged (as of May 2017). Make sure to collect the files from the pull request as well, otherwise you'll get different images!
This subdirectory contains all image manipulation code used in our experiments (conversion to grayscale, adding noise, eidolon distortions, ..). The main method of `image-manipulation.py` walks you through the various degradations. Note that the eidolon manipulation that we use in one of our experiments is based on the [Eidolon github repository](https://github.com/gestaltrevision/Eidolon), which you will need to download / clone if you would like to use it. Note that we found and fixed a bug in the Python version of the toolbox, for which we created a pull request in August 2016 which has not (yet?) been merged (as of May 2017). Make sure to collect the files from the pull request as well, otherwise you will get different images!
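To make the kinds of degradations mentioned above concrete, here is a minimal sketch (not the repository's `image-manipulation.py`, and not the eidolon code) that converts an image to grayscale and adds uniform pixel noise; the file names, the `add_uniform_noise` helper, and the noise width of 0.35 are illustrative assumptions.

```python
# Minimal sketch: grayscale conversion plus additive uniform noise.
# This is NOT the repository's image-manipulation.py; the paths, the helper name
# and the noise width are assumptions for illustration only.
import numpy as np
from PIL import Image

def add_uniform_noise(gray, width, rng):
    """Add uniform noise in [-width/2, +width/2] to a grayscale image in [0, 1]."""
    noisy = gray + rng.uniform(-width / 2.0, width / 2.0, size=gray.shape)
    return np.clip(noisy, 0.0, 1.0)

rng = np.random.default_rng(0)
img = Image.open("some_image.JPEG").convert("L")      # convert to grayscale (placeholder file name)
gray = np.asarray(img, dtype=np.float64) / 255.0      # rescale to [0, 1]
noisy = add_uniform_noise(gray, width=0.35, rng=rng)  # one illustrative noise level
Image.fromarray((noisy * 255).astype(np.uint8)).save("some_image_noisy.png")
```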

## data-analysis
The `data-analysis/` subdirectory contains a main R script, `data-analysis.R`, which can be used to plot and analyze the data contained in `raw-data/`. We used R version 3.2.3 for the data analysis.
@@ -28,10 +28,13 @@ The response screen icons appeared on the response screen, and participants were

![response screen icons](./lab-experiment/response-screen-icons/response_screen.png "response screen icons")

## raw-accuracies
The `raw-accuracies/` directory contains a `.txt` file for each experiment with a table of all accuracies (split by experimental condition and subject/network). This therefore contains the underlying data used for all accuracy plots in the paper, and may be useful, for example, if one would like to generate new plots for comparing other networks to our human observers' accuracies.
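For instance, to compare another model against the human averages in these files, one could read one of the space-separated tables directly. Below is a minimal sketch, assuming `pandas` and `matplotlib` are available (they are not dependencies of this repository); the choice of the noise-experiment file and of VGG-16 as the comparison network is just an example.

```python
# Minimal sketch: read one raw-accuracies table (space-separated, quoted header row)
# and plot human observers against one of the networks.
import pandas as pd
import matplotlib.pyplot as plt

acc = pd.read_csv("raw-accuracies/noise-experiment_accuracies.txt", sep=" ")
print(acc.columns.tolist())  # ['condition', 'human_observers(average)', 'AlexNet', 'GoogLeNet', 'VGG-16']

plt.plot(acc["condition"], acc["human_observers(average)"], "o-", label="human observers (avg.)")
plt.plot(acc["condition"], acc["VGG-16"], "s-", label="VGG-16")
plt.xlabel("noise condition")
plt.ylabel("accuracy (%)")
plt.legend()
plt.show()
```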

## raw-data
Every `.csv` raw data file has a header with the **bold** categories below; here's what they stand for:

- **subj:** for DNNs (Deep Neural Networks), name of network; for human observers: number of subject. This number is consistent across experiments. Note that the subjects were not necessarily given consecutive numbers, therefore it can be the case that \'subject-04\' does not exist in some or all experiments.
- **subj:** for DNNs (Deep Neural Networks), name of network; for human observers: number of subject. This number is consistent across experiments. Note that the subjects were not necessarily given consecutive numbers, therefore it can be the case that e.g. \'subject-04\' does not exist in some or all experiments.

- **session:** session number

@@ -59,5 +62,6 @@ This is a concatenation of the following information (separated by \'_\'):
3. either e.g. \'s01\' for \'subject-01\', or \'dnn\' for DNNs
4. condition
5. category (ground truth)
6. image identifier in the form a_b.JPEG, with _a_ being the WNID (WordNet ID) of the corresponding synset and _b_ being an integer.
6. a number (just ignore it)
7. image identifier in the form a_b.JPEG, with _a_ being the WNID (WordNet ID) of the corresponding synset and _b_ being an integer.
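As a concrete illustration of the identifier layout in the updated list above, the following minimal sketch splits such a string on `_` from the right, since the trailing `a_b.JPEG` part itself contains an underscore; the helper name is hypothetical and not part of the repository.

```python
# Minimal sketch: recover fields 3-7 listed above from the concatenated identifier.
# Fields are taken from the right, so it works regardless of how many leading
# fields (not shown in this hunk) precede them. The helper name is hypothetical.
def parse_trial_identifier(identifier):
    parts = identifier.split("_")
    image_file = parts[-2] + "_" + parts[-1]   # field 7: a_b.JPEG (WNID + image number)
    return {
        "observer": parts[-6],        # field 3: e.g. 's01' or 'dnn'
        "condition": parts[-5],       # field 4: condition
        "category": parts[-4],        # field 5: ground-truth category
        "ignored_number": parts[-3],  # field 6: a number (can be ignored)
        "image_file": image_file,
    }
```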

50 changes: 49 additions & 1 deletion data-analysis/data-analysis-helper.R
@@ -467,4 +467,52 @@ is.in.CI = function(a.num.successes, a.total,
} else {
return(0)
}
}
}

get.accuracy = function(dat) {
# Return data.frame with x and y for condition and accuracy.

tab = table(dat$is.correct, by=dat$condition)
false.index = 1
true.index = 2
acc = tab[true.index, ] / (tab[false.index, ]+tab[true.index, ])
d = as.data.frame(acc)

if(length(colnames(tab)) != length(unique(dat$condition))) {
stop("Error in get.accuracy: length mismatch.")
}

#enforce numeric ordering instead of alphabetic (otherwise problem: 100 before 20)
if(!is.factor(dat$condition)) {
#condition is numeric
d$order = row.names(d)
d$order = as.numeric(d$order)
d = d[with(d, order(d$order)), ]
d$order = NULL
e = data.frame(x = as.numeric(row.names(d)), y=100*d[ , ])
} else {
#condition is non-numeric
e = data.frame(x = row.names(d), y=100*d[ , ])
}
return(e)
}


print.accuracies.to.file = function(dat, path="./", filename=paste(path, unique(dat$experiment.name),
"_accuracies.txt", sep="")) {
# print a .txt file with a table of all accuracies for a certain experiment
# (split by experimental condition and subject/network)

colnames.here = c("condition", "human_observers(average)")
acc = get.accuracy(dat[dat$is.human==TRUE, ])
for(subj in get.all.subjects(dat, avg.human.data = TRUE)) {
if(subj$data.name %in% NETWORKS) {
acc = cbind(acc, get.accuracy(dat[dat$subj==subj$data.name, ])$y)
colnames.here = c(colnames.here, subj$name)
}
}
colnames(acc) = colnames.here
write.table(acc,
filename, sep=" ",
row.names = FALSE)
}
14 changes: 13 additions & 1 deletion data-analysis/data-analysis.R
@@ -33,6 +33,7 @@ contrastpngdat$condition = as.character(as.numeric(contrastpngdat$condition)) #
noisedat = get.expt.data("noise-experiment")
noisedat$condition = as.character(noisedat$condition)
noisedat$condition = lapply(noisedat$condition, function(y){if(y=="0"){return("0.0")}else{return(substring(y, 2))}})
noisedat$condition = as.character(noisedat$condition)

# preprocessing eidolon-experiment
eidolondat = get.expt.data("eidolon-experiment")
@@ -59,4 +60,15 @@ difference.matrix(colordat[colordat$condition=="color" & colordat$is.human==TRUE
colordat[colordat$condition=="color" & colordat$subj=="vgg", ],
main = "Confusion difference matrix: color-experiment, color-condition, human vs. VGG-16",
divide.alpha.by = 16.0*17.0, # 16 columns * 17 rows
binomial = TRUE)
binomial = TRUE)

###################################################################
# print accuracies to .txt file
###################################################################

accuracy.printing.path = "../raw-accuracies/"
print.accuracies.to.file(colordat, path=accuracy.printing.path)
print.accuracies.to.file(contrastdat, path=accuracy.printing.path)
print.accuracies.to.file(contrastpngdat, path=accuracy.printing.path)
print.accuracies.to.file(noisedat, path=accuracy.printing.path)
print.accuracies.to.file(eidolondat, path=accuracy.printing.path)
3 changes: 3 additions & 0 deletions raw-accuracies/colour-experiment_accuracies.txt
@@ -0,0 +1,3 @@
"condition" "human_observers(average)" "AlexNet" "GoogLeNet" "VGG-16"
"color" 88.4895833333333 94.4419642857143 96.7410714285714 97.4553571428572
"grayscale" 86.6145833333333 86.71875 93.8392857142857 93.6607142857143
9 changes: 9 additions & 0 deletions raw-accuracies/contrast-experiment_accuracies.txt
@@ -0,0 +1,9 @@
"condition" "human_observers(average)" "AlexNet" "GoogLeNet" "VGG-16"
1 5.5 6.60714285714286 5.71428571428571 7.94642857142857
3 20.75 10.2678571428571 15.0892857142857 17.6785714285714
5 47.625 19.8214285714286 26.0714285714286 32.3214285714286
10 71.875 39.2857142857143 46.3392857142857 64.375
15 76.375 49.1964285714286 57.0535714285714 77.3214285714286
30 82.625 70.3571428571429 80.8928571428571 90.2678571428571
50 83.375 80.1785714285714 88.5714285714286 91.6964285714286
100 86.625 84.375 93.4821428571429 94.2857142857143
9 changes: 9 additions & 0 deletions raw-accuracies/contrast-png-experiment_accuracies.txt
@@ -0,0 +1,9 @@
"condition" "human_observers(average)" "AlexNet" "GoogLeNet" "VGG-16"
1 8.33333333333333 7.32142857142857 7.41071428571429 17.5
3 31.0416666666667 12.7678571428571 22.1428571428571 39.0178571428571
5 54.5833333333333 23.3035714285714 33.125 54.7321428571429
10 76.0416666666667 42.2321428571429 51.9642857142857 74.375
15 81.875 51.875 60.1785714285714 82.8571428571429
30 87.5 70.8035714285714 81.5178571428571 91.7857142857143
50 88.125 80.5357142857143 89.4642857142857 91.9642857142857
100 91.6666666666667 84.375 93.3928571428571 94.1964285714286
25 changes: 25 additions & 0 deletions raw-accuracies/eidolon-experiment_accuracies.txt
@@ -0,0 +1,25 @@
"condition" "human_observers(average)" "AlexNet" "GoogLeNet" "VGG-16"
"1-0-10" 83.375 76.25 85 86.25
"1-10-10" 86 83.75 90.3125 87.8125
"128-0-10" 5.75 5.3125 5.3125 8.4375
"128-10-10" 7.375 10.625 8.4375 7.8125
"128-3-10" 7.25 7.8125 6.5625 6.875
"1-3-10" 87.5 80 90.9375 90.9375
"16-0-10" 14 9.6875 10 6.875
"16-10-10" 50.25 19.375 18.75 17.1875
"16-3-10" 22.5 12.1875 9.0625 6.875
"2-0-10" 81.25 71.875 84.6875 74.375
"2-10-10" 86.125 77.5 87.1875 86.25
"2-3-10" 84.375 72.8125 83.125 76.5625
"32-0-10" 7.75 8.125 6.5625 6.25
"32-10-10" 29.125 9.6875 13.4375 10.3125
"32-3-10" 12 10 8.4375 7.1875
"4-0-10" 68.5 45.9375 55.9375 48.125
"4-10-10" 83.125 71.25 77.1875 71.875
"4-3-10" 79 58.4375 70.625 64.0625
"64-0-10" 6.5 7.8125 8.75 5.9375
"64-10-10" 12 10.9375 7.5 6.5625
"64-3-10" 8.25 6.875 7.5 5.625
"8-0-10" 40.625 16.25 11.875 10.9375
"8-10-10" 75.25 44.375 41.25 29.375
"8-3-10" 59.75 25 21.5625 19.375
9 changes: 9 additions & 0 deletions raw-accuracies/noise-experiment_accuracies.txt
@@ -0,0 +1,9 @@
"condition" "human_observers(average)" "AlexNet" "GoogLeNet" "VGG-16"
0 80.5 70 81.6964285714286 89.9107142857143
0.03 79.625 62.5 76.25 83.125
0.05 78.125 50.3571428571429 66.5178571428571 75.0892857142857
0.1 75.125 19.2857142857143 34.0178571428571 44.0178571428571
0.2 60.875 9.10714285714286 9.46428571428571 14.9107142857143
0.35 45.625 6.16071428571429 5.80357142857143 8.66071428571428
0.6 16.75 6.25 6.25 6.69642857142857
0.9 6 6.25 6.25 6.69642857142857
