Merge branch 'master' of https://github.com/haehn/CP
jamestompkin committed Mar 31, 2018
2 parents 17e18c9 + 8c9a9ae commit 55c2fee
Showing 5 changed files with 38 additions and 37 deletions.
28 changes: 18 additions & 10 deletions PAPER/06_bars_and_framed_rectangles_experiment.tex
@@ -1,6 +1,17 @@
\begin{figure}[!t]
\includegraphics[width=\linewidth]{figure12_mlae_better_all.pdf}
\caption{\textbf{Computational results of the bars-and-framed-rectangles experiment.} \textit{Left:} Visual encodings of two bars for a length judgement task (bottom) following Cleveland and McGill's proposed experiment. Perceiving which bar is longer is much easier for humans when a frame is added (top). For the networks we test (a Multi-layer Perceptron (MLP), the LeNet CNN, and feature generation using VGG19 and Xception; trained from scratch, or with ImageNet weights as indicated by *), there seems to be no significant difference between the encodings as reported by MLAE and 95\% confidence intervals.}
\label{fig:figure12_mlae}
\end{figure}


\section{Experiment: Bars and Framed Rectangles}

Visual cues can help convert graphical elements back to their real-world variables. Cleveland and McGill introduced the bars-and-framed-rectangles experiment, which compares the perceptual judgement tasks of length and position along non-aligned scales~\cite{cleveland_mcgill}. Figure~\ref{fig:figure12_mlae} shows both variations on the left. It is not easy to judge which bar is larger in the bottom picture, which involves a length judgement. However, adding a small frame as a reference transfers this length judgement to a position judgement along non-aligned scales. With the frame, it is easy to see that the right bar is slightly larger than the left one, since the whitespace at the top of the right frame is smaller than that of the left.

As Cleveland and McGill explain, judging the whitespace is actually also a length judgement rather than a position judgement~\cite{cleveland_mcgill}. They relate this task to Weber's law, which states that the perceivable difference within a distribution is proportional to the initial size of the distribution~\cite{householder1940weber}. Here, this means that humans can easily perceive the difference in the white space since its initial size is small, while the equally small change in the lengths of the black bars is not easily perceivable. The just noticeable difference (JND) is smaller when the initial stimulus is smaller in size (here, the white space), so changes to it are easier to detect.

We mimic the bars-and-framed-rectangles experiment as a two-value regression task. We create rasterized visualizations of size $100\times100$ as shown in Figure~\ref{fig:figure12_mlae} and let our networks estimate the sizes of the stimuli.
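As an illustration, such stimuli could be rasterized as in the following minimal sketch; it assumes NumPy, and the bar positions, widths, and frame height are hypothetical examples rather than our exact generator settings.

import numpy as np

def stimulus(h_left, h_right, framed=False, size=100):
    """Rasterize two vertical bars; optionally add fixed-height frames."""
    img = np.zeros((size, size), dtype=np.float32)
    for x, h in ((20, h_left), (60, h_right)):
        img[size - h:, x:x + 20] = 1.0   # filled bar, anchored at the baseline
        if framed:
            top = size - 80              # reference frame of fixed height 80
            img[top:, x] = 1.0           # left frame edge
            img[top:, x + 19] = 1.0      # right frame edge
            img[top, x:x + 20] = 1.0     # top frame edge
    return img

# Two-value regression target: the two bar heights.
image, target = stimulus(62, 64, framed=True), np.array([62, 64])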

\subsection{Hypotheses}

@@ -39,9 +50,7 @@ \subsection{Hypotheses}
%\end{table}


\subsection{Weber-Fechner's Law}

As identified by Cleveland and McGill, the bars-and-framed-rectangles experiment is closely related to Weber's law. This psychophysics law states that the perceivable difference within a distribution is proportional to the initial size of the distribution. Weber's law goes hand-in-hand with Fechner's law. We conduct an additional experiment based on the original illustrations of the Weber-Fechner law to investigate whether this law can be applied to the computational perception of our classifiers (Fig.~\ref{fig:weber_law}).
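For reference, a standard textbook formulation of the two laws (not specific to Cleveland and McGill) is

\begin{equation}
\frac{\Delta I}{I} = k, \qquad S = c \, \ln \frac{I}{I_0},
\end{equation}

where $I$ is the stimulus intensity, $\Delta I$ the just noticeable difference, $k$ and $c$ constants, $S$ the perceived sensation according to Fechner's law, and $I_0$ the detection threshold.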

%\begin{figure}[t]
% \includegraphics[width=\linewidth]{weber_overview}
@@ -73,18 +82,17 @@ \subsection{Weber-Fechner's Law}
%\end{table}


\subsection{Point Cloud Experiment}

We conduct an additional experiment to test whether Weber's law applies to convolutional neural networks for graphical perception. We generate three 2D point clouds as base stimuli, each created randomly by activating 10, 100, or 1000 pixels in a $100\times100$ raster image. We then activate between 1 and 10 additional random pixels within the initial distribution, carefully selecting only inactive pixels. We show examples in Figure~\ref{fig:weber_law} (left). The number of additional points is harder to identify when they are added to the 1000-pixel set, while the 10-pixel set allows them to be counted easily. We then let our networks solve a regression task to estimate the number of added points.
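A minimal sketch of this stimulus generation, assuming NumPy (the exact sampling in our generator may differ):

import numpy as np

def point_cloud(base, added, size=100, seed=None):
    """Activate `base` random pixels, then `added` more among inactive ones."""
    rng = np.random.default_rng(seed)
    img = np.zeros(size * size, dtype=np.float32)
    img[rng.choice(size * size, size=base, replace=False)] = 1.0
    inactive = np.flatnonzero(img == 0)   # carefully select inactive pixels only
    img[rng.choice(inactive, size=added, replace=False)] = 1.0
    return img.reshape(size, size)

# Regression target: the number of added points (1 to 10).
image, target = point_cloud(base=1000, added=7), 7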


\subsection{Results}

A first run indicates that the framed rectangles perform better, but this result is not yet confirmed.


\begin{figure}[t]
\includegraphics[width=\linewidth]{weber_mlae_noise_all.pdf}
\caption{\textbf{Computational results of the point cloud experiment.} Log absolute error means and 95\% confidence intervals for the \emph{point cloud experiment} testing whether Weber's law applies to our classifiers. We test the performance of a Multi-layer Perceptron (MLP), the LeNet Convolutional Neural Network, as well as feature generation using the VGG19 and Xception networks trained on ImageNet.}
\label{fig:weber_law}
\end{figure}
36 changes: 9 additions & 27 deletions PAPER/07_results.tex
@@ -1,31 +1,13 @@
\section{Results and Discussion}

General discussion.

\subsection{Classifiers}

\textbf{Graphical Perception by CNNs.} In all experiments, CNNs were able to regress visual encodings to their quantitative variables reasonably well, with error rates comparable to humans (a minimal sketch of the task setup follows below). This suggests that future work can enable full-blown understanding of different chart types, e.g., a classifier that can identify one type of bar chart.
\\~\\
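To illustrate the task setup, a minimal regressor could look as follows. This is a sketch assuming TensorFlow/Keras with placeholder data, not our exact architectures or training procedure.

import numpy as np
from tensorflow.keras import layers, models

# MLP that regresses a 100x100 stimulus image to one quantitative variable.
model = models.Sequential([
    layers.Flatten(input_shape=(100, 100)),
    layers.Dense(256, activation='relu'),
    layers.Dense(1),                      # the regressed value
])
model.compile(optimizer='adam', loss='mse')

X = np.random.rand(32, 100, 100).astype('float32')   # placeholder stimuli
y = np.random.rand(32, 1).astype('float32')          # placeholder targets
model.fit(X, y, epochs=1, verbose=0)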
\textbf{Understanding Infographics by CNNs.} While elementary perceptual tasks can be learned by CNNs, having CNNs `understand' information visualizations, which come in all variations, seems to be a very challenging task. A simple \textit{Google search} for bar charts yields an incredible number of variations.
\\~\\
\textbf{Stimuli Variability.} All our generated stimuli exhibit a certain variability, ranging from only 20 permutations for the simplest volume elementary perceptual task to millions for the position-length experiment. We additionally add random noise to each stimulus to ensure that the CNNs do not just memorize images (sketched below). We do not observe a direct correlation between variability and `perceptibility' by our networks, which further suggests that the networks do not just memorize images. We also perform a direct noise and no-noise comparison (see supplemental) without any significant effect.
\\~\\
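The noise perturbation can be as simple as the following sketch, assuming NumPy; the noise scale here is a hypothetical value, not our actual setting.

import numpy as np

def add_noise(img, scale=0.05, seed=None):
    """Perturb a binary stimulus with small uniform noise to deter memorization."""
    rng = np.random.default_rng(seed)
    return np.clip(img + rng.uniform(-scale, scale, img.shape), 0.0, 1.0)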
\textbf{Concept Learning.} Our cross-network experiments show that simple variations throw off the networks and result in high error rates. This suggests that the networks are not actually learning concepts but rather slight variations of pixel values. While we try to counteract this with our variability settings, this behavior appears to be inherent to how the networks operate.
\\~\\
\textbf{Transfer Learning using ImageNet.} Classifiers trained on ImageNet are tuned towards natural images. While VGG19 and Xception perform better than the shallower LeNet, their full performance only develops when training from scratch. This shows how truly different natural images are from infographics.




%
%\subsection{Elementary Perceptual Tasks}
%
%
%\subsection{Position-Angle Experiment}
%
%
%\subsection{Position-Length Experiment}
%
%\subsection{Bars and Framed Rectangles Experiment}
%


%\begin{figure}[t]
% \includegraphics[width=\linewidth]{figure12_val_loss.png}
% \caption{\textbf{Classifier Efficiency of the Bars and Framed Rectangles experiment.} Categorical cross-entropy loss for the \emph{bars and framed rectangles experiment} as described by Cleveland and McGill~\cite{cleveland_mcgill}. The frame around the bars adds an additional visual cue that enables faster network convergence. This is not yet reproducible!}
% \label{fig:figure12_val_loss}
%\end{figure}
\textbf{Anti-aliasing.} To keep things simple, we create rasterized visualizations without interpolation. However, networks trained on natural images (such as VGG19 * and Xception *, with ImageNet weights) might be biased towards smoother, less prominent edges. We compare anti-aliased stimuli against the original ones and find no significant effect (see supplemental).
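One common way to produce such anti-aliased variants is supersampling; the sketch below assumes a hypothetical render(resolution) callback like the generators above, not our actual pipeline.

import numpy as np

def antialias(render, size=100, factor=4):
    """Render at `factor`x resolution, then mean-pool to smooth the edges."""
    hi = render(size * factor)   # 2D array of shape (size*factor, size*factor)
    return hi.reshape(size, factor, size, factor).mean(axis=(1, 3))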
11 changes: 11 additions & 0 deletions PAPER/paper.bib
@@ -17,6 +17,17 @@ @article{cleveland_mcgill
publisher={Taylor \& Francis}
}

@article{householder1940weber,
title={Weber laws, the Weber law, and psychophysical analysis},
author={Householder, Alston S and Young, Gale},
journal={Psychometrika},
volume={5},
number={3},
pages={183--193},
year={1940},
publisher={Springer}
}

@inproceedings{maneesh_deconstructing_d3,
author = {Harper, Jonathan and Agrawala, Maneesh},
title = {Deconstructing and Restyling D3 Visualizations},
Binary file modified PAPER/paper.pdf
Binary file modified PAPER/paper.synctex.gz
