
Commit

readme formatting
cvzoya committed Oct 28, 2015
1 parent 6560ff9 commit d124aff
Showing 3 changed files with 11 additions and 14 deletions.
README.md (7 changes: 3 additions & 4 deletions)
@@ -1,13 +1,12 @@
# Dataset

- The user data included here corresponds to the following paper:
+ The user data included here corresponds to the following [paper](http://vcg.seas.harvard.edu/files/pfister/files/infovis_submission251-camera.pdf):

```
Beyond Memorability: Visualization Recognition and Recall.
- Borkin, M., Bylinskii, Z., Kim, N.W., Bainbridge C.M., Yeh, C.S., Borkin, D., Pfister, H., & Oliva, A.
+ Borkin, M.*, Bylinskii, Z.*, Kim, N.W., Bainbridge C.M., Yeh, C.S., Borkin, D., Pfister, H., & Oliva, A.
IEEE Transactions on Visualization and Computer Graphics (Proceedings of InfoVis 2015)
```
- [paper pdf](http://vcg.seas.harvard.edu/files/pfister/files/infovis_submission251-camera.pdf)

Please cite this paper if you use this data.

@@ -22,7 +21,7 @@ By using this dataset, you are agreeing to the following license agreement:
*To use any of these images in a research paper or technical report, do not exceed thumbnail size.

This data contains taxonomic labels and attributes for 393 visualizations, as described in:
- [Main data README](https://github.com/massvis/dataset/blob/master/README.md)
+ [README](https://github.com/massvis/dataset/blob/master/README.md)

These include the source, category, and type of each visualization, as well as the following attributes: data-ink ratio, number of distinctive colors, black & white, visual density, human-recognizable object (HRO), and human depiction. We also provide the transcribed title for each visualization and where the title was located on the visualization, as well as whether the visualization contained data or message redundancy. From [Borkin et al. 2013] we include at-a-glance memorability scores (after 1 second of viewing), and from [Borkin, Bylinskii, et al. 2015] we include prolonged memorability scores (after 10 seconds of viewing).
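For illustration, here is a minimal Python sketch of how these labels might be combined; the file name and column names (`targets_metadata.csv`, `hro`, `memorability_10s`) are assumed placeholders rather than the dataset's actual identifiers, so consult the main data README for the real files.

```python
import csv
from statistics import mean

# Minimal sketch, not the dataset's actual schema: "targets_metadata.csv",
# "hro", and "memorability_10s" are assumed placeholder names.
with open("targets_metadata.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Compare prolonged memorability for visualizations with and without
# a human-recognizable object (HRO).
with_hro = [float(r["memorability_10s"]) for r in rows if r["hro"] == "1"]
without_hro = [float(r["memorability_10s"]) for r in rows if r["hro"] == "0"]
print(f"with HRO: {mean(with_hro):.3f}   without HRO: {mean(without_hro):.3f}")
```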

csv_files/README.md (10 changes: 5 additions & 5 deletions)
@@ -1,10 +1,10 @@
# Dataset

- The user data included here corresponds to the following paper:
+ The user data included here corresponds to the following [paper](http://vcg.seas.harvard.edu/files/pfister/files/infovis_submission251-camera.pdf):

```
Beyond Memorability: Visualization Recognition and Recall.
- Borkin, M., Bylinskii, Z., Kim, N.W., Bainbridge C.M., Yeh, C.S., Borkin, D., Pfister, H., & Oliva, A.
+ Borkin, M.*, Bylinskii, Z.*, Kim, N.W., Bainbridge C.M., Yeh, C.S., Borkin, D., Pfister, H., & Oliva, A.
IEEE Transactions on Visualization and Computer Graphics (Proceedings of InfoVis 2015)
```

@@ -29,15 +29,15 @@ Additionally, here we include the eye tracking and text description data as well

### [fixationsByVis.zip](https://github.com/massvis/eyetracking/blob/master/csv_files/fixationsByVis.zip)

- For each filename corresponding to one of the 393 visualizations, and for both the encoding and recognition phases of our visualization studies (Borkin, Bylinskii, et al. 2015), there is a directory of user fixations. A comma-separated file corresponds to each user. Each line of the file corresponds to a single fixation and is formatted as follows: fixation number (order within fixation sequence), x and y locations of fixation in the image, fixation duration in ms. So, for example, `fixationsByVis/wsj612/enc/wab.csv` contains the fixations (during the encoding phase) of user labeled `wab` on visualizations labeled `wsj612`.
+ For each filename corresponding to one of the 393 visualizations, and for both the encoding and recognition phases of our visualization studies ([Borkin, Bylinskii, et al. 2015](http://vcg.seas.harvard.edu/files/pfister/files/infovis_submission251-camera.pdf)), there is a directory of user fixations. There is one comma-separated file per user. Each line of the file corresponds to a single fixation and is formatted as follows: fixation number (order within the fixation sequence), x and y locations of the fixation in the image, and fixation duration in ms. So, for example, `fixationsByVis/wsj612/enc/wab.csv` contains the fixations (during the encoding phase) of the user labeled `wab` on the visualization labeled `wsj612`.
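As a minimal Python sketch of reading this format (the path is the example above and assumes the zip has been extracted in place; nothing here is part of the released code):

```python
import csv

# Example path from the text above: encoding-phase fixations of user "wab"
# on the visualization "wsj612".
path = "fixationsByVis/wsj612/enc/wab.csv"

fixations = []
with open(path, newline="") as f:
    for row in csv.reader(f):
        # Columns per the description above: fixation number, x, y, duration (ms).
        number, x, y, duration_ms = int(row[0]), float(row[1]), float(row[2]), float(row[3])
        fixations.append((number, x, y, duration_ms))

print(len(fixations), "fixations,", sum(f[3] for f in fixations), "ms of fixation time")
```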

### [userDescriptions.zip](https://github.com/massvis/eyetracking/blob/master/csv_files/userDescriptions.zip)

- For each of the 393 visualizations, we provide a filename with all user-generated free-form descriptions, from the recall phase of our visualization studies (Borkin, Bylinskii, et al. 2015). Each file is formatted as one user description per line, where the user's initials are separated by a colon: from the textual description.
+ For each of the 393 visualizations, we provide a file containing all user-generated free-form descriptions from the recall phase of our visualization studies ([Borkin, Bylinskii, et al. 2015](http://vcg.seas.harvard.edu/files/pfister/files/infovis_submission251-camera.pdf)). Each file is formatted as one user description per line, with the user's initials separated from the textual description by a colon.
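A minimal parsing sketch, assuming the `initials: description` line layout described above (the file name is an assumed example, not necessarily the naming used inside the zip):

```python
# Assumed example file name; the zip contains one such file per visualization.
path = "userDescriptions/wsj612.txt"

descriptions = {}
with open(path) as f:
    for line in f:
        if ":" not in line:
            continue  # skip blank or malformed lines
        initials, text = line.split(":", 1)  # split only on the first colon
        descriptions[initials.strip()] = text.strip()

for user, text in descriptions.items():
    print(f"{user}: {text[:60]}")
```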

### [descriptionAnnotations.csv](https://github.com/massvis/eyetracking/blob/master/csv_files/descriptionAnnotations.csv)

- These include the manually coded user-generated description texts as described in (Borkin, Bylinskii, et al. 2015). This file contains one user description per line, with the following columns:
+ This file contains the manually coded user-generated description texts described in [Borkin, Bylinskii, et al. 2015](http://vcg.seas.harvard.edu/files/pfister/files/infovis_submission251-camera.pdf), with one user description per line and the following columns (a short reading sketch follows the list):
* `filename` is the name of the visualization (note: the visualization name will occur as many times as there are user descriptions for it)
* `user` includes the code for the user name
* `description quality` is the manual quality rating for the description (0-3)
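A minimal reading sketch, using only the columns listed above (the exact header strings are assumed to match the list):

```python
import csv
from collections import defaultdict

# Group annotations by visualization; column names follow the list above.
by_vis = defaultdict(list)
with open("descriptionAnnotations.csv", newline="") as f:
    for row in csv.DictReader(f):
        by_vis[row["filename"]].append((row["user"], row["description quality"]))

# e.g., number of rated descriptions per visualization
for vis, annotations in sorted(by_vis.items())[:5]:
    print(vis, len(annotations))
```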
matlab_files/README.md (8 changes: 3 additions & 5 deletions)
@@ -1,13 +1,12 @@
# Dataset

- The user data included here corresponds to the following paper:
+ The user data included here corresponds to the following [paper](http://vcg.seas.harvard.edu/files/pfister/files/infovis_submission251-camera.pdf):

```
Beyond Memorability: Visualization Recognition and Recall.
- Borkin, M., Bylinskii, Z., Kim, N.W., Bainbridge C.M., Yeh, C.S., Borkin, D., Pfister, H., & Oliva, A.
+ Borkin, M.*, Bylinskii, Z.*, Kim, N.W., Bainbridge C.M., Yeh, C.S., Borkin, D., Pfister, H., & Oliva, A.
IEEE Transactions on Visualization and Computer Graphics (Proceedings of InfoVis 2015)
```
- [paper pdf](http://vcg.seas.harvard.edu/files/pfister/files/infovis_submission251-camera.pdf)

Please cite this paper if you use this data.

@@ -56,14 +55,13 @@ Text coding:
### [visualization code](https://github.com/massvis/eyetracking/blob/master/matlab_files/visualizationCode)

We include Matlab scripts for visualizing the eyetracking data.
- These visualizations were presented here:
+ These visualizations were presented in the following [paper](http://web.mit.edu/zoya/www/Bylinskii_eyefixations_small.pdf):

```
Eye Fixation Metrics for Large Scale Analysis of Information Visualizations
Bylinskii, Z., Borkin, M.
First Workshop on Eyetracking and Visualizations (ETVIS 2015) in conjunction with IEEE VIS 2015
```
- [paper pdf](http://web.mit.edu/zoya/www/Bylinskii_eyefixations_small.pdf)

Please cite this paper if you use this code.

