In January 2011, a formal experiment was conducted to answer open questions in the fields of personal information management (PIM), information re-finding, and information architecture. That January experiment can be found on GitHub as well. In April 2011, we conducted a follow-up experiment with a slightly different focus. This repository holds the relevant data of this second experiment from April 2011.
For both experiments, the testing framework tagstore was used to compare storing files (images, graphics, and documents) within the folder hierarchy of Microsoft Windows Explorer with storing them in tagstore.
The tagstore framework supports file management by applying tags. These tags are used to automatically generate navigation hierarchies called TagTrees.
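TagTree generation can be pictured as enumerating tag combinations: a file tagged with a set of tags becomes reachable through every ordered path over those tags, so the user can descend the hierarchy in any tag order. The following Python sketch illustrates this idea; it is only an illustration under that assumption, not the actual tagstore implementation, and the function name `tagtree_paths` is made up.

```python
from itertools import permutations

def tagtree_paths(tags):
    """Illustrative sketch: enumerate every navigation path for a tag set.

    A file tagged with n tags appears under each permutation of each
    non-empty subset of those tags, so it can be re-found regardless of
    the order in which the user descends the TagTree. This is NOT the
    real tagstore code, just a model of the idea.
    """
    paths = []
    for r in range(1, len(tags) + 1):
        for perm in permutations(sorted(tags), r):
            paths.append("/".join(perm))
    return sorted(paths)

# A file tagged "report" and "2011" is reachable via four paths:
print(tagtree_paths({"report", "2011"}))
# → ['2011', '2011/report', 'report', 'report/2011']
```

The combinatorial growth visible here (n tags yield far more than n paths) is also why TagTrees are generated automatically rather than maintained by hand.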
This repository contains anonymized data from the experiment so that the experiment can be checked, extended, or re-evaluated.
This is Open Data.
This is Open Science.
Other Experiment Repositories
The January experiment: https://github.com/novoid/2011-01-tagstore-formal-experiment
Compared to the January experiment, the April experiment has fewer technical logs, a two-week pause between filing and re-finding, more test persons, more test items, and more detailed questionnaires.
The experiment was conducted by a group led by Karl Voit (Graz University of Technology) and took place in Graz, Austria. Therefore, the language of the test persons was German, and all data the test persons were confronted with is in German. Derived data and supplementary material such as evaluation scripts are in English, though.
Several related white papers are linked on the tagstore homepage.
What is missing
In the near future, all relevant experiment data should be online. Where something is not online (for example, the test person (TP) videos), a README.org explains further details or links to an alternative hosting location.
Currently, the detailed results are being extracted. The raw results are somewhat misleading due to several issues, such as software bugs.
Here is the list of things that will be published here in the future:
Transcripts
We developed a detailed transcript language and are writing down each event very carefully: counting mouse clicks, transcribing every spoken word, and taking notes on each relevant interaction with a GUI element. This took us a very long time.
The definition of the transcript language will be published here as well.
The raw transcript files will be published here soon.
Transcript processing scripts
Python parsers read the transcript files and generate an ASCII summary and a CSV summary per transcript file. These parsers will be published too.
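As an illustration of what such a parser might do, here is a minimal Python sketch that counts events in a transcript and renders a CSV summary. The transcript line format (an event keyword such as CLICK or WORD at the start of each line) and all function names are assumptions made for this example; they are not the published transcript language or the actual evaluation scripts.

```python
import csv
import io
from collections import Counter

def summarize_transcript(lines):
    """Count event types in a transcript.

    Assumed (hypothetical) line format: each non-empty line starts
    with an event keyword, e.g. "CLICK save-button" or "WORD das".
    """
    counts = Counter()
    for line in lines:
        line = line.strip()
        if line:
            counts[line.split()[0]] += 1
    return counts

def write_csv_summary(counts):
    """Render the per-event counts as a CSV string."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["event", "count"])
    for event, count in sorted(counts.items()):
        writer.writerow([event, count])
    return buf.getvalue()

sample = ["CLICK save-button", "WORD das", "CLICK folder-icon"]
print(write_csv_summary(summarize_transcript(sample)))
```

A per-file summary like this could then be concatenated across all TPs to produce the combined CSV files mentioned below.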
Raw and Summarized Results
The summarized CSV files (covering all TPs) as well as spreadsheets containing the derived diagrams will be published.
Formal Experiment Report
A report containing the most important results will be published here.