# Shapes for haptic experiments

v1.4

In 2004, Morgan Casella, a student in the vision laboratory, began a project on line drawings and 3D shape perception. We created a class of objects that used structured randomness (known in the computer-graphics community as ‘noise’) in a systematic way, varying its frequency and therefore the objects’ ‘complexity’. An initial set of 25 objects was created and used in his thesis. We printed a few of them at a scale of about 20 cm³ and used those as drawing models.
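The frequency-varied noise idea can be sketched as follows. This is an illustrative Python sketch only (the originals were custom Mathematica routines); the function name, the sinusoidal perturbation scheme, and all parameters are assumptions, not the laboratory's actual code:

```python
import numpy as np

def noisy_blob(n_theta=64, n_phi=128, freq=4, amp=0.15, seed=0):
    """Radially perturb a unit sphere with band-limited random noise.

    Higher `freq` admits higher-frequency surface perturbations and
    hence a perceptually more 'complex' shape. Everything here is an
    illustrative assumption, not the lab's original Mathematica code.
    """
    rng = np.random.default_rng(seed)
    theta = np.linspace(0, np.pi, n_theta)        # polar angle
    phi = np.linspace(0, 2 * np.pi, n_phi)        # azimuth
    T, P = np.meshgrid(theta, phi, indexing="ij")

    # Sum a few random sinusoids with angular frequencies up to `freq`.
    r = np.ones_like(T)
    for _ in range(8):
        a, b = rng.integers(1, freq + 1, size=2)  # random frequencies
        phase = rng.uniform(0, 2 * np.pi, size=2)
        r += (amp / 8) * np.sin(a * T + phase[0]) * np.cos(b * P + phase[1])

    # Convert spherical to Cartesian vertex coordinates.
    x = r * np.sin(T) * np.cos(P)
    y = r * np.sin(T) * np.sin(P)
    z = r * np.cos(T)
    return np.stack([x, y, z], axis=-1)

verts = noisy_blob(freq=6)   # (n_theta, n_phi, 3) vertex grid
```

Increasing `freq` while holding `amp` fixed produces the kind of systematic complexity ladder described above.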

Later, Eric Egan used golf-ball-sized, 3D-printed and cast versions of the 25 stimuli for crossmodal haptic experiments. In those experiments we identified a subset of the objects in which adjacent members are haptically one just-noticeable difference (JND) apart, ordered in terms of perceptual complexity.

Subsequently, these objects have been used in a variety of experiments in our laboratory involving cross-modal perception.

This set of 8 objects starts with an object in the 'middle' of the original 25 (a shape we call, for no apparent reason, 'Golden Boy') and steps through the series in 1 JND intervals.

The name 'Glaven' was coined by Martin Voshell, a very early vision lab alum, and is based on the Jerry Lewis / Nutty Professor-style interjections of Professor John Frink of The Simpsons. Frink is the official laboratory mascot.

The original objects were generated using a custom set of software routines in Wolfram Mathematica. We prepared this set for 3D printing with Autodesk Meshmixer. We are making the scan data available here for use by the research community. Please cite our dataset:

  • Phillips, F, Casella, M, and Egan, EJL (2016). Glaven objects (v1.4) [3D Object Files]. Retrieved from http://www.skidmore.edu/~flip
  • Phillips, F., Egan, E. J. L. & Perry, B. N. Perceptual equivalence between vision and touch is complexity dependent. Acta Psychol 132, 259–266 (2009).
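The 3D-printing step above depends on exporting meshes to STL. A minimal ASCII STL writer is sketched below; this is illustrative only (the repository's actual conversion used a Mathematica notebook, Convert to STL.nb), and the function name and triangle layout are assumptions:

```python
import numpy as np

def write_ascii_stl(path, triangles, name="glaven"):
    """Write an (N, 3, 3) array of triangles to an ASCII STL file.

    Illustrative sketch only: the dataset's real conversion was done
    in Mathematica, not with this function.
    """
    tri = np.asarray(triangles, dtype=float)
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for a, b, c in tri:
            n = np.cross(b - a, c - a)            # facet normal
            norm = np.linalg.norm(n)
            n = n / norm if norm > 0 else n
            f.write(f"  facet normal {n[0]:.6e} {n[1]:.6e} {n[2]:.6e}\n")
            f.write("    outer loop\n")
            for v in (a, b, c):
                f.write(f"      vertex {v[0]:.6e} {v[1]:.6e} {v[2]:.6e}\n")
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write(f"endsolid {name}\n")
```

ASCII STL is verbose but universally readable by slicers and mesh tools, which suits archival stimulus sets.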

All commercial uses, other than those granted by the CC BY-NC-SA 4.0 license, are explicitly reserved by the authors. For permission, please contact flip@skidmore.edu

## References
  • Casella, M. W. What can drawing tell us about our mental representation of shape? [Thesis] (2005).
  • Phillips, F., Casella, M. W. & Gaudino, B. M. What can drawing tell us about our mental representation of shape? Journal of Vision 5, 522–522 (2005).
  • Phillips, F. Creating noisy stimuli. Perception 33, 837–854 (2004).
  • Phillips, F., Egan, E. J. L. & Perry, B. N. Perceptual equivalence between vision and touch is complexity dependent. Acta Psychol 132, 259–266 (2009).
  • Phillips, F. & Egan, E. J. L. Crossmodal information for visual and haptic discrimination. SPIE 7240, 72400H–15 (2009).
## License

'Gibson Feelies' by Flip Phillips & Eric Egan is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
