
READING FAÇADES: INTEGRATING HUMAN AND COMPUTER VISION

ARCH 5110: Architecture as Catalyst | March 9-13, 2015 | U. of Minnesota

INSTRUCTORS

Guest Instructor: Jentery Sayers, Assistant Professor, English and Social, Cultural, and Political Thought; Director, Maker Lab in the Humanities, University of Victoria

Faculty Instructor: Andrea J. Johnson, AIA, LEED BD+C, Assistant Professor, UMN School of Architecture

DESCRIPTION

This Catalyst workshop explores the intersections of human and computer vision in the construction of three-dimensional space. How does the emergence of computer vision, or machine phenomenology, inform our interpretations of the built environment? How can the face or exterior of a building be detected, organized, and understood? Instead of approaching human and computer vision in a binary fashion, how might they be blended to ask questions about society, technology, and design?

In this workshop, we will combine image capture, computer programming, and physical computing techniques with object-detection frameworks, both to expand existing perceptions of built environments and to consider the relevance of computer vision to building façade design, archiving, and analysis. Here, the affordances of computer vision to systematically, superficially, and rapidly detect, stitch, and model 3D objects will prove informative. These affordances will be combined with critical studies of algorithms and computational culture. Students will participate in hands-on, introductory workshops on Python, Git, photogrammetry, and image processing. No previous experience in these areas will be assumed.
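At its most basic, the image processing introduced here can be illustrated in a few lines of Python. The toy sketch below (not part of the official workshop materials) converts a tiny RGB "image," represented as nested lists of pixel tuples, to grayscale using the standard luminance weights:

```python
# Toy example: convert a 2x2 RGB "image" (nested lists of (R, G, B)
# tuples) to grayscale using the common luminance weights.

def to_grayscale(image):
    """Return a 2D list of luminance values (0-255) for an RGB image."""
    return [
        [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
        for row in image
    ]

pixels = [
    [(255, 0, 0), (0, 255, 0)],      # red, green
    [(0, 0, 255), (255, 255, 255)],  # blue, white
]

print(to_grayscale(pixels))  # → [[76, 150], [29, 255]]
```

Real pipelines operate on arrays of millions of pixels with libraries such as OpenCV, but the underlying idea — arithmetic over a grid of numbers — is the same.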

FORMAT

Foundational lectures to introduce topics; workshops for skill-building; studio sessions with project critiques; seminar discussions

OBJECTIVES

  1. Approach computer vision as a technical and cultural matter, through a combination of theory and practice.
  2. Build 3D models with repositories of 2D images.
  3. Construct, describe, archive, and share image repositories using distributed version control.
  4. Consider the relevance of computer programming (e.g., in Python) to the representation and expression of 3D space.
  5. Experiment with computer vision across a spectrum of realist representation and speculative expression.

KEYWORDS

  • Computer Vision: methods for acquiring, processing, analyzing, and understanding images
  • Object Recognition: task of finding and identifying objects in an image or video sequence
  • Photogrammetry: taking measurements from photos to determine locations of surface points
  • Physical Computing: interactive physical systems using software and hardware that can sense and respond to the analog world
  • Python Programming: an open-source, high-level, easy-to-learn programming language
  • Git: a distributed revision control system for archiving and sharing data
  • Software Studies: the examination of how software and algorithms are embedded in society and culture, often with an emphasis on the material composition of media
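The photogrammetric idea of recovering surface points from photographs can be sketched with the classic stereo relation: depth = focal length × baseline / disparity. The function and values below are a toy illustration under assumed units (pixels and meters), not drawn from any workshop dataset:

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Estimate the depth (in meters) of a point seen in two side-by-side photos.

    focal_length_px: camera focal length, expressed in pixels
    baseline_m: distance between the two camera positions, in meters
    disparity_px: horizontal shift of the point between the two images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# A point that shifts 40 px between photos taken 0.5 m apart,
# with a 2000 px focal length, sits about 25 m away.
print(depth_from_disparity(2000, 0.5, 40))  # → 25.0
```

Tools like PhotoScan automate this triangulation across thousands of matched points to produce dense 3D surfaces.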

ASSIGNMENTS

In addition to short readings, students will complete brief exercises anchored in computer vision, programming, and 3D modeling. Students will post their process and work throughout the week via GitHub. Each student will develop, create, and document a final project.
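For students new to version control, posting process work via GitHub boils down to a short routine at the command line. The sketch below is a hypothetical example (repository and file names are placeholders, not course requirements):

```shell
# Hypothetical daily routine for tracking process work with Git.
mkdir day1-notes && cd day1-notes
git init                           # start a local repository
echo "# Day 1 process notes" > notes.md
git add notes.md                   # stage the file
git -c user.name="Student" -c user.email="student@example.com" \
    commit -m "Add day 1 process notes"
git log --oneline                  # confirm the commit landed
# A real workflow would finish by pushing to GitHub: git push origin master
```

The Git workshop on Monday afternoon will walk through these steps in detail.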

The readings, exercises, and seminar discussions will stress how work in computer vision operates on a spectrum, from realist representation (e.g., depicting the built environment as accurately as possible) to speculative expression (e.g., using computation to create things that do not exist in the world). Throughout the week, students will be encouraged to explore and test this spectrum. What does photogrammetry allow us to see that we may not otherwise? How can it help us model lived, social reality? How can it help us stitch together alternative realities, make curious media, and prototype counterfactuals? How can it be performed collaboratively or creatively?

During the exhibition, students will be expected to share work that responds to these questions through digital or tactile media. Through this work, they will also be expected to stake out a position on the spectrum of representation and speculation.

DOCUMENTATION

Complete documentation of process and final project is required. All final files must be uploaded to GitHub by Monday, March 23. Minimum requirements:

  • Summary of work according to provided template
  • Images that fully document your final project (min. 10 images)
  • Images that document the workshop and your process (min. 10 images)
  • Working files used to create your project

Note: Save images as 72 ppi JPEGs, at least 3600 pixels in one dimension, at quality 10 or above
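A quick way to verify that an image meets the size requirement above is to check its pixel dimensions before uploading. The helper below is a hypothetical sketch (its name and the 3600 px threshold simply mirror this syllabus; it is not official course software):

```python
# Hypothetical check against the documentation image spec:
# at least 3600 pixels in one dimension.

def meets_min_dimension(width_px, height_px, minimum=3600):
    """Return True if at least one image dimension meets the minimum."""
    return max(width_px, height_px) >= minimum

print(meets_min_dimension(3600, 2400))  # → True
print(meets_min_dimension(1920, 1080))  # → False
```

In practice, a library such as Pillow can read the actual width and height from a saved file before applying a check like this.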

MATERIALS

Each student should have access to a computer. If possible (but not required), students should bring the following to meetings:

  • A laptop (Windows, OS X, or Linux)
  • A camera (a DSLR with an SD card, if possible)

Students are also encouraged to install the following on their machines:

They should also create an account at https://github.com/, if they have not done so already.

During the week, they may be asked to work with additional software and languages, such as Rhino, SketchUp, and Python, in which case they will be given additional instruction.

DRAFT SCHEDULE

  • Monday, 9:30-10:30: Introductions; Discussion of Syllabus and Schedule
  • Monday, 10:30-11:30: Computer Vision as Culture + Technique
  • Monday, 2-6: Gentle Introduction to Git, GitHub, and Markdown
  • Tuesday, 9-12: Introduction to PhotoScan
  • Tuesday, 2-6: Experimenting with PhotoScan (may include some Python programming)
  • Wednesday, 9-12: Fieldwork with Various Cameras
  • Wednesday, 2-6: Processing Image Sets
  • Thursday, 9-12: Sharing and Discussing Results
  • Thursday, 2-6: Preparing the Final Show
  • Friday, 9-2: Prepping and Setting Up the Final Show

GITHUB REPOSITORIES

RECOMMENDED READINGS