Open Audio Weekend Projects

Table of Contents

BPL Sampler
A hip hop beat where two of the sampler instruments are voices excerpted from the Brooklyn Public Library's Our Streets, Our Stories collection. This active listening and remixing model could have great application in the classroom!

CrowdScribe
A Chrome extension prototype for publicly requesting and gathering transcriptions. This prototype raises awareness around accessibility, allows for the crowdsourcing of transcription, and is designed with live events in mind. By targeting live events, the extension builds upon existing communities and audiences.

Homemade History
A project modeling potential engagement and reuse activities around oral history collections. Building on NYPL's Open Transcript Editor, this model would allow users to clip two minutes of an oral history and record their own complementary response to the clip, which would be ingested back into the larger collection.

P.I.T.C.H.Y. D.A.W.G. (Perfecting Interactive Technology for Content Heard by You Despite Awkward Word Groupings)
Platform for sharing related media synced with audio. Listeners can select from three modes of listening: Audio Only, Highlights, and Full Experience. Full Experience provides an enhanced experience by displaying both the transcript and related media synced with the audio. Highlights allows for condensed navigation of the audio by displaying keywords indexed from the transcript.

Storynode
To identify place names, transcripts are run through the Stanford Named Entity Recognizer. These place names are then plotted on a map, creating a cartographic way of browsing audio and transcriptions.

InstaBurns
InstaBurns is an experiment in auto-generating common terms and their frequency from transcripts in order to explore the relationship of terms within and across audio files. The InstaBurns platform also uses significant terms to automatically generate a slideshow of related images using the Google Image API.

A-to-V
A-to-V is a one-stop central database where collectors of oral histories provide searchable information about their audio files and make those files directly available to users.

BPL Sampler

Short description

A hip hop beat where two of the sampler instruments are voices excerpted from the Brooklyn Public Library's Our Streets, Our Stories collection. This active listening and remixing model could have great application in the classroom!

Links and materials

Github Repo

Long description

How can audio storytelling and music combine in a way that encourages people to engage more with the lyrical content? From two Brooklyn Public Library Our Streets, Our Stories interviews, I’ve created two sampling instruments, Marty & Inez, as well as a basic hip hop beat that you can play over.

CrowdScribe

logo of crowd and scribe as written in shorthand

Short Description

A Chrome extension prototype for publicly requesting and gathering transcriptions. This prototype raises awareness around accessibility, allows for the crowdsourcing of transcription, and is designed with live events in mind. By targeting live events, the extension builds upon existing communities and audiences.

Code/wireframes

[https://github.com/ClarisaD/crowdscribe](https://github.com/ClarisaD/crowdscribe)

Long description

Crowdscribe is a proof of concept for a Chrome extension that supports crowdsourced transcriptions. Users can request transcriptions of media using the extension, and users who are on the same webpage at the same time will get a notification to help transcribe media on the page.
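
The prototype doesn't spell out the notification mechanism, but the core matchmaking idea (track who is on each page, alert co-visitors when a transcription is requested) fits in a few lines. Here is a minimal, hypothetical Python sketch of that server-side logic; the class and method names are assumptions, and a real extension would deliver the notifications over a push channel such as WebSockets:

```python
# Hypothetical sketch of CrowdScribe-style matchmaking: track which users
# are viewing which page, and notify co-visitors when someone requests a
# transcription of media on that page.
from collections import defaultdict

class CrowdScribeHub:
    def __init__(self):
        # page URL -> set of user ids currently viewing that page
        self.visitors = defaultdict(set)

    def join(self, url, user_id):
        self.visitors[url].add(user_id)

    def leave(self, url, user_id):
        self.visitors[url].discard(user_id)

    def request_transcription(self, url, requester_id, media_id):
        """Return the co-visitors who should be notified to help transcribe."""
        helpers = self.visitors[url] - {requester_id}
        return [
            {"user": uid, "message": f"Help transcribe {media_id} on {url}"}
            for uid in helpers
        ]

hub = CrowdScribeHub()
hub.join("https://example.com/live-event", "alice")
hub.join("https://example.com/live-event", "bob")
print(hub.request_transcription("https://example.com/live-event", "alice", "video-1"))
```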

Homemade History

logo composed of project name on dark background

Short Description

A project modeling potential engagement and reuse activities around oral history collections. Building on NYPL's Open Transcript Editor, this model would allow users to clip two minutes of an oral history and record their own complementary response to the clip, which would be ingested back into the larger collection.

Audience

School age to baby boomers.

Links and materials

[Demo Presentation](https://github.com/nypl-openaudio/start-here/blob/master/Projects/Images/Homemade%20History.pdf)

Long description

Anyone can contribute to existing oral histories. Using a mobile app (or web portal), users can access the pre-approved NYPL oral history collections via a public version of the Open Transcript Editor; listen and follow along with the transcript, then extract sound clips of interest using a highlight/copy/paste text-based UI; record and edit their own story clips to add to, amplify, and comment on the existing histories; mix the results together; add images and tags; and share the remix of dynamic audio plus images with the world. Results are automatically shared back to NYPL’s oral history production staff to be curated (and further edited, if necessary), transcribed, and republished back to the collections. Further distribution will occur via streaming, podcasting, and social media sharing. The NYPL audio production team will have a facilitation guide for users and live online chat to assist with technical operation of the app.

Wireframe to be prepared along with a business case. An existing audio app will be researched as a possible foundational tool.
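
As an illustration of the clip-extraction step, here is a minimal Python sketch using the pydub library (an assumption about tooling; it requires ffmpeg, and the file names and timestamps are placeholders). It cuts a clip out of a recording and enforces the two-minute limit:

```python
# Hypothetical sketch of extracting a capped two-minute clip from an oral
# history recording, using pydub (which relies on ffmpeg for mp3 I/O).
from pydub import AudioSegment

def extract_clip(path, start_s, end_s, out_path, max_len_s=120):
    """Cut a clip (capped at max_len_s seconds) out of a recording."""
    end_s = min(end_s, start_s + max_len_s)      # enforce the two-minute limit
    audio = AudioSegment.from_file(path)
    clip = audio[start_s * 1000 : end_s * 1000]  # pydub indexes in milliseconds
    clip.export(out_path, format="mp3")
    return out_path

extract_clip("oral_history.mp3", start_s=354, end_s=474, out_path="clip.mp3")
```

In the model above, the start and end times would come from the highlighted transcript text, since the Open Transcript Editor aligns transcript words with timestamps.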

P.I.T.C.H.Y. D.A.W.G. (Perfecting Interactive Technology for Content Heard by You Despite Awkward Word Groupings)

image of a wolf howling team name

Short Description

Platform for sharing related media synced with audio. Listeners can select from three modes of listening: Audio Only, Highlights, and Full Experience. Full Experience provides an enhanced experience by displaying both the transcript and related media synced with the audio. Highlights allows for condensed navigation of the audio by displaying keywords indexed from the transcript.

Links and materials

Demo Presentation
Sample of auto-generated slideshow

Long Description

Module-based curated audio experience built around three basic types of users/listeners:

  • Audio Only: the average listener; just press play.
  • Full Experience: Dynamic Window + Transcript; the listener hears the audio while getting an enhanced experience through Visual Info Cards based on keywords/tags.
  • Highlights: keywords/tags are indexed and offered as “chapters,” so the listener can skip forward to sections where a topic of interest is discussed (a sketch of this index follows below).

Potential future enhancements:

  • A user-driven search tool for the transcript (to serve research needs).
  • An autoplay option during the Highlights experience, so the user can continue listening if they are enjoying the section they started from.
  • A new use for post-edit transcripts: improved rewind/fast-forward based on complete thoughts, sentences, and ideas rather than just timestamps.
  • Pulling out metadata and indexing layered conversations (of different speakers) so they can be compared with other audio files.
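
The Highlights mode hinges on mapping indexed keywords to points in the audio. Here is a minimal Python sketch of that chapter index, assuming a timestamped transcript and a pre-selected keyword list (both hypothetical):

```python
# Hypothetical sketch of the Highlights index: map each keyword to the
# timestamps of the transcript segments that mention it, so the player can
# offer those points as "chapters." Transcript and keywords are placeholders.
transcript = [
    {"start": 0.0,  "text": "I grew up near the old navy yard"},
    {"start": 12.5, "text": "The navy yard closed and everything changed"},
    {"start": 31.0, "text": "My first job was at the corner bakery"},
]
keywords = {"navy yard", "bakery"}

def build_chapter_index(segments, keywords):
    """Map each keyword to the start times of segments that mention it."""
    index = {kw: [] for kw in keywords}
    for seg in segments:
        text = seg["text"].lower()
        for kw in keywords:
            if kw in text:
                index[kw].append(seg["start"])
    return index

print(build_chapter_index(transcript, keywords))
# {'navy yard': [0.0, 12.5], 'bakery': [31.0]}
```

In a full build, the keywords themselves would come from the indexing of the post-edit transcript rather than a hand-written set.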

Storynode

screenshot of prototype interface with text on right column of screen and a map with some points on it on the left column of the screen

Short Description

To identify place names, transcripts are run through the Stanford Named Entity Recognizer. These place names are then plotted on a map, creating a cartographic way of browsing audio and transcriptions.

Links and Materials

Github Repository
Storynode Documentation

Long description

Wouldn’t it be great if we could see all the locations mentioned in an oral history on a map? We could see not only the connections within one oral history recording, but the connections between multiple recordings in a collection or even across collections. Using Stanford’s Named Entity Recognizer (NER), we identified place names within oral history transcripts and plotted them on a map, after first training NER to recognize NYC street names. A user-friendly app is in the works.
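
A minimal sketch of the pipeline described above, using NLTK's wrapper for the Stanford NER (which requires the NER jar, a model file, and NLTK's punkt tokenizer data locally) plus, as an assumption, geopy's Nominatim geocoder to turn place names into map coordinates:

```python
# Hypothetical sketch of the Storynode pipeline: tag a transcript with the
# Stanford NER, collect LOCATION entities, and geocode them for mapping.
# The jar/model paths are placeholders; geopy is an assumption for geocoding.
from nltk.tag import StanfordNERTagger
from nltk.tokenize import word_tokenize
from geopy.geocoders import Nominatim

tagger = StanfordNERTagger(
    "english.all.3class.distsim.crf.ser.gz",  # the team retrained NER on NYC street names
    "stanford-ner.jar",
)

def extract_places(transcript_text):
    """Return contiguous LOCATION spans found in the transcript."""
    tokens = word_tokenize(transcript_text)
    places, current = [], []
    for token, tag in tagger.tag(tokens):
        if tag == "LOCATION":
            current.append(token)
        elif current:                       # a LOCATION span just ended
            places.append(" ".join(current))
            current = []
    if current:
        places.append(" ".join(current))
    return places

geolocator = Nominatim(user_agent="storynode-sketch")
for place in extract_places("We moved from Flatbush Avenue to Coney Island in 1962."):
    point = geolocator.geocode(f"{place}, New York City")
    if point:
        print(place, point.latitude, point.longitude)  # ready to plot on a map
```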

InstaBurns

Short Description

InstaBurns is an experiment in auto-generating common terms and their frequency from transcripts in order to explore the relationship of terms within and across audio files. The InstaBurns platform also uses significant terms to automatically generate a slideshow of related images using the Google Image API.

Theme

Discovery

Links and Materials

Demo Presentation
Github Repository

Long Description

An experiment to process audio transcripts, apply common topics, gather visual resources, and build illustrative slideshows with an interface to use the topics and images for discovery and navigation.

Our goal was to help users explore large collections of audio and discover thematic relationships. The challenge is “seeing” an overview of what is inside a specific audio file, diving in more deeply, and using that as a launch-point for serendipity and discovery.

We considered three primary audiences and a range of user scenarios:

Public users

  • Casual, undirected exploring and sharing
  • Discovering “in place” -- using audio to enrich an experience in a neighborhood, monument, or landmark

Researchers and students who want to gather resources related to topics of interest

  • Collecting segments of audio and organizing references as part of larger research projects

Archivists and resource managers

  • Answering questions from others about finding or using specific audio resources
  • Performing background research when creating new works

Our process

The design process involved:

  • Elaborating the scenarios to understand the nuances of the user’s experience.
  • Mapping out general flows for the scenarios, to understand the interactions required for the overall experience.
  • Running parallel design activities to rapidly produce wireframes and build consensus around the behavior of the main features.

We focused on the audio player screen for development, but thought through concepts for other supporting screens.

Development had four threads:

  • Perform cluster analysis on the transcripts to identify common terms and co-occurrence relationships that could suggest themes, and implement a visualization to explore the relationships between terms across different audio files (a sketch of this analysis follows the list).
  • Name the emerging themes to create high-level categorization of segments within each audio file.
  • In parallel, perform a semantic analysis of the sentences in the transcript; using the resulting significant terms, use the Google image API to retrieve images associated with the key phrases in each sentence, and stream them real-time into a viewer to create a background slideshow that provides a visual reference aligned with the audio narrative.
  • Create an application that presents the streaming images, transcript, and the waveform with topics mapped to segments of the audio; also show additional reference links to other audio files and external resources/sites.
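
As a rough illustration of the first thread, here is a Python sketch using scikit-learn (an assumption about tooling; the transcripts shown are placeholders, not collection data) that extracts common terms per transcript and builds a term co-occurrence matrix:

```python
# Illustrative sketch of term extraction and co-occurrence analysis.
from sklearn.feature_extraction.text import CountVectorizer

transcripts = [
    "the neighborhood changed when the factory closed",
    "my family worked at the factory for decades",
    "the neighborhood kids played stickball in the street",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(transcripts)  # documents x terms matrix
terms = vectorizer.get_feature_names_out()

# Most frequent terms per transcript: candidates for auto-generated themes.
for doc_id, row in enumerate(counts.toarray()):
    top = sorted(zip(terms, row), key=lambda pair: -pair[1])[:3]
    print(f"transcript {doc_id}:", [term for term, count in top if count > 0])

# (X^T X)[i][j] is a co-occurrence weight: it grows when terms i and j
# frequently appear in the same transcripts, suggesting thematic links
# within and across audio files.
cooccurrence = (counts.T @ counts).toarray()
```

The image thread would then feed the top terms per segment to an image-search API to build the background slideshow.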

A-to-V

logo of stylized A and V

Short Description

A-to-V is a one-stop central database where collectors of oral histories provide searchable information about their audio files and make those files directly available to users.

Links and Materials

[Demo Presentation](https://github.com/nypl-openaudio/start-here/blob/master/Projects/Images/A%20to%20V.pdf)
[Github Repository](https://github.com/emmjab/a-to-v)

Long Description

Where does the content for A to V come from?

  • Crowd-sourced data and content from private collections.
  • Public audio archives (e.g., NYPL, Library of Congress).

Who will use A to V?
People who wish to make their oral history files/collections available to the public, and people researching oral histories, including:

  • Students
  • Scholars
  • Educators
  • Documentarians

How do oral history collectors use A to V?
Using a simple form, oral history collectors provide information about their collections and/or individual audio files, including:

  • Topic(s)
  • Location
  • Date
  • Names of interviewer(s) and subject(s)

And they provide access to audio files by:

  • Uploading individual files or links to the A to V database.
  • Providing access information for unprocessed collections.

How do researchers use A to V?
The database is searchable in two ways (a schema sketch follows this list):

  • By information fields
    Location (where the interview was conducted, locations discussed in the interview)
    Topics
    Date (when recorded, time period discussed)
    Interviewer and interviewee names
  • By clicking directly on the homepage map to access a variety of information about the individual oral histories and collections.
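
The submission and search fields above suggest a simple schema. Here is a minimal Python/SQLite sketch of a fielded search; the column names are assumptions for illustration, not the project's actual schema:

```python
# Hypothetical A to V schema and fielded search, sketched with sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE oral_histories (
        id          INTEGER PRIMARY KEY,
        topics      TEXT,   -- comma-separated topic tags
        location    TEXT,   -- where the interview was conducted
        recorded_on TEXT,   -- ISO date the interview was recorded
        interviewer TEXT,
        interviewee TEXT,
        audio_url   TEXT    -- uploaded file or external link
    )
""")
conn.execute(
    "INSERT INTO oral_histories VALUES (NULL, ?, ?, ?, ?, ?, ?)",
    ("housing,immigration", "Brooklyn, NY", "1989-06-12",
     "J. Rivera", "M. Okafor", "https://example.org/interview-12.mp3"),
)

def search(field, value):
    """Search one information field, e.g. search('location', 'Brooklyn')."""
    # Whitelist the field name so the f-string below stays safe.
    assert field in {"topics", "location", "recorded_on", "interviewer", "interviewee"}
    cur = conn.execute(
        f"SELECT id, interviewee, audio_url FROM oral_histories WHERE {field} LIKE ?",
        (f"%{value}%",),
    )
    return cur.fetchall()

print(search("location", "Brooklyn"))
```

The map-based browsing path would query the same table by location and render the matches as points, much like the Storynode map.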

What does it look like?
The principal image on the homepage is a map of the United States. The homepage also includes a navigation bar, as well as a sidebar for displaying information in list form.