Sonotype

An exploration towards using the way we speak to create type forms that are generative of universal, inclusive and expressive communication.

f.y.t/o, ZAP and COMPUTERS.


Introduction - Premise and Questions

Chances are that we learnt to communicate with each other using our bodies before we began to communicate using abstract representations, shapes and pictures. Text is probably a method by which we could standardise certain aspects of our communication. How much of our communication through the medium of sound and body is still captured in our textual communications? Why have letterforms been created in different languages in the way they have? What could inform our typeforms towards diverse comprehensibility, legibility and expression?

As we mulled over these questions, our preoccupations with sound, and a developing understanding of sound as a distortion of space-time, began to inspire us to use our sonic impressions to generate a family of forms. Our initial impetus to use sound is twofold.

  • Having had my own struggles with understanding and reading type early in life and then subsequently working with young children who struggle with language in a similar way, I was interested in contributing to the development of a typeface that could create a more tangible association between the sounds that we use to communicate and the forms we use to represent the sounds.
  • A language, in its spoken form, is characterised by the repertoire of sounds that one can use within that language. These sounds are not just for communicative purposes but are carriers of expression in and of themselves. By using sound, we were keen to translate some of that character and expression into the resulting typeface.

Data - Decisions and Sources

In order to gather the data that we needed, we first had to understand the construction of language. We took inspiration from a variety of languages to understand the ways in which they approach communication auditorily, expressively and visually. Hebrew, Arabic, Latin, Greek, Tamil, Pali, Korean, Tsalagi and Xhosa were amongst the languages that we engaged with. Once we had developed an understanding of the range of options from which a writing system could be constructed, we decided to introduce certain observations of elementary linguistic pedagogy such as focusing on phonetics as a fundamental way to understand the construction of words.

We began to record our own sound samples for a variety of phonetic sounds and began to understand how they changed in their amplitudes, frequencies or duration. It was then that Zach pointed us towards a program called wav2c that would accept a '.wav' file of a sound sample and return a list of amplitudes for the sounds in that file. In order to maintain a standard tone, timbre and quality of sounds, we decided to source the samples from a UCLA course repository. All the files shown below are for the phoneme 'a' (like the 'a' in 'father').

  • The data was obtained as a '.aif' file.
  • Using Audacity, this file was converted to a '.wav' file.
  • wav2c then helped us obtain a C++ header file with a list of amplitudes.
  • Finally, this file was taken into a text editor (Sublime in this case) and converted to a '.txt' file after removing everything except the amplitude values.
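As a rough sketch of the last two steps above (assuming Python's standard wave module in place of wav2c, and a synthesised tone in place of the UCLA 'a' sample — the file names here are our own illustrations), the amplitude-extraction stage might look like:

```python
import math
import struct
import wave

def amplitudes_from_wav(path):
    """Read a mono 16-bit PCM '.wav' file and return its sample
    amplitudes, mirroring the list that wav2c's header file gives us."""
    with wave.open(path, "rb") as wav:
        assert wav.getnchannels() == 1 and wav.getsampwidth() == 2
        frames = wav.readframes(wav.getnframes())
    return list(struct.unpack("<%dh" % (len(frames) // 2), frames))

def write_amplitude_txt(path, amplitudes):
    """Final step: keep only the amplitude values, one per line."""
    with open(path, "w") as out:
        out.write("\n".join(str(a) for a in amplitudes))

# Demo: synthesise a short 440 Hz tone as a stand-in for a phoneme sample.
rate, duration = 44100, 0.05
samples = [int(32000 * math.sin(2 * math.pi * 440 * t / rate))
           for t in range(int(rate * duration))]
with wave.open("phoneme_a.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)   # 16-bit samples
    wav.setframerate(rate)
    wav.writeframes(struct.pack("<%dh" % len(samples), *samples))

write_amplitude_txt("phoneme_a.txt", amplitudes_from_wav("phoneme_a.wav"))
```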

Initial Doodling

We started out by using Processing to develop a simple way of making sense of sound. To do this, we wrote a simple 'sketch' that used Processing's sound library to analyse the amplitudinal variations in a sound sample and also used a Fast Fourier Transform (FFT) algorithm to visualise the sample as a series of spectral frequencies. While Processing and FFTs did not feature in the subsequent direction for this piece, they were instrumental in directing our understanding of how sound can be seen, understood and abstracted.
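The sketch linked below does this in Processing; as a language-agnostic illustration of the spectral step only, a naive discrete Fourier transform in Python (using our own toy window, not the project's samples) shows how a tone maps to a frequency bin:

```python
import cmath
import math

def dft_magnitudes(samples):
    """Naive DFT: returns the magnitude of each frequency bin up to the
    Nyquist limit -- the 'series of spectral frequencies' the sketch drew."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

# A 64-sample window of a pure tone with one full cycle per 8 samples:
# over 64 samples that is 8 cycles, so the energy should land in bin 8.
window = [math.sin(2 * math.pi * t / 8) for t in range(64)]
spectrum = dft_magnitudes(window)
peak_bin = max(range(len(spectrum)), key=lambda k: spectrum[k])
print(peak_bin)  # → 8
```

Processing's FFT does the same thing far faster; the naive form is only meant to make the amplitude-to-spectrum abstraction concrete.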

Here is the sketch

And here are some sample outputs from the Processing sketch

Sample1

Sample2

Sample3

Sample4

Sample5


Form follows Data

Using Grasshopper to realise the forms for this piece was a simple decision once we had the data in numerical form.

We had a few options when it came to translating this data into forms. However, certain specific ideas had to be considered before we decided on a series of forms.

  • Firstly, in order for this to be a typeface that could actually be used, the forms had to be abstracted so as to be recognisable, replicable and legible.
  • Secondly, the forms should lend themselves to three-dimensional fabrication. This was essential to develop a discourse around inclusivity in language teaching. Certain people respond to the kinesthetic stimulus of form, and we wanted to make sure that there was a way to engage with this typeface beyond the audio-visual experience.
  • Finally, we wanted to leave traces of representing sound as a wave in the typeform. This was both to show where the typeforms came from and to visually associate the phonetic sound with a form.

Exploration 1

Keeping these thoughts in mind, we first tried to use a pleating definition in Grasshopper to visualise the amplitudes as a long strip of paper that would be folded into a pleated accordion of sorts.

pleats1

While this afforded very easy fabrication and a resemblance to soundwaves, it was neither reflective of expression in the sound nor immediately recognisable (the nuances that differentiate one pleated form from another lie in each individual fold).

Exploration 2

The next exploration involved using the data to first generate a representation of the phoneme amplitudes as a 'soundwave' that we could manipulate to filter out 'noise' (repetitive values). Following this, we experimented with different ways in which the wave could be abstracted, sculpted or simplified to create a family of forms that were similar enough but also unique. The Grasshopper definition for this is available here.
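The Grasshopper definition itself isn't reproduced in prose, but the 'noise' filtering it describes can be sketched minimally: collapse runs of repetitive amplitude values, keeping a sample only when it moves meaningfully away from the last kept one (the tolerance value here is our own illustrative choice):

```python
def filter_noise(amplitudes, tolerance=0):
    """Drop repetitive values: keep a sample only when it differs from
    the last kept sample by more than `tolerance`."""
    kept = []
    for a in amplitudes:
        if not kept or abs(a - kept[-1]) > tolerance:
            kept.append(a)
    return kept

raw = [0, 0, 0, 120, 121, 121, -340, -340, -339, 0]
print(filter_noise(raw, tolerance=2))  # → [0, 120, -340, 0]
```

A filter like this thins the dense amplitude list down to the points that actually shape the wave, which is what makes the subsequent abstraction into forms tractable.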

Series I

The first series of forms we created were for the purposes of a reproducible typeface that could actually function. The outputs, both as three-dimensional renders and as a two-dimensional series of characters, are shown below.

A-render

R-render

Sample Sheet

Series II

The second series was created to explore forms that were more expressive. Here are some renderings for two different sound samples for the same phonetic sound. These samples differed from one another in the way they were sonically expressed.

Sound Globe I

Sound Globe II

What's Next?

Currently, we are working on developing the forms from Series I into a complete repertoire of typeforms that can be employed for visual communication using Glyphs, while also trying to identify the most interesting experience for scribing in this typeface.

Some other questions we may begin to ask:

  • Would we start to adopt sounds from other languages as we now have a way to represent these sounds in recognisable ways?
  • Is this a typeface? A writing system? An expression of language as we see and hear it?
  • What other applications can we find for these forms? Jewelry? Sculpture?

Will keep updating this with more questions and assets as we begin to develop them.
