Data-Driven DJ Track 8: Body Language

You can listen to this track and read the full description here.

Software Required

All software required to make this song from scratch is free and open-source.

  • ChucK - a programming language for real-time sound synthesis and music creation
  • Python - I am running version 2.7.3
  • Processing - for image analysis and supporting visualization

Instructions

Prepare Sound And Data

  1. artists.csv contains the list of artists you would like to analyze.
  2. lyrics.json contains the lyrics from all of the artists' songs. The lyrics were collected by scraping genius.com and parsed with a series of scripts.
  3. words.csv contains the list of words and their associated body regions, which will be used to analyze the lyrics.
  4. regions.json contains the bounding boxes of the body regions that will be used in the visualization.
  5. instruments.csv contains the sounds that will be used in the song:
  • File is the filename of the instrument's sound file.
  • From Gain and To Gain are the volume range the instrument can oscillate between. A value of 0 is silent.
  • From Tempo and To Tempo are the tempo range the instrument can oscillate between. A value of 1 is the standard BPM, 2 is twice as fast, and 0.5 is half as fast.
  • Tempo Offset is the offset as a percentage of the instrument's tempo. For example, if an instrument's tempo is 1 and the tempo offset is 0.5, the instrument will play on the half beat.
  • Interval Phase, Interval, and Interval Offset control the intervals at which the instrument can play. For example, if the interval phase is 16, the interval is 2, and the interval offset is 1, then within every 16-beat phase the instrument can play during the second half (beats 8-16); see the sketch after this list.
  • Active activates or deactivates an instrument.
  6. Prepare any new sound files and place them in the instruments folder. All files should be in .wav format. For best results, I'd recommend very short clips (< 500 ms).
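
To make the interval parameters concrete, here is a minimal sketch of how they might be interpreted in Python. This is my reading of the description above, not the project's actual code; the function name can_play is hypothetical.

```python
def can_play(beat, interval_phase, interval, interval_offset):
    """Hypothetical helper illustrating the interval parameters above;
    the real body.py may implement this differently."""
    # Position of this beat within the current phase (0..interval_phase-1).
    position = beat % interval_phase
    # Split the phase into `interval` equal chunks and allow playing only
    # in the chunk selected by `interval_offset`.
    chunk = interval_phase // interval
    return interval_offset * chunk <= position < (interval_offset + 1) * chunk

# With interval_phase=16, interval=2, interval_offset=1, the instrument may
# play on beats 8-15 of every 16-beat phase (the second half).
assert can_play(8, 16, 2, 1) and not can_play(3, 16, 2, 1)
```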

Configure The Scripts

  1. Python script: body.py (a sketch of these settings follows this list)
  • BPM is the song's beats per minute.
  • MS_PER_ARTIST is the number of milliseconds devoted to each artist.
  • REGION_COUNT is the number of each artist's top-mentioned body regions to include.
  2. ChucK script: body.ck
  • padding is the number of milliseconds of silence before and after the song.
  • instrument_buffers is the number of buffers each instrument has. If you hear clipping in your song, increase this number.
  • start is the millisecond you would like the song to start on. Useful for debugging a particular part of the song.
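
As a rough illustration, the configuration at the top of body.py might look like this. The constant names come from the descriptions above, but the values are placeholders, not the project's defaults.

```python
# Placeholder values; substitute your own.
BPM = 120             # the song's beats per minute
MS_PER_ARTIST = 8000  # milliseconds of the song devoted to each artist
REGION_COUNT = 5      # top-mentioned body regions to include per artist

# Derived quantity: duration of one beat in milliseconds.
BEAT_MS = 60000.0 / BPM
```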

Generating The Song

  1. Run python analyze_lyrics.py in the project's directory. This will generate a data/analysis.json file that will be used to generate the song.
  2. Run python body.py in the project's directory. This will generate two files that ChucK will use (see the sanity check after this list):
  • data/ck_instruments.csv: A manifest of instrument files
  • data/ck_sequence.csv: A sequence of instruments
  3. Open up body.ck in ChucK. You can either export the song to .ogg/.wav or start the virtual machine and add a new shred.
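
Before moving on to ChucK, you can confirm that body.py wrote its output. This quick check is my own snippet, not part of the repo; it only assumes the two files are plain CSV, as their names suggest.

```python
import csv

# Count the rows in the two files body.py writes for ChucK.
for path in ('data/ck_instruments.csv', 'data/ck_sequence.csv'):
    with open(path) as f:
        rows = list(csv.reader(f))
    print('%s: %d rows' % (path, len(rows)))
```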

Generating The Visualization

  1. Open visualization/visualization.pde in Processing. This script generates a visualization based on the data from the previous steps.
  • Set boolean captureFrames = true; to output frames to the output folder (see the snippet after this list).
  • Run the script; it will generate frames at the frame rate (fps) set in the configuration.
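
To verify the capture worked, you can count the generated frames. This snippet is my own; the visualization/output path and the .png extension are assumptions based on the description above.

```python
import os

# List the frames the Processing sketch wrote to its output folder.
frames = [f for f in os.listdir('visualization/output') if f.endswith('.png')]
print('%d frames captured' % len(frames))
```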