The EEGsynth


Background

The EEGsynth is an open-source Python codebase that provides a real-time interface between (open-hardware) devices for electrophysiological recordings (e.g. EEG, EMG and ECG) and analogue and digital devices (e.g. MIDI, games and analogue synthesizers). This allows one to use electrical brain/body activity to flexibly control devices in real time, for what are called (re)active and passive brain-computer interfaces (BCIs), biofeedback and neurofeedback. The EEGsynth is not intended for diagnostics, nor does it provide a GUI for offline analysis. Rather, the EEGsynth is intended as a collaborative interdisciplinary open-source and open-hardware project that brings together programmers, musicians, artists, neuroscientists and developers for scientific and artistic exploration.

  • The EEGsynth is intended to be run from the command line, using Python and Bash scripts, and is not aimed at users unfamiliar with such an approach.
  • The codebase and technical documentation are maintained on our GitHub repository. It is strongly advised to work within your own cloned repository and keep up to date with EEGsynth commits, as the EEGsynth is in constant development.
  • For installation please follow our installation instructions for Linux, OSX and Windows. Please note that Windows and OSX are not actively supported. Linux is the main target, partly because we want the EEGsynth to be compatible with the Raspberry Pi, partly because some Python libraries do not support Windows, and partly because we rely on the command-line interface anyway.
  • Preliminary didactic material can be found on our wiki page of the Brain Control Club, a student club at Center for Research and Interdisciplinarity (CRI).
  • You can watch a recent presentation on the inspiration for the EEGsynth in the commonalities between brain (activity) and sound (waves) at IRCAM.
  • The EEGsynth was initiated by, and is still used in, artistic performances and workshops by the art-science collective 1+1=3.
  • The EEGsynth is used in the COGITO project to transform 32-channel EEG into sound for realtime transmission by a 25-meter radiotelescope.
  • The EEGsynth project is non-commercial and developed by us in our own time. We are grateful to have received financial support from Innovativ Kultur, The Swedish Arts Grant Council, Kulturbryggan and Stockholm County Council.
  • You can follow us on Facebook and Twitter, read our news blog, and check our past and upcoming events on our calendar.
  • Please feel free to contact us with questions and ideas for collaborations via our contact form.

Introduction

What follows is a short introduction on the design and use of the EEGsynth. This is followed by tutorials to get you started creating your own BCI system.

Modular design and patching

The EEGsynth is a collection of separate modules, directly inspired by modular synthesizers (see picture below). As in a modular synthesizer, simple software modules (Python scripts) are connected, or “patched”, to create complex and flexible behavior. Each module runs in parallel, performing a particular function. For example, imagine module A is responsible for determining the heart rate from ECG (voltages from the heart), while module B sends out a signal to a drum machine every n milliseconds. By patching A→B, the EEGsynth can be made to control the drum machine at the speed of the heart rate.
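As a purely illustrative sketch of this patching idea (the module logic and the Redis key name 'heartrate' are hypothetical, not the actual EEGsynth modules), module A and module B could look roughly like this in Python, with Redis acting as the patch cable:

import time
import redis

r = redis.Redis(host='localhost', port=6379, decode_responses=True)

# "Module A": write the current heart rate (in beats per minute) to Redis.
# In a real patch this value would come from real-time ECG analysis.
r.set('heartrate', 72)

# "Module B": read the heart rate and emit one trigger per beat.
# In a real patch the print statement would be a MIDI or CV/gate event.
while True:
    bpm = float(r.get('heartrate'))
    print('trigger drum machine')
    time.sleep(60.0 / bpm)

Because both modules only talk to Redis, either side can be swapped out without the other noticing; this is what makes the patching flexible.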

Figure 1. Example of complex modular synthesizer patch

In the EEGsynth, patching is implemented through the open-source Redis database, which stores attribute-value pairs. Attribute-value pairs are nothing more than an attribute name with a value assigned to it, such as ('Name', 'John') or ('Height', 1.82). A module can put anything it wants into the database, such as ('Heartrate', 92). Another module can ask the database to return the value belonging to 'Heartrate'. This allows one to create complex, many-to-many patches. Interactions with Redis are specified separately for each module in its own .ini file (initialization file). The .ini file is a text file with human-understandable formatting (following Python’s ConfigParser class) in which we define the attribute names that are used for input and output. For example, here we have spectral.ini:

[general]
debug=2
delay=0.1

[redis]
hostname=localhost
port=6379

[fieldtrip]
hostname=localhost
port=1972
timeout=30

[input]
; the channel names can be specified as you like
; you should give the hardware channel indices
channel1=1
channel2=2
channel3=3
channel4=4
channel5=5
channel6=6
;frontal=1
;occipital=2

[processing]
; the sliding window is specified in seconds
window=2.0

[band]
; the frequency bands can be specified as you like, but must be all lower-case
; you should give the lower and upper range of each band
delta=2-5
theta=5-8
alpha=9-11
beta=15-25
gamma=35-45
; it is also possible to specify the range using control values from redis
redband=plotsignal.redband.lo-plotsignal.redband.hi
blueband=plotsignal.blueband.lo-plotsignal.blueband.hi

[output]
; the results will be written to redis as "spectral.channel1.alpha" etc.
prefix=spectral

The spectral module calculates the spectral power in different frequency bands. Those frequency bands, and their names, are given in the .ini file. As you can see, some are defined by numbers ('alpha=9-11'), while others use Redis keys ('redband=plotsignal.redband.lo-plotsignal.redband.hi'). In the latter case, the frequency band is determined (via Redis) by the plotsignal module, which can be used to visually select frequency bands. In turn, the spectral module outputs (power) values to Redis, prefixed by 'spectral' (see its [output] field).
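To make this concrete, here is a minimal, hypothetical sketch (not the actual spectral.py code) of how a band endpoint such as '9' or 'plotsignal.redband.lo' could be resolved against Redis, and how the resulting band power could be written back under the [output] prefix:

import redis

r = redis.Redis(host='localhost', port=6379, decode_responses=True)

def resolve(endpoint):
    # an endpoint is either a plain number ('9') or a Redis key
    # ('plotsignal.redband.lo') whose current value is looked up
    try:
        return float(endpoint)
    except ValueError:
        return float(r.get(endpoint))

# value of 'redband' in the [band] section; this requires that the plotsignal
# module has already written these keys to Redis
band = 'plotsignal.redband.lo-plotsignal.redband.hi'
lo, hi = [resolve(s) for s in band.split('-')]

# placeholder: the band-limited power would be computed here from the
# FieldTrip buffer data between lo and hi Hz
power = 1.23
r.set('spectral.channel1.redband', power)   # key format as described under [output]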

As you can see, the .ini file includes other settings as well. You can find a default .ini file for each module in its respective directory, with a filename identical to the module; e.g. module/spectral contains spectral.py and spectral.ini. Initialization files can be edited with any text editor but should be saved in a separate 'patch directory' in which you store all the .ini files belonging to one patch. This helps to organize your patches as well as your local git repository, which will then not create conflicts with the default .ini files in the remote repository.

Data acquisition and communication

The EEGsynth uses the FieldTrip buffer to exchange electrophysiological data between modules. It is the place where raw (or processed) data is stored and updated with new incoming data. For more information on the FieldTrip buffer, check the FieldTrip documentation. Note that the FieldTrip buffer allows parallel reading of data. Some modules, such as the spectral module, take data from the FieldTrip buffer as input and output values to the Redis database. Other modules take care of the data acquisition, interfacing with acquisition devices and updating the FieldTrip buffer with incoming data. We typically use the affordable open-source hardware of the OpenBCI project for electrophysiological recordings. However, the EEGsynth can also interface with other state-of-the-art commercial devices using FieldTrip's device implementations.
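For illustration, reading a window of data from the buffer with the Python client (FieldTrip.py, distributed with FieldTrip) could look like the sketch below; the exact indexing convention is an assumption, so treat it as a sketch rather than a reference:

import FieldTrip   # the Python client that comes with FieldTrip

ftc = FieldTrip.Client()
ftc.connect('localhost', 1972)   # host/port as in the [fieldtrip] section of an .ini file

hdr = ftc.getHeader()            # channel count, sampling rate, samples written so far
print(hdr.nChannels, hdr.fSample, hdr.nSamples)

# read the most recent 2-second window (indices assumed 0-based and inclusive)
window = int(2.0 * hdr.fSample)
data = ftc.getData([hdr.nSamples - window, hdr.nSamples - 1])   # samples x channels array
ftc.disconnect()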

Controlling external software and hardware

The purpose of the EEGsynth is to control external software and hardware with electrophysiological signals. Originally developed to control modular synthesizers, the EEGsynth supports most protocols for sound synthesis and control, such as CV/gate, MIDI, Open Sound Control, DMX512 and Art-Net. These modules are prefixed with 'output' and send values and events from Redis out over their respective protocols. Redis can also be accessed directly, e.g. in games using PyGame. Rather than interfacing with music hardware, output over Open Sound Control can also be used to control music software such as the remarkable open-source software Pure Data, for which one can find many applications in music, art, games and science.
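As a hedged example of the Open Sound Control route, the following sketch uses the python-osc package (not necessarily what the EEGsynth output modules use internally; the address '/eeg/alpha' and port 9000 are arbitrary) to send a single control value to a Pure Data patch that listens for OSC messages:

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient('127.0.0.1', 9000)   # where the Pure Data patch listens for OSC
client.send_message('/eeg/alpha', 0.42)       # hypothetical OSC address and control value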

Manual control

Although the purpose of the EEGsynth (and BCIs in general) is to control devices using biological signals, some manual interaction might be desired, e.g. to adjust the dynamics of the output or to select the frequency range of the brain signal during the recording. As with analogue synthesizers, we like the tactile, real-time aspect of knobs and buttons, but want to avoid using a computer keyboard. We therefore mainly use MIDI controllers, such as the LaunchControl XL displayed below. Like all other modules, the launchcontrol module records the input from its sliders, knobs and buttons into the Redis database, where it can be used by other modules.
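The idea is illustrated by the following sketch (not the actual launchcontrol.py; the device name and key format are placeholders), which mirrors MIDI control-change messages into Redis using the mido package:

import mido
import redis

r = redis.Redis(host='localhost', port=6379)

# open the controller by the name that mido reports for it (see mido.get_input_names())
port = mido.open_input('Launch Control XL')

for msg in port:                               # blocks, yielding incoming MIDI messages
    if msg.type == 'control_change':
        key = 'launchcontrol.control%03d' % msg.control   # e.g. launchcontrol.control013
        r.set(key, msg.value)                  # knob/slider position 0-127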

Figure 2. The Novation LaunchControl XL, often used in EEGsynth setups

Summary

To summarize, the EEGsynth is an open-source code-base that functions as an interface between electrophysiological recording devices and external software and devices. It takes care of the analysis of the data on the one hand, and of the translation into external protocols on the other. This is done in a powerful, flexible way, by running separate modules in parallel. These modules exchange data and parameters using the FieldTrip buffer and the Redis database, respectively. This 'patching' is defined using the text-based initialization files of each module. The EEGsynth is run from the command line, without a GUI and without visual feedback (except for the plotting modules), and with interaction via MIDI controllers rather than computer keyboards. The upside is that the EEGsynth is easily customized and expanded, has the true feel and function of a real-time feedback system, and can be light enough to run e.g. on a Raspberry Pi (i.e. on Raspbian). Below you can see the current (actually, already outdated) collection of modules included in the EEGsynth, showing the two different ways of communication: via the FieldTrip buffer (data) and via Redis (control values and parameters).

Figure 3. Visual depiction of communication between modules via either the FieldTrip buffer for raw data (yellow) or via the Redis database (blue) for output and input parameters.

Module overview

Detailed information about each module can be found in the README.md included in each module directory. Here follows a description of the currently available modules.

Analysis

  • Spectral Analyzes power in frequency bands in the raw data buffer
  • Muscle Calculates RMS from EMG recordings in the raw data buffer
  • Accelerometer Extracts accelerometer data (X, Y, Z) from the onboard sensor of the OpenBCI in the raw data buffer
  • EyeBlink Detects eye blinks in the raw data buffer
  • HeartRate Extracts heart rate from the raw data buffer

Data acquisition

  • Openbci2ft Records raw data from the OpenBCI amplifier and places it in the buffer.
  • Jaga2ft Records raw data from the Jaga amplifier and places it in the buffer
  • For more supported acquisition devices look here

Communication between modules

  • Redis The database used for communicating control values and parameters between modules
  • Buffer FieldTrip buffer for communicating raw data

Utilities for optimizing data flow, patching and prototyping

External interfaces (open-source)

External interfaces (consumer)

Software synthesizer modules

  • Pulsegenerator Sends pulses, e.g. for gates or MIDI events, synchronized with heart beats
  • Sequencer A basic sequencer to send out sequences of notes
  • Synthesizer A basic synthesizer to send out waveforms
  • Quantizer Quantizes output chromatically or according to musical scales (a rough sketch of the idea follows below)
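To give a rough idea of what quantization means here (this is an illustrative sketch, not the module's actual algorithm), the following snaps a continuous control value, interpreted as a MIDI note number, to the nearest note of a musical scale:

def quantize(value, scale=(0, 2, 4, 5, 7, 9, 11)):
    """Snap a value to the nearest pitch class of a scale (default: C major)."""
    octave, pitch = divmod(int(round(value)), 12)
    nearest = min(scale, key=lambda p: abs(p - pitch))
    return octave * 12 + nearest

print(quantize(61.3))   # 61.3 -> 60 (C) instead of the out-of-scale 61 (C#)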

COGITO project

  • Cogito Streaming EEG data (from the Gtec EEG) to audio for radio transmission, using our interstellar protocol

Tutorials

Tutorial 1: Playback data, and display on monitor

We will start with the following setup, in which we play back data to the buffer as if it were being recorded in real time. This is convenient, since it allows you to develop and test your BCI without having to rely on real-time recordings.

Boxes depict EEGsynth modules. Orange arrows describe time-series data. Blue arrows describe Redis data

Starting the data buffer

The EEGsynth uses the FieldTrip buffer to communicate data between modules. It is the place where raw (or processed) data is stored and updated with new incoming data. For more information on the FieldTrip buffer, check the FieldTrip documentation.

  1. Navigate to the buffer module directory /eegsynth/module/buffer
  2. Copy the buffer.ini to your own ini directory (e.g. to /eegsynth/inifiles, which would be in ../../inifiles relative to the buffer module directory)
  3. Start up the buffer module, using your own ini file: ./buffer.sh -i ../../inifiles/buffer.ini. Note that the buffer module is one of the few modules that is an executable written in C, run from a bash script rather than Python. However, it functions exactly the same with respect to the user-specified .ini files.
  4. If the buffer module is running correctly, it does not print any feedback in the terminal. So no news is good news!

Writing pre-recorded data from HDD to the buffer

We will then write some prerecorded data into the buffer as if it was being recorded in real-time:

  1. Download some example data in .edf format, for example from our data directory on Google Drive, or use the data you recorded in the recording tutorial.
  2. Place the .edf file in a directory, e.g. in /eegsynth/datafiles
  3. Navigate to the playback module directory /eegsynth/module/playback
  4. Copy the playback.ini to your own ini directory (e.g. to /eegsynth/inifiles, which would be in ../../inifiles relative to the playback module directory)
  5. Edit your playback.ini to direct the playback module to the right edf data file, e.g. under [playback] edit: file = ../../datafiles/testBipolar20170827-0.edf
  6. Edit the two playback.ini options for playback and rewind so that it will play back automatically (and not rewind): play=1 and rewind=0
  7. Note that you can comment out lines (i.e. hide them from the module) by adding a semicolon (;) at the beginning of the line
  8. Now start up the playback module, using your own .ini file: python playback.py -i ../../inifiles/playback.ini
  9. If all is well, the module will print out the samples that it is 'playing back'. This is the data that is successively entered into the buffer as if it were just recorded (a small programmatic check is sketched below)
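If you want to double-check programmatically that samples are arriving, a small polling sketch such as the following can help (again assuming the FieldTrip.py Python client is available; host and port as in the .ini files):

import time
import FieldTrip

ftc = FieldTrip.Client()
ftc.connect('localhost', 1972)
for _ in range(5):
    print('samples in buffer:', ftc.getHeader().nSamples)   # should keep increasing during playback
    time.sleep(1)
ftc.disconnect()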

Plotting streaming data in the buffer

If you made it this far, the buffer is working. However, we can now also read from the buffer and visualize the data as it comes in, using the plotsignal module. Note that you need to be in a graphical environment for this.

  1. Navigate to the plotsignal module directory /eegsynth/module/plotsignal
  2. Copy the plotsignal.ini to your own ini directory (e.g. to /eegsynth/inifiles, which would be in ../../inifiles relative to the plotsignal module directory)
  3. Edit your plotsignal.ini to plot the first two channels, by editing under [arguments]: channels=1,2
  4. Now start up the plotsignal module, using your own .ini file: python plotsignal.py -i ../../inifiles/plotsignal.ini
  5. If you see your data scroll by, bravo!

Tutorial 2: Using Redis for live interaction with modules

Now that we have set up a minimal pipeline with data transfer, we can introduce communication between modules using the Redis database. Redis is where modules communicate via 'key-value' pairs. Read the online documentation on the EEGsynth website for more background on the use of Redis. In this tutorial we will influence the behavior of the plotsignal module by changing parameters in Redis, while having the plotsignal module both use these parameters and write parameters back into Redis. It becomes important now to really understand the flow of information in the schema.

Boxes depict EEGsynth modules. Orange arrows describe time-series data. Blue arrows describe Redis data

Writing and reading from Redis

After installation of the EEGsynth, the Redis database should be running in the background at startup. To check whether Redis is working, you can monitor Redis while adding and reading 'key-value' pairs. For the purpose of this tutorial we will use the LaunchControl MIDI controller to enter values into Redis. If you do not have a LaunchControl, you can enter values by hand; we will discuss this as well (just skip this part).

  1. Navigate to the launchcontrol module directory /eegsynth/module/launchcontrol
  2. Copy the launchcontrol.ini to your own ini directory (e.g. to /eegsynth/inifiles, which would be in ../../inifiles relative to the launchcontrol module directory)
  3. Start up the launchcontrol module, using your own ini file: python launchcontrol.py -i ../../inifiles/launchcontrol.ini
  4. You will see the connected MIDI devices printed in the terminal. If you have not set up the .ini file correctly yet, read out the MIDI device name from the output, and replace the device name, e.g. device=Launch Control under the [midi] field of your .ini file.
  5. Now restart the launchcontrol module. If everything is working correctly, a move of any of the sliders will print a key-value pair in the terminal.

You can also add values to Redis directly in Python:

  1. Start up Python, i.e. type python in the terminal
  2. Import the Redis client and connect to the database, i.e. type import redis followed by r = redis.Redis()
  3. Set a key-value pair, by typing r.set('test_key','10')
  4. Read a key-value pair, by typing r.get('test_key') (the full session is sketched below)
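Put together, such a test session could look like the following minimal sketch, assuming the redis Python package is installed and the server runs on the default port:

import redis

r = redis.Redis(host='localhost', port=6379, decode_responses=True)
r.set('test_key', '10')      # write a key-value pair
print(r.get('test_key'))     # read it back -> '10'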

Patching the plotsignal module

You can now monitor the actions of Redis by opening a new terminal window and running redis-cli monitor. You should be able to see both set and get actions. Monitor this window while adding values to Redis as described above to see whether it is working correctly. If you are using the launchcontrol module, you will see that the keys are named something like launchcontrol.control077. We can tell the plotsignal module to use these values to adapt its behaviour. It will then take these values and relate them to the range of its spectral analysis, to determine the frequency bands of its 'red band' and 'blue band'. The plotsignal module, in its turn, will output these frequency bands back into Redis. This makes them available to e.g. further EEG analysis. Take a moment to consider this pipeline. We call this connecting of modules via Redis parameters 'patching', referring to patching in modular synthesizers.

  1. Determine which LaunchControl sliders/rotators you want to use by moving them and looking at the values that change in Redis (use redis-cli monitor). Let's say we will do the following:
     • launchcontrol.control013 will determine the center frequency of the red band
     • launchcontrol.control029 will determine the half-width of the red band
     • launchcontrol.control014 will determine the center frequency of the blue band
     • launchcontrol.control030 will determine the half-width of the blue band
  2. Now edit your plotsignal.ini file to enter these as parameters under [input]:
     • redfreq=launchcontrol.control013
     • redwidth=launchcontrol.control029
     • bluefreq=launchcontrol.control014
     • bluewidth=launchcontrol.control030

The plotsignal module will now look in Redis for values corresponding to these keys. If you change the value of any of these key-value pairs, e.g. by rotating a knob, the launchcontrol module will update the values in Redis, where the plotsignal module will read them and adjust its display (the red and blue lines delineating the two frequency bands). You can now move the frequency bands around and get visual feedback overlaid on the spectrum of the channels that you are plotting. The plotsignal module also converts the state of the values it reads from Redis (the last read position of the knobs) into frequencies in Hertz. It outputs those back into Redis, e.g. under plotsignal.redband.lo and plotsignal.redband.hi. You can check this using redis-cli monitor.
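The scaling from knob positions to Hertz could, in principle, look like the following sketch; the mapping used here (0-127 mapped linearly onto a frequency range) is only a guess for illustration and not necessarily what plotsignal.py actually does:

import redis

r = redis.Redis(host='localhost', port=6379, decode_responses=True)

knob = float(r.get('launchcontrol.control013') or 0)    # centre-frequency knob, 0-127
width = float(r.get('launchcontrol.control029') or 0)   # half-width knob, 0-127
centre_hz = knob / 127.0 * 45.0                          # illustrative mapping onto 0-45 Hz
halfwidth_hz = width / 127.0 * 10.0                      # illustrative mapping onto 0-10 Hz

r.set('plotsignal.redband.lo', centre_hz - halfwidth_hz)
r.set('plotsignal.redband.hi', centre_hz + halfwidth_hz)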

Tutorial 3: Real-time EEG recording with OpenBCI

Now that we have set up a basic pipeline, we can replace the playback of EEG with real-time EEG recordings. The EEGsynth supports many EEG devices, thanks to the fact that we use FieldTrip's support for EEG devices in its real-time development. Read the FieldTrip realtime development documentation for more information. We only distribute modules for OpenBCI, Jaga and GTec devices with the EEGsynth (and not for all their devices yet). These modules (e.g. openbci2ft, jaga2ft and gtec2ft) are written in C, because they rely on very specific interfacing with their devices and software. However, we use them as we would any other module, i.e. start them up with a user-specified .ini file. Similar to the playback module, these write to the buffer. They typically do so in blocks containing a relatively small number of samples compared to the time window we use to analyse the data and control devices.

Boxes depict EEGsynth modules. Orange arrows describe time-series data. Blue arrows describe Redis data

Setting up EEG recording

The most important lesson here is actually how to set up a proper EEG recording, but this falls outside the scope of this tutorial. Please refer to our recording tutorial to familiarize yourself with the recording procedure first. Once you have done so and gained some experience (ideally under the supervision of a more experienced EEG user), we will patch the real-time EEG recording into the pipeline of the previous tutorial, replacing the playback module with the openbci2ft module. If you are using another device, the principle will be the same.

  1. Navigate to the openbci2ft module directory /eegsynth/module/openbci2ft
  2. Copy the openbci2ft.ini to your own ini directory (e.g. to /eegsynth/inifiles, which would be in ../../inifiles relative to the openbci2ft module directory)
  3. We will need to plug the OpenBCI dongle into a USB port. But before you do so, do the following:
     1. Open a new terminal window and list the devices known to the kernel, by typing ls /dev
     2. Plug the OpenBCI dongle into a USB port
     3. List the devices again using ls /dev
     4. If you compare the two lists, you should see that another device was added after you plugged in the dongle. It will probably start with ttyUSB followed by a number. This is the USB port at which the dongle is connected. If you unplug the dongle or restart your computer, this number might change, so you will probably need to do this check regularly. There might be easier ways of finding the USB port (see the short sketch after this list), but this, at least, is fool-proof.
  4. Edit your openbci2ft.ini file and enter the right port name for the dongle, which you can find under [General], e.g. serial = /dev/ttyUSB1
  5. Start up the openbci2ft module, using your own ini file: python openbci2ft.py -i ../../inifiles/openbci2ft.ini. If things are working, the terminal will print a message that it is waiting to connect.
  6. You can now turn on the EEG board (not the dongle) by moving the little switch to either side. After a couple of seconds you should see the dongle start to blink green and red. This means it is configuring the EEG board with the settings specified in the .ini file, which will take a couple of seconds. After that your data should be coming in, being transferred into the FieldTrip buffer.
  7. Now you can check the incoming data with the plotsignal module.
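Instead of comparing the output of ls /dev by eye, a quick Linux-specific check like the following lists the candidate serial devices; run it before and after plugging in the dongle, and the device that appears only afterwards is the OpenBCI dongle:

import glob

# USB-serial devices typically show up as /dev/ttyUSB0, /dev/ttyUSB1, ...
print(sorted(glob.glob('/dev/ttyUSB*')))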
