This is a project that sends data from various biometric sensors as MIDI commands. The goal is to build a soundscape based on the human body.
TODO
- Add more scales to Scalezor
- Use proper Python naming conventions for files, classes, and packages
- Add a dramatic-curve module that can modulate MIDI signals over time
- Let the motion detector take a MIDI input signal from the heartbeat to synchronize
- Break out the Note-on-a-Scale selector from the motion detector
- Add support for having multiple notes running at the same time
- Add support for the NSM (Non-Session-Manager) protocol
- Clean up threading
- Add support for Kinect
- Let motion2MIDI split the screen and send multiple CC messages
- Add a GUI for the heartbeat monitor that shows connection status and allows modulation
- Added FadeBack, which connects to the motion detector and sends gradually decreasing CCs to let motions fade out
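The scale-related items above (more scales for Scalezor, breaking out the Note-on-a-Scale selector) could center on a quantizer like the following sketch. The names, scale tables, and tie-breaking rule are assumptions for illustration, not the real Scalezor code:

```python
# Hypothetical scale quantizer: snap an incoming MIDI note number to
# the nearest note of a chosen scale. Scale tables are semitone
# offsets from the root within one octave.

SCALES = {
    "major":      [0, 2, 4, 5, 7, 9, 11],
    "minor":      [0, 2, 3, 5, 7, 8, 10],
    "pentatonic": [0, 2, 4, 7, 9],
}

def quantize(note, scale="major", root=0):
    """Return the nearest MIDI note that lies on the given scale."""
    degrees = SCALES[scale]
    octave, pitch = divmod(note - root, 12)
    # Nearest scale degree to the pitch class. Ties go to the lower
    # degree; for simplicity this does not wrap to the next octave's root.
    nearest = min(degrees, key=lambda d: abs(d - pitch))
    return root + 12 * octave + nearest
```

Adding a scale would then just mean adding an entry to the table, which keeps the selector and the scale data separate.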
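The FadeBack item above could work along these lines. The name of the function, the linear curve, and the step count are assumptions, not the real module:

```python
# Hypothetical FadeBack sketch: once motion stops, emit a run of
# gradually decreasing CC values so the sound fades out instead of
# cutting off abruptly.

def fade_out(start=127, steps=16):
    """Yield linearly decreasing CC values from start down to 0.

    steps must be at least 2 (first value is start, last is 0).
    """
    for i in range(steps):
        yield round(start * (steps - 1 - i) / (steps - 1))
```

Each yielded value would then be sent as a CC message at a fixed interval; an exponential curve may sound more natural than a linear one.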
PROTOTYPES and their STATUS
- Use object tracking instead of motion detection. Status: Tracking dancing bodies is too unstable and CPU-heavy.
- Use the Kinect depth sensor for motion detection in a pitch-black room. Status: Works, but the Kinect can't see further than four meters. Need to experiment with the camera placement. Maybe on the ceiling?
- Add analog input via contact microphones. Status: Tested two setups, either modulating the analog signal using loopers, or converting the analog signal to CCs and notes. However, this seems to be outside the scope of this project.
- Add four channel mixing
- Add DMX light protocol
- Run on a stand-alone machine
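The depth-sensor motion detection above can be sketched as simple frame differencing. The threshold, frame shape, and CC mapping here are assumptions, and the frames are assumed to arrive as 2D numpy arrays (e.g. from libfreenect's Python bindings):

```python
import numpy as np

# Sketch of depth-frame motion detection: treat motion as the fraction
# of pixels whose depth changed by more than `threshold` between two
# consecutive frames.

def motion_amount(prev, curr, threshold=30):
    """Return the moving fraction of the frame as a float in [0, 1]."""
    # Cast to a signed type so the subtraction can't wrap around.
    diff = np.abs(curr.astype(np.int32) - prev.astype(np.int32))
    return np.count_nonzero(diff > threshold) / diff.size

def to_cc(amount):
    """Map a 0..1 motion amount to a MIDI CC value 0..127."""
    return max(0, min(127, round(amount * 127)))
```

An approach like this is cheap enough for a stand-alone machine, since it avoids per-body tracking entirely.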
DEPENDENCIES
- liblo for Open Sound Control (OSC), used by Non-Session-Manager, via pip3
- mido for MIDI, via pip3
- bluepy for Bluetooth, via pip3
- cv2 (opencv-python) for video manipulation, via pip3
- freenect1 for Kinect 360, via git https://github.com/OpenKinect/libfreenect/
- numpy for calculations, via pip3
- tkinter for GUI drawing, included with Python (may need the apt package python3-tk)
- rtmidi for MIDI backend, via apt package python3-rtmidi