v0.7 update

rconstanzo committed Oct 22, 2022
1 parent 450fefe commit 7bf0bb8
Showing 134 changed files with 90,671 additions and 45,913 deletions.
2 changes: 1 addition & 1 deletion QuickStart.md
@@ -3,7 +3,7 @@
## How to start:

1) install the FluCoMa package via the Package Manager in Max or via https://www.flucoma.org/download/
2) move the full SP-Tools folder in your Packages folder (~/Documents/Max 8/Packages/), rename the folder to "SP-Tools", and restart Max.
2) download and move the full SP-Tools folder into your Packages folder (~/Documents/Max 8/Packages/), rename the folder to "SP-Tools", and restart Max.
3) ***IMPORTANT*** If you are using the Max for Live devices, make sure you download and install Max version 8.3+ and point Live to it in the preferences (*File Folder -> Max Application -> Browse*)

You can then open the SP-Tools Overview patch via the Package Manager info window, or from the extras menu bar.
64 changes: 48 additions & 16 deletions README.md
@@ -1,5 +1,5 @@
# SP-Tools - Machine Learning tools for drums and percussion
SP-Tools are a set of marchine learning tools that are optimized for low latency and real-time performance. The tools can be used with [Sensory Percussion](http://sunhou.se) sensors, ordinary drum triggers, or any audio input.
SP-Tools are a set of machine learning tools that are optimized for low latency and real-time performance. The tools can be used with [Sensory Percussion](http://sunhou.se) sensors, ordinary drum triggers, or any audio input.

SP-Tools includes low latency onset detection, onset-based descriptor analysis, classification and clustering, corpus analysis and querying*, neural network predictive regression, and a slew of other abstractions that are optimized for drum and percussion sounds.

@@ -11,34 +11,42 @@ FluCoMa v1.0 or higher.
All abstractions work in 64-bit and M1/Universal Binary.

## Useful Videos
[SP-Tools Teaser Video](https://www.youtube.com/watch?v=CXLFH496TBI)
[SP-Tools (alpha v0.1) Video Overview](https://www.youtube.com/watch?v=xxiWaFLn0M8)
[SP-Tools (alpha v0.2) Video Overview](https://www.youtube.com/watch?v=luLl4eJdezQ)
[SP-Tools (alpha v0.3) Video Overview](https://www.youtube.com/watch?v=FSUcIMrjy7c)
[SP-Tools (alpha v0.4) Video Overview](https://www.youtube.com/watch?v=q20wLzf8RVU)
[SP-Tools (alpha v0.5) Video Overview](https://www.youtube.com/watch?v=W2N_XyrVvrc)
[SP-Tools (alpha v0.6) Video Overview/Walkthrough](https://www.youtube.com/watch?v=OVByXZEaebo)
[SP-Tools Teaser Video - Performance and Musical Examples](https://www.youtube.com/watch?v=CXLFH496TBI)
[SP-Tools (alpha v0.1) - Initial Overview](https://www.youtube.com/watch?v=xxiWaFLn0M8)
[SP-Tools (alpha v0.2) - Controllers and Setups](https://www.youtube.com/watch?v=luLl4eJdezQ)
[SP-Tools (alpha v0.3) - Filtering, Playback, and Realtime Analysis](https://www.youtube.com/watch?v=FSUcIMrjy7c)
[SP-Tools (alpha v0.4) - Concatenation and Realtime Filtering](https://www.youtube.com/watch?v=q20wLzf8RVU)
[SP-Tools (alpha v0.5) - Grid-Based Matching, Erae Touch, and Max for Live](https://www.youtube.com/watch?v=W2N_XyrVvrc)
[SP-Tools (alpha v0.6) - Max for Live Walkthrough](https://www.youtube.com/watch?v=OVByXZEaebo)
[SP-Tools (alpha v0.7) - Ramps, Data Processing, Novelty, and Timestretching](https://www.youtube.com/watch?v=yCWKemdfm78)
[Corpus-Based Sampler](https://www.youtube.com/watch?v=WMGHqyyn1TE)
[Metal by the Foot 1/4](https://www.youtube.com/watch?v=ZMke-GUlWYU)

## Changelog
### v0.6 - [SP-Tools v0.6 Video Overview/Walkthrough](https://www.youtube.com/watch?v=OVByXZEaebo)
### v0.7 - [SP-Tools v0.7 - Ramps, Data Processing, Novelty, and Timestretching](https://www.youtube.com/watch?v=yCWKemdfm78)
* **BREAKING CHANGES** - all objects that had a separate control inlet now take those messages in the left-most inlet
* added new "ramp" objects for structural and gestural changes (`sp.ramp`, `sp.ramp~`)
* added new "data" objects for transforming, looping, and delaying descriptors (`sp.databending`, `sp.datadelay`, `sp.datagranular`, `sp.datalooper~`, `sp.datatranspose`)
* added novelty-based segmentation for determining changes in material type (`sp.novelty~`)
* added timestretching functionality to `sp.corpusplayer~` and the `Corpus Match` M4L device

### v0.6 - [SP-Tools v0.6 - Max for Live Walkthrough](https://www.youtube.com/watch?v=OVByXZEaebo)
* added Max for Live devices (16 total) which cover (nearly) all the functionality of SP-Tools
* Max codebase further commented and tidied

### v0.5 - [SP-Tools v0.5 Video Overview](https://www.youtube.com/watch?v=W2N_XyrVvrc)
### v0.5 - [SP-Tools v0.5 - Grid-Based Matching, Erae Touch, and Max for Live](https://www.youtube.com/watch?v=W2N_XyrVvrc)
* added Max for Live devices for some of the main/flagship functionality (`Concat Match`, `Controllers`, `Corpus Match`, `Descriptors`, `Speed`)
* added `sp.gridmatch` abstraction for generic controller-based navigation of corpora
* added support for the Erae Touch controller (`sp.eraetouch`)
* improved path stability when loading example corpora

### v0.4 - [SP-Tools v0.4 Video Overview](https://www.youtube.com/watch?v=q20wLzf8RVU)
### v0.4 - [SP-Tools v0.4 - Concatenation and Realtime Filtering](https://www.youtube.com/watch?v=q20wLzf8RVU)
* added "concat" objects for real-time mosaicking and concatenative synthesis (`sp.concatanalysis~`, `sp.concatcreate`, `sp.concatmatch`, `sp.concatplayer~`, `sp.concatsynth~`)
* added ability to apply filtering to any descriptor list (via `sp.filter`)
* improved filtering to allow for multiple chained criteria (using `and` and `or` joiners)
* updated/improved pitch and loudness analysis algorithms slightly (you should reanalyze corpora/setups/etc...)

### v0.3 - [SP-Tools v0.3 Video Overview](https://www.youtube.com/watch?v=FSUcIMrjy7c)
### v0.3 - [SP-Tools v0.3 - Filtering, Playback, and Realtime Analysis](https://www.youtube.com/watch?v=FSUcIMrjy7c)
* added ability to filter corpora by descriptors (baked into `sp.corpusmatch` via `filter` messages)
* added improved/unified corpus playback with `sp.corpusplayer~`
* add realtime analysis abstractions (`sp.realtimeframe~`, `sp.descriptorsrt~`, `sp.melbandsrt~`, `sp.mfccrt~`)
@@ -47,7 +55,7 @@ All abstractions work in 64-bit and M1/Universal Binary.
* added `sp.corpuslist` abstraction for visualizing and playing samples in a corpus in list form
* removed old playback abstractions (`sp.corpussimpleplayer~`, `sp.corpusloudnessplayer~`, `sp.corpusmelbandplayer~`)

### v0.2 - [SP-Tools v0.2 Video Overview](https://www.youtube.com/watch?v=luLl4eJdezQ)
### v0.2 - [SP-Tools v0.2 - Controllers and Setups](https://www.youtube.com/watch?v=luLl4eJdezQ)
* added "setups" (corpus scaling and neural network prediction/regression)
* added "controllers" (meta-parameters extracted from onset timings and descriptor analysis)
* added four new abstractions (`sp.controllers`, `sp.speed`, `sp.setupanalysis`, `sp.setuptrain~`)
@@ -118,6 +126,21 @@ sp.corpusplayer~ is an all-in-one playback object that allows for mono or stereo
### **sp.crossbank~** - *Cascade of cross~ filters*
sp.crossbank~ is a cascade of cross~ filters for spectral compensation. Frequencies are pre-set to adjust the spectrum based on the melband analysis/compensation. It should be used inside a poly~ object.

### **sp.databending** - *Transform and distort descriptor streams*
sp.databending takes incoming descriptor data (descriptors, melbands, or MFCCs) and applies various transformations and "bends". The input can be lists or buffers and the same will be output.

### **sp.datadelay** - *Delays incoming descriptor data like a lossy "analog" delay*
sp.datadelay takes incoming descriptors (of any kind) and sends them through a delay line. The feedback and rolloff parameters function as they would in a conventional delay. The input can be lists or buffers and the same will be output.
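
SP-Tools itself is a set of Max abstractions, so as a language-neutral illustration, here is a Python sketch of the delay-line idea the description names. The class name, parameter names, and defaults are hypothetical; this is not sp.datadelay's actual algorithm.

```python
from collections import deque

class DataDelay:
    """Toy descriptor delay line (an illustration, not sp.datadelay's
    actual algorithm): frames come back delay_frames later, with
    feedback controlling how much of each delayed frame re-enters
    the line and rolloff dampening it on every pass."""

    def __init__(self, delay_frames=4, feedback=0.5, rolloff=0.9):
        self.line = deque([None] * delay_frames, maxlen=delay_frames)
        self.feedback = feedback
        self.rolloff = rolloff

    def process(self, frame):
        delayed = self.line[0]
        if delayed is None:            # line not yet full: output "silence"
            delayed = [0.0] * len(frame)
        # mix the dampened delayed frame back in with the new input
        mixed = [x + self.feedback * self.rolloff * d
                 for x, d in zip(frame, delayed)]
        self.line.append(mixed)
        return delayed
```

As with an audio delay, higher feedback keeps echoes of old descriptor frames circulating, and rolloff decides how quickly they fade.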

### **sp.datagranular** - *Repeat and vary incoming descriptor data like a "granular synth"*
sp.datagranular takes incoming descriptor data (descriptors, melbands, or MFCCs) and processes it through a "granular synth"-style process.

### **sp.datalooper~** - *Record, loop, and play back descriptor data*
sp.datalooper~ takes incoming descriptor data (descriptors, melbands, or MFCCs) and sends it into a looper with fairly conventional looper controls.

### **sp.datatranspose** - *Transpose and modify descriptor streams*
sp.datatranspose takes incoming descriptor data (descriptors, melbands, or MFCCs) and "transposes" it in different ways. The input can be lists or buffers and the same will be output.
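
To make "transposing" descriptor data concrete, here is a hedged Python sketch. The frame layout and parameter names are assumptions for illustration, not sp.datatranspose's real interface.

```python
def transpose_descriptors(frame, semitones=0.0, loudness_offset_db=0.0):
    """Toy descriptor 'transposition': frame is assumed to be
    [pitch_midi, loudness_db, centroid_hz] (a hypothetical layout).
    Pitch shifts linearly in MIDI; spectral centroid scales by the
    equivalent frequency ratio; loudness gets a dB offset."""
    pitch, loud, centroid = frame
    ratio = 2.0 ** (semitones / 12.0)   # semitone shift as a frequency ratio
    return [pitch + semitones, loud + loudness_offset_db, centroid * ratio]
```

The key idea is that each descriptor is shifted in its own natural units (semitones for pitch, dB for loudness, a ratio for Hz-valued descriptors) rather than all being scaled the same way.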

### **sp.descriptordisplay** - *Displays the descriptors as a radar chart*
sp.descriptordisplay plots the incoming realtime descriptors, along with the nearest match on a radar chart for visualizing the differences between the incoming audio and its nearest match.

@@ -154,11 +177,14 @@ sp.melbands~ outputs 40 melbands which can be used for spectral compensation in
### **sp.mfcc~** - *Analyzes audio for MFCCs based on audio onsets*
sp.mfcc~ outputs 13 MFCC coefficients (skipping the 0th coefficient) which can be used for classification and clustering. Although abstract they can also be used to control parameters.

### **sp.mfccframe** - *Analyzes audio for MFCCs based on frame input*
sp.mfccframe outputs 13 MFCC coefficients (skipping the 0th coefficient) which can be used for classification and clustering. Although abstract they can also be used to control parameters.

### **sp.mfccrt~** - *Analyzes audio for MFCCs based on audio input*
sp.mfccrt~ outputs 13 MFCC coefficients (skipping the 0th coefficient) which can be used for classification and clustering. Although abstract they can also be used to control parameters.

### **sp.novelty~** - *Novelty-based onset detection*
sp.novelty~ takes audio input and outputs a bang and a trigger when novelty is detected. The novelty can be computed across different time frames and for different parameters.
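
As a rough illustration of novelty-based segmentation (a sketch of the general technique, not sp.novelty~'s implementation), one can compare averaged descriptor frames before and after each point in time; a spike in the distance suggests a change of material:

```python
import math

def novelty_curve(frames, kernel=4):
    """Toy novelty curve: at each position, compare the mean of the
    previous `kernel` descriptor frames to the mean of the next
    `kernel` frames; large distances suggest a change in material."""
    dims = range(len(frames[0]))
    out = []
    for i in range(kernel, len(frames) - kernel):
        past = [sum(f[d] for f in frames[i - kernel:i]) / kernel for d in dims]
        future = [sum(f[d] for f in frames[i:i + kernel]) / kernel for d in dims]
        out.append(math.dist(past, future))
    return out
```

A larger kernel corresponds to judging novelty over a longer time frame, which is the trade-off the description alludes to: short kernels flag individual hits, long kernels flag section changes.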

### **sp.onset~** - *Amplitude-based onset detection*
sp.onset~ takes audio input and outputs a bang, trigger, and a gate when an onset is detected. The sensitivity is adjustable (0-100%) and a threshold can be set as an absolute noise floor (in dB).
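
A minimal sketch of the amplitude-onset idea in Python, assuming an envelope already measured in dB; the sensitivity scaling and threshold handling here are illustrative guesses, not sp.onset~'s actual mapping:

```python
def detect_onsets(env_db, sensitivity=50.0, floor_db=-40.0):
    """Toy amplitude onset detector over an envelope in dB: report an
    onset when the envelope rises sharply enough (scaled by a 0-100%
    sensitivity) while sitting above an absolute noise floor."""
    # higher sensitivity -> smaller rise needed to count as an onset
    min_jump = 12.0 * (1.0 - sensitivity / 100.0)
    onsets = []
    for i in range(1, len(env_db)):
        rise = env_db[i] - env_db[i - 1]
        if env_db[i] > floor_db and rise > min_jump:
            onsets.append(i)
    return onsets
```

The absolute floor is what keeps bleed and room noise from triggering, independent of how sensitive the rise detection is.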
@@ -170,7 +196,13 @@ sp.onsetframe~ takes audio input, just like sp.onset~ but instead of outputting
sp.playbackcore~ is the underlying poly~ that handles the polyphonic sample playback of matched corpus entries. It's not intended to be used on its own, but rather is the core component of sp.corpusplayer~.

### **sp.plotter** - *Display 2d corpus data and labels*
sp.plotter is a utility for visualizing corpora and trained classes.

### **sp.ramp** - *Event-based ramp generation*
sp.ramp takes onsets as input (as bangs or triggers/gates) and incrementally outputs three versions of a given ramp based on the number of defined events.
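
As a sketch of the event-counting idea, here is a Python illustration; the exact trio of ramp shapes (rising, falling, triangle below) is an assumption, not necessarily what sp.ramp outputs:

```python
def ramp_step(event_index, total_events):
    """Toy event-based ramp: given which event just arrived out of
    total_events, return three readings of the ramp (rising, falling,
    triangle -- the exact trio in sp.ramp is an assumption here)."""
    phase = event_index / max(total_events - 1, 1)   # 0.0 .. 1.0 across the events
    rising = phase
    falling = 1.0 - phase
    triangle = 1.0 - abs(2.0 * phase - 1.0)
    return rising, falling, triangle
```

Because the ramp advances per event rather than per unit of time, the gesture unfolds at whatever pace the player actually strikes.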

### **sp.ramp~** - *Onset-based ramp generation*
sp.ramp~ takes onsets as input (as bangs or triggers/gates) and outputs three versions of a given ramp, allowing sample-accurate gestures to be triggered by incoming onsets.

### **sp.realtimeframe~** - *Buffer recording and clock output*
sp.realtimeframe~ is the counterpart to sp.onsetframe~: instead of outputting the frame to analyze based on onset detection, sp.realtimeframe~ outputs a constant stream of frame values to analyze, enabling realtime analysis of multiple descriptor types that remain in sync.
