AbletonML is a natural language interface for Ableton Live: type plain English commands like "create a midi track with piano" or "set tempo to 120 BPM" and they are executed in your Live set.
- Control Ableton Live using natural language commands
- Modern Electron-based GUI
- Max for Live integration
- Real-time project state visualization
- macOS (tested on macOS 11+)
- Ableton Live 11 or 12 with Max for Live
- Node.js 14+ and npm
- Python 3.9+
- Clone this repository:

  ```bash
  git clone https://github.com/yourusername/AbletonML.git
  cd AbletonML
  ```

- Install Python dependencies:

  ```bash
  # Create virtual environment (recommended)
  python3 -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate

  # Install dependencies
  pip install -r requirements.txt

  # For macOS users: if you get tkinter errors, install:
  brew install python-tk@3.13
  ```

- Install Node.js dependencies (for the Electron GUI):

  ```bash
  npm install
  ```

- Set up the Max for Live device:
  - Open the `max/AbletonML_Bridge.maxpat` file in Max
  - Export it as a Max for Live device (see the instructions in `max/export_to_amxd.txt`)
  - Load the device into a MIDI track in Ableton Live
- Start the simple test GUI:

  ```bash
  source venv/bin/activate
  python test_simple_gui.py
  ```

- Type commands like:
  - `set tempo to 120`
  - `create midi track`
  - `add piano`
  - `add reverb to track 2`
- Start Ableton Live and load the AbletonML_Bridge device on a MIDI track.
- Start the AbletonML application:

  ```bash
  # Option 1: Simple GUI (faster)
  python app/simple_gui.py

  # Option 2: Electron GUI
  npm start
  ```

- Type natural language commands in the input field and press Enter or click "Execute".
- Tempo: "set tempo to 120", "change bpm to 90", "set bpm to 140", "adjust tempo to 80"
- Tracks: "create midi track", "create audio track", "add audio track", "make midi track"
- Instruments: "add piano", "add synth", "add drums"
- Effects: "add reverb to track 2", "add delay to track 1", "add echo to track 3", "add compressor to track 4"
- Effect parameters: "set reverb dry/wet to 30%", "set delay mix to 50", "set compressor amount to 75%", "set reverb wet to 80", "set delay level to 25%"
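Commands in these families can be recognized with simple pattern matching. A minimal sketch of the idea, not the actual `core/nlp.py` implementation (the function name and action dictionary shape are illustrative assumptions):

```python
import re

def parse_command(text):
    """Parse a natural language command into an action dict (illustrative only)."""
    text = text.lower().strip()

    # Tempo commands: "set tempo to 120", "change bpm to 90"
    m = re.match(r"(?:set|change|adjust)\s+(?:tempo|bpm)\s+to\s+(\d+)", text)
    if m:
        return {"action": "set_tempo", "bpm": int(m.group(1))}

    # Track commands: "create midi track", "add audio track"
    m = re.match(r"(?:create|add|make)\s+(midi|audio)\s+track", text)
    if m:
        return {"action": "create_track", "type": m.group(1)}

    # Effect commands: "add reverb to track 2"
    m = re.match(r"add\s+(\w+)\s+to\s+track\s+(\d+)", text)
    if m:
        return {"action": "add_effect", "effect": m.group(1), "track": int(m.group(2))}

    return {"action": "unknown", "raw": text}

print(parse_command("set tempo to 120"))  # {'action': 'set_tempo', 'bpm': 120}
```

A regex-per-pattern approach like this is easy to extend but brittle to rephrasing, which is why the example commands above stick to consistent wording.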
AbletonML consists of several components:
- Electron Frontend: A modern GUI for entering commands and visualizing the project state.
- Python Backend: Processes natural language commands and communicates with Ableton Live.
  - NLP Module: Parses natural language commands
  - Action Mapper: Maps parsed commands to actions
  - Max Controller: Sends commands to Max for Live
- Max for Live Device: Receives commands from the Python backend and controls Ableton Live.
- User enters a command in the Electron frontend
- Command is sent to the Python backend via Socket.IO
- NLP module parses the command
- Action mapper converts the parsed command to actions
- Max controller sends the actions to the Max for Live device via UDP
- Max for Live device executes the actions in Ableton Live
- Updated project state is sent back to the frontend
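The Python-to-Max leg of this flow is a plain UDP send on port 7400 (the port noted in Troubleshooting). A minimal sketch, assuming a simple space-delimited message format; the actual wire format is defined by the Max patch:

```python
import socket

MAX_HOST = "127.0.0.1"
MAX_PORT = 7400  # must match the udpreceive port in the Max for Live device

def send_to_max(action, *args):
    """Send one command to the Max for Live bridge as a single UDP datagram."""
    message = " ".join([action, *map(str, args)]).encode("utf-8")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(message, (MAX_HOST, MAX_PORT))
    finally:
        sock.close()

# e.g. send_to_max("set_tempo", 120) emits the datagram b"set_tempo 120"
```

UDP is fire-and-forget, which is why the Troubleshooting section below can only suggest checking the port and restarting: there is no delivery acknowledgment on this leg.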
```bash
# Install tkinter support
brew install python-tk@3.13

# Recreate the virtual environment
rm -rf venv
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

- Make sure the Max for Live device is loaded in Ableton Live
- Check that UDP port 7400 is not blocked by a firewall
- Restart the AbletonML application and Ableton Live
- Try using simpler commands
- Check the exact wording in the example commands
- Make sure the NLP module is properly initialized
The system works in simulation mode when Ableton Live is not connected:
```bash
python test_simple_gui.py
```

- `electron/`: Electron frontend files
- `backend/`: Python backend server
- `core/`: Core modules (NLP, action mapper, controller)
- `max/`: Max for Live device files
To add support for new commands:
- Update the NLP module in `core/nlp.py` to recognize the new command
- Add a mapping function in `core/action_mapper.py`
- Implement the action in `core/max_controller.py`
- Update the Max for Live device to handle the new action
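For example, a hypothetical `mute track 3` command would touch each layer in turn. The function and class names below are illustrative assumptions, not the actual AbletonML APIs:

```python
import re

# 1. In core/nlp.py: recognize the new command (illustrative parser)
def parse_mute(text):
    m = re.match(r"mute\s+track\s+(\d+)", text.lower().strip())
    if m:
        return {"action": "mute_track", "track": int(m.group(1))}
    return None

# 2. In core/action_mapper.py: map the parsed command to a controller call
def map_mute(parsed, controller):
    if parsed and parsed["action"] == "mute_track":
        controller.mute_track(parsed["track"])

# 3. In core/max_controller.py: send the action on to the Max for Live device
class MaxController:
    def mute_track(self, track):
        # the real controller would send a UDP message to the Max patch here
        print(f"sending: mute_track {track}")

controller = MaxController()
map_mute(parse_mute("mute track 3"), controller)  # prints "sending: mute_track 3"
```

The final step, handling the new `mute_track` message inside the Max for Live device, happens in the Max patch itself rather than in Python.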
MIT