This project generates Bach-style chorale music using a deep learning model built with TensorFlow/Keras. The model learns musical patterns from a dataset of chorales and generates new sequences of notes that can be played as MIDI music.
The architecture combines a temporal convolutional network (a stack of dilated Conv1D layers) with an LSTM to capture both short-term musical motifs and long-term harmonic structure.
```
Music-Generator/
│
├── Music_Generator.ipynb
│
└── Music-Dataset/
    ├── train/
    │   ├── chorale_000.csv
    │   ├── chorale_001.csv
    │   └── ...
    ├── test/
    │   ├── chorale_000.csv
    │   └── ...
    └── valid/
        ├── chorale_000.csv
        └── ...
```
Each CSV file represents one Bach chorale.
Each row represents a time step, and each column represents one voice in the chord.
Example:
| note0 | note1 | note2 | note3 |
|---|---|---|---|
| 74 | 70 | 65 | 58 |
| 74 | 70 | 65 | 58 |
| 75 | 70 | 58 | 55 |
At each time step, all four notes sound simultaneously, forming a chord.
| Value | Meaning |
|---|---|
| 36 | C1 (lowest pitch) |
| 81 | A5 (highest pitch) |
| 0 | Silence |
Install the required libraries:

```bash
pip install tensorflow pandas numpy music21
```

Steps used to preprocess the dataset:
- Load CSV chorale files
- Convert notes to integer tokens
- Create sliding windows of sequences
- Generate input-output pairs for training
```python
window_size = 32
window_offset = 16
batch_size = 32
```

After preprocessing:

```
X_train shape: (3111, 131)
```
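The sliding-window step might look like the sketch below. The flattening order, helper name, and the toy input are assumptions for illustration, not the notebook's exact code.

```python
import numpy as np

# Hyperparameters mirror the configuration above
WINDOW_SIZE = 32
WINDOW_OFFSET = 16

def make_windows(chorale, window_size=WINDOW_SIZE, offset=WINDOW_OFFSET):
    """Flatten a (timesteps, 4) chorale into a token stream and slice it
    into overlapping input/target windows for next-token prediction."""
    tokens = np.asarray(chorale).ravel()        # row-major: chords stay adjacent
    windows = []
    for start in range(0, len(tokens) - window_size, offset):
        win = tokens[start:start + window_size + 1]
        windows.append((win[:-1], win[1:]))     # inputs, targets shifted by one
    return windows

# Tiny fake chorale: 20 time steps x 4 voices = 80 tokens
fake = np.arange(80).reshape(20, 4)
pairs = make_windows(fake)
X, y = pairs[0]
print(len(pairs), X.shape, y.shape)  # 3 (32,) (32,)
```

Each target window is the input window shifted one token to the right, so every position trains the model to predict the next note.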
The model combines CNN + LSTM layers.
```
Embedding Layer
    ↓
Conv1D (32 filters)
    ↓
Batch Normalization
    ↓
Conv1D (48 filters, dilation=2)
    ↓
Batch Normalization
    ↓
Conv1D (64 filters, dilation=4)
    ↓
Batch Normalization
    ↓
Conv1D (96 filters, dilation=8)
    ↓
Batch Normalization
    ↓
Conv1D (128 filters, dilation=16)
    ↓
Batch Normalization
    ↓
Dropout
    ↓
LSTM (256 units)
    ↓
Dense (Softmax output)
```
Total parameters:
455,056
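The stack above might be sketched in Keras as follows. The kernel size, dropout rate, embedding dimension, and vocabulary size (`N_NOTES = 47`, i.e. notes 36-81 plus silence) are assumptions, not the notebook's exact values.

```python
import tensorflow as tf

N_NOTES = 47  # assumed token vocabulary size

def build_model(n_notes=N_NOTES, embedding_dim=5):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=[None]),                 # variable-length token sequences
        tf.keras.layers.Embedding(n_notes, embedding_dim),
    ])
    # Dilated Conv1D stack: the receptive field doubles at every block
    for filters, dilation in [(32, 1), (48, 2), (64, 4), (96, 8), (128, 16)]:
        model.add(tf.keras.layers.Conv1D(filters, kernel_size=2,
                                         padding="causal",
                                         activation="relu",
                                         dilation_rate=dilation))
        model.add(tf.keras.layers.BatchNormalization())
    model.add(tf.keras.layers.Dropout(0.3))
    model.add(tf.keras.layers.LSTM(256, return_sequences=True))
    # Per-step softmax over the note vocabulary
    model.add(tf.keras.layers.Dense(n_notes, activation="softmax"))
    return model

model = build_model()
```

Causal padding keeps each output step from seeing future notes, which is what makes autoregressive generation valid.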
Training configuration:
Optimizer: Nadam
Learning Rate: 0.001
Loss: Sparse Categorical Crossentropy
Epochs: 15
Batch Size: 32
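The compile/fit calls implied by this configuration might look like the sketch below. A tiny stand-in model and dummy data keep the snippet runnable; in the notebook, `model` is the CNN+LSTM network and the data comes from the windowing step.

```python
import numpy as np
import tensorflow as tf

# Stand-in model (the real one is the CNN+LSTM stack described earlier)
model = tf.keras.Sequential([
    tf.keras.Input(shape=[None]),
    tf.keras.layers.Embedding(47, 5),
    tf.keras.layers.Dense(47, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Nadam(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data in place of the real windowed chorales
X = np.zeros((32, 16), dtype="int32")
y = np.zeros((32, 16), dtype="int32")
history = model.fit(X, y, epochs=1, batch_size=32, verbose=0)
```

Sparse categorical crossentropy lets the integer note tokens serve directly as labels, with no one-hot encoding needed.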
Final results:
Training Accuracy ≈ 89%
Validation Accuracy ≈ 82%
The trained model predicts the next note token based on previous notes.
- Provide seed chords
- Model predicts probability of next note
- Sample next note
- Append it to sequence
- Repeat until full chorale is generated
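The sampling loop above can be sketched as follows. `predict_proba` is a placeholder for the trained model's next-token distribution (the notebook's `generate_chorale` works chord by chord over four voices, so the details differ).

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_proba(sequence, n_notes=47):
    """Placeholder for model.predict: returns next-token probabilities."""
    return np.full(n_notes, 1.0 / n_notes)

def generate_sequence(seed, length):
    """Autoregressively extend `seed` by `length` sampled tokens."""
    seq = list(seed)
    for _ in range(length):
        probs = predict_proba(seq)
        next_token = rng.choice(len(probs), p=probs)  # sample the next note
        seq.append(int(next_token))
    return seq

out = generate_sequence([60, 64, 67], 10)
print(len(out))  # 13
```

Sampling from the distribution (rather than always taking the argmax) keeps the generated chorales from collapsing into repetitive loops.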
Example:
```python
seed_chords = test_data[2][:8]
new_chorale = generate_chorale(model, seed_chords, 56)
```

Generated music is converted to MIDI using music21.
```python
from music21 import stream, chord

s = stream.Stream()
for row in chorale:
    # drop zeros (silence) before building each chord
    s.append(chord.Chord([n for n in row if n], quarterLength=1))
s.show('midi')
```

A random music generator is also implemented for comparison.
```python
generate_random_chorale(length=56)
```

This helps compare random vs. learned musical patterns.
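One possible shape for the random baseline is sketched below; the actual notebook implementation may differ. Notes are drawn uniformly from the valid pitch range (36-81).

```python
import numpy as np

def generate_random_chorale(length=56, n_voices=4, seed=0):
    """Baseline: uniformly random chords with no learned structure."""
    rng = np.random.default_rng(seed)
    return rng.integers(36, 82, size=(length, n_voices))  # upper bound exclusive

chorale = generate_random_chorale(56)
print(chorale.shape)  # (56, 4)
```

Listening to this baseline next to the model's output makes the learned harmonic structure easy to hear.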
Save the trained model:

```python
model.save("bach_generation.keras")
```

Load the model:

```python
from tensorflow import keras

model = keras.models.load_model("bach_generation.keras")
```

- Python
- TensorFlow / Keras
- NumPy
- Pandas
- music21
- Transformer-based music generation
- Attention mechanisms
- Larger music datasets
- Temperature-based sampling
- Real-time music generation
Pratham
Deep Learning Project – Music Generation