🎵 Bach Chorale Music Generator (Deep Learning)

This project generates Bach-style chorale music using a deep learning model built with TensorFlow/Keras. The model learns musical patterns from a dataset of chorales and generates new sequences of notes that can be played as MIDI music.

The architecture combines dilated 1-D convolutions (a Temporal Convolutional Network) with an LSTM to capture both short-term musical motifs and long-term harmonic structure.


Project Structure

Music-Generator/
│
├── Music_Generator.ipynb
│
└── Music-Dataset/
    │
    ├── train/
    │   ├── chorale_000.csv
    │   ├── chorale_001.csv
    │   └── ...
    │
    ├── test/
    │   ├── chorale_000.csv
    │   └── ...
    │
    └── valid/
        ├── chorale_000.csv
        └── ...

Dataset Description

Each CSV file represents one Bach chorale.

Each row represents a time step and each column represents a voice in the chord.

Example:

| note0 | note1 | note2 | note3 |
|-------|-------|-------|-------|
| 74    | 70    | 65    | 58    |
| 74    | 70    | 65    | 58    |
| 75    | 70    | 58    | 55    |

At each time step, all four notes are played simultaneously, forming a chord.


🎹 Note Encoding

| Value | Meaning            |
|-------|--------------------|
| 36    | C1 (lowest pitch)  |
| 81    | A5 (highest pitch) |
| 0     | Silence            |

Installation

Install the required libraries:

pip install tensorflow pandas numpy music21

🧹 Data Preprocessing

Steps used to preprocess the dataset:

  1. Load CSV chorale files
  2. Convert notes to integer tokens
  3. Create sliding windows of sequences
  4. Generate input-output pairs for training

Parameters

window_size = 32
window_offset = 16
batch_size = 32

After preprocessing:

X_train shape: (3111, 131)
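The windowing in steps 3–4 can be sketched as below. This is an illustrative assumption, not the notebook's exact code: the helper name `window_chorale` and the token-stream layout are hypothetical, and the reported `X_train` shape depends on details not shown in this README.

```python
import numpy as np

def window_chorale(chorale, window_size=32, window_offset=16):
    """Cut one chorale into overlapping next-token prediction pairs.

    The (time_steps, 4) note grid is flattened into a single token stream,
    then windows of window_size + 1 tokens are taken every window_offset
    tokens; inputs are all tokens but the last, targets all but the first.
    """
    tokens = np.asarray(chorale).ravel()
    windows = np.array([
        tokens[i : i + window_size + 1]
        for i in range(0, len(tokens) - window_size, window_offset)
    ])
    return windows[:, :-1], windows[:, 1:]
```

Shifting the target by one token turns the dataset into a next-note prediction task, which is what the softmax output of the model is trained on.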

Model Architecture

The model combines CNN + LSTM layers.

Architecture

Embedding Layer
        ↓
Conv1D (32 filters)
        ↓
Batch Normalization
        ↓
Conv1D (48 filters, dilation=2)
        ↓
Batch Normalization
        ↓
Conv1D (64 filters, dilation=4)
        ↓
Batch Normalization
        ↓
Conv1D (96 filters, dilation=8)
        ↓
Batch Normalization
        ↓
Conv1D (128 filters, dilation=16)
        ↓
Batch Normalization
        ↓
Dropout
        ↓
LSTM (256 units)
        ↓
Dense (Softmax output)

Total parameters:

455,056
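The diagram above translates into a Keras model along these lines. The filter counts and dilation rates follow the diagram; the kernel size, embedding dimension, dropout rate, and vocabulary size are assumptions, so the parameter count of this sketch will not necessarily match 455,056.

```python
from tensorflow import keras

n_tokens = 47  # assumption: notes 36-81 plus 0 for silence = 47 tokens

model = keras.Sequential([
    keras.layers.Embedding(input_dim=n_tokens, output_dim=5),
    keras.layers.Conv1D(32, kernel_size=2, padding="causal", activation="relu"),
    keras.layers.BatchNormalization(),
    keras.layers.Conv1D(48, kernel_size=2, padding="causal", dilation_rate=2, activation="relu"),
    keras.layers.BatchNormalization(),
    keras.layers.Conv1D(64, kernel_size=2, padding="causal", dilation_rate=4, activation="relu"),
    keras.layers.BatchNormalization(),
    keras.layers.Conv1D(96, kernel_size=2, padding="causal", dilation_rate=8, activation="relu"),
    keras.layers.BatchNormalization(),
    keras.layers.Conv1D(128, kernel_size=2, padding="causal", dilation_rate=16, activation="relu"),
    keras.layers.BatchNormalization(),
    keras.layers.Dropout(0.2),
    keras.layers.LSTM(256, return_sequences=True),
    keras.layers.Dense(n_tokens, activation="softmax"),  # next-token distribution per step
])
```

Causal padding keeps each convolution from looking at future notes, and the doubling dilation rates (1, 2, 4, 8, 16) grow the receptive field exponentially, which is the TCN idea the architecture section describes.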

Training

Training configuration:

Optimizer: Nadam
Learning Rate: 0.001
Loss: Sparse Categorical Crossentropy
Epochs: 15
Batch Size: 32
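The configuration above corresponds to a standard Keras compile/fit call. The sketch below uses a tiny stand-in model and synthetic tokens so it runs on its own; the project trains the full CNN+LSTM model on the windowed chorale data for 15 epochs.

```python
import numpy as np
from tensorflow import keras

n_tokens = 47  # assumption: notes 36-81 plus silence
# Tiny stand-in model so the sketch is self-contained.
model = keras.Sequential([
    keras.layers.Embedding(n_tokens, 5),
    keras.layers.LSTM(16, return_sequences=True),
    keras.layers.Dense(n_tokens, activation="softmax"),
])
model.compile(
    loss="sparse_categorical_crossentropy",                # targets are integer tokens
    optimizer=keras.optimizers.Nadam(learning_rate=1e-3),
    metrics=["accuracy"],
)
X = np.random.randint(0, n_tokens, size=(64, 16))          # placeholder inputs
y = np.random.randint(0, n_tokens, size=(64, 16))          # placeholder targets
history = model.fit(X, y, batch_size=32, epochs=2, verbose=0)  # epochs=15 in the project
```

Sparse categorical crossentropy is what allows the targets to stay as integer note tokens instead of one-hot vectors.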

Final results:

Training Accuracy ≈ 89%
Validation Accuracy ≈ 82%

Music Generation

The trained model predicts the next note token based on previous notes.

Steps

  1. Provide seed chords
  2. Model predicts probability of next note
  3. Sample next note
  4. Append it to sequence
  5. Repeat until full chorale is generated

Example:

seed_chords = test_data[2][:8]

new_chorale = generate_chorale(model, seed_chords, 56)
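The body of `generate_chorale` is not shown in this README; a plausible sketch of the sampling loop described in the steps above follows. The flattened token layout and sampling details are assumptions.

```python
import numpy as np

def generate_chorale(model, seed_chords, length, rng=np.random.default_rng()):
    """Autoregressively extend seed chords by `length` time steps (4 voices each)."""
    arpeggio = np.asarray(seed_chords).reshape(1, -1)       # flatten chords to a token stream
    for _ in range(length * 4):                             # one token per voice per step
        probs = model.predict(arpeggio, verbose=0)[0, -1]   # distribution over the next token
        next_note = rng.choice(len(probs), p=probs)         # sample rather than argmax
        arpeggio = np.append(arpeggio, [[next_note]], axis=1)
    return arpeggio.reshape(-1, 4)
```

Sampling from the predicted distribution (rather than always taking the most likely note) keeps the generated chorales from collapsing into repetitive loops.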

🔊 Playing Generated Music

Generated music is converted to MIDI using music21.

from music21 import stream, chord, note

s = stream.Stream()

for row in chorale:
    pitches = [int(n) for n in row if n]            # drop 0s (silence)
    if pitches:
        s.append(chord.Chord(pitches, quarterLength=1))
    else:
        s.append(note.Rest(quarterLength=1))        # all four voices silent

s.show('midi')

Random Chorale Baseline

A random music generator is also implemented for comparison.

generate_random_chorale(length=56)

This provides a baseline for comparing random output against the model's learned musical patterns.
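The implementation of `generate_random_chorale` is not shown here; a minimal version, assuming uniform sampling over the dataset's valid pitch range, might look like:

```python
import numpy as np

def generate_random_chorale(length=56, rng=np.random.default_rng()):
    # Four independent uniform notes per time step, within the 36-81 pitch range.
    return rng.integers(36, 82, size=(length, 4))
```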


Save and Load Model

Save trained model:

model.save("bach_generation.keras")

Load model:

from tensorflow import keras

model = keras.models.load_model("bach_generation.keras")

Technologies Used

  • Python
  • TensorFlow / Keras
  • NumPy
  • Pandas
  • music21

Future Improvements

  • Transformer-based music generation
  • Attention mechanisms
  • Larger music datasets
  • Temperature-based sampling
  • Real-time music generation

👨‍💻 Author

Pratham

Deep Learning Project – Music Generation
