
AutoComposer

An HMM-based automatic composer that produces melodies loosely resembling the given training songs.

composer.cpp:

Main module that does the composing. Its input is several lines, each holding a single integer: the first line gives n, the total number of notes, and each of the next n lines gives one note, encoded as 0 for a rest, 1 for 'do', 2 for 're', and so on. Its output is a single line of integers in the same encoding, separated by spaces. For I/O examples, see scores and mine.
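For illustration only (a made-up toy file, not taken from scores), a five-note training input would look like:

```
5
1
2
3
0
1
```

and the composer would print a generated melody on a single line, e.g. `1 2 2 3 0`.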

play.py:

A script that translates the output of composer and plays it (via cxq's server).

scores:

A short piece of canon, used as a test training set.

mine:

A melody I generated; it can be played with play.py.

Hidden Markov Model

A hidden Markov model (a.k.a. HMM) is a way to model random processes. It assumes that the process being modeled is driven by a hidden Markov chain, where each state of the chain induces a distribution over the possible outputs. At each step, the chain moves to a new state and produces an output according to the distribution of its current state. This makes HMMs a natural choice for data that forms a sequence over time, such as melodies.
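Concretely, such a model is just three probability tables. A minimal sketch in C++ (the field names are mine, not necessarily those used in composer.cpp):

```cpp
#include <vector>

// N hidden states, M possible outputs (0 = rest, 1 = 'do', ...).
struct HMM {
    int N, M;
    std::vector<std::vector<double>> A;  // A[i][j]: P(next state j | state i)
    std::vector<std::vector<double>> B;  // B[i][o]: P(output o | state i)
    std::vector<double> pi;              // pi[i]: P(initial state i)
};
```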

Training

The model is trained by adjusting its parameters: the transition probabilities of the Markov chain, the output distributions induced by each state, and the initial distribution over the states. Training proceeds by iteration, following re-estimation formulas derived with the method of Lagrange multipliers (these are the Baum-Welch formulas; proofs can be found in any standard reference on HMMs). All the required quantities can be computed with simple recurrence relations.
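For reference, here is a minimal, unoptimized sketch of one such iteration on a single string, using the HMM struct above. It omits the scaling described under Scaling below, so it is only numerically safe on short strings:

```cpp
#include <vector>

using Mat = std::vector<std::vector<double>>;

// One Baum-Welch re-estimation iteration on a single sequence obs.
void baum_welch_step(HMM &m, const std::vector<int> &obs) {
    const int T = obs.size(), N = m.N;
    Mat alpha(T, std::vector<double>(N)), beta(T, std::vector<double>(N));

    // Forward pass: alpha[t][i] = P(obs[0..t], state at t = i).
    for (int i = 0; i < N; ++i) alpha[0][i] = m.pi[i] * m.B[i][obs[0]];
    for (int t = 1; t < T; ++t)
        for (int j = 0; j < N; ++j) {
            double s = 0;
            for (int i = 0; i < N; ++i) s += alpha[t - 1][i] * m.A[i][j];
            alpha[t][j] = s * m.B[j][obs[t]];
        }

    // Backward pass: beta[t][i] = P(obs[t+1..T-1] | state at t = i).
    for (int i = 0; i < N; ++i) beta[T - 1][i] = 1.0;
    for (int t = T - 2; t >= 0; --t)
        for (int i = 0; i < N; ++i) {
            double s = 0;
            for (int j = 0; j < N; ++j)
                s += m.A[i][j] * m.B[j][obs[t + 1]] * beta[t + 1][j];
            beta[t][i] = s;
        }

    double p = 0;  // P(obs | model)
    for (int i = 0; i < N; ++i) p += alpha[T - 1][i];

    // Accumulate expected counts, then normalize into the new parameters.
    Mat newA(N, std::vector<double>(N, 0)), newB(N, std::vector<double>(m.M, 0));
    std::vector<double> occA(N, 0), occB(N, 0);
    for (int t = 0; t < T; ++t)
        for (int i = 0; i < N; ++i) {
            double g = alpha[t][i] * beta[t][i] / p;  // P(state at t = i | obs)
            if (t == 0) m.pi[i] = g;
            newB[i][obs[t]] += g;
            occB[i] += g;
            if (t < T - 1) {
                occA[i] += g;
                for (int j = 0; j < N; ++j)
                    newA[i][j] += alpha[t][i] * m.A[i][j]
                                  * m.B[j][obs[t + 1]] * beta[t + 1][j] / p;
            }
        }
    for (int i = 0; i < N; ++i) {
        for (int j = 0; j < N; ++j) m.A[i][j] = newA[i][j] / occA[i];
        for (int o = 0; o < m.M; ++o) m.B[i][o] = newB[i][o] / occB[i];
    }
}
```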

Parallelization

Since training on multiple strings is largely independent, it can be done in parallel. In my implementation I chose OpenMP, for its simplicity (and because I'm lazy). Ideally, using c cores simultaneously reduces the required time by a factor of c.
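With OpenMP the pattern is a single pragma over the per-string loop (compile with -fopenmp). As a small runnable illustration, this scores every training song in parallel using the log_likelihood routine sketched under Scaling below; the per-string passes of training parallelize the same way, with their expected counts merged afterwards:

```cpp
#include <omp.h>
#include <vector>

// Score every song in parallel; the iterations share no mutable state.
std::vector<double> score_all(const HMM &m,
                              const std::vector<std::vector<int>> &songs) {
    std::vector<double> ll(songs.size());
    #pragma omp parallel for
    for (int s = 0; s < (int)songs.size(); ++s)
        ll[s] = log_likelihood(m, songs[s]);  // sketched under Scaling below
    return ll;
}
```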

Classification Using HMM

Classification can also be done with an HMM, by computing how probable a given string is as an output of the model (among all possible strings of the same length) and judging by that score. The probability is computed with dynamic programming, implemented in the function "conjecture" in composer.cpp.
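That dynamic program is the standard forward algorithm. Whether "conjecture" is organized exactly this way is a guess; here is the unscaled textbook version for clarity:

```cpp
#include <vector>

// Forward algorithm: P(obs | model), summed over all hidden state paths.
// O(T * N^2) time; unscaled, so it underflows on long strings (see Scaling).
double forward_prob(const HMM &m, const std::vector<int> &obs) {
    std::vector<double> a(m.N);
    for (int i = 0; i < m.N; ++i) a[i] = m.pi[i] * m.B[i][obs[0]];
    for (size_t t = 1; t < obs.size(); ++t) {
        std::vector<double> b(m.N, 0.0);
        for (int j = 0; j < m.N; ++j) {
            for (int i = 0; i < m.N; ++i) b[j] += a[i] * m.A[i][j];
            b[j] *= m.B[j][obs[t]];
        }
        a = b;
    }
    double p = 0;
    for (double x : a) p += x;
    return p;
}
```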

Method Used to Generate Melodies

The idea is simple: let the trained model evolve on its own; its output of the desired length is then our melody. This part is implemented in the function "evlove" in composer.cpp.
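A minimal sampler in this spirit (a sketch; the actual "evlove" may differ):

```cpp
#include <random>
#include <vector>

// Run the trained chain for len steps, emitting one output per step.
std::vector<int> generate(const HMM &m, int len, std::mt19937 &rng) {
    std::vector<int> out;
    int state = std::discrete_distribution<>(m.pi.begin(), m.pi.end())(rng);
    for (int t = 0; t < len; ++t) {
        // emit according to the current state's output distribution
        out.push_back(std::discrete_distribution<>(
            m.B[state].begin(), m.B[state].end())(rng));
        // then take one step of the Markov chain
        state = std::discrete_distribution<>(
            m.A[state].begin(), m.A[state].end())(rng);
    }
    return out;
}
```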

Scaling

For a sufficiently long string, the probability of its appearing is so small that it exceeds the precision limits of any machine. The temporary variables are therefore scaled to avoid underflow. In short, whenever the probability of a prefix is computed, it is normalized, and the logarithm of the normalizing coefficient is recorded. At the end, the logarithm of the total probability is obtained by summing the values recorded at each step.
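A sketch of that scaled computation; this is the log_likelihood routine referenced in the OpenMP example above:

```cpp
#include <cmath>
#include <vector>

// Scaled forward pass: returns log P(obs | model) without underflow.
double log_likelihood(const HMM &m, const std::vector<int> &obs) {
    std::vector<double> a(m.N);
    for (int i = 0; i < m.N; ++i) a[i] = m.pi[i] * m.B[i][obs[0]];
    double logp = 0;
    for (size_t t = 0; t < obs.size(); ++t) {
        if (t > 0) {  // one step of the forward recursion
            std::vector<double> b(m.N, 0.0);
            for (int j = 0; j < m.N; ++j) {
                for (int i = 0; i < m.N; ++i) b[j] += a[i] * m.A[i][j];
                b[j] *= m.B[j][obs[t]];
            }
            a = b;
        }
        double c = 0;  // normalizer for the prefix obs[0..t]
        for (double x : a) c += x;
        for (double &x : a) x /= c;
        logp += std::log(c);  // record the log of the coefficient
    }
    return logp;
}
```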

Efficiency of HMM

HMM is, in my opinion, a pretty fast model. Suppose we have an HMM with n hidden states and m possible outputs. A single training iteration on a string of length t takes O(tn^2) time, and because of the math behind HMMs, the parameters usually converge after about 100 iterations. In my composer, n is set to 100, and the working part of a song is unlikely to exceed 1000 notes, so a full training run costs about 100 × 1000 × 100^2 = 10^9 basic operations; training on a single song therefore finishes within roughly 10 seconds. When there are multiple songs to train on, multiple threads reduce the required time accordingly.
