
Music Transcription: GSOC'22 #35

Open
@ashwanirathee

Hey @Datseris!! I saw this music transcription project for JuliaMusic on the JuliaLang website, and it looks like an exciting project.
I wanted to discuss what you are looking for, and I would love to make a proof of concept along those lines.

MIDIfication of music from wave files

It is easy to analyze timing and intensity fluctuations in music that is in the form of MIDI data.
This format is already digitized, and packages such as MIDI.jl and MusicManipulations.jl allow for
seamless data processing. But arguably the most interesting kind of music to analyze is live music.
Live music performances are recorded in wave formats. Some algorithms exist that can detect the
"onsets" of music hits, but they are typically focused only on the timing information and hence forfeit
detecting, e.g., the intensity of the played note. Plus, there are very few code implementations online
for this problem, almost all of which are old and unmaintained. We would like to implement an algorithm
in MusicProcessing.jl that, given a recording of a single instrument, can "MIDIfy" it, i.e.,
digitize it into the MIDI format.
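
To make the discussion concrete, here is a minimal classical sketch of what such a "MIDIfy" step could look like: a spectral-flux onset detector over an STFT, a crude dominant-bin pitch estimate, and a flux-based velocity, written to a MIDI file with MIDI.jl. The file names, window size, threshold, fixed note duration, and the 120 BPM tempo assumption are all mine, purely for illustration:

```julia
using WAV, DSP, MIDI
using Statistics: mean, std

# Load a recording (hypothetical file name) and mix it down to mono.
y, fs = wavread("recording.wav")
x = vec(mean(y, dims = 2))

# Short-time Fourier magnitude frames (fields of DSP.jl's Spectrogram).
n = 1024
S = spectrogram(x, n, n ÷ 2; fs = fs, window = hanning)
mag = sqrt.(S.power)

# Spectral flux: summed positive magnitude change between frames.
flux = [sum(max.(mag[:, t] .- mag[:, t - 1], 0)) for t in 2:size(mag, 2)]

# Naive peak picking: local maxima above a mean + std threshold.
thr = mean(flux) + std(flux)
onsets = [t for t in 2:(length(flux) - 1) if
          flux[t] > thr && flux[t] > flux[t - 1] && flux[t] > flux[t + 1]]

# Turn onsets into MIDI notes, assuming 120 BPM, so 1 s = 2 * tpq ticks.
tpq = 960
notes = Note[]
for t in onsets
    f = S.freq[argmax(mag[:, t + 1])]              # dominant bin at the onset frame
    f <= 0 && continue
    pitch = clamp(round(Int, 69 + 12 * log2(f / 440)), 0, 127)
    vel = clamp(round(Int, 127 * flux[t] / maximum(flux)), 1, 127)
    pos = round(Int, S.time[t + 1] * 2tpq)
    push!(notes, Note(pitch, vel, pos, tpq ÷ 2))   # fixed eighth-note duration
end

# Write out with MIDI.jl (MIDIFile() defaults to tpq = 960, matching ours).
track = MIDITrack()
addnotes!(track, Notes(notes, tpq))
file = MIDIFile()
push!(file.tracks, track)
writeMIDIFile("out.mid", file)
```

This is obviously far from production quality (monophonic only, fixed durations, no offset detection), but it shows where the intensity information comes from: the flux magnitude at each onset maps directly to MIDI velocity, which is exactly what the timing-only algorithms forfeit.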

For the project, I noticed there are a couple of classical methods based on pitch detection, but nowadays, for automatic
music transcription, people mostly use CNNs, LSTMs, and Transformers. I found a lot of papers at ISMIR
using those. So do we want a Flux-based transcription using these methods, or something classical?
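
For reference, a Flux-based approach might start from something like the tiny frame-wise CNN below. The architecture and sizes are hypothetical, just loosely in the spirit of the CNN front ends in those ISMIR papers; it maps log-mel spectrogram patches to 88 piano-key activations:

```julia
using Flux

# Hypothetical sizes: 229 mel bins, 5-frame context window.
n_mels, n_frames = 229, 5

model = Chain(
    Conv((3, 3), 1 => 16, relu; pad = SamePad()),
    MaxPool((2, 1)),                                    # pool only the frequency axis
    Conv((3, 3), 16 => 32, relu; pad = SamePad()),
    MaxPool((2, 1)),
    Flux.flatten,
    Dense(32 * (n_mels ÷ 4) * n_frames, 88, sigmoid),   # 88 piano keys
)

x = randn(Float32, n_mels, n_frames, 1, 8)   # dummy batch of 8 patches
ŷ = model(x)                                 # 88 × 8 key activations

# Training would minimize Flux.binarycrossentropy(ŷ, y)
# against frame-aligned MIDI labels y.
```

An LSTM or Transformer variant would swap the dense head for a recurrent or attention layer over the frame axis, but the input/output framing stays the same.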

Papers that I think could be good candidates for implementation:

Can you point me towards some papers that show how you would like it implemented? That way I can get an idea of it
and start contributing accordingly.
