Deformers to implement #1

Open · 7 of 19 tasks
bmcfee opened this issue Dec 17, 2014 · 18 comments

bmcfee commented Dec 17, 2014

Simple(ish) deformers (many from the Audio Degradation Toolbox):

  • Low-, band-, or high-pass filter
  • Bit-crushing
  • Companding + quantization noise
  • MP3 compression
  • Clipping
  • Time stretch
  • Pitch shift
    • Tricky thing here: may require tuning estimation and quantization to decide whether or not to change a note's pitch class
  • Event blur
    • Cf. Ulrich et al., 2014
    • Find all events with duration == 0 and duplicate them at a random time offset with degraded confidence (a sketch follows this list)
  • Chord simplifier
    • E.g., drop the 7th to simulate lazy annotators
    • Or 'A:min(*3) -> A:maj'
  • Time clip
  • Dynamic range compression
  • Aliasing (resampling without a low-pass filter)
  • Resampling
  • Colored noise
  • Reversal
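
A minimal sketch of the event-blur item, assuming annotations are simple (time, duration, value, confidence) tuples rather than muda's actual annotation objects; all names here are illustrative only:

```python
import numpy as np

def event_blur(events, sigma=0.05, conf_decay=0.5, seed=None):
    """Duplicate every zero-duration event at a random time offset
    with degraded confidence, per the 'event blur' item above."""
    rng = np.random.default_rng(seed)
    blurred = list(events)
    for time, duration, value, conf in events:
        if duration == 0:
            # Jitter the copy around the original time, clamped at 0.
            jittered = max(0.0, time + rng.normal(scale=sigma))
            blurred.append((jittered, 0.0, value, conf * conf_decay))
    return blurred

# Example: instantaneous beat events get noisy, low-confidence copies.
beats = [(0.5, 0.0, "beat", 1.0), (1.0, 0.0, "beat", 1.0)]
print(event_blur(beats, sigma=0.02, seed=42))
```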

Advanced deformers:

bmcfee self-assigned this Dec 17, 2014
bmcfee added this to the 0.1 milestone Dec 17, 2014

bmcfee commented Mar 24, 2015

Note: fix timing vs. group delay when convolving with an impulse response.
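
For reference, a rough sketch of the issue: convolving with an impulse response shifts event times by the filter's group delay, which can be crudely approximated by the IR's magnitude peak and used to offset annotation timestamps. An illustration only, not muda's implementation:

```python
import numpy as np
from scipy.signal import fftconvolve

def convolve_with_delay(y, ir, sr):
    """Convolve audio with an impulse response and estimate the
    delay to apply to annotation times afterwards."""
    y_conv = fftconvolve(y, ir, mode="full")[: len(y)]
    # Crude group-delay estimate: the IR's magnitude peak, in seconds.
    delay = int(np.argmax(np.abs(ir))) / sr
    return y_conv, delay
```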


cyrta commented Sep 22, 2015

You shouldn't rely only on sox or rubberband; consider VST plugins too,
certainly for dynamic range compression and filtering.
You could use https://github.com/teragonaudio/MrsWatson to script that easily;
it is a batch CLI program that launches plugins with given parameters.

P.S. Nice project. I have some scripts to do this, but a Python library would be ideal.
Were you inspired by these?
http://www.eecs.qmul.ac.uk/~ewerts/publications/2013_MauchEwert_AudioDegradationToolbox_ISMIR.pdf
http://code.soundsoftware.ac.uk/projects/audio-degradation-toolbox

I will fork, make some changes, and then open a pull request.


bmcfee commented Sep 22, 2015

> You shouldn't rely only on sox or rubberband; consider VST plugins too, certainly for dynamic range compression and filtering. You could use https://github.com/teragonaudio/MrsWatson to script that easily; it is a batch CLI program that launches plugins with given parameters.

I'd rather not use any command-line tools, but rather library calls. Python bindings weren't quite there at the time I needed this to work, so the cmdline stuff was hacked in. I'd also prefer to avoid proprietary (i.e., non-free) dependencies. But otherwise: yeah, it'd be great to have a general audio effects binding! Do you think that's possible?

> Were you inspired by these?

Yup! The details are in the muda paper, which (I hope!) explains the difference between muda and the ADT, and why we didn't simply fork the ADT.

> I will fork, make some changes, and then open a pull request.

Great! I'm also planning to do a bit more development on this and polish it into a proper Python library with tests and documentation, hopefully before the end of October.


cyrta commented Sep 29, 2015

Hi, thanks for the link to the paper; it's clear now.

Command-line calls from Python should be avoided, sure.
There are many open-source libraries in Python, or with Python bindings, that could be used.
However, most audio signal processing in sound studios is done with VST plugins,
and most commonly used presets are stored there or shared on the internet.
It would be nice to be able to use, e.g., reverb plugins.
There are even quite a few open-source plugins, like Freeverb.

I have some bash scripts that use MrsWatson and proprietary plugins.
MrsWatson is a very good VST host, and it's already available.
I don't know of a good Python host for VSTs, and writing one is too time-consuming.
Maybe it would be nice to turn it into a library and write simple Python bindings, but that takes time as well,
and it's better to produce more signal degradation results than to spend the time keeping the code super clean.

I also plan to do much of the work at the end of October.
I will post an update on my progress then.

bmcfee removed this from the 0.1 milestone Nov 12, 2015
ejhumphrey commented

"time clip" is duration?

ejhumphrey commented

Roger on the "rather not use any command-line tools" ... I'd be keen to sync on this in a side-bar; depending on the conversation, we can summarize for posterity here or in a separate issue / proposal if need be.


bmcfee commented May 3, 2016

"time clip" is duration?

Offset + duration, yeah. Think of randomly slicing the data and getting time-aligned chunks out. This is usually done in sampling / training pipelines, but it could be considered an "augmentation" as well.
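
Something like this, roughly (a sketch only; the matching annotation trimming is omitted, and the names are made up):

```python
import numpy as np

def time_clip(y, sr, duration, seed=None):
    """Slice a random time-aligned chunk out of an audio buffer,
    returning the chunk and its offset in seconds."""
    rng = np.random.default_rng(seed)
    n = int(duration * sr)
    # Pick a random offset; degenerate if the signal is shorter than n.
    offset = int(rng.integers(0, max(1, len(y) - n)))
    return y[offset : offset + n], offset / sr
```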

> Roger on the "rather not use any command-line tools" ... I'd be keen to sync on this in a side-bar; depending on the conversation, we can summarize for posterity here or in a separate issue / proposal if need be.

what all did you have in mind?

ejhumphrey commented

I don't share the aversion to leveraging command-line interfaces under the hood if they provide functionality we can't otherwise get (easily) through native libraries / interfaces. I agree that proprietary hard dependencies are no-gos, but I quite like the idea of making the framework as versatile as possible, even if it means that a user has to configure tools separately to really harness muda.

For example, with time-stretching, we could provide different algorithms / backends for how this gets accomplished. Rubberband is fine, but what if I want to use Dirac, Elastique, or some other thing that doesn't / won't have a Python implementation?


bmcfee commented May 3, 2016

> but I quite like the idea of making the framework as versatile as possible

That's why you can extend the BaseDeformer object. 😁

Seriously though, cmdline dependencies are a total pain for maintainability. I'd have to check, but I'm pretty sure that 100% of the error reports I've received on muda have come down to broken cmdline dependencies with rubberband -- and that's a well-behaved and maintained package.

> For example, with time-stretching, we could provide different algorithms / backends for how this gets accomplished.

This sounds like bloat/feature creep to me. IMO, the current stretch/shift stuff is good enough for government work*, and our efforts are better spent broadening the types of available deformations, rather than adding six variations of a thing we already have.

*downstream feature extraction


bmcfee commented Mar 1, 2017

Quick update: I have a first cut at chord simplification as part of a tag-encoding module here. It wouldn't be difficult to patch this into a muda deformer.
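
For context, the chord-simplifier idea from the checklist could look something like this toy sketch (not the linked tag-encoding code), which strips extensions such as the 7th down to a root and basic quality:

```python
import re

def simplify_chord(label):
    """'A:min7' -> 'A:min', 'G:maj7' -> 'G:maj'; unparseable labels
    (e.g. 'N' for no-chord) are returned unchanged."""
    match = re.match(r"([A-G][#b]?)(?::(maj|min|dim|aug))?", label)
    if match is None:
        return label
    root, quality = match.groups()
    return f"{root}:{quality}" if quality else root
```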


ybayle commented Sep 24, 2017

Hi, I would like to propose a new audio deformer for muda that I need for my PhD thesis: modifying the phase of frequencies in songs to produce new audio signals. These raw audio signals could then be used as input to a neural network. I want to assess the impact of such data augmentation on a neural network's performance and to study the internal learning of its neurons.
I also want to guarantee the reproducibility of my algorithm and to enhance muda accordingly with this phase-based data augmentation.
Before writing a lot of code, I would like to discuss here how best to implement this functionality in muda.
The ground-truth (annotations) part should be straightforward, as the deformation neither time-stretches nor pitch-shifts the signal.
I already have some working Python code, and the algorithm is quite simple:
Signal -> FFT -> phase modification -> IFFT -> Signal'
I am wondering how many input parameters to expose to the user (and how many to hide). Here are the parameters that could be considered (a minimal sketch follows the list):

  • target_phase: either an int, to apply the same phase to all frequencies, or an array, for per-frequency phases.
  • bypass_indexes: an array containing the frames or timestamps on which the treatment should not be applied.
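
A minimal sketch of the core transform with a scalar target_phase (the parameter names are the proposals above, not a settled API; bypass_indexes handling is omitted):

```python
import numpy as np

def phase_deform(y, target_phase=0.0):
    """Signal -> FFT -> phase modification -> IFFT -> Signal'.
    Replaces the phase of every bin with `target_phase` (radians)."""
    magnitude = np.abs(np.fft.rfft(y))
    return np.fft.irfft(magnitude * np.exp(1j * target_phase), n=len(y))
```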


bmcfee commented Sep 24, 2017

That sounds interesting, and it should be pretty easy to implement since you don't have to do any annotation modification. The DRC deformer is probably the closest in structure to what you describe, though its parameters are obscured by a dictionary of presets.

Otherwise, the parameters you describe sound reasonable. The key thing is to push all of the parameters that the deformation function needs into the states generator; you can see examples of this in all of the other muda deformers. This ensures that deformations can be reconstructed exactly, and that everything is properly logged in the output JAMS file.
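
Loosely, the pattern looks like this (a sketch of the idea only; the class and method names here are illustrative, not muda's exact API):

```python
import numpy as np

class PhaseDeformSketch:
    """All random parameters are drawn in `states`, so each state
    dict fully determines (and can log) one deformation."""

    def __init__(self, n_samples=3, seed=None):
        self.n_samples = n_samples
        self.rng = np.random.default_rng(seed)

    def states(self):
        # Everything the deformation needs lives in the state dict.
        for _ in range(self.n_samples):
            yield {"target_phase": float(self.rng.uniform(-np.pi, np.pi))}

    def audio(self, y, state):
        # Apply the deformation using only the sampled state.
        magnitude = np.abs(np.fft.rfft(y))
        return np.fft.irfft(magnitude * np.exp(1j * state["target_phase"]),
                            n=len(y))
```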


ybayle commented Sep 24, 2017

OK, thanks for the reply. I'll work on that and make a pull request once I have validated some sound examples and produced the corresponding test functions.

justinsalamon commented

@bmcfee quick question - by Attenuation are you referring to changing the loudness of the signal?

Multi-loudness training (MLT) has been shown to be especially useful for far-field sound recognition (e.g. original PCEN paper), so it would be a great deformer to have for projects such as BirdVox and SONYC.

Perhaps a reasonable interface for this is for the user to provide min and max dBFS values; the deformer then chooses a value uniformly in the provided interval and adjusts the gain of the input signal to match the selected value?
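
In code, that interface might look roughly like this (an illustration of the proposal; interpreting the target level as RMS dBFS is my assumption):

```python
import numpy as np

def random_loudness(y, min_dbfs=-30.0, max_dbfs=-6.0, seed=None):
    """Draw a target level uniformly in [min_dbfs, max_dbfs] and
    scale the signal so its RMS matches it."""
    rng = np.random.default_rng(seed)
    target = rng.uniform(min_dbfs, max_dbfs)
    rms = np.sqrt(np.mean(y ** 2))
    gain = 10.0 ** (target / 20.0) / max(rms, 1e-12)
    return y * gain, target
```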


bmcfee commented May 22, 2018

> by Attenuation are you referring to changing the loudness of the signal?

Yes, that's how the ADT specified it (which is where this list originally came from). More generally, attenuation as a function of sub-bands (maybe notch filtering?), à la Sturm, might be useful as well.


justinsalamon commented May 22, 2018

> More generally, attenuation as a function of sub-bands (maybe notch filtering?), à la Sturm, might be useful as well.

That's more in the direction of EQ, no? Also a useful deformer, though I'd probably keep it separate from a global loudness deformer (color vs intensity).


bmcfee commented May 23, 2018

> That's more in the direction of EQ, no?

Sure, but the former is a special case of the latter. Seems reasonable to me to keep the implementation unified.


bmcfee commented Jun 8, 2018

Side note: once bmcfee/pyrubberband#15 gets merged, it will be possible to simulate tape-speed wobble (as done by the ADT) via piecewise-linear approximation. We'd have to reimplement the timing logic for annotations, but this shouldn't be too difficult.
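
For illustration, the piecewise-linear time map could be generated like this (a sketch only; the anchor format that pyrubberband will actually accept depends on the still-open PR):

```python
import numpy as np

def wobble_time_map(duration, sr, depth=0.02, rate=0.5, n_anchors=16):
    """Anchor points whose output times drift sinusoidally around
    the input times, approximating tape-speed wobble."""
    t_in = np.linspace(0, duration, n_anchors)
    t_out = t_in + depth * np.sin(2 * np.pi * rate * t_in)
    # Return (input, output) anchor pairs in samples.
    return (np.column_stack([t_in, t_out]) * sr).astype(int)
```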
