Humanization / randomness #10

Open
sjaehn opened this issue Jun 24, 2020 · 6 comments
Labels: enhancement (New feature or request)

Comments

@sjaehn (Owner) commented Jun 24, 2020

The addition of some randomness / humanization for velocity and timing has been requested at linuxmusicians.com: https://linuxmusicians.com/viewtopic.php?f=24&t=21670

sjaehn added the enhancement label on Jun 24, 2020
@magnetophon commented

I think true humanization would require groove templates: http://www.numericalsound.com/uploads/3/2/1/6/32166601/dna-groove-template-user-manual.pdf

One way would be to add support for groove templates like the one above, but another would be to add a sidechain MIDI input whose timing information would be transferred to the main input.
Not sure if an LV2 plugin can even have two MIDI inputs, but for now, let's assume it can, or will soon ;)

The sidechain MIDI would be quantized to the grid of BSchaffl, and the difference between the quantized and non-quantized versions would determine the timing of the output of BSchaffl (a rough sketch of this follows at the end of this comment).

You could get the sidechain MIDI from an audio track with a drum-trigger plugin that converts it to MIDI.
That way the output of BSchaffl gets the groove of the audio, creating something like this!
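
To illustrate the quantize-and-diff idea, here is a minimal, self-contained sketch (not BSchaffl code; the `GrooveMap` struct, its members, and the grid parameters are invented for illustration). Sidechain note positions are quantized to the step grid, the per-step deviation is stored, and notes of the main input are then shifted by the deviation of their step:

```cpp
// Hypothetical sketch of the sidechain groove idea (not BSchaffl code).
#include <cmath>
#include <vector>

struct GrooveMap
{
    int stepsPerBar;                // grid resolution, e.g. 16
    std::vector<double> offsets;    // per-step deviation in beats (played - quantized)

    explicit GrooveMap (const int steps) : stepsPerBar (steps), offsets (steps, 0.0) {}

    // Learn the groove from a sidechain note position (in beats within the bar)
    void learn (const double sidechainBeat, const double beatsPerBar = 4.0)
    {
        const double stepSize = beatsPerBar / stepsPerBar;
        const int step = int (std::round (sidechainBeat / stepSize)) % stepsPerBar;
        offsets[step] = sidechainBeat - step * stepSize;
    }

    // Shift a quantized note of the main input by the deviation of its step
    double apply (const double mainBeat, const double beatsPerBar = 4.0) const
    {
        const double stepSize = beatsPerBar / stepsPerBar;
        const int step = int (std::round (mainBeat / stepSize)) % stepsPerBar;
        return mainBeat + offsets[step];
    }
};
```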

@sjaehn (Owner, Author) commented Jun 24, 2020

I'm thinking about an easier way: not true "humanization", but only "simulating" the human error rate by randomization, as many other drum machines do (including Hydrogen). A rough sketch follows at the end of this comment.

FYI, LV2 can handle multiple MIDI input and MIDI output ports. However, hosts usually have problems with that.

Edit: But you don't need multiple MIDI ports anyway, as you have up to 16 MIDI channels.
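
A rough sketch of what I mean by "simulating" the human error rate (illustrative only, not the actual BSchaffl implementation; the `Humanizer` struct and its parameter names are invented): each note gets a small uniform random deviation of its velocity and its position.

```cpp
// Illustrative sketch only (not the actual BSchaffl code).
#include <algorithm>
#include <cstdint>
#include <random>

struct Humanizer
{
    double ampRandom = 0.1;     // fraction of velocity randomization (0..1)
    double timeRandom = 0.05;   // max. timing deviation in beats
    std::mt19937 rng {std::random_device{}()};

    uint8_t randomizeVelocity (const uint8_t velocity)
    {
        std::uniform_real_distribution<double> dist (-ampRandom, ampRandom);
        const double v = double (velocity) * (1.0 + dist (rng));
        return uint8_t (std::clamp (v, 1.0, 127.0));
    }

    double randomizePosition (const double beatPosition)
    {
        std::uniform_real_distribution<double> dist (-timeRandom, timeRandom);
        return beatPosition + dist (rng);
    }
};
```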

@sjaehn (Owner, Author) commented Jun 25, 2020

There are several ways to humanize quantized MIDI-pattern music. You can use algorithms, AI (as mentioned at linuxmusicians.com), and, of course, humans.

I can't help with AI as I don't have any experience with it. I'm very skeptical that AI (or a natural intelligence) can produce useful humanization of real-time MIDI signals without pre-listening to the whole track (or at least parts of it).

No doubt, you can humanize a MIDI track using human groove patterns, human MIDI sidechaining, or a human-generated MIDI track.

But we already go the algorithmic way, with the amp swing, the steps swing, the amp sliders, and the str markers. By doing this you can roughly simulate a playing style, and you can spice it up with some randomness. Of course you may call that "humanized", but (you are right) it isn't human.

@magnetophon commented

Agreed.
As mentioned in #12, I think we don't even need the randomness, at least when we have layers.

@sjaehn (Owner, Author) commented Jun 26, 2020

Amp randomization added in 09ad274.

Timing randomization added in fa25a00, with some TODOs left (latency, values > 0.5).
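
For context on the latency TODO: a note that gets a negative random shift would have to be sent before its original position, which a real-time plugin cannot do. A common workaround, sketched below with invented names (not necessarily how fa25a00 handles it), is to report the maximum shift as plugin latency and schedule every note relative to that, so all effective shifts stay non-negative.

```cpp
#include <cstdint>

// Hypothetical sketch (names invented): keep randomly shifted notes schedulable
// by reporting the maximum shift as latency and offsetting every note by it.
int64_t scheduleNoteFrame (const int64_t noteFrame,
                           const int64_t randomShiftFrames,   // in [-maxShiftFrames, +maxShiftFrames]
                           const int64_t maxShiftFrames)      // also reported as plugin latency
{
    // noteFrame + maxShiftFrames + randomShiftFrames >= noteFrame,
    // so the event never has to be emitted before its original frame.
    return noteFrame + maxShiftFrames + randomShiftFrames;
}
```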

@sjaehn (Owner, Author) commented Jun 26, 2020

Latency and values > 0.5 are fixed too, in 8a25852 and bd40abf, respectively.
