
[Blender Plugin] Add functionality for frame-by-frame 'baking' at some point once WavShaper is ready #127

Closed
DJLevel3 opened this issue Sep 9, 2022 · 7 comments


@DJLevel3
Contributor

DJLevel3 commented Sep 9, 2022

Context

I will try to implement this myself in a few hours. I'm developing an audio plugin called WavShaper, and I want to add the ability to animate the shapes instead of using static shapes. My idea for the implementation is to use an audio file like I'm using now, but instead of choosing one cycle of the same animation, I read one frame of animation per cycle of audio (so 1 second @ 10 Hz = 10 frames).
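
To spell out the arithmetic, here's a minimal sketch; the constants are just the numbers from the example above, not anything from WavShaper's actual code:

// Assumed numbers: 48000 Hz audio, animation cycling at 10 Hz
const int sampleRate = 48000;                         // samples per second
const int frequency = 10;                             // audio cycles (= animation frames) per second
const int sampsPerFrame = sampleRate / frequency;     // 4800 samples hold one frame
const int framesPerSec = sampleRate / sampsPerFrame;  // so 1 second of audio carries 10 frames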

Suggestion

To that end, I have so far been going through the audio from osci-render by hand and clipping out each frame, but in the future I want this to be automatic. It may be possible in a Lua plugin (not sure on that), but what I'm imagining is a button, either in osci-render or in the Blender plugin, that steps through the animation frame by frame and saves 4800 samples per frame to the same audio file. Blender definitely has a feature to bake some kinds of animation, which can almost certainly be used with osci-render.
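
As a very rough sketch of what that bake button might do (renderFrameSamples and appendToFile are hypothetical stand-ins, not real osci-render or Blender functions):

#include <vector>

// Hypothetical hook: fills `out` with exactly sampsPerFrame stereo samples
// for one animation frame. Not a real osci-render/Blender API.
void renderFrameSamples(int frame, std::vector<double>& out);

// Hypothetical writer: appends interleaved samples to the output audio file.
void appendToFile(const std::vector<double>& samples);

void bakeAnimation(int nFrames, int sampsPerFrame) {
    std::vector<double> buffer(sampsPerFrame * 2); // stereo, interleaved
    for (int frame = 0; frame < nFrames; frame++) {
        // Step to the frame and render exactly one cycle of audio, so every
        // frame is the same fixed number of samples long and apart
        renderFrameSamples(frame, buffer);
        appendToFile(buffer);
    }
}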


Some additional ideas to make it even more useful in other applications:

  1. Automatically maximize output volume (code below)
  2. Allow setting frequency of frames
  3. Allow setting duration or number of cycles recorded for each frame
  4. Automatically set some settings based on others?

Code to maximize the output volume

Written in C++, but should translate really easily to Java. The math syntax is super similar.

#include <cmath> // for std::abs (Math.abs() in Java)

// Normalization factor (maximum sample absolute value)
double norm = 0;

// Floating-Point Calculation and Recording Pass
for (int frame = 0; frame < nFrames; frame++) {
    for (int sample = 0; sample < sampsPerFrame; sample++) {

        // ----- Calculate the sample and store it in sampleArray[frame][sample][channel] -----

        // Left Channel
        if (std::abs(sampleArray[frame][sample][0]) > norm) {
            norm = std::abs(sampleArray[frame][sample][0]); // Math.abs() in Java
        }

        // Right Channel
        if (std::abs(sampleArray[frame][sample][1]) > norm) {
            norm = std::abs(sampleArray[frame][sample][1]); // Math.abs() in Java
        }
    }
}

// Guard against dividing by zero if the recording is completely silent
if (norm > 0) {
    // Reduce unnecessary floating-point divisions, they're way slower than multiplications
    norm = 1 / norm;

    // Floating-Point Normalization Pass
    for (int frame = 0; frame < nFrames; frame++) {
        for (int sample = 0; sample < sampsPerFrame; sample++) {
            sampleArray[frame][sample][0] *= norm;
            sampleArray[frame][sample][1] *= norm;
        }
    }
}

// ----- Convert the samples to whatever format (int16, int24, float32, etc) is needed -----

// ----- Store the converted samples to the output file -----
@DJLevel3
Contributor Author

DJLevel3 commented Sep 9, 2022

Might be possible with a Lua script; I'll have to look into it.

@jameshball
Owner

This is a great idea! Thanks for the suggestion and code snippets. I don't think this would be very hard to do; it would just need the line data of each frame to be stored, which is easy.

Could you explain why you can't just record the audio live from Blender as it plays back the animation rather than changing each frame manually? Is the performance not good enough when you do this?

I'm thinking of a way of cycling through frames and then being able to configure different settings for each frame, like the frequency.

@DJLevel3
Contributor Author

DJLevel3 commented Sep 10, 2022

Sorry I'm late! I need exactly one frame per 4800-sample cycle because WavShaper constructs a shape out of the first 4800 samples of an audio file (0.1 s @ 48000 Hz sample rate). I want to add multiple frames by reading successive multiples of 4800 samples, so frames need to be 4800 samples long and 4800 samples apart, with absolutely no variance. If there's any lag, things break.
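
In other words, frame N has to sit at a fixed, exact offset (my notation, just to make the constraint concrete):

const int sampsPerFrame = 4800;                          // 0.1 s at 48000 Hz
int frameStart(int n) { return n * sampsPerFrame; }      // first sample of frame n
int frameEnd(int n) { return (n + 1) * sampsPerFrame; }  // one past the last sample
// Any dropped or delayed samples shift every later frame and break the shapes.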

This means I need total control over when each frame starts and ends, down to the sample, and there is some lag when loading frames live from Blender. I've had it tank down to 5 rendered frames per second on particularly bad models that were intended to run at 25 fps. These models were absurdly complex; the one I had the worst lag on was the front grille of a car, which had hundreds of square holes.

Also, if storing the data per frame ends up being too memory-expensive (it probably won't), you can do two full rendering passes: one where you calculate the normalization factor and one where you apply it. This would take twice as long, though.
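
For what it's worth, a sketch of that two-pass variant, assuming rendering is deterministic so both passes produce identical samples; renderSample here is a hypothetical stand-in, not a real osci-render function:

#include <algorithm> // std::max
#include <cmath>     // std::abs

// Hypothetical, deterministic render call: the same (frame, sample) pair
// must produce the same stereo output on both passes.
void renderSample(int frame, int sample, double out[2]);

void normalizeStreaming(int nFrames, int sampsPerFrame) {
    double out[2];

    // Pass 1: render everything, but only keep the running peak
    double norm = 0;
    for (int frame = 0; frame < nFrames; frame++) {
        for (int sample = 0; sample < sampsPerFrame; sample++) {
            renderSample(frame, sample, out);
            norm = std::max({norm, std::abs(out[0]), std::abs(out[1])});
        }
    }

    // Pass 2: render again, scale by the peak, then convert/write as before
    if (norm > 0) {
        const double scale = 1 / norm;
        for (int frame = 0; frame < nFrames; frame++) {
            for (int sample = 0; sample < sampsPerFrame; sample++) {
                renderSample(frame, sample, out);
                out[0] *= scale;
                out[1] *= scale;
                // ----- Convert and store out[0]/out[1] here -----
            }
        }
    }
}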

@DJLevel3
Contributor Author

Anyway, I had family matters today, so I didn't have time to write any more code (22:50, Sep. 9 my time). Hopefully I will tomorrow.

@jameshball
Owner

That makes sense, thanks for the clarification! Are you working on this with the Java version on another branch, or do you want me to work on it?

@DJLevel3
Contributor Author

DJLevel3 commented Sep 10, 2022 via email

@jameshball
Owner

#235 closes this
