
Support for null / phase cancellation testing #10

Open
sudara opened this issue Jun 3, 2022 · 5 comments
Labels
enhancement New feature or request

Comments

@sudara

sudara commented Jun 3, 2022

This is most likely 100% out of scope for the time being and might belong in a tool that uses plugalyzer.

I'd like to store reference audio files in my repository. In CI, I will compare plugalyzer's current output to those reference files for regression testing.

So the feature would be: After creation of an output file, create a "difference" file (invert the phase of the current output, add it to the reference).

This would require

  • Being able to specify the reference file as an input option.
  • Determining the RMS of the "difference file".
  • Having a default tolerance of -XX dB.
  • plugalyzer should report if the "difference file" is above this tolerance and fail with exit code 1.
  • Ability to set the tolerance per input.
  • Integration with http://github.com/sudara/melatonin_audio_sparklines to visualize the output up to XXXX samples? This might not make sense if the files are too long (and yes, this is my project, lulz).
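The null-test logic described above (invert, sum, measure the residual, compare against a tolerance) could be sketched roughly like this. This is only an illustration, not plugalyzer code; the function names are hypothetical and samples are assumed to be plain lists of floats in the range [-1, 1]:

```python
import math

def null_test_rms_db(output, reference):
    """Invert the phase of the current output, add it to the reference,
    and return the RMS level of the residual in dBFS.
    A bit-exact match yields silence, reported as -inf dB."""
    assert len(output) == len(reference), "files must be the same length"
    # Phase inversion + summation is simply a per-sample subtraction.
    diff = [ref - out for out, ref in zip(output, reference)]
    rms = math.sqrt(sum(s * s for s in diff) / len(diff))
    return 20.0 * math.log10(rms) if rms > 0.0 else float("-inf")

def passes(output, reference, tolerance_db=-100.0):
    """True if the residual is at or below the tolerance
    (exit code 0), False otherwise (exit code 1).
    The default tolerance is an arbitrary placeholder."""
    return null_test_rms_db(output, reference) <= tolerance_db
```

A CLI wrapper would then map `passes(...)` to the process exit code, e.g. `sys.exit(0 if ok else 1)`, which is what CI systems key off.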
@CrushedPixel added the "enhancement" label Jun 3, 2022
@benthevining

This tool comes close to accomplishing all of these: https://github.com/benthevining/plugin-cancellation-testing

It's a wrapper CMake script on top of Plugalyzer, plus another tool I wrote to do the comparison. I still need to add actual diff-file generation, and integration with your sparklines module is a good idea too!

@benthevining

I could eliminate the need for my secondary tool if plugalyzer itself could do the comparison and thresholding to pass/fail the test. Even if this feature is added to plugalyzer, I still think having some wrapper CMake code is valuable, because you can drive regenerating the reference files using the test definitions.

@CrushedPixel
Owner

CrushedPixel commented Jan 25, 2024

Even if this feature is added to plugalyzer, I still think having some wrapper CMake code is valuable, because you can drive regenerating the reference files using the test definitions.

I don't understand what you mean by this.

I am open to suggestions from both of you on how to integrate this into Plugalyzer, but I'm wondering whether it should even live in Plugalyzer itself.

Are there any other types of validation that we might want to perform on the output files in the future? That could help decide whether to add this feature directly or to design a broader validation API.

@benthevining

benthevining commented Jan 25, 2024

I'm open to integrating my CMake code into plugalyzer, if you're interested in adding the diff-testing features directly to the plugalyzer CLI app. (But my code continuing to live in a separate repo is fine too.) Basically, it would be a CMake module that people could use somewhat like:

find_package (Plugalyzer REQUIRED)

Plugalyzer_add_cancellation_test (
  myPlugin_VST3
  INPUT_AUDIO input.wav
  REFERENCE_AUDIO output.wav
  STATE_FILE state.json
  EXACT
  REGEN_TARGET RegenerateReferenceAudio
)

This CMake function is useful because:

  • It sets up the necessary CTest tests (with all the right build config handling, attaching all the files to the tests for CDash upload, etc)
  • It also adds a custom target named RegenerateReferenceAudio that, when you build it, will regenerate the output.wav using these input settings. The idea being, when you reach a new stable version, you now have an easy way to auto-update all your reference files with all the right input settings for each one.

I think it's quite valuable to be able to use your test definitions to also drive regenerating the reference files, and that's one of the main features this code provides.

@benthevining

Are there any other types of validation that we might want to perform on the output files in the future?

Possibly. For a while I've been brainstorming some kind of tool to detect audio discontinuities, but I don't have any solid design/implementation of that yet...
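Since no design exists yet, here is only a toy sketch of what such a discontinuity check might look like: flag any sample-to-sample jump larger than some threshold. The function name and the threshold value are made up for illustration, and real clicks would likely need something smarter (e.g. derivative statistics or spectral analysis):

```python
def find_discontinuities(samples, max_step=0.5):
    """Return the indices where the absolute sample-to-sample jump
    exceeds max_step -- a crude proxy for clicks/discontinuities.
    max_step=0.5 is an arbitrary placeholder threshold."""
    return [i for i in range(1, len(samples))
            if abs(samples[i] - samples[i - 1]) > max_step]
```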
