
Releases: Voice-Lab/VoiceLab

v2.0.0

15 Jun 17:32

VoiceLab

Automated Reproducible Acoustic Analysis
VoiceLab is automated voice analysis software. It lets you measure, manipulate, and visualize many voices at once, without fussing over analysis parameters. You can also save all of your data, analysis parameters, manipulated voices, and full-colour spectrograms and power spectra with the press of one button.

Version 2.0.0

License

VoiceLab is licensed under the MIT license. See the LICENSE file for more information.

Cite VoiceLab

If you use VoiceLab in your research, please cite it:

  • Feinberg, D. (2022). VoiceLab: Software for Fully Reproducible Automated Voice Analysis. Proc. Interspeech 2022, 351-355.
  • Feinberg, D. R., & Cook, O. (2020). VoiceLab: Automated Reproducible Acoustic Analysis. PsyArxiv

Installation instructions:

  • Install from pip using Python 3.9–3.11
    • pip install voicelab
  • To install on Windows, download the .exe file from the releases page.
    • Run the voicelab.exe file.
  • To install on OSX, download the .zip file from the releases page.
    • Unzip the file and run the VoiceLab app.
  • Install on Ubuntu (standalone)
    • Download voicelab.
    • Make it executable: chmod +x voicelab
    • Run it: voicelab or ./voicelab
    • You may need to install a dependency first: sudo apt-get install libxcb-xinerama0

Changes from 1.3.1 to 2.0

New Features

  • MeasureAlphaNode measures the alpha ratio

    • The alpha ratio is a measure of the ratio of low-frequency to high-frequency spectral energy.
    • I wrote it from scratch using NumPy.
  • Pitch-corrected RMS Energy (Voice Sauce); see the bug fixes below.

  • Pitch-corrected Equivalent Continuous Sound Level (Leq)

  • New viewport window for LPC spectra
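
As a rough sketch of the alpha ratio described above: the band edges below (50 Hz–1 kHz versus 1–5 kHz) are a common convention and an assumption on my part, not necessarily the parameters MeasureAlphaNode actually uses.

```python
import numpy as np

def alpha_ratio(signal, sr, lo_hz=50.0, split_hz=1000.0, hi_hz=5000.0):
    """Ratio (in dB) of low-band to high-band spectral energy.

    The band edges are illustrative assumptions, not necessarily
    those used by VoiceLab's MeasureAlphaNode.
    """
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    low = power[(freqs >= lo_hz) & (freqs < split_hz)].sum()
    high = power[(freqs >= split_hz) & (freqs <= hi_hz)].sum()
    return 10.0 * np.log10(low / high)

# A signal dominated by low-frequency energy gives a positive alpha ratio:
# a 200 Hz tone plus a 3 kHz tone at one-tenth the amplitude has a
# low-to-high power ratio of 100, i.e. 20 dB.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 200.0 * t) + 0.1 * np.sin(2 * np.pi * 3000.0 * t)
print(alpha_ratio(tone, sr))
```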

Bug fixes

  • Major bugfix affecting all users of Energy in VoiceLab and Voice Sauce

The Voice Sauce documentation reports that RMS is calculated, but the source code instead calculates the total energy in each pitch-dependent frame. This means the Energy value in Voice Sauce, and in the VoiceLab code translated from it, was not scaled for frame length, and was therefore not pitch-independent. Why does this matter?

Lower-pitched voices have longer wavelengths, and therefore more energy per frame, than higher-pitched voices. Voice Sauce tries to correct for that by making the window length equal to a few pitch periods. But it takes the sum of the energy in each frame, and since it does not divide by the number of samples (the mean step of an RMS calculation), no pitch correction actually occurs at the frame level. If you then take the mean or RMS of the Voice Sauce Energy output, you are taking the total energy divided by the number of frames in the sound. Higher-pitched sounds have shorter wavelengths, so more of their frames fit into a fixed time period; if your sounds are all the same length, your measurements come out pitch-corrected. This doesn't happen automatically, though, and longer sounds also have more frames. Thus the measure is confounded with duration.

To fix this, I have implemented an RMS calculation at every frame, as described in the Voice Sauce manual. The values are now much closer to those given by Praat; they still differ because of the pitch-dependent frame length. I've removed the old calculation of mean energy, and if you use RMS energy as a single value, it is now the RMS over all of the frames. If you want the old calculation, it is in all of the older versions of VoiceLab.

I recommend that anyone who has published using this algorithm in Voice Sauce or an older version of VoiceLab, or plans to, re-run their Energy measurements and use the new values if they are critical to their findings.
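
As a toy demonstration of the confound, using synthetic signals rather than VoiceLab's actual code: two equal-amplitude sinusoids at different pitches are cut into pitch-dependent frames of five periods each. Summing the squared samples in each frame (the old behaviour) scales with frame length, while the per-frame RMS does not.

```python
import numpy as np

def frame_measures(signal, frame_len):
    """Return mean summed energy per frame (old behaviour) and
    mean per-frame RMS (the corrected behaviour)."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    summed = np.mean([np.sum(f ** 2) for f in frames])
    rms = np.mean([np.sqrt(np.mean(f ** 2)) for f in frames])
    return summed, rms

sr = 16000
t = np.arange(sr) / sr
low = np.sin(2 * np.pi * 100.0 * t)   # 100 Hz "voice"
high = np.sin(2 * np.pi * 200.0 * t)  # 200 Hz "voice", same amplitude

# Pitch-dependent frames: five pitch periods each.
sum_low, rms_low = frame_measures(low, 5 * sr // 100)     # 800-sample frames
sum_high, rms_high = frame_measures(high, 5 * sr // 200)  # 400-sample frames

print(sum_low / sum_high)  # summed energy doubles with the longer frames
print(rms_low, rms_high)   # per-frame RMS is identical for both pitches
```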

  • Fixed spectrograms and spectra
    • They now display in their boxes, and you can expand them

API is no longer supported until further notice

If you clone the GitHub repo and look in the tests, you can see how to use the API; however, it is not supported at this time. I did, however, update the example documentation. I have also started writing a test suite, so you can see how to prepare nodes by modifying that code.

Contact

David Feinberg: feinberg@mcmaster.ca

Documentation

https://voice-lab.github.io/VoiceLab

v1.3.1

27 Jan 18:50

What's Changed

  • Measure Cepstral Peak Prominence (CPP) now defaults to measuring only pitched frames. There is an option to turn this off.
    • Heller Murray, E. S., Chao, A., & Colletti, L. (2022). A Practical Guide to Calculating Cepstral Peak Prominence in Praat. Journal of Voice.
  • Fixed bug where Energy would not work in compiled versions
  • Updated API to reflect recent changes

v1.3.0

19 Sep 17:24

VoiceLab v1.3.0

What's Changed

  • Installing on Windows has changed: first unzip the folder, then open it and click VoiceLab.exe. This should speed up loading the software a bit.
  • Fixed several bugs where, if default settings were integers, users could not type decimals. All defaults are now floats, so that doesn't happen anymore.
  • Backend change: instead of passing sound objects around, or passing a file name and generating them within each node (which was very slow because of all the disk reading), sounds are now generated in LoadVoicesNode. Their data and sampling rate are then passed to the other nodes, facilitating multicore processing (not yet implemented). This does not affect GUI users, but does affect people using the API.

Thus, this no longer works:


measure_pitch_node = MeasurePitchNode()
measure_pitch_node.args['file_path'] = "my_voice_recording.wav"
results = measure_pitch_node.process()
print(results['Mean Pitch (F0) (Praat To Pitch (ac))'])

Do this instead:


import parselmouth

measure_pitch_node = MeasurePitchNode()
measure_pitch_node.args['file_path'] = "my_voice_recording.wav"
# Load the sound once, then pass its samples and sampling rate to the node
measure_pitch_node.sound = parselmouth.Sound(measure_pitch_node.args['file_path'])
signal = measure_pitch_node.sound.values
sampling_rate = measure_pitch_node.sound.sampling_frequency
measure_pitch_node.args['voice'] = (signal, sampling_rate)
results = measure_pitch_node.process()
print(results['Mean Pitch (F0) (Praat To Pitch (ac))'])

I promise to make a better API if I ever get time to write v2.0

New Contributors

We accepted our first pull request, which helped with setup.py.

Full Changelog: v1.2.0...v1.3.0

Citing VoiceLab

Please cite and endorse my PsyArxiv Preprint. It's the only way I can get credit for this at my job.
Feinberg, D. R., & Cook, O. (2020). VoiceLab: Automated Reproducible Acoustic Analysis. PsyArxiv
https://psyarxiv.com/v5uxf/

v1.2.0

26 Apr 16:01

VoiceLab v1.2.0

VoiceLab - Automated Reproducible Acoustic Analysis

Release Notes

Version 1.2.0

Changes from 1.1.2

New Features

  • MeasurePitchNode now outputs a list of all pitch values
  • New Rotate Spectrum script from Chris Darwin
  • Some API documentation: https://voice-lab.github.io/VoiceLab/#api
    • There's a lot to do, so it's going to take a while to get it all together.

Bug fixes

  • Entering a value in "Time Steps" (Measure Voice Pitch) no longer crashes when typing a "."; this patch should fix the issue in all of the input boxes.
  • Fixed spectrograms

Feature Removals

  • Started removing pitch range and duration options from formant manipulation menus.
    • If you need these back, contact me, and I'll put them back.

Known Issues

  • Icons in Windows aren't displaying properly. I've worked on this for two days; rather than delaying the release any further, it's been released with broken icons.

There is a Windows binary and an OSX binary. For now, if you are on Linux, you can grab the code from GitHub and roll your own.

Development team

Oliver Cook wrote the original GUI and pipeline several years ago and is no longer an active developer on this project. I thank him for all of his hard work and I am forever grateful. This project is now solely maintained by me, David Feinberg, who also wrote all of the original and current code regarding voice measurements, manipulations, and visualizations. Email me at: feinberg@mcmaster.ca if you have any questions.


v1.1.2

13 Mar 18:27
e876201

VoiceLab v1.1.2

VoiceLab - Automated Reproducible Acoustic Analysis

Release Notes

Changes from 1.1.1

Bug fixes

  • Fixed bug where saving measurements from Subharmonics and Energy crashed system


v1.1.1

08 Mar 22:27
0df0cb5

VoiceLab v1.1.1

VoiceLab - Automated Reproducible Acoustic Analysis

Release Notes

Changes from 1.1.0

Bug fixes

  • Fixed manipulations. There were several bugs where non-default options didn't work; they now work correctly.


v1.1.0

25 Feb 21:00

VoiceLab v1.1.0

VoiceLab - Automated Reproducible Acoustic Analysis

Release Notes

Changes from 1.0.2

Feature additions

  • Addition of Measure Energy, based on the Voice Sauce Energy algorithm (correlation r > 0.999). Measure RMS Energy is also included in this measure.
  • Addition of the Yin pitch algorithm from librosa

Bug fixes

  • Can now play file formats other than wav (only tested on mp3)
  • Can now manipulate stereo files by converting to mono first
  • Fixed a bug in formant manipulation and pitch & formant manipulation that caused a crash
  • Fixed playback on windows


v1.0.2

30 Nov 18:05

VoiceLab v1.0.2

VoiceLab - Automated Reproducible Acoustic Analysis

This is the full release. Unstable features have been pulled and the algorithms have been tested; we are out of beta and into production. A validation analysis will be posted to osf.io, and a new preprint to PsyArxiv, as soon as possible. Documentation has been updated for the most part; let me know if something isn't clear, and I'll update this note when those are completed. That doesn't mean you won't find a bug, though, so if you do, let me know and I'll fix it right away.

Release Notes

Changes from 1.0.1

  • I fixed bugs in jitter and SHNR that would cause the program to crash.

Changes from Beta Version

  • I switched the automatic formant algorithm to Praat's Formant Path with maximum number of formants = 5.5
  • No more TEVA, CREPE, or Yin pitch algorithms, or F1F2 plots. If there is interest, I can work on these, but they've been pulled due to compatibility issues; drop me a line in the discussions for feature requests.
  • New Features:
    • SubHarmonic Pitch and SubHarmonic to Harmonic Ratio
    • Trim sounds and trim silence from sounds


v1.0.1

08 Nov 19:22
e779617

VoiceLab v1.0.1

VoiceLab - Automated Reproducible Acoustic Analysis


Changes from 1.0.0

  • I fixed a bug where if you ran Jitter PCA (which is selected by default) it would crash the program

Changes from Beta Version

  • I switched the automatic formant algorithm to Praat's Formant Path with maximum number of formants = 5.5
  • No more TEVA, CREPE, or Yin pitch algorithms, or F1F2 plots. If there is interest, I can work on these, but they've been pulled due to compatibility issues; drop me a line in the discussions for feature requests.
  • New Features:
    • SubHarmonic Pitch and SubHarmonic to Harmonic Ratio
    • Trim sounds and trim silence from sounds


v1.0.0

14 Oct 18:56
6ae65a3

VoiceLab v1.0.0

VoiceLab - Automated Reproducible Acoustic Analysis


Changes

  • I switched the automatic formant algorithm to Praat's Formant Path with maximum number of formants = 5.5
  • No more TEVA, CREPE, or Yin pitch algorithms, or F1F2 plots. If there is interest, I can work on these, but they've been pulled due to compatibility issues; drop me a line in the discussions for feature requests.
  • New Features:
    • SubHarmonic Pitch and SubHarmonic to Harmonic Ratio
    • Trim sounds and trim silence from sounds
