VQ-VAE for Acoustic Unit Discovery and Voice Conversion
Predicting depression from acoustic features of speech using a Convolutional Neural Network.
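The card above does not spell out the network, so the sketch below is a rough, hypothetical illustration of the general approach: a small 1D CNN (PyTorch) applied to a matrix of frame-level acoustic features such as MFCCs. The layer sizes and feature dimensions are assumptions, not the repository's actual model.

```python
# Minimal sketch (not the repository's actual model): a 1D CNN mapping a
# sequence of frame-level acoustic features to a binary depression prediction.
import torch
import torch.nn as nn

class AcousticCNN(nn.Module):
    def __init__(self, n_features=40, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over time so any utterance length works
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, n_features, n_frames)
        h = self.conv(x).squeeze(-1)   # (batch, 64)
        return self.classifier(h)      # unnormalised class scores

# Dummy batch: 8 utterances, 40 features per frame, 300 frames each.
model = AcousticCNN()
logits = model(torch.randn(8, 40, 300))
print(logits.shape)  # torch.Size([8, 2])
```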
A Python library for measuring the acoustic features of speech (simultaneous speech, high entropy) compared to those of native speech.
Vector-Quantized Contrastive Predictive Coding for Acoustic Unit Discovery and Voice Conversion
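For readers unfamiliar with the technique named above, the snippet below sketches the core vector-quantization step in NumPy: each continuous frame embedding is snapped to its nearest codebook vector, and the codebook indices act as discrete acoustic units. The codebook size and embedding dimension are illustrative assumptions, not values from the paper.

```python
# Minimal illustration of vector quantization for acoustic unit discovery
# (a sketch, not the paper's implementation).
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 64))   # 512 assumed code vectors of dimension 64
frames = rng.normal(size=(100, 64))     # 100 encoder outputs for one utterance

# Squared Euclidean distance from every frame to every code vector.
dists = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
units = dists.argmin(axis=1)            # discrete unit index per frame
quantized = codebook[units]             # embeddings snapped to the codebook

print(units[:10])        # first ten discovered unit ids
print(quantized.shape)   # (100, 64)
```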
Source code accompanying our paper on acoustic event classification using convolutional neural networks.
Acoustic mosquito detection code with Bayesian Neural Networks
Use machine learning models to detect lies based solely on acoustic speech information
🎵 A repository for manually annotating files to create labeled acoustic datasets for machine learning.
keras_multi_target_signal_recognition: underwater single-channel acoustic multi-target recognition using ResNet, DenseNet, and complex-valued convolutional neural networks. Runs on keras-gpu 2.2.4 with the tensorflow-gpu 1.12.0 backend.
Tools and functions for neural data processing and analysis in Python
An ensemble bagged-trees classification approach for monitoring engine conditions and diagnosing faults using visual dot patterns of acoustic and vibration signals
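As a loose sketch of the bagged-trees idea (synthetic data and feature counts; not the repository's pipeline), scikit-learn's BaggingClassifier can fit an ensemble of decision trees on per-segment statistics derived from acoustic and vibration signals:

```python
# Hypothetical bagged-trees fault classifier on placeholder signal statistics.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 12))        # e.g. 12 summary statistics per signal segment
y = rng.integers(0, 2, size=400)      # 0 = healthy, 1 = faulty (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Default base estimator is a decision tree, giving a bagged-trees ensemble.
clf = BaggingClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```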
A Bayesian convolutional neural network (BCNN) prediction pipeline for detecting mosquito sounds in audio.
Lab materials developed for the ITMO Speaker Recognition Course.
A curated collection of research papers with open-source implementations/datasets focused on in-situ process monitoring and adaptive control in laser-based additive manufacturing.
Multimodal Exponentially Modified Gaussians with Optional Oscillation
Acoustic sentiment analysis for emotion classification
Predicting music track success (revenue) from acoustic and metadata features
A Python app for classifying voice recordings using KNN and SVM models. Includes a graphical interface for training, evaluating, and classifying audio data with acoustic descriptors. Designed for audio analysis and machine learning experimentation.
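A minimal sketch of that KNN-versus-SVM comparison with scikit-learn, using a placeholder matrix of acoustic descriptors in place of features extracted from real recordings (the descriptor count and class labels are assumptions):

```python
# Hypothetical KNN vs. SVM comparison on per-recording acoustic descriptors.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))      # 20 acoustic descriptors per recording
y = rng.integers(0, 3, size=200)    # 3 synthetic class labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
for name, model in [
    ("KNN", make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))),
    ("SVM", make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))),
]:
    model.fit(X_train, y_train)
    print(name, "accuracy:", model.score(X_test, y_test))
```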
Script to extract acoustic features from speech using the openSMILE toolkit.
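One common way to run such an extraction from Python is audEERING's opensmile package; the snippet below is an assumption about tooling (the repository's script may invoke the SMILExtract binary directly), and 'speech.wav' is a placeholder path.

```python
# Extract eGeMAPS functionals from a WAV file with the opensmile Python package.
import opensmile

smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,      # 88 eGeMAPS functionals
    feature_level=opensmile.FeatureLevel.Functionals,
)
features = smile.process_file("speech.wav")  # pandas DataFrame, one row per file
print(features.shape)
```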