Perform feature extraction on a descriptor file instead of an audio file #26
Hi, I'm not quite sure if I understand your question. When you say you are analysing tracks, do you mean that you perform a classification or detection task, such as genre classification or mood prediction? Which modules are you using from this repository?
Hi Oliver,
Our library allows you to perform these two steps:
1) feature extraction (using rp_extract.py or rp_extract_batch.py)
2) classification (using rp_classify.py)
Yes you can store the results of step 1 to do or redo classification (step 2) later.
rp_extract_batch.py will store those features in a CSV or HDF5 file.
You could also build a wrapper around rp_extract.py to store the features in a database.
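A minimal sketch of such a wrapper, assuming one numeric feature vector per audio file; the vector here is a made-up placeholder standing in for the output of rp_extract, whose exact call depends on your setup:

```python
import json
import os
import sqlite3
import tempfile

def store_features(db_path, filename, feature_vector):
    """Persist one feature vector, keyed by the audio filename."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS features "
                "(filename TEXT PRIMARY KEY, vector TEXT)")
    con.execute("INSERT OR REPLACE INTO features VALUES (?, ?)",
                (filename, json.dumps(feature_vector)))
    con.commit()
    con.close()

def load_features(db_path, filename):
    """Retrieve a stored feature vector, or None if absent."""
    con = sqlite3.connect(db_path)
    row = con.execute("SELECT vector FROM features WHERE filename = ?",
                      (filename,)).fetchone()
    con.close()
    return json.loads(row[0]) if row else None

# Placeholder vector; in a real wrapper this would come from rp_extract.
db = os.path.join(tempfile.mkdtemp(), "features.db")
store_features(db, "track01.wav", [0.12, 0.56, 0.34])
print(load_features(db, "track01.wav"))  # [0.12, 0.56, 0.34]
```

This way the audio file can be deleted after extraction while the descriptor stays queryable by filename.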
… On 08.08.2018 at 14:16, Oliver Reznik ***@***.***> wrote:
Is there a way to train a model, then perform feature extraction on a descriptor file of an audio file instead of the actual audio file itself?
My problem is that users will submit their audio files to me for analysis. I analyse them and send back the results. Then I delete the audio file because it's not mine and I can't store it. But if I can export some sort of descriptor file for that audio file, then later I'd like to be able to train a new model and extract the features for that descriptor.
Is something like this possible with this library?
--
Thomas Lidy
TU Wien - Vienna University of Technology
Institute of Software Technology and Interactive Systems
Favoritenstraße 9-11/188
A-1040 Vienna, Austria
http://www.ifs.tuwien.ac.at/~lidy
@audiofeature Cool, that's what I was looking for. Looks like this uses the classify function? Where do I get the model object to pass into it? And then for feature I just pass a path to one of the three feature files? I'd read through this tutorial (http://nbviewer.ipython.org/github/tuwien-musicir/rp_extract/blob/master/RP_extract_Tutorial.ipynb) but it seems not to be working.
I have pushed a v4-format version of the tutorial notebook, in case a read error with newer Jupyter versions was the problem.
You can do these steps:
1) Feature extraction:
./rp_extract_batch.sh <folder_with_audio_files> <output_feature_filename_no_extension>
It will extract the default feature sets RP, SSD and RH and create 3 output files with those extensions.
(Use other parameters to get other feature types.)
Alternatively, import the extractor directly into your code:
from rp_extract import rp_extract
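Those output files are plain-text descriptor files that can be read back later without the audio. As an illustration, here is a small parser assuming a common one-line-per-track layout (filename first, comma-separated feature values after); the sample content is made up:

```python
import csv
import io

def read_feature_file(fileobj):
    """Parse a descriptor CSV: one line per track,
    filename first, then the feature values (assumed layout)."""
    features = {}
    for row in csv.reader(fileobj):
        if not row:
            continue
        features[row[0]] = [float(v) for v in row[1:]]
    return features

# In-memory stand-in for one of the three feature files:
sample = ("music/track01.wav,0.12,0.56,0.34\n"
          "music/track02.wav,0.98,0.11,0.45\n")
feats = read_feature_file(io.StringIO(sample))
print(feats["music/track01.wav"])  # [0.12, 0.56, 0.34]
```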
2) Classification
a) Train your own model (refer to README.md -> "Train a model"):
./rp_classify.py <folder_with_audio_files> <output_model_file> --classfile <class_file>
This will analyze the audio files as in step 1 and then train an SVM classifier model.
classfile: you have to provide a tab-separated file where each input filename is listed with its relative path, followed by a tab and the label of its genre category (a string).
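For illustration, generating such a tab-separated class file with the standard library (the filenames and genre labels here are made up):

```python
import csv
import io

# Hypothetical (relative filename, genre label) pairs
labels = [
    ("music/track01.wav", "jazz"),
    ("music/track02.wav", "rock"),
]

buf = io.StringIO()  # replace with open("classes.tsv", "w") to write a real file
writer = csv.writer(buf, delimiter="\t", lineterminator="\n")
writer.writerows(labels)
print(buf.getvalue())
```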
b) Make predictions / classifications:
./rp_classify.py <input_path> <model_file> <output_file>
<input_path>: an individual audio file or a folder
<model_file>: your own model file, if you created one in step 2a; if omitted, a pretrained model from folder models/GTZAN.* will be used (note: it was generated with sklearn 0.17, so you'd have to downgrade to that version to use it)
<output_file>: file to write the predictions to; if omitted, predictions are printed on screen
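To illustrate the overall idea of step 2 operating on stored descriptors rather than on audio, here is a self-contained sketch that trains on feature vectors and predicts a label. A simple nearest-centroid rule stands in for the library's actual SVM classifier, and all vectors and labels are made up:

```python
import math

def train_centroids(vectors, labels):
    """Compute one mean vector (centroid) per class label."""
    sums, counts = {}, {}
    for vec, lab in zip(vectors, labels):
        acc = sums.setdefault(lab, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: [v / counts[lab] for v in acc]
            for lab, acc in sums.items()}

def predict(centroids, vec):
    """Return the label of the nearest centroid (Euclidean distance)."""
    return min(centroids,
               key=lambda lab: math.dist(vec, centroids[lab]))

# Made-up descriptor vectors, as if loaded from stored feature files
train_vecs = [[0.1, 0.1], [0.2, 0.0], [0.9, 0.8], [1.0, 0.9]]
train_labs = ["jazz", "jazz", "rock", "rock"]

model = train_centroids(train_vecs, train_labs)
print(predict(model, [0.95, 0.85]))  # rock
```

The point is only that training and prediction need the stored feature vectors, not the original audio files.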
best
Thomas