Retrain on new and own dataset #7
@Ahmed-Abouzeid Hi, I have some problems too! I also want to know how to create my own data. I have now run create_development.py directly, but it gave me an error at the line `train_files_subjects_list.append(file_name.split('/')[7])`.
@Ostnie are you aware that the training files are not audio samples? They are .npy files that contain features of a certain person's voice. So you first need to run the speechpy package on the wav files to create the .npy files, and then use those in this repository. I am copying the speechpy code I wrote for clarification: `for x, f in enumerate(os.listdir(sys.argv[1])): ...`
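The quoted loop is cut off, so here is a minimal, self-contained sketch of what such a wav-to-npy conversion step might look like. The `extract_features` function below is only a numpy stand-in for speechpy's feature call (e.g. `speechpy.feature.lmfe(signal, sr, num_filters=40)`), so the sketch runs without speechpy installed; the directory layout and the 40-filter output shape are assumptions taken from the rest of this thread, not the commenter's actual code:

```python
import os
import numpy as np

def extract_features(signal, sampling_rate, num_filters=40):
    """Numpy stand-in for speechpy.feature.lmfe: frame the signal and
    tile per-frame log energies to num_filters columns, just to produce
    an array with the (frames, 40) shape the network expects."""
    frame_len = int(0.025 * sampling_rate)   # 25 ms frames
    stride = int(0.010 * sampling_rate)      # 10 ms stride
    n_frames = 1 + (len(signal) - frame_len) // stride
    frames = np.stack([signal[i * stride:i * stride + frame_len]
                       for i in range(n_frames)])
    log_energy = np.log(np.sum(frames ** 2, axis=1) + 1e-8)
    return np.tile(log_energy[:, None], (1, num_filters))

def convert_wav_dir(wav_signals, out_dir, sampling_rate=16000):
    """Save one .npy feature file per utterance.
    wav_signals: {file_stem: 1-D numpy signal}. In the real pipeline each
    signal would come from scipy.io.wavfile.read on a .wav file."""
    os.makedirs(out_dir, exist_ok=True)
    for stem, signal in sorted(wav_signals.items()):
        feats = extract_features(signal, sampling_rate)
        np.save(os.path.join(out_dir, stem + '.npy'), feats)
```

One second of 16 kHz audio yields 98 frames of 40 values here, i.e. a (98, 40) array per utterance.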
@Ostnie The next step should be running create_development.py after preparing the train_subjects_path.txt. For me, I wrote that:
I hope what I am saying makes sense to you. I will let you know if I am able to fix the original issue I posted earlier.
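The snippet the commenter used for preparing train_subjects_path.txt was not included, but a hedged sketch of one way to generate such a list is below. The directory layout (one subdirectory per speaker full of .npy files) and the output file name are assumptions; note that the index in `file_name.split('/')[7]` inside create_development.py must match the depth of the paths written here:

```python
import os

def write_subject_paths(npy_root, out_txt='train_subjects_path.txt'):
    """Write one absolute .npy path per line, assuming npy_root contains
    one subdirectory per speaker. create_development.py later derives the
    subject label from a fixed path component (e.g. split('/')[7]), so the
    depth of these paths matters."""
    count = 0
    with open(out_txt, 'w') as f:
        for subject in sorted(os.listdir(npy_root)):
            subj_dir = os.path.join(npy_root, subject)
            if not os.path.isdir(subj_dir):
                continue
            for fname in sorted(os.listdir(subj_dir)):
                if fname.endswith('.npy'):
                    f.write(os.path.abspath(os.path.join(subj_dir, fname)) + '\n')
                    count += 1
    return count
```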
@Ahmed-Abouzeid Hello, did you ever fix your issue?
@nkcsfight Hi, I am revisiting this problem and will work on it these days. If I get anywhere, I will post it here!
Hi, I believe you are using the wrong method here. You should be using lmfe instead of mfcc.
@Ahmed-Abouzeid I guess your problem is that you have decreased the number of filters from 40 to 13. The network is structured in such a way that if an input of 40x80 is not provided, the feature dimension becomes negative during a convolution operation, which is exactly what is happening in your case. Changing it back to 40 and preparing your data with 40 filters should solve the problem.
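The "negative dimension" arithmetic behind this advice can be checked directly: with VALID padding, each conv/pool layer shrinks an axis to `(size - kernel) // stride + 1`, and a 13-wide filterbank axis collapses to 1 before the stack is finished, at which point the next kernel no longer fits and TensorFlow raises the "Negative dimension size" error. The kernel/stride sequence below is illustrative, not the repository's exact architecture:

```python
def valid_out(size, kernel, stride):
    """Output size along one axis for a VALID-padded conv or pool."""
    return (size - kernel) // stride + 1

def shrink(size, layers):
    """Apply a sequence of (kernel, stride) layers along one axis and
    return the trace of sizes; None marks where TensorFlow would raise
    'Negative dimension size' because the kernel no longer fits."""
    trace = [size]
    for kernel, stride in layers:
        if size < kernel:
            trace.append(None)
            break
        size = valid_out(size, kernel, stride)
        trace.append(size)
    return trace

# An illustrative stack of alternating 3-wide convs and 2-wide pools.
layers = [(3, 1), (2, 2), (3, 1), (2, 2), (3, 1), (2, 2)]
```

Running `shrink(40, layers)` survives the whole stack (40 → 38 → 19 → 17 → 8 → 6 → 3), while `shrink(13, layers)` collapses to 1 after four layers and then fails, which matches the reported error.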
Hi, |
@naeemrehmat65 Thanks for your interest. Please refer to this part for creating development. Creating enrollment and evaluation is similar. This part is just related to how you create an HDF5 for feeding it to the network. It must be customized and modified considering your specific data.
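As a concrete illustration of "creating an HDF5 for feeding it to the network", here is a minimal h5py sketch. The dataset names, the per-utterance feature shape, and the integer speaker labels are all assumptions made for illustration, not the repository's exact schema; create_development.py should be consulted for the real one:

```python
import h5py
import numpy as np

def build_hdf5(features_by_label, out_path):
    """Pack per-speaker feature arrays into one HDF5 file.
    features_by_label: {int_speaker_label: [feature arrays, all with the
    same shape, e.g. (80, 40, 20)]}. Writes a stacked 'utterance' dataset
    plus a parallel 'label' dataset (both names are assumptions)."""
    utterances, labels = [], []
    for label in sorted(features_by_label):
        for feat in features_by_label[label]:
            utterances.append(feat)
            labels.append(label)
    with h5py.File(out_path, 'w') as h5:
        h5.create_dataset('utterance',
                          data=np.stack(utterances).astype(np.float32))
        h5.create_dataset('label',
                          data=np.asarray(labels, dtype=np.int64))
```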
Thanks for your response.
Hi,
First, that is a great job and very well done :)
Now I am trying to use your source code and maybe contribute to it. I am working on a speaker recognition problem: detecting whether a teacher's tutorial was recorded in his own voice. I have about 10 hours of historical recordings for 6 teachers. First I used speechpy to get 3D npy files from the wav files, then used your create_development.py to create the HDF5 files for train and eval. Is that correct? In particular, I got 13 instead of 40 as the feature-vector length in the npy files! I ran the run.bash file and it also gave me an error saying something like: ValueError: Negative dimension size caused by subtracting 2 from 1 for 'MaxPool_7' (op: 'MaxPool') with input shapes: [?,1,112,128].