Trying ivect_log_reg_model.torch #1

Closed

Mlallena opened this issue Oct 5, 2021 · 3 comments


Mlallena commented Oct 5, 2021

I am trying to use the gender recognition model shown here ('ivec_log_reg_model.torch'), but the suggested loading method runs into an error:

Traceback (most recent call last):
  File "test.py", line 4, in <module>
    model.load_state_dict(torch.load('../best_models/ivec_log_reg_model.torch'))
  File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1406, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for LogisticRegression:
        size mismatch for linear.weight: copying a param with shape torch.Size([2, 400]) from checkpoint, the shape in current model is torch.Size([1, 512]).
        size mismatch for linear.bias: copying a param with shape torch.Size([2]) from checkpoint, the shape in current model is torch.Size([1]).

Replacing (512, 1) with (400, 2) in the example does seem to work. The remaining problem is that there's no mention of how to test the model on your own audio files. I'll see if I can figure it out, but any suggestions would be welcome.
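For reference, this is roughly the loading code that works for me; the class and attribute names follow the traceback, but the constructor signature is my own guess:

    import torch
    import torch.nn as nn

    # Guessed module matching the checkpoint shapes from the traceback:
    # linear.weight is [2, 400] and linear.bias is [2], i.e. a 400-dim
    # i-vector input and 2 output classes.
    class LogisticRegression(nn.Module):
        def __init__(self, input_dim=400, num_classes=2):
            super().__init__()
            self.linear = nn.Linear(input_dim, num_classes)

        def forward(self, x):
            return self.linear(x)

    model = LogisticRegression(400, 2)
    model.load_state_dict(torch.load('../best_models/ivec_log_reg_model.torch'))
    model.eval()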

hechmik (Owner) commented Oct 5, 2021

Hi @Mlallena,

It makes sense that the model throws that error, as it expects an i-vector whose dimensions are the ones you mentioned. More info on how we computed them can be found in section 2 of this README file: https://github.com/hechmik/voxceleb_enrichment_age_gender/blob/main/notebooks/README.md.
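If it helps, once the shapes match you can sanity-check the model you loaded above with a dummy tensor in place of a real i-vector (the random input below is only a placeholder, and which class index means which gender depends on how the labels were encoded):

    import torch

    # Placeholder: a real 400-dimensional i-vector would go here.
    ivector = torch.randn(1, 400)

    with torch.no_grad():
        logits = model(ivector)        # shape [1, 2]
        pred = logits.argmax(dim=1)    # predicted gender class index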

Mlallena (Author) commented Oct 5, 2021

To check and/or fine-tune your model on my own audio files, what would I have to do with them? Do I need to compute their MFCCs? Is there a method that directly takes an audio file path, so the internal pipeline produces the model input?

Thanks for your previous answer. As I said earlier, I'll try to find out more, but any pointers are welcome.

hechmik (Owner) commented Oct 5, 2021

Sorry for the delayed response, but I was at work and didn't have time to get back to you until now. Basically, the procedure you need to follow is this (a rough sketch in code follows the list):

  • Compute MFCCs for your recordings, using Kaldi
  • Compute the i-vectors for your recordings, using the "ivector-extractor"
  • Pass these i-vectors to the pre-trained model you already tried
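In (untested) Python pseudocode, reusing the LogisticRegression class from your earlier snippet, the whole flow would look something like the sketch below; extract_ivector is just a placeholder for whatever Kaldi/ASVtorch pipeline you end up setting up, since there is no single ready-made call for it:

    import torch

    def extract_ivector(wav_path):
        """Placeholder: in practice this is a multi-step Kaldi pipeline
        (MFCC extraction followed by the i-vector extractor), e.g. driven
        through ASVtorch. It should return a 400-dim float tensor."""
        raise NotImplementedError

    # Model class as defined for the checkpoint (400-dim input, 2 classes).
    model = LogisticRegression(400, 2)
    model.load_state_dict(torch.load('../best_models/ivec_log_reg_model.torch'))
    model.eval()

    ivec = extract_ivector('my_recording.wav').unsqueeze(0)  # [1, 400]
    with torch.no_grad():
        pred = model(ivec).argmax(dim=1)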

As mentioned in the README.md file, we used the ASVtorch tool for all of these steps, as it was the easiest option for processing VoxCeleb recordings. In your scenario you'll need to modify this library a little; however, I haven't had the chance to do that myself, as we always worked inside the VoxCeleb ecosystem.

A good starting point is the description of the actual steps needed for computing i-vectors, which you can find here. The solution proposed in our paper is VoxCeleb-dependent, as we used the unlabelled recordings for training the various extractors: in my opinion you could replicate the other steps on other datasets too, even though the results likely won't be the same.

I hope I was clear enough!

hechmik closed this as completed on Mar 29, 2022