
How inference is done after training model? #24

Open
sujit420 opened this issue Jun 14, 2018 · 2 comments

Comments

@sujit420

sujit420 commented Jun 14, 2018

There is no example or sample code given for inference.
Supposing I have trained my model, how do I use it for prediction?
@ZhuFengdaaa can you help me?
Thanks in advance.

@kevgeo

kevgeo commented Jun 16, 2018

I am trying to do the same thing. My question is whether there is a simpler way to load the layers of the model. Since the model is saved with torch.save(model.state_dict(), model_path), only the trained parameters are saved; so we have to build the layers first and then load the trained parameters into them.

But I am confused about how to build the layers of the model for simple testing on an image. Below is a permalink to where the model layers are constructed.

constructor = 'build_%s' % args.model
model = getattr(base_model, constructor)(train_dset, args.num_hid).cuda()
model.w_emb.init_embedding('data/glove6b_init_300d.npy')

Is it necessary to use train_dset in line 2 to construct the model? I do not understand why the dataset is needed. Is there a simpler way to load the model layers for testing on a single image?

@hengyuan-hu @ZhuFengdaaa Hope you guys could help. Thanks in advance.
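For reference, the usual PyTorch pattern when only a state_dict was saved is: rebuild the same architecture, load the parameters into it, then run in eval mode. The sketch below uses a made-up TinyModel as a stand-in; in this repo you would instead rebuild the real model via getattr(base_model, 'build_%s' % args.model) exactly as in training, and the model_path line is commented out because the checkpoint name is an assumption.

```python
import torch
import torch.nn as nn

class TinyModel(nn.Module):
    """Hypothetical stand-in: rebuild the *same* architecture
    that was used during training before loading weights."""
    def __init__(self, num_hid):
        super().__init__()
        self.fc = nn.Linear(8, num_hid)

    def forward(self, x):
        return self.fc(x)

# 1. Rebuild the architecture exactly as during training.
model = TinyModel(num_hid=4)

# 2. Load the trained parameters saved with torch.save(model.state_dict(), ...).
# state_dict = torch.load(model_path, map_location="cpu")
# model.load_state_dict(state_dict)

# 3. Switch to eval mode and run inference without tracking gradients.
model.eval()
with torch.no_grad():
    pred = model(torch.randn(1, 8))

print(pred.shape)  # torch.Size([1, 4])
```

Calling model.eval() matters because layers such as dropout and batch norm behave differently at inference time.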

@ZhuFengdaaa

As this line shows, this implementation uses the number of tokens to construct the embedding layer, which is why the dataset is passed in when building the model.
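In other words, the embedding matrix has one row per vocabulary token, so the model cannot be constructed without knowing the vocabulary size, which comes from the dataset's dictionary. A minimal sketch (the ntoken value here is made up, and the padding-row convention is an assumption about the repo's WordEmbedding, not a quote of it):

```python
import torch.nn as nn

ntoken = 1000   # hypothetical vocabulary size; normally read from the
                # dataset's dictionary, which is why train_dset is needed
emb_dim = 300   # matches the 300-d GloVe vectors in glove6b_init_300d.npy

# One extra row is reserved as a padding token (assumed convention).
w_emb = nn.Embedding(ntoken + 1, emb_dim, padding_idx=ntoken)

# Initializing from the pretrained GloVe file would then be roughly:
# weights = torch.from_numpy(np.load('data/glove6b_init_300d.npy'))
# w_emb.weight.data[:ntoken] = weights

print(w_emb.weight.shape)  # torch.Size([1001, 300])
```

So any object that exposes the same dictionary (not necessarily the full training dataset) would be enough to rebuild the layers for single-image testing.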
