Can't test the trained encoder #3
You need to train a voxel decoder for testing.
Ah okay, you are right. But it seems that in the code the encoder doesn't get loaded: nips16_PTN/scripts/train_PTN.lua Line 108 in 789031e
Not sure if it's the commented-out lines later, but I think a torch.load() is missing somewhere in the code, right? Or how can I tell it to use my trained encoder for the training? Also, is there some method for storing a log with the train and test loss? At least that way one could see whether the trained encoder is performing, or how do you monitor it?
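On the logging question, a minimal sketch using Torch's optim.Logger is one option (the log filename, loss names, and helper functions below are illustrative, not identifiers from nips16_PTN):

```lua
-- Hedged sketch: record train/test loss per epoch with optim.Logger.
-- 'logs/loss.log', trainOneEpoch, and evalOnTestSet are illustrative
-- placeholders, not code from this repo.
require 'optim'

local logger = optim.Logger('logs/loss.log')
logger:setNames{'train loss', 'test loss'}

for epoch = 1, opt.max_epochs do
  local trainLoss = trainOneEpoch()   -- hypothetical training step
  local testLoss  = evalOnTestSet()   -- hypothetical evaluation step
  logger:add{trainLoss, testLoss}     -- appends one line per epoch
end
```

The resulting log file is plain text, so the two loss curves can be inspected or plotted afterwards to check whether the pretrained encoder is helping.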
You are right. This should be fixed.
Add back the two lines:
loader = torch.load(opt.checkpoint_dir .. opt.basemodel_name ..
string.format('/net-epoch-%d.t7', opt.basemodel_epoch))
encoder = loader.encoder
Thanks, I also had to change the name 'rotatorRNN1_64' to 'arch_rotatorRNN'.
Hi,
I trained the single class encoder with ./demo_pretrain_singleclass.sh. Now I wanted to evaluate the trained models. So in eval_quant_test.lua I just changed the name of the loaded file (cnn_vol.t7) to the last trained model:
base_loader = torch.load(opt.checkpoint_dir .. 'arch_rotatorRNN_singleclass_nv24_adam2_bs8_nz512_wd0.001_lbg10_ks16/net-epoch-20.t7')
encoder = base_loader.encoder
base_voxel_dec = base_loader.voxel_dec
When I run the test script eval_models.sh I get the following error:
/home/meeso/torch/install/bin/luajit: scripts/eval_quant_test.lua:90: attempt to index global 'base_voxel_dec' (a nil value)
stack traceback:
scripts/eval_quant_test.lua:90: in main chunk
[C]: in function 'dofile'
...eeso/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00406670
Any idea how I fix this?
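One way to see what the checkpoint actually contains before indexing it (a diagnostic sketch; the path is the one from this thread, everything else is illustrative):

```lua
-- Hedged diagnostic: list the fields stored in the checkpoint so a
-- missing voxel_dec fails with a clear message instead of a nil index.
require 'torch'

local ckpt_path = opt.checkpoint_dir ..
  'arch_rotatorRNN_singleclass_nv24_adam2_bs8_nz512_wd0.001_lbg10_ks16/net-epoch-20.t7'
local base_loader = torch.load(ckpt_path)

for k, v in pairs(base_loader) do
  print(k, torch.type(v))  -- prints each stored field and its type
end

assert(base_loader.voxel_dec ~= nil,
  'checkpoint has no voxel_dec -- train a voxel decoder first')
```

If the pretraining script never trained a voxel decoder, `voxel_dec` will simply be absent from the saved table, which is consistent with the nil-index error above.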