Description
I have trained the normal and implicit networks separately and tested them on the cape data set. In this process, I have some questions to ask and confirm. I sincerely hope to get your help.
1. The `ModelCheckpoint` saved after training the implicit network is from the second epoch, so the last eight epochs produced no improvement. Does this reflect the data efficiency of ICON training?
2. When testing with the CAPE dataset, I found that there was no `test.txt` file, so I renamed `test150.txt` to `test.txt`; I am not sure whether this is correct. `test150.txt` contains 150 models (Easy: 50, Hard: 100), but I found the following lines in the code:
```python
accu_outputs = accumulate(
    outputs,
    rot_num=3,
    split={
        "cape-easy": (0, 50),
        "cape-hard": (50, 100)
    },
)
```
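For context, here is a small sanity check I wrote myself (not from the ICON repo) to show why the split confuses me — it covers only indices 0–100, while `test150.txt` lists 150 models:

```python
# My own sketch, not repo code: does the split above cover all 150
# models listed in test150.txt (50 easy + 100 hard)?
split = {"cape-easy": (0, 50), "cape-hard": (50, 100)}  # copied from the code above
n_models = 150  # line count of test150.txt
covered = max(end for _, end in split.values())
print(covered, covered == n_models)  # 100 False -- only 100 of the 150 are covered?
```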
So I may have done something wrong here. How many CAPE models did you use for testing? Also, I would like to know how to derive `cape-NC` from `cape-easy-NC` and `cape-hard-NC`.
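To clarify what I mean by deriving the overall score: my guess (an assumption on my part, not something I found in the repo) is a count-weighted average of the two subsets:

```python
# Hypothetical: combine per-subset normal-consistency (NC) scores into one
# overall CAPE score, weighting each subset by its model count
# (50 easy, 100 hard, per test150.txt). This is my guess, not repo code.
def combined_nc(easy_nc, hard_nc, n_easy=50, n_hard=100):
    return (n_easy * easy_nc + n_hard * hard_nc) / (n_easy + n_hard)

print(combined_nc(0.060, 0.090))  # ~0.08 with these example numbers
```

Is this the formula you used, or is `cape-NC` computed some other way?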
3. I also tested the pre-trained model you provided on the CAPE dataset, and found that the results were not the same on every run. I wonder whether this is normal.
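To illustrate what I mean by run-to-run variance: if some RNG in the pipeline is left unseeded, metrics jitter between runs, while fixing the seed makes them reproducible. A minimal pure-Python sketch of the effect (the real pipeline presumably uses `torch.manual_seed` and friends — that part is my assumption):

```python
import random

def noisy_metric(seed=None):
    # Stand-in for an evaluation whose result depends on an RNG;
    # purely illustrative, not the actual ICON evaluation.
    if seed is not None:
        random.seed(seed)
    return 0.08 + random.uniform(-1e-3, 1e-3)

a, b = noisy_metric(seed=42), noisy_metric(seed=42)
print(a == b)  # True: identical seeds give identical results
x, y = noisy_metric(), noisy_metric()
print(x == y)  # False: without seeding, consecutive draws differ
```

Is the variance I am seeing expected at this magnitude, or should the evaluation be fully deterministic?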
4. Your pre-trained model was not trained on THuman2.0. I was wondering if you could update the benchmark (train on THuman2.0, test on CAPE), because you mentioned in https://github.com/YuliangXiu/ICON/issues/183#issuecomment-1445002583 that those testing results are better than the reported ones.