
Pre-trained model shows low performance #20

Open
KosukeArase opened this issue Jan 4, 2019 · 10 comments


KosukeArase commented Jan 4, 2019

Hi @laughtervv
Thank you for open-sourcing your great work!
I downloaded your pre-trained model from here (the link in the README) and your h5 data from here (the link in issue #3), and used pergroup_thres.txt and mingroupsize.txt from issue #8.
However, when I ran test.py with them I got mAP 0.47, which is lower than the mAP reported in your paper (0.5435).
Did I make a mistake somewhere?
Here is the result I got. Thank you.

Instance Segmentation AP: [ 0.47166667  0.53958333  0.37652931  0.67391304  0.43280553  0.7375
  0.67703333  0.42900336  0.34550953  0.2         0.38823529  0.51387205
  0.32641463]
Instance Segmentation mAP: 0.470158929369
Semantic Segmentation IoU: [ 0.85866874  0.92186948  0.86302137  0.84426813  0.72262609  0.8738558
  0.89429596  0.82030826  0.79822212  0.72040915  0.7932732   0.74137266
  0.81986088]
Semantic Segmentation Acc: %f 0.7979087289103282
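
(The mAP printed above is just the mean of the 13 per-class APs, which is easy to check:)

import numpy as np

# per-class instance segmentation APs copied from the log above
ap = np.array([0.47166667, 0.53958333, 0.37652931, 0.67391304, 0.43280553,
               0.7375, 0.67703333, 0.42900336, 0.34550953, 0.2,
               0.38823529, 0.51387205, 0.32641463])
print(ap.mean())  # 0.470158929..., matching the printed mAP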
KosukeArase (Author) commented

@laughtervv Can you share the list of training data?
The result above was evaluated on randomly chosen scenes.
However, when I evaluated on Area_5, the mAP was 0.6447, which is much higher than the one reported in your paper.
Thank you.
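
(If the paper's number uses the common S3DIS protocol that holds out Area_5 for testing, the file lists have to be split by area rather than by random scenes. A minimal sketch, assuming the block file names embed the area, e.g. Area_5_office_3.h5, and a hypothetical ./data directory:)

import os

data_dir = './data'  # hypothetical location of the S3DIS .h5 blocks
names = sorted(n for n in os.listdir(data_dir) if n.endswith('.h5'))

train = [n for n in names if 'Area_5' not in n]  # Areas 1-4 and 6
test = [n for n in names if 'Area_5' in n]       # held-out Area 5

with open('train_hdf5_file_list.txt', 'w') as f:
    f.write('\n'.join(train) + '\n')
with open('test_hdf5_file_list.txt', 'w') as f:
    f.write('\n'.join(test) + '\n')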


lhiceu commented Feb 20, 2019

Hi @KosukeArase, I'm confused about how to write "data/train_hdf5_file_list.txt". Is it like the "filename.txt" given by @laughtervv? I renamed "filename.txt" to "train_hdf5_file_list.txt" to run train.py, but that filled up my machine's memory and eventually crashed it. How can I get started?
Also, "filename.txt" contains a lot of duplicate names. What do they mean?
THANKS!

KosukeArase (Author) commented

Hi @lhiceu
The program loads all the data into memory, so it needs a few GB of RAM.
If you do have enough memory, I think the crash is caused by the duplication.
I'm not sure what filename.txt is, but I simply created a list of the h5 files with no duplicates.
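
(If you want to reuse filename.txt rather than rebuild the list from scratch, a minimal sketch of deduplicating it while preserving order:)

seen = set()
with open('filename.txt') as fin, open('train_hdf5_file_list.txt', 'w') as fout:
    for line in fin:
        name = line.strip()
        if name and name not in seen:  # skip blank lines and repeated entries
            seen.add(name)
            fout.write(name + '\n')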


lhiceu commented Feb 21, 2019

Thanks for your reply! @KosukeArase
I also made a list of h5 files. Now I'm running train.py with the pretrained model given by the author, although the loss looks a little strange (it barely changes). I'll wait for a result.
By the way, did you pretrain the network as described in Section 3.3 of the paper, i.e. train with only the L_sim loss for the first 5 epochs?
I modified get_loss() in model.py and set epoch = 5 and is_training = True in train.py, but then the trained model fails to load. So I really don't understand how to do it.
I'm sorry for asking so many questions.
THANKS!
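
(One way to get the Section 3.3 schedule without the checkpoint-loading problem is to keep every loss term in the graph for both stages and blend them with a weight fed in per epoch; since the graph never changes shape, checkpoints saved during the warm-up load cleanly afterwards. A rough TF1-style sketch; sim_loss, conf_loss and sem_loss are dummy stand-ins, not the actual tensors from get_loss():)

import tensorflow as tf

# dummy stand-ins for the terms computed in model.py's get_loss()
sim_loss = tf.constant(1.0)
conf_loss = tf.constant(0.5)
sem_loss = tf.constant(0.3)

# weight is 0.0 during the L_sim-only warm-up and 1.0 afterwards
use_full_loss = tf.placeholder(tf.float32, shape=())
total_loss = sim_loss + use_full_loss * (conf_loss + sem_loss)

with tf.Session() as sess:
    for epoch in range(10):
        w = 0.0 if epoch < 5 else 1.0  # only L_sim for the first 5 epochs
        print(epoch, sess.run(total_loss, feed_dict={use_full_loss: w}))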


SinestroEdmonce commented Mar 11, 2019

Hi @lhiceu @KosukeArase
I am now trying to figure out how to generate the "train_hdf5_file_list.txt" that the program requires, but I have no idea yet. I don't understand what a so-called "hdf5_file_list" file is. I tried to find the filename.txt that you mentioned above, but I couldn't find it.
Fortunately, you said you had already made a list of h5 files on your own. If you are willing to share it, for example by attaching a download link to your comment, that would help me a lot.
I hope you are willing to help me. THANKS SO MUCH!!


lhiceu commented Mar 12, 2019

@SinestroEdmonce
I downloaded filename.txt from an issue reply that I have since forgotten. SORRY.
But I think that file is wrong because of its many duplicate names, so I wrote my own train_hdf5_file_list.txt for training. It is a list of .h5 file names without repetition.


lhiceu commented Mar 12, 2019

@SinestroEdmonce
Here is the code to generate train_hdf5_file_list.txt:

import os

source_path = './data/train/'  # directory containing the training .h5 files
with open('train_hdf5_file_list.txt', 'w') as f:
    for room in sorted(os.listdir(source_path)):
        if room.endswith('.h5'):  # skip anything that is not an .h5 block
            f.write(room + '\n')
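
(Depending on how train.py resolves the names, you may need to write source_path + room instead of the bare file name.)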

SinestroEdmonce commented

@lhiceu
THANKS SO MUCH!!
That helps me a lot!

SinestroEdmonce commented

Hi @lhiceu
Have you ever run the program? Which dataset did you use for testing: ModelNet40, ShapeNet, or ScanNet? I really want to run the program, as it is very important to me, but I don't know how to make .h5 files from the datasets I mentioned above. Also, if you have already run it, could you please describe the procedure? It would be very kind of you to give me some guidance!
THANKS, and I appreciate all your help!
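
(For what it's worth, the data discussed in this thread, with its Area_5 split, is S3DIS (Stanford indoor), not ModelNet40/ShapeNet/ScanNet. If you do need to build the .h5 blocks yourself, here is a minimal h5py sketch of the general layout; the dataset names 'data', 'pid' and 'seglabel', the shapes, and the file name are all assumptions, so check the load functions in provider.py for the exact keys your version expects:)

import h5py
import numpy as np

num_blocks, npoint = 10, 4096  # blocks per room and points per block (assumed)
data = np.zeros((num_blocks, npoint, 9), dtype=np.float32)  # e.g. xyz + rgb + normalized xyz
seglabel = np.zeros((num_blocks, npoint), dtype=np.int32)   # per-point semantic class
pid = np.zeros((num_blocks, npoint), dtype=np.int32)        # per-point instance id

with h5py.File('Area_1_office_1.h5', 'w') as f:  # hypothetical file name
    f.create_dataset('data', data=data)
    f.create_dataset('seglabel', data=seglabel)
    f.create_dataset('pid', data=pid)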

FangwenSu commented


Hello, have you resolved the question "Which dataset did you use to do the testing, ModelNet40, ShapeNet or ScanNet?" I'm confused about this program and have run into the same trouble.
