Semantic segmentation: How to generate my own room_filelist.txt #38
Could you maybe explain your data structure? Does it have a spatial partitioning comparable to rooms at all? If so, do you know which HDF5 batches correspond to those spatial partitions? If so, you could generate a room_filelist for your specific case.
Thanks for your answer. I use a structure like in Stanford3dDataset_v1.2_Aligned_Version_3 (Area_1/conference_room_1/Annotations/), but I don't know how HDF5 partitions my dataset, and even the HDF5 partitioning of Stanford3dDataset_v1.2_Aligned_Version_3 is unclear to me.
@Vonisoa Did you close the topic because you found a solution to your problem? If so, could you share your findings?
Sorry, I accidentally closed it.
Btw, I ran gen_indoor3d_h5.py again to get it, but I still want to know how to generate the room_filelist.txt.
Hi @Vonisoa, the filelist is written by the code here: pointnet/sem_seg/gen_indoor3d_h5.py, Line 80 in 4afd46d.
I think you only need to comment out the insert function below it, so that it skips the data extraction (lines 76-78) and insertion (lines 82-83). The order of the filelist should be deterministic across runs. Hope it helps!
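To illustrate the idea, here is a minimal, self-contained sketch of the pattern being suggested: iterate the rooms in the same deterministic order, write one filelist line per block, and skip the expensive HDF5 work. Note that `block_count_for_room` is a hypothetical stand-in for the real block partitioning (`indoor3d_util.room2blocks_wrapper_normalized` in the actual script), and the room list here is made-up example data, not the Stanford layout.

```python
# Sketch: regenerate room_filelist.txt without the costly HDF5 insertion.
# block_count_for_room() is a HYPOTHETICAL stand-in for the real block
# partitioning done by room2blocks_wrapper_normalized in gen_indoor3d_h5.py.
import os
import tempfile

def block_count_for_room(num_points, points_per_block=4096):
    # Placeholder partitioning: split the point count into fixed-size blocks.
    return max(1, num_points // points_per_block)

def write_room_filelist(rooms, out_path):
    """rooms: list of (room_npy_path, num_points) pairs, iterated in the
    same deterministic order the generation script would use."""
    with open(out_path, 'w') as fout:
        for room_path, num_points in rooms:
            n_blocks = block_count_for_room(num_points)
            room_name = os.path.basename(room_path)[0:-4]  # strip '.npy'
            for _ in range(n_blocks):
                fout.write(room_name + '\n')
            # insert_batch(...) intentionally skipped: no HDF5 output here

# Example data (made up for illustration only).
rooms = [('Area_1_conference_room_1.npy', 8192),
         ('Area_1_office_1.npy', 4096)]
out = os.path.join(tempfile.mkdtemp(), 'room_filelist.txt')
write_room_filelist(rooms, out)
```

The key point is that the filelist only depends on the room order and the number of blocks per room, so the per-block data arrays never need to be saved.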
Thank you very much, it really helps! |
Hi, I want to generate my own room_filelist.txt, the file produced by gen_indoor3d_h5.py. I have already run gen_indoor3d_h5.py, but the room_filelist.txt has since been deleted, and I don't want to re-run gen_indoor3d_h5.py because my data is too big and it would take 4 hours to run again. Is there a way to get it without running gen_indoor3d_h5.py?
Note: I use my own dataset not the Stanford3dDataset