Your dataset structure follows the common LLFF convention, and I understand that the main issue with custom LLFF datasets is that file names are never presented to the model. I hit similar issues with nvdiffrec, and a simple file list resolves the memory leaks that occur when loading images or files that COLMAP rejected.
A "view_imgs.txt" is pretty important, I'd think, and I'm glad some of the example datasets use a poses.npy. What I don't understand is the reasoning behind using remove.bg masks, constructing datasets with another .db file, and pickling the list of files (instead of using a readable .txt) that users might want to edit when making their own sets. The pickle load in question:

NeROIC/dataset/llff.py, line 243 in e535d50
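To show the scale of the ask, here is a minimal sketch of loading such a list; the one-filename-per-line "view_imgs.txt" layout is my assumption, not something the repo currently ships:

```python
# Minimal sketch: load a user-editable plain-text image list instead of a pickle.
# Assumes a hypothetical view_imgs.txt with one relative filename per line.
from pathlib import Path

def load_image_list(dataset_dir):
    list_file = Path(dataset_dir) / "view_imgs.txt"
    # Skip blank lines and comments so users can annotate their own sets.
    return [line.strip() for line in list_file.read_text().splitlines()
            if line.strip() and not line.startswith("#")]
```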
When handling masks in their own separate folder, should they simply be b/w images from any of the many salient-object-matting repos, or specifically remove.bg outputs: a different size (requiring bicubic filtering), with the matte stored only in the alpha channel, and carrying an entire unused RGB image?
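For context on why supporting both would be cheap: the two conventions differ only in where the matte lives. A hedged sketch of normalizing either kind to a single-channel mask (PIL-based; load_mask is my name, not the repo's):

```python
# Sketch: normalize either a traditional b/w matte or a remove.bg RGBA cutout
# to a single-channel float mask at the target resolution.
import numpy as np
from PIL import Image

def load_mask(path, size):
    img = Image.open(path)
    if img.mode == "RGBA":
        # remove.bg style: the matte lives in the alpha channel;
        # the RGB channels carry an unused copy of the image.
        mask = img.getchannel("A")
    else:
        # Salient-object-matting style: a plain grayscale b/w image.
        mask = img.convert("L")
    # Bicubic filtering covers the case where the mask was exported
    # at a different size than the render resolution.
    return np.asarray(mask.resize(size, Image.BICUBIC), dtype=np.float32) / 255.0
```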
There is no technical limitation preventing a single dataset from being renderable and testable in instant-ngp, meshable in nvdiffrec (and this repo), optimizable in AdaNeRF and R2L, and still created from a video shot on a phone. If you're planning a dataset-creation guide, please don't rely on remove.bg filenames, don't introduce new .db files, don't use lists of files that aren't user-readable (parsing a .txt takes one extra line), and do support traditional b/w masks.
All that's needed is /images, /masks, an imgs.txt, and a poses.npy (pts appears to exist only to build a bounding box, and isn't present in all of your example sets).
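For concreteness, the layout I have in mind (the comments are my reading of what each piece is for):

```
dataset_root/
  images/      # RGB frames, e.g. extracted from a phone video with ffmpeg
  masks/       # one b/w mask per image, matching filenames
  imgs.txt     # editable list of image filenames, one per line
  poses.npy    # per-image camera poses
```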
Lowering that barrier lets anyone who can run a script make datasets, and that's why instant-ngp took off: anyone could try it with ffmpeg and a script. Forks are already being made to test datasets built with my colmap2poses script; if a simple colmap2NeROIC script is needed to read COLMAP data, I can push a more forgiving LLFF dataloader along with that script (a sketch of the conversion is below).
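Since that offer is concrete, here is a hedged sketch of what such a converter could do. This is not the repo's code: colmap_to_poses is my name, it assumes COLMAP's standard text-model layout, and it writes raw 4x4 camera-to-world matrices rather than LLFF's 3x5-plus-bounds format:

```python
# Hedged sketch of a colmap2NeROIC-style converter: read COLMAP's text model
# and write one camera-to-world pose per image plus an editable file list.
import numpy as np

def qvec2rotmat(q):
    # COLMAP stores rotations as (qw, qx, qy, qz) quaternions.
    w, x, y, z = q
    return np.array([
        [1 - 2*y*y - 2*z*z, 2*x*y - 2*w*z,     2*x*z + 2*w*y],
        [2*x*y + 2*w*z,     1 - 2*x*x - 2*z*z, 2*y*z - 2*w*x],
        [2*x*z - 2*w*y,     2*y*z + 2*w*x,     1 - 2*x*x - 2*y*y],
    ])

def colmap_to_poses(images_txt, out_dir):
    with open(images_txt) as f:
        lines = [l.strip() for l in f if l.strip() and not l.startswith("#")]
    names, poses = [], []
    # images.txt alternates a camera line with a 2D-point line;
    # assuming the standard layout, take every other line.
    for line in lines[::2]:
        elems = line.split()
        R = qvec2rotmat(tuple(map(float, elems[1:5])))
        t = np.array(list(map(float, elems[5:8])))
        # COLMAP poses are world-to-camera; invert to camera-to-world.
        c2w = np.eye(4)
        c2w[:3, :3] = R.T
        c2w[:3, 3] = -R.T @ t
        names.append(elems[9])
        poses.append(c2w)
    np.save(f"{out_dir}/poses.npy", np.stack(poses))
    with open(f"{out_dir}/imgs.txt", "w") as f:
        f.write("\n".join(names) + "\n")
```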