
Questions about Multi-type Joint Training in 2D-3D #22

Closed
TowerTowerLee opened this issue Aug 27, 2019 · 4 comments
@TowerTowerLee

Hi, I have some questions about joint training. When training 2D-3D on a single-class chair dataset, the images and voxels each have their own default directory. For multi-class joint training, do we need to list the directories of all six categories, or do we drop the labels and put the files of all six categories into a single directory? In other words, which of the two setups below is the correct configuration for joint training?

Option one:

parser.add_argument('-d','--data', default=['data/voxels/sofa','data/voxels/table','data/voxels/boat','data/voxels/car','data/voxels/plane','data/voxels/chair'], help ='The location for the object voxel models.' )
parser.add_argument('-i','--images', default=['data/overlays/sofa','data/overlays/table','data/overlays/boat','data/overlays/car','data/overlays/plane','data/overlays/chair'], help ='The location for the images.' )

Option two:

parser.add_argument('-d','--data', default=['data/voxels/joint',], help ='The location for the object voxel models.' )
parser.add_argument('-i','--images', default=['data/overlays/joint',], help ='The location for the images.' )

I don't know which one is usable, or whether my understanding of joint training is off. I hope you can help me. Thank you.
@EdwardSmith1884
Owner

EdwardSmith1884 commented Aug 27, 2019

I guess I didn't set this part up well for joint training. Neither of these would work, though, because that parameter is supposed to be a string, not a list. I would suggest either using -d 'data/voxels/*/' and -i 'data/overlays/*/', or using a regex to select the classes you want. I don't know what 'joint' refers to here; maybe you altered the code?
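
For reference, a minimal sketch of what this glob-based setup could look like, assuming the training script loads every file under each matched directory. make_file_list is a hypothetical helper for illustration, not a function from this repository:

import argparse
import glob
import os

parser = argparse.ArgumentParser()
# The pattern is a single string; glob expands it to every class folder.
parser.add_argument('-d', '--data', default='data/voxels/*/', help='The location for the object voxel models.')
parser.add_argument('-i', '--images', default='data/overlays/*/', help='The location for the images.')
args = parser.parse_args()

def make_file_list(pattern):
    # Collect every file from every directory matched by the pattern
    # into one flat list, so all classes are trained on jointly.
    files = []
    for class_dir in glob.glob(pattern):
        files += [os.path.join(class_dir, f) for f in sorted(os.listdir(class_dir))]
    return files

voxel_files = make_file_list(args.data)      # all six classes pooled
image_files = make_file_list(args.images)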

@TowerTowerLee
Author


Hi, I've changed the list to a string. The 'joint' folder means I put the voxel files of all six classes, downloaded and converted from the ShapeNet dataset, together in one directory. I saw the multi-class joint-training accuracy reported in your paper, and I was hoping to use your code for multi-class training.
My main problem is that I don't understand how training on six classes of data differs from training on a single class like 'chair' with 20-VAE-3D-IWGAN.py.
Should I simply mix the six classes of voxel files together in the voxels folder and the six kinds of image files together in overlays?
I'm not sure this is right for multi-class training: can the six classes of data be mixed together directly, without labels?
In addition, we need to generate voxel models at 20×20×20 resolution for the 2D-3D setup, and I can't understand the reason. I hope you can help me solve these problems. Thank you.

@EdwardSmith1884
Owner

Yes, you just mix everything together, but you should split by class when evaluating. I generate at 20×20×20 resolution because that is the resolution used for evaluation in the 3D GAN paper.
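
To make the mix-then-split idea concrete, here is a minimal sketch under the assumption that each file's class is simply the name of the folder it came from; none of these names are the repository's API:

import glob
import os
import random

# Tag each voxel file with its class, taken from the parent folder name.
samples = []
for class_dir in glob.glob('data/voxels/*/'):
    cls = os.path.basename(os.path.dirname(class_dir))
    for f in os.listdir(class_dir):
        samples.append((os.path.join(class_dir, f), cls))

random.shuffle(samples)  # joint training sees one mixed, unlabeled pool

# At evaluation time, regroup by the stored tag to report per-class results.
by_class = {}
for path, cls in samples:
    by_class.setdefault(cls, []).append(path)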

@TowerTowerLee
Author


Thank you for your answer. It helps me a lot.
