Training PSPNet #23
The experiments with ResNet101 are trained with 4 GPUs. Our training code actually contains many memory optimizations, so it needs less memory.
@wistone Thank you for the reply. Could you please let me know how you optimize memory? Thank you.
You can refer to the memory optimization in MXNet: https://github.com/dmlc/mxnet-memonger
Did you train a model using MXNet and then convert the MXNet model into a Caffe model? Thank you.
We implemented this method in our own Caffe-based training platform.
Is it publicly available? If not, could you please let me know how you implemented this method in Caffe? Thank you.
No, it is not public. You can read the paper for the theory and refer to the code in MXNet. It is doable.
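For anyone trying to follow the MXNet pointer above: the core idea behind mxnet-memonger is gradient checkpointing, i.e. keeping only every ~sqrt(N)-th activation during the forward pass and recomputing the rest during backprop, trading extra compute for memory. The sketch below illustrates that idea in plain Python; all function names here are illustrative, not the actual memonger or Caffe API.

```python
# Illustrative sketch of gradient checkpointing (the trick behind
# mxnet-memonger): store only every k-th activation, k ~ sqrt(N),
# and recompute intermediate ones from the nearest checkpoint.
import math

def forward_all(x, layers):
    """Plain forward pass: stores every activation (O(N) memory)."""
    acts = [x]
    for f in layers:
        acts.append(f(acts[-1]))
    return acts

def forward_checkpointed(x, layers):
    """Store only every k-th activation, k ~ sqrt(N) (O(sqrt N) memory)."""
    k = max(1, math.isqrt(len(layers)))
    checkpoints = {0: x}
    h = x
    for i, f in enumerate(layers, start=1):
        h = f(h)
        if i % k == 0:
            checkpoints[i] = h
    return checkpoints, k

def recompute(i, checkpoints, layers, k):
    """Recover activation i by replaying from the preceding checkpoint."""
    j = (i // k) * k
    h = checkpoints[j]
    for f in layers[j:i]:
        h = f(h)
    return h

# 16 toy "layers", each adding a constant to its input.
layers = [lambda v, a=a: v + a for a in range(16)]
full = forward_all(0, layers)
ckpt, k = forward_checkpointed(0, layers)

# A recomputed activation matches the fully stored one...
assert recompute(10, ckpt, layers, k) == full[10]
# ...while far fewer activations are held in memory.
assert len(ckpt) < len(full)
```

In a real network the "layers" are convolutions and the recomputation happens inside the backward pass, but the memory trade-off is the same, which is why ResNet101-based PSPNet can fit in less GPU memory than a naive implementation suggests.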
@hszhao @wistone Could you explain what the three accuracy outputs during the training phase mean? There is only one SegAccuracy layer, so I would expect a single accuracy output. Why are there three?
Wait, @LearnerInGithub , where did you get training files?! |
@DonghyunK Can you make your training script public?
Hi,
I am trying to train PSPNet50.
With a Titan X, I can only train it with a batch size of 1; I cannot train with a batch size of 2 even using 2 Titan GPUs.
Could you please let me know how many GPUs you used to train PSPNet, and how many GPUs are needed to train it successfully?
Thank you so much.
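One common workaround when memory limits you to tiny batches is gradient accumulation: sum gradients over k size-1 micro-batches before applying one update, which for plain SGD is mathematically identical to one step with batch size k (Caffe exposes this as `iter_size`). This is a hedged sketch of the idea, not the PSPNet authors' setup; note it does not reproduce the batch statistics of PSPNet's synchronized BatchNorm, which still only sees the micro-batch.

```python
# Gradient accumulation sketch: for SGD on a simple squared loss
# 0.5*(w*x - y)^2, summing per-sample gradients over micro-batches
# of size 1 and then stepping once equals one large-batch update.
def grad(w, x, y):
    # d/dw of 0.5*(w*x - y)^2
    return (w * x - y) * x

def sgd_big_batch(w, batch, lr):
    """One SGD step with the whole batch at once."""
    g = sum(grad(w, x, y) for x, y in batch) / len(batch)
    return w - lr * g

def sgd_accumulated(w, batch, lr):
    """Same step built from size-1 micro-batches (fits in less memory)."""
    acc = 0.0
    for x, y in batch:          # process one sample at a time
        acc += grad(w, x, y)    # accumulate instead of stepping
    return w - lr * acc / len(batch)

batch = [(1.0, 2.0), (2.0, 3.0), (3.0, 5.0)]
w_big = sgd_big_batch(1.0, batch, lr=0.1)
w_acc = sgd_accumulated(1.0, batch, lr=0.1)
assert abs(w_big - w_acc) < 1e-12  # identical update
```

So if 4 GPUs are out of reach, accumulation can recover the effective batch size for the SGD update itself, though segmentation results that depend on large-batch BN statistics may still differ.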