How to reproduce the model deeplabv3_xception.h5? #1
Comments
Reproducing the xception results will be difficult, and training requires a graphics card with a lot of memory. As I remember, xception as the backbone reaches about 80% in the paper. My pretrained weights come from https://github.com/bonlime/keras-deeplab-v3-plus; its feature extraction works well.
@bubbliiiing do you mean you used the cityscapes pretrained model https://github.com/bonlime/keras-deeplab-v3-plus/releases/download/1.2/deeplabv3_xception_tf_dim_ordering_tf_kernels_cityscapes.h5 to finetune on Pascal VOC? Let me try it. I'm not aiming for SOTA, just to reproduce how you generated the VOC model myself.
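For anyone following along: finetuning from the cityscapes checkpoint relies on Keras's name-based weight loading (`load_weights(..., by_name=True, skip_mismatch=True)`), which copies weights for layers whose names and shapes match (the xception backbone) and skips mismatched ones (the head: 19 cityscapes classes vs. 21 VOC classes). Here is a minimal sketch of that mechanism with a toy stand-in for deeplabv3; all layer names and sizes below are hypothetical, not the real model's.

```python
# Toy illustration of by_name + skip_mismatch weight transfer:
# matching layers ("backbone_dense") get the pretrained weights,
# the mismatched head (19 vs. 21 classes) keeps its fresh init.
import numpy as np
from tensorflow import keras


def build_toy_deeplab(num_classes):
    # Stand-in for the deeplabv3 architecture; only the head differs.
    inputs = keras.Input(shape=(8,))
    x = keras.layers.Dense(4, name="backbone_dense")(inputs)
    outputs = keras.layers.Dense(num_classes, name="head")(x)
    return keras.Model(inputs, outputs)


def transfer_by_name(src, dst):
    """Copy weights layer-by-layer where names and shapes match."""
    for layer in dst.layers:
        if not layer.weights:
            continue
        try:
            src_layer = src.get_layer(layer.name)
        except ValueError:
            continue  # no layer of that name in the source model
        src_weights = src_layer.get_weights()
        dst_weights = layer.get_weights()
        if len(src_weights) == len(dst_weights) and all(
            a.shape == b.shape for a, b in zip(src_weights, dst_weights)
        ):
            layer.set_weights(src_weights)
        # shape mismatch -> layer skipped, like skip_mismatch=True


cityscapes_model = build_toy_deeplab(num_classes=19)  # cityscapes: 19 classes
voc_model = build_toy_deeplab(num_classes=21)         # Pascal VOC: 21 classes
transfer_by_name(cityscapes_model, voc_model)
```

In the real setup you would instead call `voc_model.load_weights("deeplabv3_..._cityscapes.h5", by_name=True, skip_mismatch=True)` on the downloaded checkpoint.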
Thanks! I can get reasonable accuracy with the cityscapes pretrained model.

Closing now.
Thanks!
Hi, I'm trying to train deeplabv3 with the xception backbone on the VOC + SBD dataset. You provided the VOC pretrained model deeplabv3_xception.h5, but if I want to reproduce your training result, I shouldn't use it as the pretrained model, right? So I commented out the line in train.py that loads the pretrained weights. But after 100 epochs my model's accuracy is poor compared with yours. Did I miss something? Do I need something like an ImageNet or COCO pretrained model? Thanks!