
How to reproduce the model deeplabv3_xception.h5? #1

Closed
zhimengf opened this issue Sep 24, 2021 · 6 comments

@zhimengf

Hi, I'm trying to train DeepLabV3+ with the Xception backbone on the VOC + SBD dataset. You provided the VOC pretrained model deeplabv3_xception.h5, but if I want to reproduce your training result, I should not use it as the pretrained model, right? So I commented out the line in train.py that loads the pretrained weights. However, after 100 epochs my model's accuracy is poor compared with yours. Did I miss something? Do I need something like an ImageNet or COCO pretrained model? Thanks!

@zhimengf
Author

@bubbliiiing

@bubbliiiing
Owner

It will be difficult to reproduce the Xception results, and training should require a graphics card with a lot of memory. As I recall, Xception as the backbone reaches about 80% in the paper. My pretrained weights come from https://github.com/bonlime/keras-deeplab-v3-plus; its feature extraction works well.
If you want to start from the backbone itself, you may want to study https://github.com/keras-team/keras-applications/blob/master/keras_applications/xception.py

@zhimengf
Author

@bubbliiiing do you mean you used the cityscapes pretrained model https://github.com/bonlime/keras-deeplab-v3-plus/releases/download/1.2/deeplabv3_xception_tf_dim_ordering_tf_kernels_cityscapes.h5 and fine-tuned it on Pascal VOC? Let me try it. I'm not trying to reach SOTA, just to reproduce what you did to generate the VOC model myself.
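For reference, transferring the cityscapes checkpoint to VOC means the backbone and ASPP weights carry over, but the final classifier does not (cityscapes predicts 19 classes, VOC 21 including background), so that layer has to be reinitialized. Below is a minimal sketch of the name-and-shape matching logic that Keras's load_weights(by_name=True, skip_mismatch=True) applies; the layer names and shapes are hypothetical, not the actual DeepLabV3+ graph:

```python
# Sketch: decide which checkpoint layers can be copied into a new model
# by matching layer name and weight shape, mimicking the behavior of
# Keras's load_weights(by_name=True, skip_mismatch=True).

def select_transferable(model_shapes, ckpt_shapes):
    """Return (loadable, skipped) layer names.

    model_shapes / ckpt_shapes map layer name -> weight shape tuple.
    A layer transfers only if it exists in both and the shapes agree.
    """
    loadable = [name for name, shape in ckpt_shapes.items()
                if model_shapes.get(name) == shape]
    skipped = [name for name in ckpt_shapes if name not in loadable]
    return loadable, skipped


if __name__ == "__main__":
    # Hypothetical shapes: backbone/ASPP convs match, the final 1x1
    # classifier does not (19 cityscapes classes vs 21 VOC classes).
    ckpt = {"entry_conv": (3, 3, 3, 32),
            "aspp_conv": (1, 1, 2048, 256),
            "logits": (1, 1, 256, 19)}
    voc_model = {"entry_conv": (3, 3, 3, 32),
                 "aspp_conv": (1, 1, 2048, 256),
                 "logits": (1, 1, 256, 21)}
    loadable, skipped = select_transferable(voc_model, ckpt)
    print("load:", loadable)   # backbone and ASPP layers transfer
    print("skip:", skipped)    # classifier is retrained for 21 classes
```

The skipped classifier layer is exactly why fine-tuning on VOC is still required even with a strong cityscapes checkpoint.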

@zhimengf
Author

Thanks! I can get reasonable accuracy with the cityscapes pretrained model:

===>background:	mIou-92.38; mPA-95.9
===>aeroplane:	mIou-85.0; mPA-88.47
===>bicycle:	mIou-40.77; mPA-91.49
===>bird:	mIou-82.09; mPA-94.45
===>boat:	mIou-58.61; mPA-86.91
===>bottle:	mIou-71.73; mPA-86.56
===>bus:	mIou-89.62; mPA-91.48
===>car:	mIou-87.12; mPA-93.32
===>cat:	mIou-89.15; mPA-94.4
===>chair:	mIou-34.24; mPA-60.78
===>cow:	mIou-60.49; mPA-64.41
===>diningtable:	mIou-51.83; mPA-59.9
===>dog:	mIou-87.77; mPA-93.22
===>horse:	mIou-81.91; mPA-88.78
===>motorbike:	mIou-81.64; mPA-91.21
===>person:	mIou-82.8; mPA-88.29
===>pottedplant:	mIou-41.24; mPA-78.89
===>sheep:	mIou-84.88; mPA-95.24
===>sofa:	mIou-43.24; mPA-56.91
===>train:	mIou-77.98; mPA-86.84
===>tvmonitor:	mIou-66.68; mPA-81.11
===> mIoU: 71.01; mPA: 84.22
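The per-class numbers above are derived from a pixel-level confusion matrix over the validation set. As a sanity check, mIoU and mPA can be recomputed from such a matrix with the sketch below (this is just the metric math, not the repository's evaluation script; the 2-class matrix is made up):

```python
# Sketch: per-class IoU and pixel accuracy from a pixel confusion matrix.
# conf[i][j] counts pixels whose true class is i and predicted class is j.

def miou_mpa(conf):
    ious, pas = [], []
    for i in range(len(conf)):
        tp = conf[i][i]
        fn = sum(conf[i]) - tp                 # pixels of class i missed
        fp = sum(row[i] for row in conf) - tp  # pixels wrongly labeled i
        union = tp + fp + fn
        if union:                              # skip classes absent from both
            ious.append(tp / union)
        if sum(conf[i]):
            pas.append(tp / sum(conf[i]))      # per-class pixel accuracy
    return sum(ious) / len(ious), sum(pas) / len(pas)


if __name__ == "__main__":
    # Toy 2-class example (numbers are made up):
    conf = [[3, 1],
            [0, 4]]
    miou, mpa = miou_mpa(conf)
    print(f"mIoU: {miou:.4f}, mPA: {mpa:.4f}")  # mIoU: 0.7750, mPA: 0.8750
```

Note that mPA averages recall per class, which is why low-IoU classes like chair and sofa can still show relatively high mPA in the table above.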

@zhimengf
Author

Closing now.

@bubbliiiing
Owner

Thank you!
