This repository has been archived by the owner on Feb 14, 2024. It is now read-only.
Thanks for your awesome reproduction work! While reading your code, I became a little curious about the number of U-Net layers. According to your code in net.py, you use a 14-layer U-Net in your work:
This seems a little different from the paper, which uses 16 layers in total, with the encoder and decoder each having 8 layers.
I am wondering whether this is your trick for training on 256×256 images, or just an inadvertent error here. Thank you for your time.
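For reference, here is a minimal sketch of the layer counts being compared: an encoder/decoder U-Net with `enc_layers` stride-2 convolution stages per side, so `enc_layers=8` matches the paper's 8 + 8 = 16 layers and `enc_layers=7` gives 14. The class name, channel widths, and layer details are illustrative assumptions, not taken from net.py.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch only: names, channel widths, and layer details are
# assumptions, not the repository's actual net.py.
class TinyUNet(nn.Module):
    def __init__(self, enc_layers=8):
        super().__init__()
        chans = [3, 64, 128, 256, 512, 512, 512, 512, 512][: enc_layers + 1]
        # Encoder: each stage halves the spatial resolution with a stride-2 conv.
        self.enc = nn.ModuleList(
            nn.Conv2d(chans[i], chans[i + 1], 3, stride=2, padding=1)
            for i in range(enc_layers)
        )
        # Decoder: each stage upsamples and fuses the skip feature from the
        # encoder stage at the same resolution.
        self.dec = nn.ModuleList(
            nn.Conv2d(chans[i + 1] + chans[i], chans[i], 3, padding=1)
            for i in reversed(range(enc_layers))
        )

    def forward(self, x):
        skips = []
        for conv in self.enc:
            skips.append(x)          # keep the feature at the current resolution
            x = torch.relu(conv(x))  # downsample by 2
        for conv in self.dec:
            x = F.interpolate(x, scale_factor=2, mode="nearest")
            x = torch.relu(conv(torch.cat([x, skips.pop()], dim=1)))
        return x


# A 256x256 input passes through 8 downsampling and 8 upsampling stages.
out = TinyUNet(enc_layers=8)(torch.randn(1, 3, 256, 256))
print(out.shape)  # torch.Size([1, 3, 256, 256])
```

Counting each stride-2 encoder convolution and each decoder convolution as one layer, the sketch has 2 × `enc_layers` layers, so the 14-vs-16 difference amounts to one stage on each side.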
hiterjoshua changed the title from "misunderstanding about UNet" to "misunderstanding about UNet architecture in your work" on Feb 26, 2020
Thank you for your kind answer! I also want to know whether you used multi-GPU training to speed up the training process. I made some modifications to your code to use nn.DataParallel and Horovod, but the training time is longer than with your version. I am wondering if you tried this before.
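For context, this is a minimal sketch of the kind of nn.DataParallel wrapping being described; the stand-in model and batch size are assumptions, not the repository's actual training code.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in model, not the repository's network.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)
device = "cuda" if torch.cuda.is_available() else "cpu"
if torch.cuda.device_count() > 1:
    # Splits each input batch across visible GPUs, replicates the module,
    # and gathers the outputs on GPU 0 every forward pass.
    model = nn.DataParallel(model)
model = model.to(device)

# The training step itself looks the same as single-GPU code.
x = torch.randn(8, 3, 256, 256, device=device)
out = model(x)
print(out.shape)  # torch.Size([8, 3, 256, 256])
```

Because DataParallel scatters the batch, replicates the module, and gathers the outputs on every forward pass, its per-iteration overhead can outweigh the benefit of extra GPUs for smaller models or batches, which is one common reason a wrapped run ends up slower; torch.nn.parallel.DistributedDataParallel (one process per GPU) generally scales better.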