This repository has been archived by the owner on Feb 14, 2024. It is now read-only.

Misunderstanding about Unet architecture in your work #43

Closed
hiterjoshua opened this issue Feb 26, 2020 · 3 comments

Comments

@hiterjoshua

hiterjoshua commented Feb 26, 2020

Thanks for your awesome reproduction work! While reading your code, I became curious about the number of U-Net layers. According to your code in net.py, you use a 14-layer U-Net:

layer_size = 7
self.layer_size = layer_size
self.enc_1 = PCBActiv(input_channels, 64, bn=False, sample='down-7')
self.enc_2 = PCBActiv(64, 128, sample='down-5')
self.enc_3 = PCBActiv(128, 256, sample='down-5')
self.enc_4 = PCBActiv(256, 512, sample='down-3')
for i in range(4, self.layer_size):
    name = 'enc_{:d}'.format(i + 1)
    setattr(self, name, PCBActiv(512, 512, sample='down-3'))

This seems slightly different from the paper, which uses 16 layers in total: 8 in the encoder and 8 in the decoder.
I am wondering whether this is a deliberate trick for training on 256*256 images, or just an inadvertent error. Thank you for your time.
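To make the comparison concrete, here is a minimal sketch (not from net.py, and assuming the decoder mirrors the encoder, i.e. dec_1..dec_{layer_size}) relating layer_size to the total depth:

# Minimal sketch: relating layer_size to the total U-Net depth,
# assuming a mirrored decoder (dec_1..dec_{layer_size}).
def total_unet_depth(layer_size):
    encoder_layers = layer_size
    decoder_layers = layer_size  # mirrored decoder assumed
    return encoder_layers + decoder_layers

print(total_unet_depth(7))  # 14, as in the current code
print(total_unet_depth(8))  # 16, as in the paper (8 encoder + 8 decoder)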

@naoto0804
Owner

It's just an inadvertent error; thank you for pointing it out.

@hiterjoshua
Author

Thank you for your kind answer! I also want to know whether you used multi-GPU training to speed up the training process. I modified your code to make use of nn.DataParallel and Horovod, but the training time is longer than with your original version. I am wondering if you have tried this before.
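For reference, a minimal sketch of the nn.DataParallel wrapping I mean (assuming PConvUNet from the repo's net.py; this is not your training script):

import torch
import torch.nn as nn
from net import PConvUNet  # assuming net.py exposes PConvUNet

# Minimal sketch: wrap the model so each forward pass splits
# the batch across all visible GPUs.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = PConvUNet().to(device)
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)

# Caveat: nn.DataParallel replicates the model and gathers outputs on GPU 0
# every iteration, so with small batches it can be slower than a single GPU;
# torch.nn.parallel.DistributedDataParallel usually scales better.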

@naoto0804
Owner

I haven't tried it, sorry.
