Describe the bug
While conditioning of the generator constructs the net and behaves as expected, conditioning of the discriminator seems to ignore the requested depth of the mapping network.
To Reproduce
Steps to reproduce the behavior:
Create any conditional model with a --map-depth value other than 8 (the exact command I used is listed under Additional context below).
Expected behavior
I expect the discriminator to have the same number of mapping layers as the generator. In the example above, --map-depth=2 was used, yet the discriminator ends up with 8 mapping layers at its end for some reason. A quick way to check this is sketched below.
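To make the mismatch concrete, here is a minimal inspection sketch (untested as written; it assumes the official NVlabs/stylegan3 repo is importable, and all dimensions are arbitrary toy values):

```python
# Minimal sketch: compare mapping depths of G and D under conditioning.
# Assumes the official NVlabs/stylegan3 repo layout; dimensions are arbitrary.
from training.networks_stylegan2 import Generator, Discriminator

G = Generator(z_dim=512, c_dim=10, w_dim=512, img_resolution=64, img_channels=3,
              mapping_kwargs={'num_layers': 2})  # what --map-depth=2 reaches
D = Discriminator(c_dim=10, img_resolution=64, img_channels=3)  # no depth passed

print(G.mapping.num_layers)  # 2, as requested
print(D.mapping.num_layers)  # 8 -- MappingNetwork's default kicks in
```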
It doesn't seem to break the model (not sure): it trains and progresses. I didn't do a long run though, just a couple of kimg, so I can't say anything in-depth about it.
The Alias-Free GAN (StyleGAN 3) paper implies that 8 mapping layers are unnecessary and that 2 layers are enough, so I decided to use the same strategy with the SG2 network. By the way, the same thing happens with --cfg=stylegan3-t/r, even though those configs set 2 layers out of the box in the code.
At first I thought there was some hardcoded constant of 8 layers left in the code by mistake, but I didn't find anything to prove this point. Moreover, I find this part of the code very convoluted and hard to understand (at least for me), which gave me more questions than answers. I'm talking about the networks_stylegan2.py / networks_stylegan3.py files, since that's where I was investigating, and I'm pretty sure that's the right place to look.
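For what it's worth, here is my best guess at the mechanism, as a hedged reading of the code rather than something I have verified in a training run: MappingNetwork in networks_stylegan2.py defaults to num_layers=8, train.py forwards --map-depth only into the generator's mapping_kwargs, and the discriminator's conditioning MappingNetwork is built from an empty D_kwargs.mapping_kwargs, so it silently falls back to 8. If that reading is right, a one-line workaround in train.py would be:

```python
# Untested workaround sketch for train.py, assuming the stock config dicts.
# train.py already sets the generator's depth along these lines:
#   c.G_kwargs.mapping_kwargs.num_layers = \
#       (8 if opts.cfg == 'stylegan2' else 2) if opts.map_depth is None else opts.map_depth
# c.D_kwargs carries an (empty) mapping_kwargs EasyDict, so the same value
# could simply be forwarded to the discriminator's conditioning network:
c.D_kwargs.mapping_kwargs.num_layers = c.G_kwargs.mapping_kwargs.num_layers
```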
Additional context
This is just a toy model I was using for experiments, so don't try to understand why it has just 64 filters in all channels, and other stuff ;)
Command used to initialize the model:
python train.py --cfg=stylegan2 --gpus=1 --batch=16 --outdir= --data= --cmax=64 --metrics=none --gamma=2 --mirror=1 --fp32=1 --cond=1 --map-depth=2
A GTX 10x0-series GPU was used, so FP32 mode was turned on: mixed precision gave no speedup on this series, only a slowdown.