I'm trying to train the model on a custom 64x64 image dataset, using the default params:

`python Styleformer/train.py --outdir=./training-runs --data=./resized --gpus=1 --num_layers=1,3,3`

but I face this error:

`assert len(self.block_resolutions) == len(self.num_block)` → `AssertionError`

I've printed both values, and their lengths indeed differ:

`block_resolutions = [8, 16, 32, 64]`
`num_block = [1, 3, 3]`

Thank you in advance.
As described in the paper, num_layers (i.e., "Layers") is the number of encoder blocks at each resolution. Since the image resolution is 64x64 and the model starts from a learned 8x8 constant, four num_layers entries are needed (one per resolution: 8, 16, 32, 64).
For example, --num_layers=1,2,2,2 will fix the problem. This means one encoder block at 8x8, two at 16x16, two at 32x32, and two at 64x64.
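For reference, a minimal sketch of why the assertion fires (this is not the actual Styleformer code; the helper name `block_resolutions` here is just illustrative of the generator's resolution schedule):

```python
import math

def block_resolutions(img_resolution, start_res=8):
    # Resolutions the generator passes through, assuming it starts
    # from a learned start_res x start_res constant and doubles up
    # to img_resolution (StyleGAN2-style progression).
    return [2 ** i for i in range(int(math.log2(start_res)),
                                  int(math.log2(img_resolution)) + 1)]

# A 64x64 dataset yields four resolutions:
print(block_resolutions(64))  # [8, 16, 32, 64]

# --num_layers must supply one entry per resolution, so the
# three-entry default 1,3,3 trips the length check, while a
# four-entry value like 1,2,2,2 passes it:
num_layers = [1, 2, 2, 2]
assert len(block_resolutions(64)) == len(num_layers)
```

So for any training resolution, pass one num_layers entry per doubling step from 8 up to the image size.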
Hello, thank you for your work. Are there any requirements on the type of dataset used with this model when training on one's own data? For example, can it be used on objects such as tea cups and keys?
Thanks.