I am training the MAT network from scratch on the full Places dataset (512×512 resolution) and I'm running into this error.

The error was produced by the following command:

python train.py --outdir=/home/work/outputs/MAT/baseline_places_full --gpus=2 --batch=8 --metrics=fid36k5_full --data=/home/work/Dataset/places/full/data_large --data_val=/home/work/Dataset/places/full/val_large --dataloader=datasets.dataset_512.ImageFolderMaskDataset --mirror=True --cond=False --cfg=places512 --aug=noaug --generator=networks.mat.Generator --discriminator=networks.mat.Discriminator --loss=losses.loss.TwoStageLoss --pr=0.1 --pl=False --truncation=0.5 --style_mix=0.5 --ema=10 --lr=0.001

However, the error does not occur when I use only one GPU with the same per-GPU batch size:

python train.py --outdir=/home/work/outputs/MAT/baseline_places_full --gpus=1 --batch=4 --metrics=fid36k5_full --data=/home/work/Dataset/places/full/data_large --data_val=/home/work/Dataset/places/full/val_large --dataloader=datasets.dataset_512.ImageFolderMaskDataset --mirror=True --cond=False --cfg=places512 --aug=noaug --generator=networks.mat.Generator --discriminator=networks.mat.Discriminator --loss=losses.loss.TwoStageLoss --pr=0.1 --pl=False --truncation=0.5 --style_mix=0.5 --ema=10 --lr=0.001

I am using 8 GPUs with 40 GB of memory each.
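For context, the per-GPU batch size works out the same in both runs, assuming MAT's train.py follows the StyleGAN2-ADA convention that --batch is the total batch size split evenly across GPUs (worth confirming against the script itself). A minimal sketch of that split:

```python
# Sketch of the StyleGAN2-ADA-style batch split (assumption: MAT's train.py
# uses the same convention; the function name here is illustrative only).

def per_gpu_batch(total_batch: int, num_gpus: int) -> int:
    """Return the per-GPU batch size, requiring an even split."""
    if total_batch % num_gpus != 0:
        raise ValueError(
            f"--batch={total_batch} must be divisible by --gpus={num_gpus}"
        )
    return total_batch // num_gpus

# Both commands above train with 4 samples per GPU:
print(per_gpu_batch(8, 2))  # --gpus=2 --batch=8 → 4
print(per_gpu_batch(4, 1))  # --gpus=1 --batch=4 → 4
```

So the per-GPU memory load should be identical in both runs, which suggests the failure is specific to the multi-GPU code path rather than to batch size.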