But when I try to train with --year 2012_aug, I encounter the following error:
Setting up a new session...
Device: cuda
Dataset: voc, Train set: 10582, Val set: 1449
[!] Retrain
Traceback (most recent call last):
File "main.py", line 390, in <module>
main()
File "main.py", line 335, in main
for (images, labels) in train_loader:
File "/home/paul/segmentation/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/home/paul/segmentation/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "/home/paul/segmentation/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
data.reraise()
File "/home/paul/segmentation/lib/python3.6/site-packages/torch/_utils.py", line 425, in reraise
raise self.exc_type(msg)
FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/paul/segmentation/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/home/paul/segmentation/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/paul/segmentation/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/paul/segmentation/DeepLabV3Plus-Pytorch/datasets/voc.py", line 145, in __getitem__
target = Image.open(self.masks[index])
File "/home/paul/segmentation/lib/python3.6/site-packages/PIL/Image.py", line 2912, in open
fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: './datasets/data/VOCdevkit/VOC2012/SegmentationClassAug/2008_002913.png'
My ./datasets/data/VOCdevkit/VOC2012/SegmentationClassAug directory does contain the train_aug.txt file. What am I missing? Please help. Thanks a lot.
P.S. I did check that 2008_002913.png exists under ./datasets/data/VOCdevkit/VOC2012/JPEGImages.
So do I need to copy all the .png files to ./datasets/data/VOCdevkit/VOC2012/SegmentationClassAug, or what else should I do to fix this problem? Thanks for your help.
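For context on why the list file alone is not enough: VOC-style datasets such as this repo's datasets/voc.py read bare image IDs from train_aug.txt and join each ID against both JPEGImages (for .jpg inputs) and SegmentationClassAug (for .png label masks), so the mask files themselves must exist in that directory. A minimal sketch of that path construction (`build_aug_paths` is a hypothetical helper for illustration, not the repo's exact code):

```python
import os

def build_aug_paths(voc_root, image_ids):
    """Sketch of how a VOC-aug dataset maps split-file IDs to files.

    train_aug.txt lists bare IDs like "2008_002913"; the dataset joins
    them against JPEGImages (.jpg inputs) and SegmentationClassAug
    (.png label masks). Both files must exist for each ID.
    """
    images = [os.path.join(voc_root, "JPEGImages", i + ".jpg") for i in image_ids]
    masks = [os.path.join(voc_root, "SegmentationClassAug", i + ".png") for i in image_ids]
    return images, masks

imgs, msks = build_aug_paths("./datasets/data/VOCdevkit/VOC2012", ["2008_002913"])
print(msks[0])
# → ./datasets/data/VOCdevkit/VOC2012/SegmentationClassAug/2008_002913.png
```

This is why copying files out of JPEGImages would not help: the .png files the loader wants are annotation masks (one label index per pixel), which are distributed separately from the input images.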
Edited: after following the instructions to download the labels from the Dropbox link and extracting them to ./datasets/data/VOCdevkit/VOC2012/SegmentationClassAug, everything works as expected.
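The extraction step can be sanity-checked before relaunching training. A minimal shell sketch (paths are taken from the traceback above; adjust VOC_ROOT if your dataset lives elsewhere):

```shell
# Check that the augmented label masks were actually extracted,
# not just the train_aug.txt split list. Adjust VOC_ROOT as needed.
VOC_ROOT=./datasets/data/VOCdevkit/VOC2012
MASK_DIR=$VOC_ROOT/SegmentationClassAug
if [ -f "$MASK_DIR/2008_002913.png" ]; then
  echo "masks look extracted"
else
  echo "masks missing: re-extract SegmentationClassAug.zip into $MASK_DIR"
fi
```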
Hello @ynjiun! I am facing an issue somewhat related to yours. I unzipped the SegmentationClassAug.zip file from the Dropbox link; in the folder I can see the images, but the train_aug.txt file is missing. Could you please share this text file with me?
Hi VainF,
I am able to train with --year 2012 using the following command:
python main.py --model deeplabv3plus_mobilenet --enable_vis --vis_port 28333 --gpu_id 0 --year 2012 --crop_val --lr 0.01 --crop_size 513 --batch_size 14 --output_stride 16 --continue_training