
problem occured in hrnet_backbone.py #25

Closed · daixiaolei623 opened this issue Jul 15, 2020 · 12 comments

@daixiaolei623

Dear Author,

Thank you for your excellent work, but some errors are raised in the backbone code.

Checkpoint:
checkpoints/cityscapes/hrnet_w48_ocr_1_latest.pth


Command (for HRNet-W48):
python -u main.py --configs configs/cityscapes/H_48_D_4.json --drop_last y --backbone hrnet48 --model_name hrnet_w48_ocr --checkpoints_name hrnet_w48_ocr_1 --phase test --gpu 0 --resume ./checkpoints/cityscapes/hrnet_w48_ocr_1_latest.pth --loss_type fs_auxce_loss --test_dir input_images --out_dir output_images

Error messages:

2020-07-15 21:00:10,470 INFO [module_runner.py, 44] BN Type is inplace_abn.
Traceback (most recent call last):
File "main.py", line 214, in
model = Tester(configer)
File "/home/dai/code/semantic_segmentation/9/openseg.pytorch-master/segmentor/tester.py", line 69, in init
self._init_model()
File "/home/dai/code/semantic_segmentation/9/openseg.pytorch-master/segmentor/tester.py", line 72, in _init_model
self.seg_net = self.model_manager.semantic_segmentor()
File "/home/dai/code/semantic_segmentation/9/openseg.pytorch-master/lib/models/model_manager.py", line 81, in semantic_segmentor
model = SEG_MODEL_DICTmodel_name
File "/home/dai/code/semantic_segmentation/9/openseg.pytorch-master/lib/models/nets/hrnet.py", line 105, in init
self.backbone = BackboneSelector(configer).get_backbone()
File "/home/dai/code/semantic_segmentation/9/openseg.pytorch-master/lib/models/backbones/backbone_selector.py", line 34, in get_backbone
model = HRNetBackbone(self.configer)(**params)
File "/home/dai/code/semantic_segmentation/9/openseg.pytorch-master/lib/models/backbones/hrnet/hrnet_backbone.py", line 598, in call
bn_momentum=0.1)
File "/home/dai/code/semantic_segmentation/9/openseg.pytorch-master/lib/models/backbones/hrnet/hrnet_backbone.py", line 307, in init
self.bn1 = ModuleHelper.BatchNorm2d(bn_type=bn_type)(64, momentum=bn_momentum)
TypeError: 'NoneType' object is not callable

Could you please tell me what is wrong? Thank you.

@PkuRainBow
Contributor

It seems that this bug is caused by not specifying the bn_type during testing.

@hsfzxjy Please help to check this issue.

@hsfzxjy
Contributor

hsfzxjy commented Jul 15, 2020

Hi. It may be caused by using the wrong version of PyTorch. This codebase is currently compatible with PyTorch 0.4.1 (well tested) and does not support PyTorch 1.3+.
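
A quick way to double-check which PyTorch build your environment actually loads (a minimal check, not part of the codebase):

    # Print the PyTorch version the interpreter picks up; the codebase above is
    # reported to work with 0.4.1 and to break on 1.3+.
    import torch
    print(torch.__version__)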

@daixiaolei623
Author

Thank you very much, but I have another problem. I find that the Cityscapes dataset does not contain an image or label folder; it is split into multiple subfolders named after different cities. What do the image and label folders referenced in default_loader.py in your code refer to?

Error Message:
Traceback (most recent call last):
File "main.py", line 214, in
model = Tester(configer)
File "/home/dai/code/semantic_segmentation/9/openseg.pytorch-master/segmentor/tester.py", line 69, in init
self._init_model()
File "/home/dai/code/semantic_segmentation/9/openseg.pytorch-master/segmentor/tester.py", line 79, in _init_model
self.test_loader = self.seg_data_loader.get_valloader()
File "/home/dai/code/semantic_segmentation/9/openseg.pytorch-master/lib/datasets/data_loader.py", line 150, in get_valloader
configer=self.configer),
File "/home/dai/code/semantic_segmentation/9/openseg.pytorch-master/lib/datasets/loader/default_loader.py", line 34, in init
self.img_list, self.label_list, self.name_list = self.__list_dirs(root_dir, dataset)
File "/home/dai/code/semantic_segmentation/9/openseg.pytorch-master/lib/datasets/loader/default_loader.py", line 118, in __list_dirs
img_extension = os.listdir(image_dir)[0].split('.')[-1]
FileNotFoundError: [Errno 2] No such file or directory: '/home/dai/code/semantic_segmentation/dataset/cityscape_dataset/val/image'

@hsfzxjy
Contributor

hsfzxjy commented Jul 15, 2020

We use a different directory structure. You may use https://github.com/openseg-group/openseg.pytorch/blob/master/lib/datasets/preprocess/cityscapes/cityscapes_generator.py to reorganize data.

@daixiaolei623
Author

@hsfzxjy
Thank you very much. Could you please write a short guide on how to run your code? There is currently no explanation of how to run it. Thank you very much! I look forward to your instructions!

@hsfzxjy
Contributor

hsfzxjy commented Jul 16, 2020

@daixiaolei623

You can try

python cityscapes_generator.py --save_dir /path/to/new_cityscapes_dir --ori_root_dir /path/to/original_cityscapes_dir

where /path/to/original_cityscapes_dir is the location of the original Cityscapes dataset (i.e. the directory containing leftImg8bit) and /path/to/new_cityscapes_dir is the location where the reorganized data will be written.
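
After the generator finishes, you can sanity-check the reorganized layout with a short script (a minimal sketch; the split and folder names are assumed from the paths mentioned elsewhere in this thread):

    import os

    new_root = "/path/to/new_cityscapes_dir"  # placeholder: replace with your --save_dir

    # Count the files the loader will see under each split/subfolder.
    for split in ("train", "val"):
        for sub in ("image", "label"):
            d = os.path.join(new_root, split, sub)
            count = len(os.listdir(d)) if os.path.isdir(d) else 0
            print("{}: {} files".format(d, count))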

@daixiaolei623
Author

@hsfzxjy
Thank you very much, but when I run the command:
python -u main.py --configs configs/cityscapes/H_48_D_4.json --drop_last y --backbone hrnet48 --model_name hrnet_w48_ocr --checkpoints_name hrnet_w48_ocr_1 --phase test --gpu 0 --resume ./checkpoints/cityscapes/hrnet_w48_ocr_1_latest.pth --loss_type fs_auxce_loss --test_dir /home/dai/code/semantic_segmentation/9/openseg.pytorch-master/cityscapes/val/image --out_dir output_images

I got the error:
File "main.py", line 222, in
model.test()
File "/home/dai/code/semantic_segmentation/9/openseg.pytorch-master/segmentor/tester.py", line 131, in test
for j, data_dict in enumerate(self.test_loader):
File "/home/dai/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 336, in next
return self._process_next_batch(batch)
File "/home/dai/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 357, in _process_next_batch
raise batch.exc_type(batch.exc_msg)
RuntimeError: Traceback (most recent call last):
File "/home/dai/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 106, in _worker_loop
samples = collate_fn([dataset[i] for i in batch_indices])
File "/home/dai/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 106, in
samples = collate_fn([dataset[i] for i in batch_indices])
File "/home/dai/code/semantic_segmentation/9/openseg.pytorch-master/lib/datasets/loader/default_loader.py", line 63, in getitem
img = self.img_transform(img)
File "/home/dai/code/semantic_segmentation/9/openseg.pytorch-master/lib/datasets/tools/transforms.py", line 113, in call
inputs = t(inputs)
File "/home/dai/code/semantic_segmentation/9/openseg.pytorch-master/lib/datasets/tools/transforms.py", line 81, in call
inputs = torch.from_numpy(inputs.transpose(2, 0, 1))
RuntimeError: PyTorch was compiled without NumPy support

My operating environment is:
Anaconda 3, Python 3.7, torch 0.4.1, numpy 1.15.0
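
For reference, this particular failure can be reproduced outside the codebase with a tiny check (a minimal sketch; if the installed torch build lacks NumPy support, the same RuntimeError appears here):

    import numpy as np
    import torch

    # torch.from_numpy is the exact call that fails inside transforms.py; if this
    # line raises "PyTorch was compiled without NumPy support", the installed torch
    # build (not the codebase) is the problem.
    print(torch.from_numpy(np.zeros((2, 2), dtype=np.float32)))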

@daixiaolei623
Author

@hsfzxjy
Hi, I solved the problem above. However, when I run the code it only tests the roughly 500 labelled images in cityscapes/val/image. How can I test my own images, which have no corresponding labels?

My command:
python -u main.py --configs configs/cityscapes/H_48_D_4.json --drop_last y --backbone hrnet48 --model_name hrnet_w48_ocr --checkpoints_name hrnet_w48_ocr_1 --phase test --gpu 0 --resume ./checkpoints/cityscapes/hrnet_w48_ocr_1_latest.pth --loss_type fs_auxce_loss --test_dir /home/dai/code/semantic_segmentation/9/openseg.pytorch-master/cityscapes/val/image --out_dir output_images

@hsfzxjy
Contributor

hsfzxjy commented Jul 18, 2020

@daixiaolei623 Hi. Currently there are no built-in facilities for this, but you can achieve it quickly by emulating the directory structure, i.e. organizing your files like this:

CUSTOM_SET/
    val/
        image/
            img1.png
            img2.png
            ...
        label/
            img1.png
            img2.png
            ...

where CUSTOM_SET/val/image/ contains your own images and CUSTOM_SET/val/label/ contains some fake labels. The fake labels are only used to make the dataloader operational; just make sure each image in image/ has a corresponding file in label/. For example, you can first create CUSTOM_SET/val/image/ and then populate CUSTOM_SET/val/label/ quickly with symlinks. After that, simply point the --test_dir option at your custom dataset.
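
A minimal sketch of that symlink trick (assuming CUSTOM_SET/val/image/ already contains your images; the paths are placeholders):

    import os

    root = "CUSTOM_SET/val"                  # placeholder path
    image_dir = os.path.join(root, "image")  # your own images
    label_dir = os.path.join(root, "label")  # fake labels, only needed to exist
    os.makedirs(label_dir, exist_ok=True)

    # Give every image a same-named "label" by symlinking the image itself;
    # the loader just needs a matching file in label/ for each file in image/.
    for name in os.listdir(image_dir):
        src = os.path.abspath(os.path.join(image_dir, name))
        dst = os.path.join(label_dir, name)
        if not os.path.exists(dst):
            os.symlink(src, dst)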

@daixiaolei623
Author

@hsfzxjy
Thank you very much. However, when I run your code the results do not seem very good, even though your entry on the Cityscapes benchmark is outstanding. I do not understand why my test of your code with SegFix performs worse than my own 2019 method, which ranks only 12th on the Cityscapes benchmark, while your code ranks 3rd.

@PkuRainBow
Contributor

@daixiaolei623 Thanks for trying out our SegFix method. Could you provide more details on how you used SegFix?

In fact, we believe that SegFix is capable of improving almost all of the methods on the Cityscapes leaderboard. @hsfzxjy will help check the problem, depending on the details you can provide.

@vakker

vakker commented Oct 10, 2021

The reason for the original issue (TypeError: 'NoneType' object is not callable) is that the ModuleHelper class has no default branch that raises an error for unsupported PyTorch versions; it simply returns None.
Namely here.
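
For illustration, a hedged sketch of the kind of guard being suggested; this is not the repository's actual ModuleHelper, and the bn_type strings (other than inplace_abn, which appears in the log above) are placeholders:

    import torch.nn as nn

    class ModuleHelperSketch(object):
        """Illustrative only; not the repository's actual ModuleHelper."""

        @staticmethod
        def BatchNorm2d(bn_type=None):
            if bn_type is None or bn_type == 'torchbn':
                # Plain PyTorch batch norm as a safe default.
                return nn.BatchNorm2d
            if bn_type == 'inplace_abn':
                # The branch used by the config in this thread; the import only
                # succeeds on PyTorch versions the extension was built for.
                try:
                    from inplace_abn import InPlaceABNSync  # assumed external package
                except ImportError as exc:
                    raise RuntimeError(
                        "bn_type 'inplace_abn' requires the inplace_abn extension, "
                        "which is unavailable in this environment") from exc
                return InPlaceABNSync
            # The point of the suggested fix: fail loudly instead of falling through
            # and returning None, which later surfaces as
            # "TypeError: 'NoneType' object is not callable".
            raise ValueError("Unsupported bn_type: {!r}".format(bn_type))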
