Thanks for your error report and we appreciate it a lot.
Describe the bug
When cfg.model.test_cfg is modified from 'whole' to 'slide' by defining a new test_cfg dict,
cfg.model.test_cfg = dict(mode='slide', crop_size=512, stride=256)
the assertion fails incorrectly during the validation stage.
When I print cfg.model.test_cfg, I see {'mode': 'slide', 'crop_size': 512, 'stride': 256}.
However, during runtime validation I get the error: AttributeError: 'dict' object has no attribute 'mode'
Reproduction
What command or script did you run?
runner.train()
Did you make any modifications on the code or config? Did you understand what you have modified?
Tried to reproduce just by using the original config file and modifying cfg.model.test_cfg
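A minimal sketch of that reproduction path, for reference (the config file and dataset settings below are placeholders, not the exact ones used):

```python
from mmengine.config import Config
from mmengine.runner import Runner

# Placeholder config: any mmsegmentation config adapted to a custom dataset.
cfg = Config.fromfile('configs/pspnet/pspnet_r50-d8_4xb2-40k_cityscapes-512x1024.py')

# Replacing test_cfg with a plain dict, as in this report, triggers the
# failure once the training loop reaches its first validation pass.
cfg.model.test_cfg = dict(mode='slide', crop_size=(512, 512), stride=(256, 256))

runner = Runner.from_cfg(cfg)
runner.train()  # AttributeError: 'dict' object has no attribute 'mode'
```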
What dataset did you use?
Custom dataset
Environment
Google colab:
sys.platform: linux
Python: 3.10.11 (main, Apr 5 2023, 14:15:10) [GCC 9.4.0]
CUDA available: True
numpy_random_seed: 2147483648
GPU 0: Tesla T4
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.8, V11.8.89
GCC: x86_64-linux-gnu-gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
PyTorch: 2.0.0+cu118
PyTorch compiling details: PyTorch built with:
GCC 9.3
C++ Version: 201703
Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
TorchVision: 0.15.1+cu118
OpenCV: 4.7.0
MMEngine: 0.7.3
MMSegmentation: 1.0.0+
Error traceback
AttributeError Traceback (most recent call last)
[<ipython-input-59-2b1910f91f83>](https://localhost:8080/#) in <cell line: 5>()
3
4 runner = Runner.from_cfg(cfg)
----> 5 runner.train()
6
7 #TODO: Add in earlystoppinghook to monitor val iou for tumor class
10 frames
[/usr/local/lib/python3.10/dist-packages/mmengine/runner/runner.py](https://localhost:8080/#) in train(self)
1719 self._maybe_compile('train_step')
1720
-> 1721 model = self.train_loop.run() # type: ignore
1722 self.call_hook('after_run')
1723 return model
[/usr/local/lib/python3.10/dist-packages/mmengine/runner/loops.py](https://localhost:8080/#) in run(self)
282 and self._iter >= self.val_begin
283 and self._iter % self.val_interval == 0):
--> 284 self.runner.val_loop.run()
285
286 self.runner.call_hook('after_train_epoch')
[/usr/local/lib/python3.10/dist-packages/mmengine/runner/loops.py](https://localhost:8080/#) in run(self)
361 self.runner.model.eval()
362 for idx, data_batch in enumerate(self.dataloader):
--> 363 self.run_iter(idx, data_batch)
364
365 # compute metrics
[/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py](https://localhost:8080/#) in decorate_context(*args, **kwargs)
113 def decorate_context(*args, **kwargs):
114 with ctx_factory():
--> 115 return func(*args, **kwargs)
116
117 return decorate_context
[/usr/local/lib/python3.10/dist-packages/mmengine/runner/loops.py](https://localhost:8080/#) in run_iter(self, idx, data_batch)
381 # outputs should be sequence of BaseDataElement
382 with autocast(enabled=self.fp16):
--> 383 outputs = self.runner.model.val_step(data_batch)
384 self.evaluator.process(data_samples=outputs, data_batch=data_batch)
385 self.runner.call_hook(
[/usr/local/lib/python3.10/dist-packages/mmengine/model/base_model/base_model.py](https://localhost:8080/#) in val_step(self, data)
131 """
132 data = self.data_preprocessor(data, False)
--> 133 return self._run_forward(data, mode='predict') # type: ignore
134
135 def test_step(self, data: Union[dict, tuple, list]) -> list:
[/usr/local/lib/python3.10/dist-packages/mmengine/model/base_model/base_model.py](https://localhost:8080/#) in _run_forward(self, data, mode)
338 """
339 if isinstance(data, dict):
--> 340 results = self(**data, mode=mode)
341 elif isinstance(data, (list, tuple)):
342 results = self(*data, mode=mode)
[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs)
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
[/content/mmsegmentation/mmseg/models/segmentors/base.py](https://localhost:8080/#) in forward(self, inputs, data_samples, mode)
94 return self.loss(inputs, data_samples)
95 elif mode == 'predict':
---> 96 return self.predict(inputs, data_samples)
97 elif mode == 'tensor':
98 return self._forward(inputs, data_samples)
[/content/mmsegmentation/mmseg/models/segmentors/encoder_decoder.py](https://localhost:8080/#) in predict(self, inputs, data_samples)
216 ] * inputs.shape[0]
217
--> 218 seg_logits = self.inference(inputs, batch_img_metas)
219
220 return self.postprocess_result(seg_logits, data_samples)
[/content/mmsegmentation/mmseg/models/segmentors/encoder_decoder.py](https://localhost:8080/#) in inference(self, inputs, batch_img_metas)
328 """
329
--> 330 assert self.test_cfg.mode in ['slide', 'whole']
331 ori_shape = batch_img_metas[0]['ori_shape']
332 assert all(_['ori_shape'] == ori_shape for _ in batch_img_metas)
AttributeError: 'dict' object has no attribute 'mode'
Bug fix
In encoder_decoder.py, the assertion assert self.test_cfg.mode in ['slide', 'whole'] is not handling this case correctly.
I have made a PR that changes the assertion logic to better handle the case where the user modifies test_cfg and redefines it as a plain dict. See here: #3012
**Workaround**
Define test_cfg via dot notation instead:
cfg.model.test_cfg.mode = 'slide'
cfg.model.test_cfg.crop_size = (512, 512)
cfg.model.test_cfg.stride = (256, 256)
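This presumably works because the existing test_cfg (a ConfigDict, which supports attribute access) is mutated in place rather than replaced with a plain Python dict. A quick sanity check, assuming mmengine is importable in the same session:

```python
from mmengine.config import ConfigDict

# After applying the dot-notation workaround, test_cfg should still be a
# ConfigDict, so attribute access inside the model keeps working.
assert isinstance(cfg.model.test_cfg, ConfigDict)
print(cfg.model.test_cfg.mode)  # expected: 'slide'
```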
## Motivation
In encoder_decoder.py, the assertion logic does not work correctly if the user
modifies cfg.test_cfg and defines it in dictionary format. See:
#3011
## Modification
Slight change to the assertion behaviour so that the check adapts depending on
whether the received test_cfg object is a plain dict or not.
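As a rough sketch of the idea (an illustrative helper, not the exact diff in open-mmlab#3012):

```python
def resolve_test_mode(test_cfg):
    """Read 'mode' whether test_cfg is a ConfigDict or a plain dict."""
    if isinstance(test_cfg, dict):
        # ConfigDict subclasses dict, so key-style access works for both;
        # a plain dict just lacks attribute access.
        return test_cfg.get('mode', 'whole')
    return test_cfg.mode


assert resolve_test_mode(dict(mode='slide', crop_size=(512, 512), stride=(256, 256))) == 'slide'
assert resolve_test_mode(dict()) == 'whole'
```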
## BC-breaking (Optional)
Unsure; I believe this will not break any downstream tasks, as the
previous logic is still included.
## Use cases (Optional)
n/a
---------
Co-authored-by: xiexinch <xiexinch@outlook.com>