
AttributeError: 'dict' object has no attribute 'mode', test_cfg #3011

Closed
LHamnett opened this issue May 14, 2023 · 0 comments
LHamnett commented May 14, 2023

Thanks for your error report and we appreciate it a lot.

Describe the bug
When cfg.model.test_cfg is changed from 'whole' to 'slide' mode by assigning a new test_cfg dict:

```python
cfg.model.test_cfg = dict(mode='slide', crop_size=512, stride=256)
```

the assertion in the validation stage fails incorrectly.
Printing cfg.model.test_cfg shows {'mode': 'slide', 'crop_size': 512, 'stride': 256}.
However, during runtime validation I get the error AttributeError: 'dict' object has no attribute 'mode'.
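The failure mode can be illustrated without mmengine: a plain dict only supports key access, so `self.test_cfg.mode` raises AttributeError, while a config object loaded from file also supports attribute access. A minimal sketch, where `AttrDict` is a hypothetical stand-in for mmengine's ConfigDict:

```python
# A plain dict supports only key access, so attribute access fails --
# this is effectively what the assertion in encoder_decoder.py does.
test_cfg = dict(mode='slide', crop_size=512, stride=256)

try:
    test_cfg.mode
except AttributeError as e:
    print(e)  # 'dict' object has no attribute 'mode'

class AttrDict(dict):
    """Tiny dict subclass with attribute access, mimicking a config object."""
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

cfg = AttrDict(mode='slide', crop_size=512, stride=256)
print(cfg.mode)  # slide
```

This is why the dot-notation workaround below succeeds: it mutates the existing attribute-style config object instead of replacing it with a plain dict.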

Reproduction

  1. What command or script did you run?

```python
runner.train()
```

  2. Did you make any modifications on the code or config? Did you understand what you have modified?
    I reproduced the issue using the original config file, modifying only cfg.model.test_cfg.

  3. What dataset did you use?
    A custom dataset.

Environment
Google Colab:
sys.platform: linux
Python: 3.10.11 (main, Apr 5 2023, 14:15:10) [GCC 9.4.0]
CUDA available: True
numpy_random_seed: 2147483648
GPU 0: Tesla T4
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.8, V11.8.89
GCC: x86_64-linux-gnu-gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
PyTorch: 2.0.0+cu118
PyTorch compiling details: PyTorch built with:

  • GCC 9.3
  • C++ Version: 201703
  • Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
  • Intel(R) MKL-DNN v2.7.3 (Git Hash 6dbeffbae1f23cbbeae17adb7b5b13f1f37c080e)
  • OpenMP 201511 (a.k.a. OpenMP 4.5)
  • LAPACK is enabled (usually provided by MKL)
  • NNPACK is enabled
  • CPU capability usage: AVX2
  • CUDA Runtime 11.8
  • NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
  • CuDNN 8.7
  • Magma 2.6.1
  • Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.8, CUDNN_VERSION=8.7.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.0.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,

TorchVision: 0.15.1+cu118
OpenCV: 4.7.0
MMEngine: 0.7.3
MMSegmentation: 1.0.0+

Error traceback

```
AttributeError                            Traceback (most recent call last)
<ipython-input-59-2b1910f91f83> in <cell line: 5>()
      3 
      4 runner = Runner.from_cfg(cfg)
----> 5 runner.train()
      6 
      7 #TODO: Add in earlystoppinghook to monitor val iou for tumor class

10 frames
/usr/local/lib/python3.10/dist-packages/mmengine/runner/runner.py in train(self)
   1719         self._maybe_compile('train_step')
   1720 
-> 1721         model = self.train_loop.run()  # type: ignore
   1722         self.call_hook('after_run')
   1723         return model

/usr/local/lib/python3.10/dist-packages/mmengine/runner/loops.py in run(self)
    282                     and self._iter >= self.val_begin
    283                     and self._iter % self.val_interval == 0):
--> 284                 self.runner.val_loop.run()
    285 
    286         self.runner.call_hook('after_train_epoch')

/usr/local/lib/python3.10/dist-packages/mmengine/runner/loops.py in run(self)
    361         self.runner.model.eval()
    362         for idx, data_batch in enumerate(self.dataloader):
--> 363             self.run_iter(idx, data_batch)
    364 
    365         # compute metrics

/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py in decorate_context(*args, **kwargs)
    113     def decorate_context(*args, **kwargs):
    114         with ctx_factory():
--> 115             return func(*args, **kwargs)
    116 
    117     return decorate_context

/usr/local/lib/python3.10/dist-packages/mmengine/runner/loops.py in run_iter(self, idx, data_batch)
    381         # outputs should be sequence of BaseDataElement
    382         with autocast(enabled=self.fp16):
--> 383             outputs = self.runner.model.val_step(data_batch)
    384         self.evaluator.process(data_samples=outputs, data_batch=data_batch)
    385         self.runner.call_hook(

/usr/local/lib/python3.10/dist-packages/mmengine/model/base_model/base_model.py in val_step(self, data)
    131         """
    132         data = self.data_preprocessor(data, False)
--> 133         return self._run_forward(data, mode='predict')  # type: ignore
    134 
    135     def test_step(self, data: Union[dict, tuple, list]) -> list:

/usr/local/lib/python3.10/dist-packages/mmengine/model/base_model/base_model.py in _run_forward(self, data, mode)
    338         """
    339         if isinstance(data, dict):
--> 340             results = self(**data, mode=mode)
    341         elif isinstance(data, (list, tuple)):
    342             results = self(*data, mode=mode)

/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py in _call_impl(self, *args, **kwargs)
   1499                 or _global_backward_pre_hooks or _global_backward_hooks
   1500                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501             return forward_call(*args, **kwargs)
   1502         # Do not call functions when jit is used
   1503         full_backward_hooks, non_full_backward_hooks = [], []

/content/mmsegmentation/mmseg/models/segmentors/base.py in forward(self, inputs, data_samples, mode)
     94             return self.loss(inputs, data_samples)
     95         elif mode == 'predict':
---> 96             return self.predict(inputs, data_samples)
     97         elif mode == 'tensor':
     98             return self._forward(inputs, data_samples)

/content/mmsegmentation/mmseg/models/segmentors/encoder_decoder.py in predict(self, inputs, data_samples)
    216             ] * inputs.shape[0]
    217 
--> 218         seg_logits = self.inference(inputs, batch_img_metas)
    219 
    220         return self.postprocess_result(seg_logits, data_samples)

/content/mmsegmentation/mmseg/models/segmentors/encoder_decoder.py in inference(self, inputs, batch_img_metas)
    328         """
    329 
--> 330         assert self.test_cfg.mode in ['slide', 'whole']
    331         ori_shape = batch_img_metas[0]['ori_shape']
    332         assert all(_['ori_shape'] == ori_shape for _ in batch_img_metas)

AttributeError: 'dict' object has no attribute 'mode'
```

Bug fix
In encoder_decoder.py, the assertion `assert self.test_cfg.mode in ['slide', 'whole']` does not handle the case where test_cfg is a plain dict.

I have made a PR that changes the assertion logic to handle the case where the user redefines test_cfg in dict format. See here:
#3012
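The approach described in the PR branches on the type of the received test_cfg. A hedged sketch of that idea (not the exact merged code; `check_test_cfg` is a hypothetical helper):

```python
# Sketch of a type-tolerant check: accept test_cfg either as a plain dict
# (key access) or as an attribute-style config object, falling back to the
# default 'whole' mode when no mode is specified.
def check_test_cfg(test_cfg):
    if isinstance(test_cfg, dict):
        mode = test_cfg.get('mode', 'whole')
    else:
        mode = getattr(test_cfg, 'mode', 'whole')
    assert mode in ['slide', 'whole'], f'unexpected inference mode: {mode}'
    return mode

print(check_test_cfg(dict(mode='slide', crop_size=512, stride=256)))  # slide
```

Because mmengine's config objects are themselves dict subclasses, the isinstance branch covers both the broken case and the normal case, so the previous behaviour is preserved.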

**Workaround**
Define test_cfg via dot notation instead of assigning a new dict:

```python
cfg.model.test_cfg.mode = 'slide'
cfg.model.test_cfg.crop_size = (512, 512)
cfg.model.test_cfg.stride = (256, 256)
```

xiexinch added a commit that referenced this issue Jun 19, 2023

## Motivation

In encoder_decoder.py, the assertion logic does not work correctly if the user
modifies cfg.test_cfg and defines it in dictionary format. See:
#3011

## Modification

Slight change to the assertion behaviour: the check now depends on whether the
received test_cfg object is a dict or not.

## BC-breaking (Optional)

Unsure. I believe this will not break any downstream tasks, as the
previous logic is still included.

## Use cases (Optional)

n/a

---------

Co-authored-by: xiexinch <xiexinch@outlook.com>
nahidnazifi87 pushed a commit to nahidnazifi87/mmsegmentation_playground that referenced this issue Apr 5, 2024