AttributeError: 'list' object has no attribute 'values' #928

Closed
eyildiz-ugoe opened this issue Sep 13, 2018 · 8 comments
@eyildiz-ugoe

As described in the balloon example, I used the VIA tool to annotate my images. I used circles, polygons, etc., and gave each class its respective name.


Now, when I try to load the dataset again using the same code, I get the following error:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-9-048a8da58a4c> in <module>()
      1 # Load validation dataset
      2 dataset = components.ComponentsDataset()
----> 3 dataset.load_components(COMPONENTS_DIR, "val")
      4 
      5 # Must call before using the dataset

~/workspace/Mask_RCNN/samples/components/components.py in load_components(self, dataset_dir, subset)
    122             # the outline of each object instance. There are stored in the
    123             # shape_attributes (see json format above)
--> 124             polygons = [r['shape_attributes'] for r in a['regions'].values()]
    125 
    126             # load_mask() needs the image size to convert polygons to masks.

AttributeError: 'list' object has no attribute 'values'

What could be the problem? Has anyone tried loading the annotated data which contains circles? (not only polygons)

@benjamin-taheri

The samples only accept polygons; that's why you get this error. You can either re-annotate the circles or use something like this to convert the circles to polygons in your JSON file:


import json
from pprint import pprint

with open('via_region_data(val).json') as f:
    data = json.load(f)

    for attr, val in data.items():
        for attr2, val2 in val.items():
            if attr2 == 'regions':
                for attr3, val3 in val2.items():
                    if val3['shape_attributes']['name'] == 'circle':
                        cx = val3['shape_attributes']['cx']
                        cy = val3['shape_attributes']['cy']
                        r = val3['shape_attributes']['r']
                        all_points_x = [cx, cx - 1.5 * r, cx, cx + 1.5 * r, cx]
                        all_points_y = [cy - 1.5 * r, cy, cy + 1.5 * r, cy, cy - 1.5 * r]
                        val3['shape_attributes']['cx'] = all_points_x
                        val3['shape_attributes']['cy'] = all_points_y

                        val3['shape_attributes']['all_points_x'] = val3['shape_attributes'].pop('cx')
                        val3['shape_attributes']['all_points_y'] = val3['shape_attributes'].pop('cy')
                        val3['shape_attributes']['name'] = 'polygon'


pprint(data)

with open('via_region_data-val.json', 'w') as f:
    json.dump(data, f)
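The transform above turns each circle into a five-point diamond (the first vertex is repeated to close the path). A quick sanity check of that conversion on a single shape dict (a sketch, independent of the file handling above; `circle_to_diamond` is a hypothetical helper name):

```python
# Sketch: the same circle -> diamond transform as in the script above,
# applied to one shape_attributes dict for a quick sanity check.
def circle_to_diamond(shape):
    cx, cy, r = shape['cx'], shape['cy'], shape['r']
    return {
        'name': 'polygon',
        'all_points_x': [cx, cx - 1.5 * r, cx, cx + 1.5 * r, cx],
        'all_points_y': [cy - 1.5 * r, cy, cy + 1.5 * r, cy, cy - 1.5 * r],
    }

poly = circle_to_diamond({'name': 'circle', 'cx': 100, 'cy': 100, 'r': 10})
assert poly['all_points_x'] == [100, 85.0, 100, 115.0, 100]
assert poly['all_points_y'] == [85.0, 100, 115.0, 100, 85.0]
```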

@eyildiz-ugoe

@sajjad-taheri Thank you for the code, but when I run it I get the following error:

Traceback (most recent call last):
  File "polygon_fixer.py", line 10, in <module>
    for attr3, val3 in val2.items():
AttributeError: 'list' object has no attribute 'items'

@skt7

skt7 commented Sep 18, 2018

VIA has changed its JSON format in later versions. Instead of a dictionary, "regions" is now a list.

Older version format:
"regions": { "0": {<data_0>}, "1": {<data_1>}, ... }

Newer version format:
"regions": [ {<data_0>}, {<data_1>}, ... ]

So what you can do is change line 10 from for attr3, val3 in val2.items(): to for val3 in val2:
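For a converter script that must handle both exports, one way is to normalize "regions" before iterating (a minimal sketch; `iter_regions` is a hypothetical helper, not part of VIA or Mask_RCNN):

```python
def iter_regions(regions):
    # Old VIA exports: {"0": {...}, "1": {...}}  -> iterate the dict values.
    # New VIA exports: [{...}, {...}]            -> iterate the list directly.
    if isinstance(regions, dict):
        return list(regions.values())
    return list(regions)

old_style = {"0": {"shape_attributes": {"name": "polygon"}}}
new_style = [{"shape_attributes": {"name": "polygon"}}]
assert iter_regions(old_style) == iter_regions(new_style)
```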

@eyildiz-ugoe

eyildiz-ugoe commented Sep 18, 2018

For some reason the data is not getting loaded no matter what I try. I keep getting:

polygons = [r['shape_attributes'] for r in a['regions'].values()]

AttributeError: 'list' object has no attribute 'values'

Here is my function that is supposed to load the dataset:

class ComponentsDataset(utils.Dataset):

    def load_components(self, dataset_dir, subset):
        """Load a subset of the Components dataset.
        dataset_dir: Root directory of the dataset.
        subset: Subset to load: train or val
        """
        # Add classes. We have only one class to add.
        self.add_class("components", 1, "screw")
        self.add_class("components", 2, "lid")

        # Train or validation dataset?
        assert subset in ["train", "val"]
        dataset_dir = os.path.join(dataset_dir, subset)

        # Load annotations
        # VGG Image Annotator saves each image in the form:
        # { 'filename': '28503151_5b5b7ec140_b.jpg',
        #   'regions': {
        #       '0': {
        #           'region_attributes': {name:'screw'},
        #           'shape_attributes': {
        #               'all_points_x': [...],
        #               'all_points_y': [...],
        #               'name': 'polygon'}},
        #       ... more regions ...
        #   },
        #   'size': 100202
        # }
        # We mostly care about the x and y coordinates of each region
        annotations = json.load(open(os.path.join(dataset_dir, "via_region_data.json")))
        annotations = list(annotations.values())  # don't need the dict keys

        # The VIA tool saves images in the JSON even if they don't have any
        # annotations. Skip unannotated images.
        annotations = [a for a in annotations if a['regions']]

        # Add images
        for a in annotations:
            # Get the x, y coordinates of points of the polygons that make up
            # the outline of each object instance. These are stored in the
            # shape_attributes (see JSON format above)
            polygons = [r['shape_attributes'] for r in a['regions'].values()]
            names = [r['region_attributes'] for r in a['regions'].values()]
            # load_mask() needs the image size to convert polygons to masks.
            # Unfortunately, VIA doesn't include it in JSON, so we must read
            # the image. This is only manageable since the dataset is tiny.
            image_path = os.path.join(dataset_dir, a['filename'])
            image = skimage.io.imread(image_path)
            height, width = image.shape[:2]

            self.add_image(
                "components",
                image_id=a['filename'],  # use file name as a unique image id
                path=image_path,
                width=width, height=height,
                polygons=polygons,
                names=names)

Even after converting the circles to polygons with the suggested code and the modification, I still cannot load my dataset, so I cannot train or do anything useful with the network. It's really frustrating to have the entire dataset annotated with the suggested annotation tool (VIA) and then not be able to load it. At this point the developers should release a fix, stop suggesting that tool for annotation, or state explicitly that users should not use anything other than polygons. As it stands, this is a waste of time since the dataset cannot be loaded.

@skt7

skt7 commented Sep 18, 2018

You didn't read my last comment; I clearly mentioned that

VIA has changed JSON formatting in later versions. Now instead of a dictionary, "regions" has a list

change
polygons = [r['shape_attributes'] for r in a['regions'].values()]
to
polygons = [r['shape_attributes'] for r in a['regions']]

and it will work.

And yes, you're right: they should update that line of code to work with the latest version of VIA, or better, make it compatible with all versions.
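A version-agnostic variant of that line could guard on the type, so the same loader works with both old and new VIA exports (a sketch; `extract_polygons` is a hypothetical helper name, not part of Mask_RCNN):

```python
def extract_polygons(a):
    # 'regions' is a dict of dicts in older VIA exports and a
    # list of dicts in newer ones; handle both.
    regions = a['regions']
    if isinstance(regions, dict):
        regions = regions.values()
    return [r['shape_attributes'] for r in regions]

old_style = {'regions': {'0': {'shape_attributes': {'name': 'polygon'}}}}
new_style = {'regions': [{'shape_attributes': {'name': 'polygon'}}]}
assert extract_polygons(old_style) == extract_polygons(new_style)
```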

@waleedka What say?

@eyildiz-ugoe

@skt7 Yes, now the dataset loads. This also had to be changed:

names = [r['region_attributes'] for r in a['regions'].values()]

to this:

names = [r['region_attributes'] for r in a['regions']]

Thank you for the contribution!

skt7 added a commit to skt7/Mask_RCNN that referenced this issue Sep 18, 2018
VIA has changed JSON formatting in later versions. Now instead of a dictionary, "regions" has a list, see the issue matterport#928
waleedka pushed a commit that referenced this issue Sep 21, 2018
VIA has changed JSON formatting in later versions. Now instead of a dictionary, "regions" has a list, see the issue #928
LexLuc added a commit to LexLuc/Mask_RCNN that referenced this issue Sep 28, 2018
Cpruce pushed a commit to Cpruce/Mask_RCNN that referenced this issue Jan 17, 2019
VIA has changed JSON formatting in later versions. Now instead of a dictionary, "regions" has a list, see the issue matterport#928
@mymultiverse

mymultiverse commented Mar 19, 2019

The samples only accept the polygons. That's why you get this error. You can either re-annotate the circles or use something like this to convert the circles to polygons in your json file:
Thanks for the script. I noticed one thing: the number of polygon vertices is not sufficient when converting from a circle, and it raises an error during dataset checking. I modified your script with a bit of math so that N vertices can be sampled on the circle.

import json
from pprint import pprint

import numpy as np

N = 20  # number of vertices to sample on each circle

# N evenly spaced angles; endpoint=False avoids duplicating the
# first vertex at -pi and +pi
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)

with open('via_region_dataCir.json') as f:
    data = json.load(f)

for attr, val in data.items():
    for attr2, val2 in val.items():
        if attr2 == 'regions':
            for attr3, val3 in val2.items():
                shape = val3['shape_attributes']
                if shape['name'] == 'circle':
                    cx, cy, r = shape['cx'], shape['cy'], shape['r']

                    shape['all_points_x'] = [cx + r * np.cos(t) for t in theta]
                    shape['all_points_y'] = [cy + r * np.sin(t) for t in theta]

                    # drop the circle-specific keys and relabel the shape
                    for key in ('cx', 'cy', 'r'):
                        shape.pop(key)
                    shape['name'] = 'polygon'

pprint(data)

with open('via_region_data.json', 'w') as f:
    json.dump(data, f)

aneeshchauhan pushed a commit to aneeshchauhan/Mask_RCNN that referenced this issue Jul 9, 2019
VIA has changed JSON formatting in later versions. Now instead of a dictionary, "regions" has a list, see the issue matterport#928
@girijesh97

@mymultiverse @sajjad-taheri Hi, how can I convert polyline and rect regions to polygons?
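For reference, a rect region stores x, y, width, and height in its shape_attributes, so one possible conversion is to emit its four corners as a polygon (a hedged sketch assuming that attribute layout; `rect_to_polygon` is a hypothetical helper). A polyline already carries all_points_x/all_points_y, so relabeling its name as 'polygon' closes the path.

```python
# Sketch: convert a VIA 'rect' shape to a 'polygon' shape.
# Assumes rect shape_attributes hold {'x', 'y', 'width', 'height'}.
def rect_to_polygon(shape):
    x, y = shape['x'], shape['y']
    w, h = shape['width'], shape['height']
    return {
        'name': 'polygon',
        'all_points_x': [x, x + w, x + w, x],  # corners, clockwise
        'all_points_y': [y, y, y + h, y + h],
    }

poly = rect_to_polygon({'name': 'rect', 'x': 10, 'y': 20, 'width': 30, 'height': 40})
assert poly['all_points_x'] == [10, 40, 40, 10]
assert poly['all_points_y'] == [20, 20, 60, 60]
```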
