
Unable to load the repository in google colab #80

Closed
sankalpmittal1911-BitSian opened this issue Mar 20, 2019 · 17 comments
@sankalpmittal1911-BitSian

I have already cloned the repository using:
!git clone https://github.com/qubvel/segmentation_models

[screenshot]

Now when I try to import it, it shows the following error:

from segmentation_models import Unet

--------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-21-95926c7db055> in <module>()
----> 1 from segmentation_models import Unet

/content/segmentation_models/__init__.py in <module>()
----> 1 from .segmentation_models import *

/content/segmentation_models/segmentation_models/__init__.py in <module>()
      3 from .__version__ import __version__
      4 
----> 5 from .unet import Unet
      6 from .fpn import FPN
      7 from .linknet import Linknet

/content/segmentation_models/segmentation_models/unet/__init__.py in <module>()
----> 1 from .model import Unet

/content/segmentation_models/segmentation_models/unet/model.py in <module>()
      2 from ..utils import freeze_model
      3 from ..utils import legacy_support
----> 4 from ..backbones import get_backbone, get_feature_layers
      5 
      6 old_args_map = {

/content/segmentation_models/segmentation_models/backbones/__init__.py in <module>()
----> 1 from classification_models import Classifiers
      2 from classification_models import resnext
      3 
      4 from . import inception_resnet_v2 as irv2
      5 from . import inception_v3 as iv3

/content/classification_models/__init__.py in <module>()
----> 1 from .classification_models import *

/content/classification_models/classification_models/__init__.py in <module>()
      3 from . import resnet as rn
      4 from . import senet as sn
----> 5 from . import keras_applications as ka
      6 
      7 

/content/classification_models/classification_models/keras_applications/__init__.py in <module>()
      1 import keras
----> 2 from .keras_applications.keras_applications import *
      3 
      4 set_keras_submodules(
      5     backend=keras.backend,

ModuleNotFoundError: No module named 'classification_models.classification_models.keras_applications.keras_applications.keras_applications'

Can anyone help regarding this? Thanks.

@qubvel
Owner

qubvel commented Mar 22, 2019

Hi @sankalpmittal1911-BitSian
You have to clone with the `--recursive` flag (`git clone --recursive https://github.com/qubvel/segmentation_models`) or, inside the already-cloned repository, run `git submodule update --init`.

@sankalpmittal1911-BitSian
Author

Thank you for the reply. I will check and get back to this ASAP.

@sankalpmittal1911-BitSian
Author

[screenshot]

I followed your suggestions.

[screenshot of the error]

Now it shows this error. I think it is a typo. Can you please change the name in your repository? I think it is in `__init__.py` inside `segmentation_models/backbones/` (here: `segmentation_models/segmentation_models/backbones/__init__.py`).

(On second thought, it's not a typo. Then why is it showing the error?)

Thank you.

@qubvel
Owner

qubvel commented Mar 26, 2019

@sankalpmittal1911-BitSian

  1. The libraries work correctly and pass tests, so the problem is not a typo.
  2. Try running `python setup.py install` inside the `classification_models` directory.
  3. Why don't you use pip?

@sankalpmittal1911-BitSian
Author

I am trying to run the model in Google Colaboratory, which uses `!pip` instead of `pip` and `!git` instead of `git`.

Shall I use `!pip install setup.py` or something?

@qubvel
Owner

qubvel commented Mar 26, 2019

  1. `!pip install -U segmentation_models`
    If you get a permission denied error, try:
    `!pip install -U segmentation_models --user`
  2. `!pip install git+https://github.com/qubvel/segmentation_models`

@sankalpmittal1911-BitSian
Author

Thanks a lot. The first one worked. Will get back with the results.

@sankalpmittal1911-BitSian
Author

I am trying to do multiclass segmentation here. I build the model with:

model = Unet(BACKBONE, encoder_weights='imagenet', input_shape=(None, None, 7), classes=255, activation='softmax')
model.compile('Adam', loss=categorical_crossentropy, metrics=['categorical_accuracy'])

The input is formed by stacking 7 grayscale images as (128,128,7), i.e. 7 channels. The error that appears now is:

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
Downloading data from https://github.com/qubvel/classification_models/releases/download/0.0.1/resnet34_imagenet_1000_no_top.h5
85524480/85521592 [==============================] - 3s 0us/step
---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs)
   1658   try:
-> 1659     c_op = c_api.TF_FinishOperation(op_desc)
   1660   except errors.InvalidArgumentError as e:

InvalidArgumentError: Dimension 0 in both shapes must be equal, but are 7 and 3. Shapes are [7] and [3]. for 'Assign' (op: 'Assign') with input shapes: [7], [3].

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
<ipython-input-9-17b3d4e136e9> in <module>()
----> 1 model = Unet(BACKBONE, encoder_weights='imagenet', input_shape=(None, None, 7),classes=255, activation='softmax')
      2 model.compile('Adam', loss=categorical_crossentropy, metrics=['categorical_accuracy'])

/usr/local/lib/python3.6/dist-packages/segmentation_models/utils.py in wrapper(*args, **kwargs)
     28                     kwargs[new_arg] = kwargs[old_arg]
     29 
---> 30             return func(*args, **kwargs)
     31 
     32         return wrapper

/usr/local/lib/python3.6/dist-packages/segmentation_models/unet/model.py in Unet(backbone_name, input_shape, classes, activation, encoder_weights, encoder_freeze, encoder_features, decoder_block_type, decoder_filters, decoder_use_batchnorm, **kwargs)
     61                             input_tensor=None,
     62                             weights=encoder_weights,
---> 63                             include_top=False)
     64 
     65     if encoder_features == 'default':

/usr/local/lib/python3.6/dist-packages/segmentation_models/backbones/__init__.py in get_backbone(name, *args, **kwargs)
     73 
     74 def get_backbone(name, *args, **kwargs):
---> 75     return Classifiers.get_classifier(name)(*args, **kwargs)
     76 
     77 

/usr/local/lib/python3.6/dist-packages/classification_models/resnet/models.py in classifier(input_shape, input_tensor, weights, classes, include_top)
     23 
     24         if weights:
---> 25             load_model_weights(weights_collection, model, weights, classes, include_top)
     26 
     27         return model

/usr/local/lib/python3.6/dist-packages/classification_models/utils.py in load_model_weights(weights_collection, model, dataset, classes, include_top)
     24                                 md5_hash=weights['md5'])
     25 
---> 26         model.load_weights(weights_path)
     27 
     28     else:

/usr/local/lib/python3.6/dist-packages/keras/engine/network.py in load_weights(self, filepath, by_name, skip_mismatch, reshape)
   1164             else:
   1165                 saving.load_weights_from_hdf5_group(
-> 1166                     f, self.layers, reshape=reshape)
   1167 
   1168     def _updated_config(self):

/usr/local/lib/python3.6/dist-packages/keras/engine/saving.py in load_weights_from_hdf5_group(f, layers, reshape)
   1056                              ' elements.')
   1057         weight_value_tuples += zip(symbolic_weights, weight_values)
-> 1058     K.batch_set_value(weight_value_tuples)
   1059 
   1060 

/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py in batch_set_value(tuples)
   2463                 assign_placeholder = tf.placeholder(tf_dtype,
   2464                                                     shape=value.shape)
-> 2465                 assign_op = x.assign(assign_placeholder)
   2466                 x._assign_placeholder = assign_placeholder
   2467                 x._assign_op = assign_op

/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py in assign(self, value, use_locking, name, read_value)
   1760     """
   1761     assign = state_ops.assign(self._variable, value, use_locking=use_locking,
-> 1762                               name=name)
   1763     if read_value:
   1764       return assign

/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/state_ops.py in assign(ref, value, validate_shape, use_locking, name)
    221     return gen_state_ops.assign(
    222         ref, value, use_locking=use_locking, name=name,
--> 223         validate_shape=validate_shape)
    224   return ref.assign(value, name=name)
    225 

/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_state_ops.py in assign(ref, value, validate_shape, use_locking, name)
     62   _, _, _op = _op_def_lib._apply_op_helper(
     63         "Assign", ref=ref, value=value, validate_shape=validate_shape,
---> 64                   use_locking=use_locking, name=name)
     65   _result = _op.outputs[:]
     66   _inputs_flat = _op.inputs

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py in _apply_op_helper(self, op_type_name, name, **keywords)
    786         op = g.create_op(op_type_name, inputs, output_types, name=scope,
    787                          input_types=input_types, attrs=attr_protos,
--> 788                          op_def=op_def)
    789       return output_structure, op_def.is_stateful, op
    790 

/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/deprecation.py in new_func(*args, **kwargs)
    505                 'in a future version' if date is None else ('after %s' % date),
    506                 instructions)
--> 507       return func(*args, **kwargs)
    508 
    509     doc = _add_deprecated_arg_notice_to_docstring(

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in create_op(***failed resolving arguments***)
   3298           input_types=input_types,
   3299           original_op=self._default_original_op,
-> 3300           op_def=op_def)
   3301       self._create_op_helper(ret, compute_device=compute_device)
   3302     return ret

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in __init__(self, node_def, g, inputs, output_types, control_inputs, input_types, original_op, op_def)
   1821           op_def, inputs, node_def.attr)
   1822       self._c_op = _create_c_op(self._graph, node_def, grouped_inputs,
-> 1823                                 control_input_ops)
   1824 
   1825     # Initialize self._outputs.

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs)
   1660   except errors.InvalidArgumentError as e:
   1661     # Convert to ValueError for backwards compatibility.
-> 1662     raise ValueError(str(e))
   1663 
   1664   return c_op

ValueError: Dimension 0 in both shapes must be equal, but are 7 and 3. Shapes are [7] and [3]. for 'Assign' (op: 'Assign') with input shapes: [7], [3].

This has something to do with input shapes. How to eliminate this error?

@qubvel
Owner

qubvel commented Mar 26, 2019

ImageNet weights are compatible only with a `(None, None, 3)` input shape. In the case of `(None, None, 7)`, set `encoder_weights=None`.
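The mismatch can be illustrated without the library: an ImageNet-pretrained first conv kernel was trained on RGB input and so has 3 input channels, which cannot be assigned into a model built for 7-channel input. A minimal numpy sketch (the shapes and the `shapes_compatible` helper are illustrative, not part of Keras):

```python
import numpy as np

# Illustrative kernel shapes: (kernel_h, kernel_w, in_channels, filters).
# ImageNet weights were trained on RGB images, hence in_channels=3.
pretrained_kernel = np.zeros((7, 7, 3, 64))
seven_channel_kernel = np.zeros((7, 7, 7, 64))  # model built for 7-band input

def shapes_compatible(saved, target):
    # Keras-style weight loading assigns saved tensors into variables
    # element-wise, so the shapes must match exactly.
    return saved.shape == target.shape

print(shapes_compatible(pretrained_kernel, pretrained_kernel))    # True
print(shapes_compatible(pretrained_kernel, seven_channel_kernel)) # False
```

With `encoder_weights=None` the encoder is randomly initialized, so no assignment against the saved 3-channel weights ever happens, and the error disappears.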

@sankalpmittal1911-BitSian
Author

Thanks. This essentially means I will have to train the model from scratch.

@sankalpmittal1911-BitSian
Author

I am closing this issue. I will create a new issue of implementation if necessary. Thank you once again.

@qubvel
Owner

qubvel commented Mar 26, 2019

One more note: choose another network for multiclass segmentation. Unet has only 16 filters at the end, so it would be hard to separate 255 classes. Better to take PSP or FPN.
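To see why 16 decoder filters can be a bottleneck, consider the final 1×1 convolution that maps those features to per-pixel class scores (a back-of-the-envelope sketch; the 16-filter count is taken from the comment above):

```python
decoder_channels = 16  # channels in Unet's last decoder feature map
num_classes = 255

# A 1x1 conv classifying each pixel has one weight per
# (feature, class) pair, plus one bias per class.
weights = decoder_channels * num_classes
biases = num_classes
print(weights + biases)  # 4335 trainable parameters to separate 255 classes
```

Only 16 features per pixel have to encode enough information to distinguish all 255 classes, which is a very tight representation.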

@sankalpmittal1911-BitSian
Author

Yes, I will. I will get back with the results in a new issue (if needed). Also, does increasing the number of filters in U-Net help? I tried 256 filters in my custom model and it was still failing. Thanks.

@qubvel
Owner

qubvel commented Mar 26, 2019

I never did it with so many classes; I think it depends a lot on the data.
Try PSP with `downsampling_factor=16` and heavy encoders like InceptionResNetV2/SeNet154 (they require a lot of GPU memory), and do not downsample the image much. You may need several GPUs to train anything.

P.S. Add an auxiliary output to help training (read the PSPNet paper).
P.P.S. Set `use_batchnorm=False` to reduce required memory.
P.P.P.S. (:smile:) Use a weighted loss function.

@sankalpmittal1911-BitSian
Author

I will try batch-wise loading using a custom generator to reduce memory usage.
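Such a batch generator can be sketched with plain numpy (a minimal sketch; `batch_generator` and the array names are illustrative, not part of any library):

```python
import numpy as np

def batch_generator(images, masks, batch_size=2, shuffle=True):
    """Yield (image_batch, mask_batch) pairs indefinitely, in the style
    expected by Keras generator-based training, so only one batch has to
    be materialized at a time."""
    n = len(images)
    while True:
        idx = np.random.permutation(n) if shuffle else np.arange(n)
        for start in range(0, n, batch_size):
            sel = idx[start:start + batch_size]
            yield images[sel], masks[sel]
```

It could then be passed to Keras 2.x's `model.fit_generator(batch_generator(X, Y), steps_per_epoch=len(X) // batch_size, ...)`; for on-disk datasets the indexing step would load files instead of slicing arrays.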

As a sanity check, I tried training U-Net on just 10 images to see whether it can at least overfit them:

Epoch 1/50
5/5 [==============================] - 36s 7s/step - loss: 1.9863 - categorical_accuracy: 0.4062

Epoch 00001: loss improved from inf to 1.98632, saving model to /content/drive/My Drive/model4.h5
Epoch 2/50
5/5 [==============================] - 12s 2s/step - loss: 1.9131 - categorical_accuracy: 0.4246

Epoch 00002: loss improved from 1.98632 to 1.91309, saving model to /content/drive/My Drive/model4.h5
Epoch 3/50
5/5 [==============================] - 11s 2s/step - loss: 1.8505 - categorical_accuracy: 0.5014

Epoch 00003: loss improved from 1.91309 to 1.85046, saving model to /content/drive/My Drive/model4.h5
Epoch 4/50
5/5 [==============================] - 11s 2s/step - loss: 2.0919 - categorical_accuracy: 0.4084

Epoch 00004: loss did not improve from 1.85046
Epoch 5/50
5/5 [==============================] - 25s 5s/step - loss: 1.9043 - categorical_accuracy: 0.4599

Epoch 00005: loss did not improve from 1.85046
Epoch 6/50
5/5 [==============================] - 32s 6s/step - loss: 2.1385 - categorical_accuracy: 0.3980

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: loss did not improve from 1.85046
Epoch 7/50
5/5 [==============================] - 29s 6s/step - loss: 1.8131 - categorical_accuracy: 0.4570

Epoch 00007: loss improved from 1.85046 to 1.81309, saving model to /content/drive/My Drive/model4.h5
Epoch 8/50
5/5 [==============================] - 30s 6s/step - loss: 2.0170 - categorical_accuracy: 0.3941

Epoch 00008: loss did not improve from 1.81309
Epoch 9/50
5/5 [==============================] - 31s 6s/step - loss: 1.9385 - categorical_accuracy: 0.4256

Epoch 00009: loss did not improve from 1.81309
Epoch 10/50
5/5 [==============================] - 31s 6s/step - loss: 2.1063 - categorical_accuracy: 0.4162

Epoch 00010: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00010: loss did not improve from 1.81309
Epoch 11/50
5/5 [==============================] - 32s 6s/step - loss: 2.0883 - categorical_accuracy: 0.3908

Epoch 00011: loss did not improve from 1.81309
Epoch 12/50
5/5 [==============================] - 31s 6s/step - loss: 2.1234 - categorical_accuracy: 0.4195

Epoch 00012: loss did not improve from 1.81309
Epoch 13/50
5/5 [==============================] - 36s 7s/step - loss: 1.9139 - categorical_accuracy: 0.4540

Epoch 00013: ReduceLROnPlateau reducing learning rate to 1.0000000656873453e-06.

Epoch 00013: loss did not improve from 1.81309
Epoch 14/50
5/5 [==============================] - 30s 6s/step - loss: 1.8893 - categorical_accuracy: 0.4360

Epoch 00014: loss did not improve from 1.81309
Epoch 15/50
5/5 [==============================] - 31s 6s/step - loss: 1.8587 - categorical_accuracy: 0.4351

Epoch 00015: loss did not improve from 1.81309
Epoch 16/50
2/5 [===========>..................] - ETA: 17s - loss: 2.2116 - categorical_accuracy: 0.3031

Hopefully I am not downsampling anything, since the original dimensions are (128,128). U-Net does not seem to figure it out, even for this little data, when there are 255 classes. It's actually crop segmentation.

I will try those suggestions and create a new issue.

Thanks.

Edit: "use a weighted loss function" — could you please explain?

Currently I am using categorical cross-entropy, since it's essentially pixel-wise classification.

@qubvel
Owner

qubvel commented Mar 26, 2019

I guess you have a different number of pixels for each class, so your data is imbalanced. Read the PSPNet paper; you will find a way to modify the loss function for this case.
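The idea of a class-weighted loss can be sketched in plain numpy (illustrative only; in Keras you would implement the same formula as a custom loss on tensors, and the weights here are made up):

```python
import numpy as np

def weighted_categorical_crossentropy(y_true, y_pred, class_weights, eps=1e-7):
    """Per-pixel cross-entropy where rare classes can be up-weighted.
    y_true: one-hot targets, shape (..., C); y_pred: softmax probabilities,
    shape (..., C); class_weights: shape (C,)."""
    y_pred = np.clip(y_pred, eps, 1.0)
    # -sum_c w_c * y_c * log(p_c), reduced over the class axis
    return -np.sum(class_weights * y_true * np.log(y_pred), axis=-1)

# Up-weighting class 1 makes mistakes on it cost more:
y_true = np.array([[0.0, 1.0]])
y_pred = np.array([[0.9, 0.1]])
uniform = weighted_categorical_crossentropy(y_true, y_pred, np.array([1.0, 1.0]))
weighted = weighted_categorical_crossentropy(y_true, y_pred, np.array([1.0, 5.0]))
print(float(uniform[0]) < float(weighted[0]))  # True
```

Weights are typically chosen inversely proportional to each class's pixel frequency, so the gradient is not dominated by the most common classes.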

@ProtossDragoon

ProtossDragoon commented May 28, 2021

Hi, I'm trying to install this library on Google Colab as well, from source code. Thank you for your help. ☺️

setup.py

try:
    with open(os.path.join(here, 'requirements.txt'), encoding='utf-8') as f:
        REQUIRED = f.read().split('\n')

requirements.txt

keras_applications>=1.0.7,<=1.0.8
image-classifiers==1.0.0
efficientnet==1.0.0

I found that `keras_applications` is needed, but I'm working on Google Colab, where `tf.keras.applications` already exists. So I want to install SM against `tf.keras.applications` instead of a separate `keras_applications`. Could I do that?
