ImageGenerator for 2 inputs #10499

Closed

MjdMahasneh opened this issue Jun 21, 2018 · 5 comments

MjdMahasneh commented Jun 21, 2018

I have built a model with 2 branches that will eventually be merged. I would like to create an ImageDataGenerator to augment the image data, but I keep getting this error:

Epoch 1/100
Found 1206 images belonging to 1 classes.
Found 1206 images belonging to 1 classes.
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-5-a34cfdbfff52> in <module>()
     13                     epochs = n_epoch,
     14                     validation_data = testgenerator,
---> 15                     validation_steps = 406)
     16 
     17 

~\Anaconda3\lib\site-packages\keras\legacy\interfaces.py in wrapper(*args, **kwargs)
     89                 warnings.warn('Update your `' + object_name +
     90                               '` call to the Keras 2 API: ' + signature, stacklevel=2)
---> 91             return func(*args, **kwargs)
     92         wrapper._original_function = func
     93         return wrapper

~\Anaconda3\lib\site-packages\keras\models.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
   1313                                         use_multiprocessing=use_multiprocessing,
   1314                                         shuffle=shuffle,
-> 1315                                         initial_epoch=initial_epoch)
   1316 
   1317     @interfaces.legacy_generator_methods_support

~\Anaconda3\lib\site-packages\keras\legacy\interfaces.py in wrapper(*args, **kwargs)
     89                 warnings.warn('Update your `' + object_name +
     90                               '` call to the Keras 2 API: ' + signature, stacklevel=2)
---> 91             return func(*args, **kwargs)
     92         wrapper._original_function = func
     93         return wrapper

~\Anaconda3\lib\site-packages\keras\engine\training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
   2192                 batch_index = 0
   2193                 while steps_done < steps_per_epoch:
-> 2194                     generator_output = next(output_generator)
   2195 
   2196                     if not hasattr(generator_output, '__len__'):

~\Anaconda3\lib\site-packages\keras\utils\data_utils.py in get(self)
    791             success, value = self.queue.get()
    792             if not success:
--> 793                 six.reraise(value.__class__, value, value.__traceback__)

~\Anaconda3\lib\site-packages\six.py in reraise(tp, value, tb)
    691             if value.__traceback__ is not tb:
    692                 raise value.with_traceback(tb)
--> 693             raise value
    694         finally:
    695             value = None

~\Anaconda3\lib\site-packages\keras\utils\data_utils.py in _data_generator_task(self)
    656                             # => Serialize calls to
    657                             # infinite iterator/generator's next() function
--> 658                             generator_output = next(self._generator)
    659                             self.queue.put((True, generator_output))
    660                         else:

<ipython-input-2-c889f22031b4> in generate_generator_multiple(generator, dir1, dir2, batch_size, img_height, img_width)
     20                                           seed=7)
     21     while True:
---> 22             X1i = genX1.next()
     23             X2i = genX2.next()
     24             yield [X1i[0], X2i[0]], X2i[1]  #Yield both images and their mutual label

~\Anaconda3\lib\site-packages\keras\preprocessing\image.py in next(self)
   1475         # The transformation of images is not under thread lock
   1476         # so it can be done in parallel
-> 1477         return self._get_batches_of_transformed_samples(index_array)

~\Anaconda3\lib\site-packages\keras\preprocessing\image.py in _get_batches_of_transformed_samples(self, index_array)
   1428 
   1429     def _get_batches_of_transformed_samples(self, index_array):
-> 1430         batch_x = np.zeros((len(index_array),) + self.image_shape, dtype=K.floatx())
   1431         grayscale = self.color_mode == 'grayscale'
   1432         # build batch of image data

TypeError: 'tuple' object cannot be interpreted as an integer

Here is my code:

import os
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D,Convolution2D
from keras.layers import Activation, Dropout, Flatten, Dense, Input
from keras.models import Model
from keras.layers import Merge
input_imgen = ImageDataGenerator(horizontal_flip = True)

test_imgen = ImageDataGenerator(horizontal_flip = True)



def generate_generator_multiple(generator,dir1, dir2, batch_size, img_height,img_width):
    genX1 = generator.flow_from_directory(dir1,
                                          target_size = (img_height,img_width),
                                          class_mode = 'binary',
                                          batch_size = batch_size,
                                          shuffle=False, 
                                          seed=7)
    
    genX2 = generator.flow_from_directory(dir2,
                                          target_size = (img_height,img_width),
                                          class_mode = 'binary',
                                          batch_size = batch_size,
                                          shuffle=False, 
                                          seed=7)
    while True:
            X1i = genX1.next()
            X2i = genX2.next()
            yield [X1i[0], X2i[0]], X2i[1]  #Yield both images and their mutual label

            
            
            
            
#define data generator parameters
#batch size
batch_size = 16

#training dir
CaII_train = 'E:\Deep Projects\Multispectral Image Classification\Arch_one\Data\CaIItrain'
MDI_train  = 'E:\Deep Projects\Multispectral Image Classification\Arch_one\Data\MDItrain'

#testing dir (validation)
CaII_test  = 'E:\Deep Projects\Multispectral Image Classification\Arch_one\Data\CaIItest'
MDI_test   = 'E:\Deep Projects\Multispectral Image Classification\Arch_one\Data\MDItest'

#trainsetsize
trainsetsize = 1206

#testsetsize
testsetsize = 406


#target resolution
img_height = (64, 64)
img_width  = (64, 64)    
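# NOTE: img_height and img_width are tuples here, so target_size inside
# generate_generator_multiple becomes ((64, 64), (64, 64)). That nested tuple
# ends up in the iterator's image_shape, which is the likely source of the
# "'tuple' object cannot be interpreted as an integer" error in the traceback
# above; plain integers (e.g. img_height = 64, img_width = 64) are presumably
# what was intended.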
         

    
            
inputgenerator = generate_generator_multiple(generator=input_imgen,
                                           dir1 = CaII_train,
                                           dir2 = MDI_train,
                                           batch_size = batch_size,
                                           img_height = img_height,
                                           img_width = img_width)       


testgenerator = generate_generator_multiple(test_imgen,
                                          dir1 = CaII_test,
                                          dir2 = MDI_test,
                                          batch_size = batch_size,
                                          img_height = img_height,
                                          img_width = img_width)    

#build the multi-branches model



#CaII_Branch
CaII_Branch = Sequential()
CaII_Branch.add(Conv2D(32, (3, 3), input_shape=(64, 64, 3))) # this has to be changed to match the image size
CaII_Branch.add(Activation('relu'))
CaII_Branch.add(MaxPooling2D(pool_size=(2, 2)))

CaII_Branch.add(Conv2D(32, (3, 3)))
CaII_Branch.add(Activation('relu'))
CaII_Branch.add(MaxPooling2D(pool_size=(2, 2)))

CaII_Branch.add(Conv2D(64, (3, 3)))
CaII_Branch.add(Activation('relu'))
CaII_Branch.add(MaxPooling2D(pool_size=(2, 2)))
CaII_Branch.add(Flatten())



#MDI_Branch
MDI_Branch = Sequential()
MDI_Branch.add(Conv2D(32, (3, 3), input_shape=(64, 64, 3)))
MDI_Branch.add(Activation('relu'))
MDI_Branch.add(MaxPooling2D(pool_size=(2, 2)))

MDI_Branch.add(Conv2D(32, (3, 3)))
MDI_Branch.add(Activation('relu'))
MDI_Branch.add(MaxPooling2D(pool_size=(2, 2)))

MDI_Branch.add(Conv2D(64, (3, 3)))
MDI_Branch.add(Activation('relu'))
MDI_Branch.add(MaxPooling2D(pool_size=(2, 2)))
MDI_Branch.add(Flatten())

#merging the CaII_Branch and the MDI_Branch
model = Sequential()
model.add(Merge([CaII_Branch, MDI_Branch], mode = 'concat'))

model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))

#compile the model
model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

print (model.summary())
#fit the network

# configure network

n_epoch = 100





model.fit_generator(inputgenerator,
                    steps_per_epoch = 1206,
                    epochs = n_epoch,
                    validation_data = testgenerator,
                    validation_steps = 406)




model.save_weights('bottleneck_fc_model.h5')

My goal is to build a classification model that takes 2 correlated images of an object as input and outputs whether it belongs to the class or not (I have a single class).

Any help would be appreciated.
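For reference, the Merge layer used above is a legacy Keras 1-style API; in Keras 2 the functional API (concatenate) is the usual way to build a two-input model. Below is a minimal sketch, assuming Keras 2.x, with layer sizes mirroring the two branches above; the name conv_branch is just for illustration.

from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, Dropout, concatenate
from keras.models import Model

def conv_branch(inp):
    # Same stack as each Sequential branch above: three conv/pool blocks, then flatten.
    x = Conv2D(32, (3, 3), activation='relu')(inp)
    x = MaxPooling2D(pool_size=(2, 2))(x)
    x = Conv2D(32, (3, 3), activation='relu')(x)
    x = MaxPooling2D(pool_size=(2, 2))(x)
    x = Conv2D(64, (3, 3), activation='relu')(x)
    x = MaxPooling2D(pool_size=(2, 2))(x)
    return Flatten()(x)

caii_input = Input(shape=(64, 64, 3))   # CaII image
mdi_input  = Input(shape=(64, 64, 3))   # MDI image

merged = concatenate([conv_branch(caii_input), conv_branch(mdi_input)])
x = Dense(64, activation='relu')(merged)
x = Dropout(0.5)(x)
output = Dense(1, activation='sigmoid')(x)

model = Model(inputs=[caii_input, mdi_input], outputs=output)
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])

Such a model can then be fitted with the same multi-input generator via model.fit_generator(inputgenerator, ...) once the generator yields ([x1, x2], y) batches.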

@hrdkjain

I am wondering how you can perform classification with a single class?
If you are planning binary classification, don't you think you should provide one more class containing image data that does not belong to your required class?

@MjdMahasneh
Author

@hrdkjain indeed, it's a two-class problem. Any ideas on how to create the generator?

@hrdkjain

To me it seems like the generator is not yielding the right structure. Maybe try changing the

while True:
            X1i = genX1.next()
            X2i = genX2.next()
            yield [X1i[0], X2i[0]], X2i[1]  #Yield both images and their mutual label

block in the generate_generator_multiple function to this (taken from Stack Overflow):

while True:
    for (x1, y1), (x2, y2) in zip(genX1, genX2):
        yield ([x1, x2], y1)

And also add another class of images.
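Put together, that suggestion would make the whole helper look roughly like this (a sketch only; class_mode='binary', seed=7, and shuffle=False are carried over from the original post, and img_height/img_width are assumed to be plain integers such as 64):

def generate_generator_multiple(generator, dir1, dir2, batch_size, img_height, img_width):
    genX1 = generator.flow_from_directory(dir1,
                                          target_size=(img_height, img_width),
                                          class_mode='binary',
                                          batch_size=batch_size,
                                          shuffle=False,
                                          seed=7)
    genX2 = generator.flow_from_directory(dir2,
                                          target_size=(img_height, img_width),
                                          class_mode='binary',
                                          batch_size=batch_size,
                                          shuffle=False,
                                          seed=7)
    # Both directory iterators loop forever, so this for loop never terminates;
    # each step yields the two image batches plus the labels of the first stream.
    for (x1, y1), (x2, y2) in zip(genX1, genX2):
        yield [x1, x2], y1

With shuffle=False the two streams stay aligned as long as the filenames sort identically in both directories.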

@MjdMahasneh
Author

I think maybe using a single generator to generate the two images is easier. I will work on it and see how it goes!
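One way to do that, sketched below and not taken from this thread, is a single keras.utils.Sequence that reads both directories itself. It assumes the two directories contain matching filenames and that a labels dict maps each filename to 0 or 1; both of those are illustrative assumptions.

import os
import numpy as np
from keras.preprocessing.image import load_img, img_to_array
from keras.utils import Sequence

class PairedImageSequence(Sequence):
    """Yields ([x1_batch, x2_batch], y_batch) for a two-input model."""

    def __init__(self, dir1, dir2, labels, batch_size=16, target_size=(64, 64)):
        self.names = sorted(os.listdir(dir1))   # assumed identical in dir2
        self.dir1, self.dir2 = dir1, dir2
        self.labels = labels                    # dict: filename -> 0 or 1 (assumed)
        self.batch_size = batch_size
        self.target_size = target_size

    def __len__(self):
        return int(np.ceil(len(self.names) / float(self.batch_size)))

    def _load(self, directory, name):
        img = load_img(os.path.join(directory, name), target_size=self.target_size)
        return img_to_array(img) / 255.0

    def __getitem__(self, idx):
        batch = self.names[idx * self.batch_size:(idx + 1) * self.batch_size]
        x1 = np.stack([self._load(self.dir1, n) for n in batch])
        x2 = np.stack([self._load(self.dir2, n) for n in batch])
        y = np.array([self.labels[n] for n in batch])
        return [x1, x2], y

A Sequence instance can be passed to fit_generator directly in place of a generator function.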

@anamika06jain

@MjdMahasneh how can two different datasets be passed using one generator? @hrdkjain can you please suggest some alternative? I have tried what you said, but my validation loss is not converging.
