
Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s), but instead got the following list of 2 arrays #9475

Closed
vinayakumarr opened this issue Feb 24, 2018 · 29 comments

Comments

@vinayakumarr

vinayakumarr commented Feb 24, 2018

I want to pass a pair of images (good and bad) to the CNN, and at test time I will also pass a pair of images. The code is given below:

import cv2

X_bad = []
X_bad_id = []
for i in range(1,53):
    a = 'data/train/data/bad/bad'+str(i)+'.jpg'
    img = cv2.imread(a)
    X_bad.append(img)
    X_bad_id.append("0")

import numpy as np
X_bad = np.array(X_bad)
X_bad_id = np.array(X_bad_id)

X_good = []
X_good_id = []
for i in range(1,53):
    a = 'data/train/data/good/good'+str(i)+'.jpg'
    img = cv2.imread(a)
    X_good.append(img)
    X_good_id.append("1")

import numpy as np
X_good = np.array(X_good)
X_good_id = np.array(X_good_id)

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Conv2D
from keras.layers.pooling import MaxPooling2D
from keras.layers.merge import concatenate
from keras.optimizers import SGD
from keras.callbacks import ModelCheckpoint
from keras.layers import Input
from keras.models import Model


X_good = X_good.astype('float32')
X_bad  = X_bad.astype('float32')

X_good /= 255
X_bad /= 255

visible1 = Input(shape=(250,250,3))
conv11 = Conv2D(32, kernel_size=4, activation='relu')(visible1)
pool11 = MaxPooling2D(pool_size=(2, 2))(conv11)
conv12 = Conv2D(16, kernel_size=4, activation='relu')(pool11)
pool12 = MaxPooling2D(pool_size=(2, 2))(conv12)
flat1 = Flatten()(pool12)

visible2 = Input(shape=(250,250,3))
conv21 = Conv2D(32, kernel_size=4, activation='relu')(visible2)
pool21 = MaxPooling2D(pool_size=(2, 2))(conv21)
conv22 = Conv2D(16, kernel_size=4, activation='relu')(pool21)
pool22 = MaxPooling2D(pool_size=(2, 2))(conv22)
flat2 = Flatten()(pool22)

merge = concatenate([flat1, flat2])

# interpretation model
hidden1 = Dense(10, activation='relu')(merge)
hidden2 = Dense(10, activation='relu')(hidden1)
output = Dense(1, activation='sigmoid')(hidden2)
model = Model(inputs=[visible1, visible2], outputs=output)

model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit([X_good, X_bad], [X_good_id, X_bad_id],epochs=50, batch_size=32)

The above program gives the following error:

ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s), but instead got the following list of 2 arrays: [array(['1', '1', '1', '1', '1', '1', '1', '1', '1', '1', '1', '1', '1',
'1', '1', '1', '1', '1', '1', '1', '1', '1', '1', '1', '1', '1',
'1', '1', '1', '1', '1', '1', '1', '1', '1', '1'...

@Yashs744

What's the shape of X_good, X_bad, X_good_id & X_bad_id?

@vinayakumarr
Author

print(X_good.shape)
print(X_bad.shape)

(52, 250, 250, 3)
(52, 250, 250, 3)

@Yashs744

Yashs744 commented Feb 25, 2018

output = Dense(1, activation='sigmoid')(hidden2)
The issue is that the final layer of the network expects one target array, as you can see in the code above, but you are passing two arrays: one for good_id and one for bad_id.

Try changing the output layer to output = Dense(2, activation='sigmoid')(hidden2) and see if this works.
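For reference, a single Dense(2) head still expects one target array, just with two columns per sample. A minimal sketch of building such a target from integer labels (the variable names below are illustrative, not from the original post):

import numpy as np
from keras.utils import to_categorical

# One integer label per pair (e.g. 1 = "first image is good"),
# one-hot encoded into a single (num_pairs, 2) target array.
labels = np.ones(52, dtype='int32')
targets = to_categorical(labels, num_classes=2)   # shape (52, 2)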

@vinayakumarr
Author

Getting the same error.

@vinayakumarr
Author

It is a multi-input and multi-output model. The algorithm takes a pair of images as input at a time.

@Yashs744

While running this code

X_bad_id = []
for i in range(1,53):
    X_bad_id.append("0")


X_good_id = []
for i in range(1,53):
    X_good_id.append("1")

X_bad_id = np.array(X_bad_id)
X_good_id = np.array(X_good_id)
  1. You are appending 0 and 1 in quotes ('), but they should be integers.
  2. You need to reshape the arrays X_good_id and X_bad_id. Currently the shape of X_bad_id and X_good_id is (52,); it should be (52, 1).
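A minimal sketch of those two fixes, assuming 52 samples per class:

import numpy as np

# Integer labels instead of strings, shaped (52, 1) rather than (52,).
X_bad_id = np.zeros((52, 1), dtype='float32')    # label 0 for every "bad" image
X_good_id = np.ones((52, 1), dtype='float32')    # label 1 for every "good" image
print(X_bad_id.shape, X_good_id.shape)           # (52, 1) (52, 1)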

@vinayakumarr
Author

No, I am getting the same error.

@vinayakumarr
Author

ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s), but instead got the following list of 2 arrays: [array([['1'],
['1'],
['1'],
['1'],
['1'],
['1'],
['1'],
['1'],
['1'],
['1'],
['1'],
['1'],
['1'],
['1'],
...

@chez8990

chez8990 commented Mar 29, 2018

You need to concatenate X_good_id and X_bad_id using numpy. Your model.fit line is treating the model as multi-output ([X_good_id, X_bad_id]), whereas the model you have built has only one output: output = Dense(1, activation='sigmoid')(hidden2).

So instead, try doing the following

X_out = np.concatenate([X_good_id, X_bad_id])
model.fit([X_good, X_bad], X_out, epochs=50, batch_size=32)

@vinayakumarr
Author

Input: it should take two images at a time (a good and a bad image). It should produce two outputs.

The code is given below

import cv2

X_bad = []
X_bad_id = []
for i in range(1,53):
    a = 'data/train/data/bad/bad'+str(i)+'.jpg'
    img = cv2.imread(a)
    X_bad.append(img)
    X_bad_id.append('0')

import numpy as np
X_bad = np.array(X_bad)
X_bad_id = np.array(X_bad_id)

X_good = []
X_good_id = []
for i in range(1,53):
    a = 'data/train/data/good/good'+str(i)+'.jpg'
    img = cv2.imread(a)
    X_good.append(img)
    X_good_id.append('1')

import numpy as np
X_good = np.array(X_good)
X_good_id = np.array(X_good_id)

print(X_good.shape)
print(X_bad.shape)

X_bad_id = np.array(X_bad_id)
X_good_id = np.array(X_good_id)

X_bad_id = X_bad_id.reshape((X_bad_id.shape[0], 1))
X_good_id = X_good_id.reshape((X_good_id.shape[0], 1))

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Conv2D
from keras.layers.pooling import MaxPooling2D
from keras.layers.merge import concatenate
from keras.optimizers import SGD
from keras.callbacks import ModelCheckpoint
from keras.layers import Input
from keras.models import Model

X_good = X_good.astype('float32')
X_bad = X_bad.astype('float32')

X_good /= 255
X_bad /= 255

visible1 = Input(shape=(250,250,3))
conv11 = Conv2D(32, kernel_size=4, activation='relu')(visible1)
pool11 = MaxPooling2D(pool_size=(2, 2))(conv11)
conv12 = Conv2D(16, kernel_size=4, activation='relu')(pool11)
pool12 = MaxPooling2D(pool_size=(2, 2))(conv12)
flat1 = Flatten()(pool12)

visible2 = Input(shape=(250,250,3))
conv21 = Conv2D(32, kernel_size=4, activation='relu')(visible2)
pool21 = MaxPooling2D(pool_size=(2, 2))(conv21)
conv22 = Conv2D(16, kernel_size=4, activation='relu')(pool21)
pool22 = MaxPooling2D(pool_size=(2, 2))(conv22)
flat2 = Flatten()(pool22)

merge = concatenate([flat1, flat2])

# interpretation model

hidden1 = Dense(10, activation='relu')(merge)
hidden2 = Dense(10, activation='relu')(hidden1)
output = Dense(2, activation='softmax')(hidden2)
model = Model(inputs=[visible1, visible2], outputs=output)

model.compile(optimizer='adam', loss='categorical_crossentropy')
X_out = np.concatenate([X_good_id, X_bad_id])
model.fit([X_good, X_bad], [X_good_id, X_bad_id],epochs=50, batch_size=32)

It is showing the error below. Could you please tell me how to correct it?

---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
in ()
1 model.compile(optimizer='adam', loss='categorical_crossentropy')
2 X_out = np.concatenate([X_good_id, X_bad_id])
----> 3 model.fit([X_good, X_bad], [X_good_id, X_bad_id],epochs=50, batch_size=32)

/home/vinay/securetensor/local/lib/python2.7/site-packages/keras/engine/training.pyc in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
1628 sample_weight=sample_weight,
1629 class_weight=class_weight,
-> 1630 batch_size=batch_size)
1631 # Prepare validation data.
1632 do_validation = False

/home/vinay/securetensor/local/lib/python2.7/site-packages/keras/engine/training.pyc in _standardize_user_data(self, x, y, sample_weight, class_weight, check_array_lengths, batch_size)
1478 output_shapes,
1479 check_batch_axis=False,
-> 1480 exception_prefix='target')
1481 sample_weights = _standardize_sample_weights(sample_weight,
1482 self._feed_output_names)

/home/vinay/securetensor/local/lib/python2.7/site-packages/keras/engine/training.pyc in _standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
84 'Expected to see ' + str(len(names)) + ' array(s), '
85 'but instead got the following list of ' +
---> 86 str(len(data)) + ' arrays: ' + str(data)[:200] + '...')
87 elif len(names) > 1:
88 raise ValueError(

ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s), but instead got the following list of 2 arrays: [array([['1'],
['1'],
['1'],
['1'],
['1'],
['1'],
['1'],
['1'],
['1'],
['1'],
['1'],
['1'],
['1'],
['1'],
...

@chez8990

@vinayakumarr I think I misunderstood what you wanted to do with the model. If you do want 2 outputs, then you need to specify it in the Model API, see here for the docs.

Example:

output1 = Dense(1, activation='sigmoid')(x)
output2 = Dense(1, activation='sigmoid')(x)
model = Model(inputs=[visible1, visible2], outputs=[output1, output2])

then you can call fit just like you did

model.fit([X_good, X_bad], [X_good_id, X_bad_id], epochs=50, batch_size=32)
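Note that with two output heads, model.predict also returns a list of two arrays, one per output. A minimal sketch:

# With outputs=[output1, output2], predict returns one array per output head.
pred_good, pred_bad = model.predict([X_good, X_bad])
print(pred_good.shape, pred_bad.shape)   # e.g. (52, 1) and (52, 1)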

@vinayakumarr
Author

It works. But I have a doubt. My problem is this:

Input: it should take two images at a time (a good and a bad image) and produce two outputs. Is the method I followed correct?

@chez8990

chez8990 commented Mar 29, 2018

@vinayakumarr Can you elaborate more? If there are no more technical issues, then please close the issue.

@vinayakumarr
Author

The code is already given above. It works fine.

When I call model.predict([X_good, X_bad]), it predicts properly. I get the output given below:

[array([[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.]], dtype=float32), array([[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.]], dtype=float32)]

When I swap the inputs as given below:
model.predict([X_bad, X_good])
it predicts the same vector as above.

Should the model give the same output even if I reverse the inputs?

@chez8990

@vinayakumarr
You are experiencing something known as overfitting. You should find more data or use a simpler network.
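A minimal sketch of one way to shrink and regularize the head, assuming dropout between the dense layers is acceptable (an illustration only, not the thread's agreed fix; it reuses merge from the model above):

from keras.layers import Dense, Dropout

# Smaller dense head with dropout to reduce overfitting on such a small dataset.
hidden1 = Dense(8, activation='relu')(merge)
drop1 = Dropout(0.5)(hidden1)
output1 = Dense(1, activation='sigmoid')(drop1)
output2 = Dense(1, activation='sigmoid')(drop1)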

@vinayakumarr
Author

So then there is no problem in the network, am I right? Also, can I reverse the inputs during the testing stage?

@vinayakumarr
Author

vinayakumarr commented Apr 4, 2018

The code is given below. It takes two sets of images: 33 images from the first category and 33 from the other category. Two inputs are passed at a time, and the network should tell whether the pair belongs to the first or second category.
During testing, I passed the same data set, 66 images in total (33 from each category). It should produce 33 outputs instead of 66. How do I do this?

import cv2

X_bad = []
X_bad_id = []
for i in range(1,33):
    a = 'data/train/data/bad/bad'+str(i)+'.jpg'
    img = cv2.imread(a)
    X_bad.append(img)
    X_bad_id.append('0')

import numpy as np
X_bad = np.array(X_bad)
X_bad_id = np.array(X_bad_id)
X_good = []
X_good_id = []
for i in range(1,33):
    a = 'data/train/data/good/good'+str(i)+'.jpg'
    img = cv2.imread(a)
    X_good.append(img)
    X_good_id.append('1')

import numpy as np
X_good = np.array(X_good)
X_good_id = np.array(X_good_id)
print(X_good.shape)
print(X_bad.shape)
X_bad_id = np.array(X_bad_id)
X_good_id = np.array(X_good_id)
X_bad_id = X_bad_id.reshape((X_bad_id.shape[0], 1))
X_good_id = X_good_id.reshape((X_good_id.shape[0], 1))

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Conv2D
from keras.layers.pooling import MaxPooling2D
from keras.layers.merge import concatenate
from keras.optimizers import SGD
from keras.callbacks import ModelCheckpoint
from keras.layers import Input
from keras.models import Model
X_good = X_good.astype('float32')
X_bad = X_bad.astype('float32')

X_good /= 255
X_bad /= 255
visible1 = Input(shape=(250,250,3))
conv11 = Conv2D(32, kernel_size=4, activation='relu')(visible1)
pool11 = MaxPooling2D(pool_size=(2, 2))(conv11)
conv12 = Conv2D(16, kernel_size=4, activation='relu')(pool11)
pool12 = MaxPooling2D(pool_size=(2, 2))(conv12)
flat1 = Flatten()(pool12)

visible2 = Input(shape=(250,250,3))
conv21 = Conv2D(32, kernel_size=4, activation='relu')(visible2)
pool21 = MaxPooling2D(pool_size=(2, 2))(conv21)
conv22 = Conv2D(16, kernel_size=4, activation='relu')(pool21)
pool22 = MaxPooling2D(pool_size=(2, 2))(conv22)
flat2 = Flatten()(pool22)

merge = concatenate([flat1, flat2])

# interpretation model

hidden1 = Dense(10, activation='relu')(merge)
hidden2 = Dense(10, activation='relu')(hidden1)
output1 = Dense(1, activation='sigmoid')(hidden1)
output2 = Dense(1, activation='sigmoid')(hidden1)
model = Model(inputs=[visible1, visible2], outputs=[output1, output2])
model.compile(optimizer='adam', loss='binary_crossentropy',metrics=['accuracy'])
X_out = np.concatenate([X_good_id, X_bad_id])
model.fit([X_good, X_bad], [X_good_id, X_bad_id],epochs=50, batch_size=32)
pr = model.predict([X_good, X_bad])

pr is a list of two lists, with 33 predictions in one list and 33 in the other. But it should contain only one list with 33 elements. How do I do this?
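A minimal sketch of one possible way to get a single prediction per pair, assuming a single sigmoid head trained on one label per pair is acceptable (an illustration, not a confirmed fix from this thread; it reuses merge, visible1, visible2, X_good and X_bad from the code above):

import numpy as np
from keras.layers import Dense
from keras.models import Model

# Hypothetical single-output variant: one decision per (good, bad) pair.
hidden1 = Dense(10, activation='relu')(merge)
hidden2 = Dense(10, activation='relu')(hidden1)
output = Dense(1, activation='sigmoid')(hidden2)      # one value per pair
model = Model(inputs=[visible1, visible2], outputs=output)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# One target per pair, e.g. 1 when the first input is the "good" image.
y_pair = np.ones((X_good.shape[0], 1), dtype='float32')
model.fit([X_good, X_bad], y_pair, epochs=50, batch_size=32)
pr = model.predict([X_good, X_bad])                   # shape (num_pairs, 1)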

@zbokaee

zbokaee commented Jul 18, 2018

Hi, I'm working on a sentiment analysis project with Keras. Since I'm new to Keras, I don't really know how to solve this problem. This is my Keras model:

model = Sequential()

model.add(Conv1D(32, kernel_size=3, activation='elu', padding='same',
input_shape=(max_tweet_length, vector_size)))
model.add(Conv1D(32, kernel_size=3, activation='elu', padding='same'))
model.add(Conv1D(32, kernel_size=3, activation='elu', padding='same'))
model.add(Conv1D(32, kernel_size=3, activation='elu', padding='same'))
model.add(Dropout(0.25))
model.add(Conv1D(32, kernel_size=2, activation='elu', padding='same'))
model.add(Conv1D(32, kernel_size=2, activation='elu', padding='same'))
model.add(Conv1D(32, kernel_size=2, activation='elu', padding='same'))
model.add(Conv1D(32, kernel_size=2, activation='elu', padding='same'))
model.add(Dropout(0.25))
model.add(Dense(256, activation='relu'))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(2, activation='softmax'))
and when I want to predict the sentiment of an input, I face this error:

""" ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s), but instead got the following list of 3 arrays: [array([[ 0.08031651, 0.05684812, 0.22872323, ..., -0.19047852..."""

Sorry if this is a stupid question! I know it has been asked several times on Stack Overflow, and I tried most of the suggestions, but they did not work for me, probably because of my limited knowledge of Keras.
Thanks a lot.

@niti2539

@zbokaee
I have met this problem before; I think it relates to your input size. Following the error text ("size the model expected. Expected to see 1 array(s), but instead got the following list of 3"), you have to change your input size or the model's input.
model.add(Conv1D(32, kernel_size=3, activation='elu', padding='same',
input_shape=(max_tweet_length, vector_size))) <<<<<< I think it is this line
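A quick sanity check along those lines, as a minimal sketch, assuming the data you pass to predict is in a variable called X_new (a hypothetical name):

import numpy as np

# Compare what the model expects with what you are actually passing.
print(model.input_shape)      # e.g. (None, max_tweet_length, vector_size)
X_new = np.asarray(X_new)     # a single stacked array, not a Python list of arrays
print(X_new.shape)            # should be (num_samples, max_tweet_length, vector_size)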

@boxingprogrammer

I had this trouble on a later version of tensorflow/keras, but not with an earlier version. The later version was on SageMaker with Python 3.6, keras 2.2.4, tensorflow 1.12. The trick was to convert the inputs to np.array:
x=np.array(x)
y=np.array(y)

@sunilchinnahalli

model.fit({'main_input': X_text_train, 'aux_input': X_number_train},
{'main_output': y_train, 'aux_output': y_train},
validation_data=[{'main_input': X_text_test, 'aux_input': X_number_test}, {'main_output': y_test, 'aux_output': y_test}],
epochs=20,
batch_size=2048)

predicted_classes = model.predict(np.array(X_text_train))
#predicted_classes = model.predict(np.array(X_text_train), np.array(X_number_train))
#predicted_classes = model.predict(X_text_train)

I am getting the following error:
ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays: [array([[ 0, 0, 0, ..., 7614, 450, 963],
[ 0, 0, 0, ..., 28, 3, 1259],
[ 0, 0, 0, ..., 332, 136, 140],
...,
[ 0, 0, 0, ..., 1174, ...

@SaiSandeepKantareddy

[X_good, X_bad], [X_good_id, X_bad_id]

Why do you pass the arrays like this? What is the use of this?

@shahboztjk

shahboztjk commented Oct 30, 2019

I think this is correct (I changed it a little):
model.fit([X_good, X_good_id], [X_bad, X_bad_id], epochs=50, batch_size=32)

@tamaraalshekhli

tamaraalshekhli commented Dec 11, 2019

I got a similar error in a semantic segmentation task with 2 inputs and 2 outputs. Has anyone tried a similar task? How do I use multiple image_generator and mask_generator instances for the 2 inputs and 2 outputs?

@Mahmood-Hoseini

(Quoting @sunilchinnahalli's comment above.)

I had the same issue, and turning the list of arrays into arrays solved it, e.g. do this: {'main_input': np.asarray(X_text_train), 'aux_input': np.asarray(X_number_train)}.
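For the predict call specifically, a minimal sketch assuming the model was built with two named inputs, 'main_input' and 'aux_input', as in @sunilchinnahalli's code above:

import numpy as np

# A two-input model also expects both inputs at predict time, passed the same
# way as in fit: a dict keyed by the Input layer names (or a list of two arrays).
predicted_classes = model.predict({'main_input': np.asarray(X_text_train),
                                   'aux_input': np.asarray(X_number_train)})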

@nivetha1234

I got a similar error when I use a dataloader (keras.utils.Sequence) to load my data from the dataset. I have also tried loading only one input image at a time with its corresponding mask. Training and validation execute well; I get this error when I try to compute the confusion matrix after prediction.
Here is my code; I have included only the necessary parts.
class Dataset:
    CLASSES = ['background', 'cloud_shadow', 'double_plant', 'planter_skip', 'standing_water',
               'waterway', 'weed_cluster', 'ignore']

    def __init__(self, mode='train', image=None, target=None, windSize=(512, 512), classes=None,
                 num_samples=10000, pre_norm=False, scale=1.0 / 1.0, preprocessing=None):
        assert mode in ['train', 'val', 'test']
        self.mode = mode
        self.norm = pre_norm
        self.winsize = windSize
        self.samples = num_samples
        self.scale = scale
        self.image_files = image  # image_files = [[bands1, bands2,..], ...]
        self.mask_files = target  # mask_files = [gt1, gt2, ...]
        self.classes = None,
        self.preprocessing = preprocessing
        self.class_values = [self.CLASSES.index(cls.lower()) for cls in classes]

    def __len__(self):
        return len(self.image_files)

    def __getitem__(self, idx):
        filename = self.image_files[idx]
        path, _ = os.path.split(filename)
        image = imload(filename, scale_rate=self.scale)

        mask = imload(self.mask_files[idx], gray=True, scale_rate=self.scale)
        mask[mask==255] = 7

        mask = [(mask == v) for v in self.class_values]
        mask = np.stack(mask, axis=-1).astype('float')

        #mask=np.expand_dims(mask,axis=2)

        #image = np.asarray(image, np.float32).transpose((2, 0, 1)) / 255.0
        #image = np.asarray(image, np.float32)/ 255.0
        #mask = np.asarray(mask, dtype='int64')

        #image, mask = torch.from_numpy(image), torch.from_numpy(mask)

        if self.preprocessing:
            sample = self.preprocessing(image=image, mask=mask)
            image, mask = sample['image'], sample['mask']

        return image, mask

class Dataloder(keras.utils.Sequence):
    """Load data from dataset and form batches

    Args:
        dataset: instance of Dataset class for image loading and preprocessing.
        batch_size: Integer number of images in batch.
        shuffle: Boolean, if `True` shuffle image indexes each epoch.
    """

    def __init__(self, dataset, batch_size=1, shuffle=False):
        self.dataset = dataset
        self.batch_size = batch_size
        self.shuffle = shuffle
        self.indexes = np.arange(len(dataset))

        self.on_epoch_end()

    def __getitem__(self, i):
        # collect batch data
        start = i * self.batch_size
        stop = (i + 1) * self.batch_size
        data = []
        for j in range(start, stop):
            data.append(self.dataset[j])

        # transpose list of lists
        batch = [np.stack(samples, axis=0) for samples in zip(*data)]

        return batch

    def __len__(self):
        """Denotes the number of batches per epoch"""
        return len(self.indexes) // self.batch_size

    def on_epoch_end(self):
        """Callback function to shuffle indexes each epoch"""
        if self.shuffle:
            self.indexes = np.random.permutation(self.indexes)

valid_dataset = Dataset(
    image=va_input_img_paths,
    target=va_target_img_paths,
    classes=['background', 'cloud_shadow', 'double_plant', 'planter_skip', 'standing_water',
             'waterway', 'weed_cluster', 'ignore'],
    preprocessing=get_preprocessing(preprocess_input),
)

valid_dataloader = Dataloder(valid_dataset, batch_size=1, shuffle=False)

Y_pred = model1.predict_generator(valid_dataloader, len(valid_dataloader))
y_pred = np.argmax(Y_pred, axis=1)
print('Confusion Matrix')
cm = confusion_matrix(valid_dataloader.classes, y_pred)

Please help me resolve this issue. Thanks in advance.
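A minimal sketch of one way to compute the confusion matrix here, assuming the masks are one-hot on the last axis and that sklearn's confusion_matrix is acceptable (an illustration, not a confirmed fix):

import numpy as np
from sklearn.metrics import confusion_matrix

# Collect ground-truth masks from the same Dataset the predictions come from
# (batch_size=1 and shuffle=False keep the ordering aligned).
y_true = np.concatenate([valid_dataset[i][1].argmax(axis=-1).ravel()
                         for i in range(len(valid_dataset))])

Y_pred = model1.predict_generator(valid_dataloader, len(valid_dataloader))
# For segmentation the class axis is the last one, not axis=1.
y_pred = Y_pred.argmax(axis=-1).ravel()

print('Confusion Matrix')
print(confusion_matrix(y_true, y_pred))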

@qyum

qyum commented Apr 18, 2021

How to fix this issue:

Here is my code snippet:

outputs = {'ctc': np.zeros([final_df.shape[0]])}

inputs = {'input': np.asarray(X_data),
          'labels': np.asarray(labels),
          'input_length': np.asarray(input_length),
          'label_length': np.asarray(label_length)
          }

I fit the model on these inputs and outputs. Whenever I try to predict with the model, it returns the same error:

model.fit(inputs, outputs, batch_size=10, epochs=epochs, validation_split=0.25)

def get_predictions(model, data_point):
    print(data_point.shape[0])
    # obtain and decode the acoustic model's predictions
    model.load_weights("E:/Deep_speech_recognition/model.h5")
    prediction = model.predict(np.array(np.expand_dims(data_point, axis=0)))
    output_length = [model.output_length(data_point.shape[0])]
    pred_ints = (K.eval(K.ctc_decode(
        prediction, output_length)[0][0])+1).flatten().tolist()
    print('-'*80)
    print('Predicted transcription:\n' + '\n' + ''.join(int_sequence_to_text(pred_ints)))
    print('-'*80)

get_predictions(model, mfcc_vec[0])

This shows:
ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 4 array(s), but instead got the following list of 1 arrays: [array([[[-717.90515, -717.90515, -717.90515, ..., 0. ,
0. , 0. ],
[ 0. , 0. , 0. , ..., 0. ,
0. , 0. ],
...

@qyum

qyum commented Apr 19, 2021

@Mahmood-Hoseini
I am trying it this way; whenever I try to predict with the model, it returns the same error:

outputs = {'ctc': np.zeros([final_df.shape[0]])}

inputs = {'input': np.asarray(X_data),
          'labels': np.asarray(labels),
          'input_length': np.asarray(input_length),
          'label_length': np.asarray(label_length)
          }
model.fit(inputs, outputs, batch_size=10, epochs=epochs, validation_split=0.25)

Can you advise me, please?
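One common pattern for this kind of CTC setup, sketched under the assumption that the training model has an Input layer named 'input' and a softmax layer named 'softmax' (both names are hypothetical here), is to build a separate inference model that only takes the audio features, so predict no longer expects all four training inputs:

import numpy as np
from keras.models import Model

# Hypothetical inference model: map only the 'input' tensor to the softmax
# output, bypassing the 'labels'/'input_length'/'label_length' inputs that
# exist only to compute the CTC loss during training.
inference_model = Model(inputs=model.get_layer('input').input,
                        outputs=model.get_layer('softmax').output)

prediction = inference_model.predict(np.expand_dims(data_point, axis=0))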

@usthbstar

(Quoting @chez8990's earlier reply above.)

Dear @chez8990

If I only need one output, how should it be done?
In my model, the following error appears:

Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s), but instead got the following list of 2 arrays:

Please help.
