Preprocessing a new png/jpg image to predict with a deep learning model #60

myamaak opened this issue Nov 4, 2020 · 0 comments

When I load the .npy bitmap data provided by Google Quick Draw, prediction works fine with my deep learning model.

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

data_url = '/content/gdrive/My Drive/Colab Notebooks/img/numpy_bitmap/sun.npy'
example_cat = np.load(data_url)         # shape: (number of images, 784)

cat_len = example_cat.shape[0]          # total number of images

start_num = 11

example = example_cat[start_num, :784]  # one flattened 28x28 drawing

plt.imshow(example.reshape(28, 28))
example = example.reshape(28, 28, 1).astype('float32')
example /= 255.0
print(example)

pred = model.predict(np.expand_dims(example, axis=0))[0]
ind = (-pred).argsort()[:5]             # indices of the five highest scores
print(ind)
latex = [categories_dict[x] for x in ind]
plt.imshow(example.squeeze())
print(latex)
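For clarity, the `(-pred).argsort()[:5]` line picks the indices of the five highest-scoring classes; here is a minimal self-contained NumPy illustration with a made-up score vector:

```python
import numpy as np

# Hypothetical softmax-style scores for 6 classes (made-up values).
pred = np.array([0.05, 0.40, 0.10, 0.25, 0.15, 0.03])

# Negating the scores turns argsort's ascending order into descending,
# so the first k indices are the top-k classes.
top5 = (-pred).argsort()[:5]
print(top5)  # -> [1 3 4 2 0]
```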

Somehow the image file won't upload here, so I attach the result of the above code as a link: https://s3.us-west-2.amazonaws.com/secure.notion-static.com/57460690-0ad2-42c1-9d5a-cc9d756534ea/Untitled.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAT73L2G45O3KS52Y5%2F20201104%2Fus-west-2%2Fs3%2Faws4_request&X-Amz-Date=20201104T012109Z&X-Amz-Expires=86400&X-Amz-Signature=cc6febeea2aee67315b8c7d353e7e378677ac965707a8ac0eb7bb6ecfb8a5f0b&X-Amz-SignedHeaders=host&response-content-disposition=filename%20%3D%22Untitled.png%22

Then I captured the exact same image and saved it as a PNG. I loaded the file as a NumPy array and preprocessed it so that I could feed it to my model to predict which category it belongs to. Somehow this does not work and returns a completely different prediction. The same thing happens for every new PNG image I try.

import cv2
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

im = cv2.imread('/content/gdrive/My Drive/Colab Notebooks/sun2.PNG', cv2.IMREAD_GRAYSCALE)
resize_img = cv2.resize(im, (28, 28), interpolation=cv2.INTER_AREA)  # already uint8
img = resize_img.reshape(28, 28, 1).astype('float32')
img /= 255.0

pred = model.predict(np.expand_dims(img, axis=0))[0]
ind = (-pred).argsort()[:5]
print(ind)
latex = [categories_dict[x] for x in ind]
plt.imshow(img.squeeze())
print(latex)
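One mismatch worth checking (a guess on my part, not a verified diagnosis): the Quick Draw bitmaps store strokes as bright values on a dark (zero) background, while a screenshot saved as a PNG typically has dark strokes on a white background, so the PNG array may need inverting (`255 - img`) before normalization. A small NumPy-only sketch of that check, using synthetic arrays in place of the real data:

```python
import numpy as np

def looks_inverted(a, b):
    """Heuristic: if one image's mean is bright (white background) and the
    other's is dark (black background), their polarity differs."""
    return (a.mean() > 127) != (b.mean() > 127)

# Synthetic stand-ins: 'npy_img' mimics a Quick Draw bitmap (black background,
# white strokes); 'png_img' mimics a screenshot (white background, dark strokes).
npy_img = np.zeros((28, 28), dtype=np.uint8)
npy_img[10:18, 10:18] = 255
png_img = 255 - npy_img

if looks_inverted(npy_img, png_img):
    png_img = 255 - png_img  # flip polarity to match the training data

print(np.array_equal(npy_img, png_img))  # -> True
```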

Again, I attach the result of this code as a link: https://s3.us-west-2.amazonaws.com/secure.notion-static.com/a64bd7ce-f72f-4c67-a9f3-0689421ef10e/Untitled.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAT73L2G45O3KS52Y5%2F20201104%2Fus-west-2%2Fs3%2Faws4_request&X-Amz-Date=20201104T012838Z&X-Amz-Expires=86400&X-Amz-Signature=1cc52972a8a50202efa90618acd41c8be483f188f5b70d092ef63c6bc5ce8a18&X-Amz-SignedHeaders=host&response-content-disposition=filename%20%3D%22Untitled.png%22

Below is how I preprocessed the data and defined my model.

# Reshape and normalize
x_train = x_train.reshape(x_train.shape[0], image_size, image_size, 1).astype('float32')
x_test = x_test.reshape(x_test.shape[0], image_size, image_size, 1).astype('float32')
#image_size is 28

x_train /= 255.0
x_test /= 255.0

# Convert class vectors to class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

def cnn_model():
    # create model
    model = Sequential()
    model.add(Conv2D(30, (5, 5), input_shape=x_train.shape[1:], activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(15, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.2))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dense(50, activation='relu'))
    model.add(Dense(num_classes, activation='softmax'))
    # Compile model
    
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
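For reference, the feature-map sizes of the model above can be worked out by hand: a valid-padding conv shrinks each side by kernel−1, and a 2×2 max pool halves it (flooring). A quick sanity check of that arithmetic for the 28×28×1 input:

```python
def conv_out(size, kernel):   # 'valid' padding, stride 1 (the Keras defaults)
    return size - kernel + 1

def pool_out(size, pool=2):   # non-overlapping pooling, floor division
    return size // pool

s = 28
s = conv_out(s, 5)   # Conv2D(30, (5, 5))  -> 24
s = pool_out(s)      # MaxPooling2D(2, 2)  -> 12
s = conv_out(s, 3)   # Conv2D(15, (3, 3))  -> 10
s = pool_out(s)      # MaxPooling2D(2, 2)  -> 5
flat = s * s * 15    # Flatten()           -> 375 features into Dense(128)
print(s, flat)       # -> 5 375
```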

The training process and evaluation results are shown below.

Epoch 1/100
22356/22356 [==============================] - 1323s 59ms/step - loss: 2.7714 - accuracy: 0.3795 - val_loss: 2.2759 - val_accuracy: 0.4751
Epoch 2/100
22356/22356 [==============================] - 1339s 60ms/step - loss: 2.3925 - accuracy: 0.4481 - val_loss: 2.1659 - val_accuracy: 0.4948
Epoch 3/100
22356/22356 [==============================] - 1323s 59ms/step - loss: 2.3365 - accuracy: 0.4588 - val_loss: 2.1333 - val_accuracy: 0.5015
Epoch 4/100
22356/22356 [==============================] - 1303s 58ms/step - loss: 2.3131 - accuracy: 0.4630 - val_loss: 2.1396 - val_accuracy: 0.4996
Epoch 5/100
22356/22356 [==============================] - 1262s 56ms/step - loss: 2.3013 - accuracy: 0.4655 - val_loss: 2.1199 - val_accuracy: 0.5026
Epoch 6/100
22356/22356 [==============================] - 1326s 59ms/step - loss: 2.2932 - accuracy: 0.4663 - val_loss: 2.1190 - val_accuracy: 0.5046
Epoch 7/100
22356/22356 [==============================] - 1269s 57ms/step - loss: 2.2870 - accuracy: 0.4676 - val_loss: 2.1067 - val_accuracy: 0.5053
Epoch 8/100
22356/22356 [==============================] - 1299s 58ms/step - loss: 2.2844 - accuracy: 0.4678 - val_loss: 2.1090 - val_accuracy: 0.5053
Epoch 9/100
22356/22356 [==============================] - 1288s 58ms/step - loss: 2.2828 - accuracy: 0.4683 - val_loss: 2.1147 - val_accuracy: 0.5045
Epoch 10/100
22356/22356 [==============================] - 1289s 58ms/step - loss: 2.2797 - accuracy: 0.4683 - val_loss: 2.0907 - val_accuracy: 0.5073
Epoch 11/100
22356/22356 [==============================] - 1280s 57ms/step - loss: 2.2784 - accuracy: 0.4690 - val_loss: 2.1087 - val_accuracy: 0.5058
Epoch 12/100
22356/22356 [==============================] - 1262s 56ms/step - loss: 2.2787 - accuracy: 0.4688 - val_loss: 2.1078 - val_accuracy: 0.5035
Epoch 13/100
22356/22356 [==============================] - 1335s 60ms/step - loss: 2.2773 - accuracy: 0.4690 - val_loss: 2.1078 - val_accuracy: 0.5049
Epoch 14/100
22356/22356 [==============================] - 1292s 58ms/step - loss: 2.2789 - accuracy: 0.4687 - val_loss: 2.1239 - val_accuracy: 0.5014
Epoch 15/100
22356/22356 [==============================] - 1277s 57ms/step - loss: 2.2824 - accuracy: 0.4676 - val_loss: 2.1220 - val_accuracy: 0.5016
Epoch 16/100
22356/22356 [==============================] - 1291s 58ms/step - loss: 2.2816 - accuracy: 0.4682 - val_loss: 2.1093 - val_accuracy: 0.5058
CPU times: user 18h 13min 31s, sys: 4h 19min 8s, total: 22h 32min 40s
Wall time: 5h 46min 14s
19407/19407 [==============================] - 101s 5ms/step - loss: 2.1135 - accuracy: 0.5047
Test accuracy: 50.47%

I assume something is wrong with how I am preprocessing the data, but I cannot find why this is happening or what I am doing wrong. I would be glad if you could look at what needs to be done to my code or data. Thank you for open-sourcing this amazing project.
