Error occurring while running the code #1

Open
vikash512 opened this issue Jul 31, 2017 · 9 comments
@vikash512

/home/ashok/anaconda3/bin/python /home/ashok/PycharmProjects/Tensorflow/food-image-classification--master/imageClassificationByCNN.py
Using TensorFlow backend.
Found 4654 images belonging to 10 classes.
Found 1168 images belonging to 10 classes.
/home/ashok/PycharmProjects/Tensorflow/food-image-classification--master/imageClassificationByCNN.py:121: UserWarning: Update your Conv2D call to the Keras 2 API: Conv2D(32, (7, 7), input_shape=(128, 128,...)
model.add(Convolution2D(32, 7, 7, input_shape=(128, 128, 3)))
/home/ashok/PycharmProjects/Tensorflow/food-image-classification--master/imageClassificationByCNN.py:124: UserWarning: Update your Conv2D call to the Keras 2 API: Conv2D(64, (5, 5))
model.add(Convolution2D(64, 5, 5))
/home/ashok/PycharmProjects/Tensorflow/food-image-classification--master/imageClassificationByCNN.py:127: UserWarning: Update your Conv2D call to the Keras 2 API: Conv2D(128, (3, 3))
model.add(Convolution2D(128, 3, 3))
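As an aside, the warnings above already spell out the Keras 2 form of these layer calls. A minimal sketch of the updated definitions, assuming Keras 2.x (same filter counts and kernel sizes as in the log):

from keras.models import Sequential
from keras.layers import Conv2D

# Keras 2 style: kernel size passed as a tuple, as the warnings suggest
model = Sequential()
model.add(Conv2D(32, (7, 7), input_shape=(128, 128, 3)))
model.add(Conv2D(64, (5, 5)))
model.add(Conv2D(128, (3, 3)))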


Layer (type)                 Output Shape              Param #
conv2d_1 (Conv2D)            (None, 122, 122, 32)      4736
activation_1 (Activation)    (None, 122, 122, 32)      0
max_pooling2d_1 (MaxPooling2 (None, 61, 61, 32)        0
conv2d_2 (Conv2D)            (None, 57, 57, 64)        51264
activation_2 (Activation)    (None, 57, 57, 64)        0
max_pooling2d_2 (MaxPooling2 (None, 28, 28, 64)        0
conv2d_3 (Conv2D)            (None, 26, 26, 128)       73856
activation_3 (Activation)    (None, 26, 26, 128)       0
max_pooling2d_3 (MaxPooling2 (None, 13, 13, 128)       0
dropout_1 (Dropout)          (None, 13, 13, 128)       0
flatten_1 (Flatten)          (None, 21632)             0
dense_1 (Dense)              (None, 128)               2769024
activation_4 (Activation)    (None, 128)               0
dropout_2 (Dropout)          (None, 128)               0
dense_2 (Dense)              (None, 10)                1290
activation_5 (Activation)    (None, 10)                0
Total params: 2,900,170.0
Trainable params: 2,900,170.0
Non-trainable params: 0.0


None
/home/ashok/PycharmProjects/Tensorflow/food-image-classification--master/imageClassificationByCNN.py:188: UserWarning: Update your fit_generator call to the Keras 2 API: fit_generator(<keras.pre..., verbose=1, validation_data=<keras.pre..., callbacks=[<keras.ca..., steps_per_epoch=145, epochs=100, validation_steps=1168)
callbacks = [earlystopper, lrate, checkpoint, hist])
Epoch 1/100
2017-07-31 10:22:34.909867: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-07-31 10:22:34.909906: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-07-31 10:22:34.909917: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-07-31 10:22:34.909925: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-07-31 10:22:34.909933: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
1/145 [..............................] - ETA: 858s - loss: 2.3407 - acc: 0.0625
[... per-step progress lines trimmed; loss fell to ~2.00 and accuracy rose to ~0.28 by step 144 ...]
144/145 [============================>.] - ETA: 3s - loss: 1.9996 - acc: 0.2823loss: 1.99745497539
New learning rate: 0.00737027467905
Epoch 00000: val_acc improved from -inf to 0.31695, saving model to /home/ashok/Desktop/out/modelWeights/cnnModelDEp80weights.best.hdf5
Traceback (most recent call last):
File "/home/ashok/PycharmProjects/Tensorflow/food-image-classification--master/imageClassificationByCNN.py", line 188, in
callbacks = [earlystopper, lrate, checkpoint, hist])
File "/home/ashok/anaconda3/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 88, in wrapper
return func(*args, **kwargs)
File "/home/ashok/anaconda3/lib/python3.6/site-packages/keras/models.py", line 1097, in fit_generator
initial_epoch=initial_epoch)
File "/home/ashok/anaconda3/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 88, in wrapper
return func(*args, **kwargs)
File "/home/ashok/anaconda3/lib/python3.6/site-packages/keras/engine/training.py", line 1913, in fit_generator
callbacks.on_epoch_end(epoch, epoch_logs)
File "/home/ashok/anaconda3/lib/python3.6/site-packages/keras/callbacks.py", line 75, in on_epoch_end
callback.on_epoch_end(epoch, logs)
File "/home/ashok/anaconda3/lib/python3.6/site-packages/keras/callbacks.py", line 400, in on_epoch_end
self.model.save(filepath, overwrite=True)
File "/home/ashok/anaconda3/lib/python3.6/site-packages/keras/engine/topology.py", line 2429, in save
save_model(self, filepath, overwrite)
File "/home/ashok/anaconda3/lib/python3.6/site-packages/keras/models.py", line 96, in save_model
f = h5py.File(filepath, 'w')
File "/home/ashok/anaconda3/lib/python3.6/site-packages/h5py/_hl/files.py", line 271, in init
fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
File "/home/ashok/anaconda3/lib/python3.6/site-packages/h5py/_hl/files.py", line 107, in make_fid
fid = h5f.create(name, h5f.ACC_TRUNC, fapl=fapl, fcpl=fcpl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/home/ilan/minonda/conda-bld/h5py_1490028290543/work/h5py/_objects.c:2846)
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/home/ilan/minonda/conda-bld/h5py_1490028290543/work/h5py/_objects.c:2804)
File "h5py/h5f.pyx", line 98, in h5py.h5f.create (/home/ilan/minonda/conda-bld/h5py_1490028290543/work/h5py/h5f.c:2290)
OSError: Unable to create file (Unable to open file: name = '/home/ashok/desktop/out/modelweights/cnnmodeldep80weights.best.hdf5', errno = 2, error message = 'no such file or directory', flags = 13, o_flags = 242)

Process finished with exit code 1
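The traceback ends with h5py failing on errno = 2 ('no such file or directory') while ModelCheckpoint tries to create the .hdf5 file; that usually means the parent directory does not exist, since neither Keras nor h5py creates missing directories. A minimal guard, assuming the checkpoint path from the log:

import os

filepath = '/home/ashok/Desktop/out/modelWeights/cnnModelDEp80weights.best.hdf5'
weightsDir = os.path.dirname(filepath)
if not os.path.exists(weightsDir):
    os.makedirs(weightsDir)  # create out/modelWeights before training starts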

@jingweimo
Owner

jingweimo commented Jul 31, 2017

Please note that the original code is based on Keras 1.1.0 and Theano 0.8.2. If you use TensorFlow, I cannot guarantee that it works.
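One concrete incompatibility to check is the image dimension ordering: Theano defaults to channels-first, TensorFlow to channels-last, and input_shape must match. A backend-agnostic sketch, assuming Keras 1.x where K.image_dim_ordering() returns 'th' or 'tf':

from keras import backend as K

# channels-first for the Theano ordering ('th'), channels-last for TensorFlow ('tf')
if K.image_dim_ordering() == 'th':
    input_shape = (3, 128, 128)
else:
    input_shape = (128, 128, 3)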

@vikash512
Author

Thank you for your reply, but the same error occurs even with Keras on the Theano backend.

/usr/bin/python2.7 /home/ashok/PycharmProjects/Tensorflow/food-image-classification--master/imageClassificationByCNN.py
Using Theano backend.
/home/ashok/.local/lib/python2.7/site-packages/keras/backend/theano_backend.py:1282: UserWarning: DEPRECATION: the 'ds' parameter is not going to exist anymore as it is going to be replaced by the parameter 'ws'.
mode='max')
/home/ashok/.local/lib/python2.7/site-packages/keras/backend/theano_backend.py:1282: UserWarning: DEPRECATION: the 'st' parameter is not going to exist anymore as it is going to be replaced by the parameter 'stride'.
mode='max')
/home/ashok/.local/lib/python2.7/site-packages/keras/backend/theano_backend.py:1282: UserWarning: DEPRECATION: the 'padding' parameter is not going to exist anymore as it is going to be replaced by the parameter 'pad'.
mode='max')


Layer (type)                     Output Shape           Param #   Connected to
convolution2d_1 (Convolution2D)  (None, 122, 122, 32)   4736      convolution2d_input_1[0][0]
activation_1 (Activation)        (None, 122, 122, 32)   0         convolution2d_1[0][0]
maxpooling2d_1 (MaxPooling2D)    (None, 61, 61, 32)     0         activation_1[0][0]
convolution2d_2 (Convolution2D)  (None, 57, 57, 64)     51264     maxpooling2d_1[0][0]
activation_2 (Activation)        (None, 57, 57, 64)     0         convolution2d_2[0][0]
maxpooling2d_2 (MaxPooling2D)    (None, 28, 28, 64)     0         activation_2[0][0]
convolution2d_3 (Convolution2D)  (None, 26, 26, 128)    73856     maxpooling2d_2[0][0]
activation_3 (Activation)        (None, 26, 26, 128)    0         convolution2d_3[0][0]
maxpooling2d_3 (MaxPooling2D)    (None, 13, 13, 128)    0         activation_3[0][0]
dropout_1 (Dropout)              (None, 13, 13, 128)    0         maxpooling2d_3[0][0]
flatten_1 (Flatten)              (None, 21632)          0         dropout_1[0][0]
dense_1 (Dense)                  (None, 128)            2769024   flatten_1[0][0]
activation_4 (Activation)        (None, 128)            0         dense_1[0][0]
dropout_2 (Dropout)              (None, 128)            0         activation_4[0][0]
dense_2 (Dense)                  (None, 10)             1290      dropout_2[0][0]
activation_5 (Activation)        (None, 10)             0         dense_2[0][0]
Total params: 2900170


None
('hello', '/home/ashok/Desktop/out/modelWeights/cnnModelDEp80weights.best.hdf5')
Found 4654 images belonging to 10 classes.
Found 1168 images belonging to 10 classes.
Epoch 1/1
32/4654 [..............................] - ETA: 577s - loss: 2.3012 - acc: 0.0312
[... per-batch progress lines trimmed; loss fell to ~2.10 and accuracy rose to ~0.23 by 4640/4654 ...]
4640/4654 [============================>.] - ETA: 1s - loss: 2.1020 - acc: 0.2259('loss: ', 2.1008654166866281)
('New learning rate: ', 0.0081732401111684842)
Epoch 00000: val_acc improved from -inf to 0.32192, saving model to /home/ashok/Desktop/out/modelWeights/cnnModelDEp80weights.best.hdf5
Traceback (most recent call last):
File "/home/ashok/PycharmProjects/Tensorflow/food-image-classification--master/imageClassificationByCNN.py", line 190, in
callbacks = [earlystopper, lrate, checkpoint, hist])
File "/home/ashok/.local/lib/python2.7/site-packages/keras/models.py", line 874, in fit_generator
pickle_safe=pickle_safe)
File "/home/ashok/.local/lib/python2.7/site-packages/keras/engine/training.py", line 1485, in fit_generator
callbacks.on_epoch_end(epoch, epoch_logs)
File "/home/ashok/.local/lib/python2.7/site-packages/keras/callbacks.py", line 40, in on_epoch_end
callback.on_epoch_end(epoch, logs)
File "/home/ashok/.local/lib/python2.7/site-packages/keras/callbacks.py", line 296, in on_epoch_end
self.model.save(filepath, overwrite=True)
File "/home/ashok/.local/lib/python2.7/site-packages/keras/engine/topology.py", line 2423, in save
save_model(self, filepath, overwrite)
File "/home/ashok/.local/lib/python2.7/site-packages/keras/models.py", line 48, in save_model
f = h5py.File(filepath, 'w')
File "/usr/local/lib/python2.7/dist-packages/h5py/_hl/files.py", line 271, in init
fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
File "/usr/local/lib/python2.7/dist-packages/h5py/_hl/files.py", line 107, in make_fid
fid = h5f.create(name, h5f.ACC_TRUNC, fapl=fapl, fcpl=fcpl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/tmp/pip-nCYoKW-build/h5py/_objects.c:2840)
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/tmp/pip-nCYoKW-build/h5py/_objects.c:2798)
File "h5py/h5f.pyx", line 98, in h5py.h5f.create (/tmp/pip-nCYoKW-build/h5py/h5f.c:2284)
IOError: Unable to create file (Unable to open file: name = '/home/ashok/desktop/out/modelweights/cnnmodeldep80weights.best.hdf5', errno = 2, error message = 'no such file or directory', flags = 13, o_flags = 242)

Process finished with exit code 1

@vikash512
Author

Hi,
The error I posted is still not resolved, so could you give me a suggestion on how to solve it?

Thank you.

@vikash512
Author

I can't understand why the model is not able to save to the given path.

@jingweimo
Copy link
Owner

jingweimo commented Aug 10, 2017

I am traveling at the moment; I will check the code for you later.

What version of Keras are you using?

If you are using TensorFlow, make sure to modify the original code so that it is compatible with that backend.

@vikash512
Author

I am using Keras 1.1.1 and Theano 0.8.2. I modified a few lines and commented out the "Prepare data sets" section, since I already have the complete data split into train and test sets. My modified version of your code is below.

# -*- coding: utf-8 -*-

"""
Classification on Small-scale Image Data based on Data Expansion/Augmentation
Created on Wed Nov 09 22:25:36 2016

Edit history:
11/20/2016: add early stopping and modelcheckpoint
11/22/2016: data splitting of 80% training and 20% testing
Four training cases:
1) 100 epochs
2) 200 epochs
3) 400 epochs
4) 600 epochs
Obtained best accuracies
1): 83.37% for training and 87.24% for testing
2): 87.24% for training and 89.21% for testing
3): 90.34% for training and 90.41% for testing
4): 93.08% for training and 89.98% for testing

@author: Yuzhen Lu
"""

from keras.preprocessing.image import ImageDataGenerator, img_to_array, load_img
import matplotlib.pyplot as plt

'''datagen = ImageDataGenerator(
        rotation_range=40,
        width_shift_range=0.2,
        height_shift_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True,
        fill_mode='nearest')
#from PIL import Image
#img = Image.open('Test/apple/4.jpeg')
img = load_img('imagePath') # this is a PIL image
img.save('saveImagePath')
plt.imshow(img)
x = img_to_array(img) # this is a Numpy array with shape (3, 128, 128)
x = x.reshape((1,) + x.shape)

###############################################################################
#Prepare data sets
###############################################################################
#Split images into training and testing parts
#Note: run only one time!!
def splitImageSet(rootFolder, outFolder, p):
    import os
    from os import listdir
    import numpy as np
    from PIL import Image
    cats = listdir(rootFolder) # a list of subfolder names
    for cat in cats:
        print('Image Category...{}'.format(cat))

        folderPath = (os.path.join(rootFolder,cat))
        imgNames = listdir(folderPath)
        imgPaths = [os.path.join(folderPath,imgName) for imgName in imgNames]
        idx = np.random.permutation(len(imgPaths))
        trainIdx = idx[:int(p*len(idx))]
        testIdx = [ind for ind in idx if not ind in trainIdx]

        if not os.path.exists(os.path.join(outFolder,'Train',cat)):
            os.makedirs(os.path.join(outFolder,'Train',cat))
        for k in range(len(trainIdx)):
            img = Image.open(os.path.join(imgPaths[trainIdx[k]]))
            #temp = os.path.join(outFolder,'train',cat,imgNames[trainIdx[k]])
            img.save(os.path.join(outFolder,'Train',cat,imgNames[trainIdx[k]]))
        if not os.path.exists(os.path.join(outFolder,'Test',cat)):
            os.makedirs(os.path.join(outFolder,'Test',cat))
        for k in range(len(testIdx)):
            img = Image.open(os.path.join(imgPaths[testIdx[k]]))
            img.save(os.path.join(outFolder,'Test',cat,imgNames[testIdx[k]]))

    print('Split Done!')
    return

rootFolder = 'rootFolder' #add the image directory
outFolder = 'outFolder' #image output directory
splitImageSet(rootFolder, outFolder, 0.80)

#Please start from here!!'''
###############################################################################
#Build a CNN model
###############################################################################
#CNN model
import os
import numpy as np
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras.optimizers import SGD
from keras.callbacks import EarlyStopping, ModelCheckpoint, History
from keras import backend as K

model = Sequential()
model.add(Convolution2D(32, 7, 7, input_shape=(128, 128, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(64, 5, 5))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(128, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten()) # this converts our 3D feature maps to 1D feature vectors
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(10))
model.add(Activation('softmax'))
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True) #initial lr = 0.01
model.compile(loss='categorical_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])
print(model.summary())

####################################
#Callback Schedule
###################################
import keras
class decaylr_loss(keras.callbacks.Callback):
    def __init__(self):
        super(decaylr_loss, self).__init__()
    def on_epoch_end(self, epoch, logs={}):
        #loss = logs.items()[1][1] #get loss
        loss = logs.get('loss')
        print("loss: ", loss)
        old_lr = 0.001 #needs some adjustments
        new_lr = old_lr * np.exp(loss) #lr*exp(loss)
        print("New learning rate: ", new_lr)
        K.set_value(self.model.optimizer.lr, new_lr)
lrate = decaylr_loss()
#early stopping
patience = 20
earlystopper = EarlyStopping(monitor='val_acc', patience=patience,
                             verbose=1, mode='max')
#check point
wdir = '/home/ashok/Desktop/out' #work directory
filepath = os.path.join(wdir,'modelWeights','cnnModelDEp80weights.best.hdf5') #save model weights
print('hello',filepath)
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1,
                             save_best_only=True, mode='max')
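#Note (suggested fix, not in the original script): ModelCheckpoint/h5py will not
#create missing parent directories, which is what the OSError (errno = 2) in the
#tracebacks above points at; creating the folder first avoids it
if not os.path.exists(os.path.join(wdir,'modelWeights')):
    os.makedirs(os.path.join(wdir,'modelWeights'))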

###############################################################################
#Data Expansion or Augmentation
###############################################################################
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(
        #featurewise_center=True,
        #featurewise_std_normalization=True,
        rescale=1./255,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True)

trainDir = './Train'
train_generator = train_datagen.flow_from_directory(trainDir,
        target_size=(128,128),
        batch_size=32,
        class_mode='categorical')
test_datagen = ImageDataGenerator(rescale=1./255)
testDir = './Test'
test_generator = test_datagen.flow_from_directory(testDir,
        target_size=(128,128),
        batch_size=32,
        shuffle=False,
        class_mode='categorical')

###############################################################################
#Fit, Evaluate and Save Model
###############################################################################
epochs = 1
#epochs = 200
#epochs = 400
#epochs = 600
samples_per_epoch = 4654
val_samples = 1168

#Fit the model
hist = History()
model.fit_generator(train_generator,
                    samples_per_epoch=samples_per_epoch,
                    nb_epoch=epochs,
                    verbose=1,
                    validation_data=test_generator,
                    nb_val_samples=val_samples,
                    callbacks=[earlystopper, lrate, checkpoint, hist])
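#For Keras 2 (as the fit_generator warning in the log suggests), the equivalent
#call counts steps in batches rather than samples (batch_size is 32 here);
#a sketch, not part of the original script:
#model.fit_generator(train_generator,
#                    steps_per_epoch=samples_per_epoch // 32,
#                    epochs=epochs,
#                    verbose=1,
#                    validation_data=test_generator,
#                    validation_steps=val_samples // 32,
#                    callbacks=[earlystopper, lrate, checkpoint, hist])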

#evaluate the model
scores = model.evaluate_generator(test_generator, val_samples=val_samples)
print("Accuracy = ", scores[1])

#save model
savePath = wdir
print('hhh',savePath)
model.save_weights(os.path.join(savePath,'cnnModelDEp80.h5')) # save weights after training or during training
model.save(os.path.join(savePath,'cnnModelDEp80.h5')) #save compiled model
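#Note (observation, not in the original script): both calls above write to the
#same file, so the full model saved by model.save() overwrites the weights-only
#file; distinct names would keep both, e.g. (hypothetical file name):
#model.save_weights(os.path.join(savePath,'cnnModelDEp80_weights.h5'))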

#plot acc and loss vs epochs
import matplotlib.pyplot as plt
print(hist.history.keys())
#accuracy
plt.plot(hist.history['acc'])
plt.plot(hist.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.savefig(os.path.join(savePath,'cmdeP80AccVsEpoch.jpeg'), dpi=1000, bbox_inches='tight')
plt.show()
#loss
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.savefig(os.path.join(savePath,'cmdeP80LossVsEpoch.jpeg'), dpi=1000, bbox_inches='tight')
plt.show()

###############################################################################
#Note: train 4364 images (80%) and test 1458 images (20%) #
#100 epochs:
#200 epochs: acc = 0.8724; val_acc = 0.89212
#400 epochs:
#600 epochs:
###############################################################################

#load the model
#not necessarily the best at the end of training

from keras.models import load_model
myModel = load_model(os.path.join(savePath,'cnnModelDEp80.h5'))
scores = myModel.evaluate_generator(test_generator,val_samples)
print("Accuracy = ", scores[1])

########################
#Check-pointed model
#######################
model = Sequential()
model.add(Convolution2D(32, 7, 7, input_shape=(3, 128, 128)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(64, 5, 5))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(128, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(10))
model.add(Activation('softmax'))
#lr = 0.00277615583366 #adjust lr based on training process
#lr = 0.00181503843077
#lr = 0.00163685841542 #case 2
lr = 0.00122869861281 # case3
sgd = SGD(lr=lr, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])
model.load_weights(filepath) #load saved weights
scores = model.evaluate_generator(test_generator,val_samples)
print("Accuracy = ", scores[1])

#Confusion matrix on the test images
#imgDir = testDir
imgDir = trainDir
test_generator = test_datagen.flow_from_directory(imgDir,
        target_size=(128,128),
        batch_size=32,
        shuffle=False,
        class_mode='categorical')
#val_samples = 1168
val_samples = 4654
predict = model.predict_generator(test_generator,val_samples)

yTrue = test_generator.classes
yTrueIdx = test_generator.class_indices

from sklearn.metrics import classification_report, confusion_matrix
yHat = np.ones(predict.shape[0],dtype = int)
for i in range(predict.shape[0]):
    temp = predict[i,:]
    yHat[i] = np.argmax(temp)

from sklearn.metrics import accuracy_score
acc = accuracy_score(yTrue,yHat)
print("Accuracy on test images:", acc) #same as scores[1]

def numToLabels(y,cat):
    numLabel = []
    import numpy as np
    yNew = np.unique(y) #sorted
    for i in range(len(y)):
        idx = np.where(yNew == y[i])[0][0]
        numLabel.append(cat[idx])
    return numLabel
#labels = sorted(yTrueIdx.keys())
labels = ['Ap','Ba','Br','Bu','Eg','Fr','Hd','Pz','Rc','St']
yActLabels = numToLabels(yTrue,labels)
yHatLabels = numToLabels(yHat,labels)
CM = confusion_matrix(yActLabels,yHatLabels,labels) #np.array
#print CM
print(classification_report(yTrue,yHat,target_names=labels))

#Alternatively: pd.crosstab
import pandas as pd
#preds = pd.DataFrame(predict)
y1 = pd.Categorical(yActLabels,categories=labels)
y2 = pd.Categorical(yHatLabels,categories=labels)
pd.crosstab(y1,y2,rownames=['True'], colnames=['Predicted'])

###############################################################################
#Miscellaneous
###############################################################################
#evaluate execution efficiency
import time
t = time.time()
s = model.predict_generator(test_generator,val_samples)
elapsedTime = time.time()-t
print('Average time: {} second'.format(elapsedTime/val_samples))
#average time: 0.00895 s, less than 0.01 s
def getImgPaths2(rootPath):
    import os
    from os import listdir
    print('Extract paths and labels...')
    cats = listdir(rootPath)
    imgPaths = []
    imgLabels = []
    for cat in cats:
        print('{}...'.format(cat))
        foldPaths = os.path.join(rootPath, cat)
        imgPaths.extend([os.path.join(foldPaths,imgName) for imgName in listdir(foldPaths)])
        imgLabels.extend([cat]*len(listdir(foldPaths)))
    return imgPaths, imgLabels

def getImgData(imgPaths):
    from scipy import misc
    import numpy as np
    print('Extract image data...')
    temp1 = misc.imread(imgPaths[0])
    imgData = np.zeros((len(imgPaths),temp1.shape[0],temp1.shape[1],temp1.shape[2]),
                       dtype='float32')
    for ii in range(len(imgPaths)):
        temp = misc.imread(imgPaths[ii])
        imgData[ii,:,:,:] = temp
        print("\r{} Percent complete\r".format(100*(ii+1)/len(imgPaths)),)
    return imgData

imgPaths, imgLabels = getImgPaths2(trainDir)

#Expanded image data
for i in range(0, 9):
    plt.subplot(330 + 1 + i)
    plt.imshow()
    plt.imshow()

#######################################################
#Images with correctly or wrongly predicted labels
#######################################################
imgPaths, imgLabels = getImgPaths2(testDir)
X_test = getImgData(imgPaths) #(1168, 128, 128, 3)
X_test /= 255
test_wrong = [im for im in zip(X_test, yHatLabels, yActLabels) if im[1] != im[2]]
print(len(test_wrong))
#112 misclassified images

#show some misclassified images
import matplotlib.pyplot as plt
plt.figure(figsize=(7, 8))
import numpy as np
for ind, val in enumerate(test_wrong):
    #print ind
    plt.subplots_adjust(left=0, right=1, top=1, bottom=0)
    plt.subplot(7, 8, ind + 1)
    img = val[0]
    img *= 255
    plt.axis("off")
    plt.text(0,0,val[2], fontsize=12, color='blue')
    plt.text(40,0,val[1], fontsize=12, color='red')
    plt.imshow(img.astype('uint8'))
    if ind==55:
        break
plt.savefig(os.path.join(savePath,'MissClassifiedImages1.jpeg'),
            dpi=800, bbox_inches='tight')
plt.show()

#show some misclassified images
import matplotlib.pyplot as plt
plt.figure(figsize=(7, 8))
import numpy as np
for ind, val in enumerate(test_wrong[56:]):
    #print ind
    plt.subplots_adjust(left=0, right=1, top=1, bottom=0)
    plt.subplot(7, 8, ind + 1)
    img = val[0]
    img *= 255
    plt.axis("off")
    plt.text(0,0,val[2], fontsize=12, color='blue')
    plt.text(40,0,val[1], fontsize=12, color='red')
    plt.imshow(img.astype('uint8'))
    if ind==55:
        break
plt.savefig(os.path.join(savePath,'MissClassifiedImages2.jpeg'),
            dpi=800, bbox_inches='tight')
plt.show()

########################################
#Rotated, shifted, sheared Images
########################################
imgData = getImgData(imgPaths[:9]) #extract 9 images, (9L, 128L, 128L, 3L)

#plot raw images

for i in range(0, 9):
    from scipy import misc
    img = misc.imread(imgPaths[i])
    plt.subplot(3,3,i+1)
    fig = plt.imshow(img)
    ax = plt.gca()
    #ax.set_axis_off()
    ax.axes.get_xaxis().set_ticks([])
    ax.axes.get_yaxis().set_ticks([])
plt.show()
#or
for i in range(0, 9):
    img = imgData[i,:,:,:].astype('uint8')
    plt.subplot(3,3,i+1)
    fig = plt.imshow(img)
    ax = plt.gca()
    ax.set_axis_off()
plt.savefig(os.path.join(savePath,'RawImages.jpeg'), dpi=1000, bbox_inches='tight')
plt.show()

#image distortion by ImageDataGenerator
from keras.preprocessing.image import ImageDataGenerator #, array_to_img, img_to_array
datagen = ImageDataGenerator(
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True
        )
#single fruit
#from PIL import Image
x = imgData[1].reshape((1,)+imgData[0].shape)
fig = plt.imshow(imgData[1].astype('uint8')) #imshow: img_dim_ordering (w,h,channels)
plt.gca().set_axis_off()
plt.savefig(os.path.join(savePath,'rawImage.jpeg'),
            dpi=1000, bbox_inches='tight')
x = x.transpose(0,3,1,2) #reshaped into 4d for datagen.flow
i = 0
for x_batch in datagen.flow(x, batch_size=1):
    i += 1
    plt.subplot(3,3,i)
    I = x_batch[0]
    img = (I.transpose(1,2,0)).astype('uint8')
    fig = plt.imshow(img,cmap=plt.get_cmap('gray'))
    plt.gca().set_axis_off()
    if i==9:
        break
plt.savefig(os.path.join(savePath,'expandedImages1.jpeg'),
            dpi=1000, bbox_inches='tight')
#multiple fruit
for x_batch in datagen.flow(imgData.transpose(0,3,1,2), batch_size=9):
    for i in range(0,9):
        plt.subplot(3,3,i+1)
        I = x_batch[i]
        #img = array_to_img(I)
        img = (I.transpose(1,2,0)).astype('uint8')
        fig = plt.imshow(img,cmap=plt.get_cmap('gray'))
        plt.gca().set_axis_off()
    plt.show()
    break
plt.savefig(os.path.join(savePath,'expandedImages2.jpeg'),
            dpi=1000, bbox_inches='tight')


@vikash512
Author

Hi, can you guide me?

@vikash512
Author

Could you please help me with this? I am still not able to get the model saved.

@vikash512
Author

vikash512 commented Aug 30, 2017 via email
