VGG19-Real&Fake-Data-Train
365 lines (287 loc) · 19.1 KB
covid_done
normal_done
Model: "model_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) (None, 224, 224, 3) 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 224, 224, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 224, 224, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 112, 112, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 112, 112, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 112, 112, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 56, 56, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 56, 56, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_conv4 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 28, 28, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 28, 28, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_conv4 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 14, 14, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv4 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 7, 7, 512) 0
_________________________________________________________________
flatten_2 (Flatten) (None, 25088) 0
_________________________________________________________________
dense_2 (Dense) (None, 2) 50178
=================================================================
Total params: 20,074,562
Trainable params: 50,178
Non-trainable params: 20,024,384
_________________________________________________________________
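The summary shows a frozen VGG19 convolutional base with only the final `Dense(2)` head trainable. As a sanity check, the reported parameter counts can be recomputed from the standard formulas (Conv2D: `(kh*kw*in_ch + 1) * out_ch`; Dense: `(in + 1) * out`). The sketch below is a verification of the numbers printed above, not part of the original run:

```python
# Recompute the parameter counts reported in the model summary.
def conv_params(in_ch, out_ch, k=3):
    # 3x3 kernels plus one bias per output channel, as in VGG19.
    return (k * k * in_ch + 1) * out_ch

# VGG19 convolutional layers as (in_channels, out_channels) pairs.
vgg19_convs = (
    [(3, 64), (64, 64)]                      # block 1
    + [(64, 128), (128, 128)]                # block 2
    + [(128, 256)] + [(256, 256)] * 3        # block 3
    + [(256, 512)] + [(512, 512)] * 3        # block 4
    + [(512, 512)] * 4                       # block 5
)

base_params = sum(conv_params(i, o) for i, o in vgg19_convs)
dense_params = (7 * 7 * 512 + 1) * 2         # Flatten (25088) -> Dense(2)

print(base_params)                 # 20024384 (non-trainable, frozen base)
print(dense_params)                # 50178   (trainable head)
print(base_params + dense_params)  # 20074562 (total)
```

The counts match the summary exactly, confirming that every convolutional layer is frozen and only the 50,178-parameter classification head is being trained.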
Epoch 1/250
203/202 [==============================] - 84s 412ms/step - loss: 0.2917 - accuracy: 0.8809 - val_loss: 0.0039 - val_accuracy: 0.9228
Epoch 00001: val_accuracy improved from -inf to 0.92277, saving model to best_model_generated_v4.h5
Epoch 2/250
203/202 [==============================] - 81s 398ms/step - loss: 0.1964 - accuracy: 0.9240 - val_loss: 0.3726 - val_accuracy: 0.9162
Epoch 00002: val_accuracy did not improve from 0.92277
Epoch 3/250
203/202 [==============================] - 82s 402ms/step - loss: 0.1396 - accuracy: 0.9485 - val_loss: 0.1446 - val_accuracy: 0.9058
Epoch 00003: val_accuracy did not improve from 0.92277
Epoch 4/250
203/202 [==============================] - 82s 402ms/step - loss: 0.1394 - accuracy: 0.9485 - val_loss: 0.0754 - val_accuracy: 0.8796
Epoch 00004: val_accuracy did not improve from 0.92277
Epoch 5/250
203/202 [==============================] - 81s 400ms/step - loss: 0.1287 - accuracy: 0.9530 - val_loss: 0.0043 - val_accuracy: 0.9254
Epoch 00005: val_accuracy improved from 0.92277 to 0.92539, saving model to best_model_generated_v4.h5
Epoch 6/250
203/202 [==============================] - 81s 399ms/step - loss: 0.1052 - accuracy: 0.9618 - val_loss: 0.0074 - val_accuracy: 0.9188
Epoch 00006: val_accuracy did not improve from 0.92539
Epoch 7/250
203/202 [==============================] - 81s 399ms/step - loss: 0.0960 - accuracy: 0.9643 - val_loss: 0.0292 - val_accuracy: 0.9332
Epoch 00007: val_accuracy improved from 0.92539 to 0.93325, saving model to best_model_generated_v4.h5
Epoch 8/250
203/202 [==============================] - 81s 399ms/step - loss: 0.1026 - accuracy: 0.9624 - val_loss: 0.7097 - val_accuracy: 0.8678
Epoch 00008: val_accuracy did not improve from 0.93325
Epoch 9/250
203/202 [==============================] - 81s 399ms/step - loss: 0.1044 - accuracy: 0.9599 - val_loss: 0.2821 - val_accuracy: 0.9241
Epoch 00009: val_accuracy did not improve from 0.93325
Epoch 10/250
203/202 [==============================] - 81s 399ms/step - loss: 0.1084 - accuracy: 0.9579 - val_loss: 0.0890 - val_accuracy: 0.9136
Epoch 00010: val_accuracy did not improve from 0.93325
Epoch 11/250
203/202 [==============================] - 81s 400ms/step - loss: 0.0922 - accuracy: 0.9641 - val_loss: 0.6773 - val_accuracy: 0.9149
Epoch 00011: val_accuracy did not improve from 0.93325
Epoch 12/250
203/202 [==============================] - 81s 400ms/step - loss: 0.0936 - accuracy: 0.9669 - val_loss: 0.5556 - val_accuracy: 0.9463
Epoch 00012: val_accuracy improved from 0.93325 to 0.94634, saving model to best_model_generated_v4.h5
Epoch 13/250
203/202 [==============================] - 81s 400ms/step - loss: 0.0760 - accuracy: 0.9721 - val_loss: 0.0448 - val_accuracy: 0.9306
Epoch 00013: val_accuracy did not improve from 0.94634
Epoch 14/250
203/202 [==============================] - 81s 399ms/step - loss: 0.0773 - accuracy: 0.9698 - val_loss: 0.0025 - val_accuracy: 0.9424
Epoch 00014: val_accuracy did not improve from 0.94634
Epoch 15/250
203/202 [==============================] - 81s 398ms/step - loss: 0.0711 - accuracy: 0.9730 - val_loss: 0.0476 - val_accuracy: 0.9385
Epoch 00015: val_accuracy did not improve from 0.94634
Epoch 16/250
203/202 [==============================] - 81s 399ms/step - loss: 0.0675 - accuracy: 0.9749 - val_loss: 0.0224 - val_accuracy: 0.9058
Epoch 00016: val_accuracy did not improve from 0.94634
Epoch 17/250
203/202 [==============================] - 81s 399ms/step - loss: 0.0846 - accuracy: 0.9681 - val_loss: 0.0210 - val_accuracy: 0.9450
Epoch 00017: val_accuracy did not improve from 0.94634
Epoch 18/250
203/202 [==============================] - 81s 398ms/step - loss: 0.0641 - accuracy: 0.9761 - val_loss: 0.1219 - val_accuracy: 0.9346
Epoch 00018: val_accuracy did not improve from 0.94634
Epoch 19/250
203/202 [==============================] - 81s 400ms/step - loss: 0.0821 - accuracy: 0.9707 - val_loss: 0.9596 - val_accuracy: 0.8730
Epoch 00019: val_accuracy did not improve from 0.94634
Epoch 20/250
203/202 [==============================] - 81s 399ms/step - loss: 0.0989 - accuracy: 0.9632 - val_loss: 1.2644e-04 - val_accuracy: 0.9476
Epoch 00020: val_accuracy improved from 0.94634 to 0.94764, saving model to best_model_generated_v4.h5
Epoch 21/250
203/202 [==============================] - 81s 398ms/step - loss: 0.0664 - accuracy: 0.9752 - val_loss: 0.0581 - val_accuracy: 0.9450
Epoch 00021: val_accuracy did not improve from 0.94764
Epoch 22/250
203/202 [==============================] - 81s 400ms/step - loss: 0.0627 - accuracy: 0.9763 - val_loss: 0.0241 - val_accuracy: 0.9346
Epoch 00022: val_accuracy did not improve from 0.94764
Epoch 23/250
203/202 [==============================] - 81s 401ms/step - loss: 0.0829 - accuracy: 0.9681 - val_loss: 2.3245e-05 - val_accuracy: 0.9411
Epoch 00023: val_accuracy did not improve from 0.94764
Epoch 24/250
203/202 [==============================] - 81s 398ms/step - loss: 0.0756 - accuracy: 0.9720 - val_loss: 0.0028 - val_accuracy: 0.9516
Epoch 00024: val_accuracy improved from 0.94764 to 0.95157, saving model to best_model_generated_v4.h5
Epoch 25/250
203/202 [==============================] - 81s 399ms/step - loss: 0.0525 - accuracy: 0.9812 - val_loss: 0.1465 - val_accuracy: 0.9463
Epoch 00025: val_accuracy did not improve from 0.95157
Epoch 26/250
203/202 [==============================] - 81s 401ms/step - loss: 0.0707 - accuracy: 0.9727 - val_loss: 8.9407e-08 - val_accuracy: 0.8613
Epoch 00026: val_accuracy did not improve from 0.95157
Epoch 27/250
203/202 [==============================] - 81s 400ms/step - loss: 0.0648 - accuracy: 0.9767 - val_loss: 0.9447 - val_accuracy: 0.9202
Epoch 00027: val_accuracy did not improve from 0.95157
Epoch 28/250
203/202 [==============================] - 81s 401ms/step - loss: 0.0526 - accuracy: 0.9795 - val_loss: 0.0251 - val_accuracy: 0.9529
Epoch 00028: val_accuracy improved from 0.95157 to 0.95288, saving model to best_model_generated_v4.h5
Epoch 29/250
203/202 [==============================] - 81s 400ms/step - loss: 0.0766 - accuracy: 0.9740 - val_loss: 0.0140 - val_accuracy: 0.9476
Epoch 00029: val_accuracy did not improve from 0.95288
Epoch 30/250
203/202 [==============================] - 81s 401ms/step - loss: 0.0522 - accuracy: 0.9823 - val_loss: 0.0048 - val_accuracy: 0.9450
Epoch 00030: val_accuracy did not improve from 0.95288
Epoch 31/250
203/202 [==============================] - 82s 402ms/step - loss: 0.0688 - accuracy: 0.9753 - val_loss: 3.7914 - val_accuracy: 0.9332
Epoch 00031: val_accuracy did not improve from 0.95288
Epoch 32/250
203/202 [==============================] - 82s 402ms/step - loss: 0.0581 - accuracy: 0.9783 - val_loss: 0.0067 - val_accuracy: 0.9463
Epoch 00032: val_accuracy did not improve from 0.95288
Epoch 33/250
203/202 [==============================] - 81s 399ms/step - loss: 0.0648 - accuracy: 0.9757 - val_loss: 2.6822e-07 - val_accuracy: 0.9450
Epoch 00033: val_accuracy did not improve from 0.95288
Epoch 34/250
203/202 [==============================] - 81s 401ms/step - loss: 0.0584 - accuracy: 0.9800 - val_loss: 1.9963e-04 - val_accuracy: 0.8992
Epoch 00034: val_accuracy did not improve from 0.95288
Epoch 35/250
203/202 [==============================] - 82s 404ms/step - loss: 0.0793 - accuracy: 0.9726 - val_loss: 1.7881e-06 - val_accuracy: 0.9097
Epoch 00035: val_accuracy did not improve from 0.95288
Epoch 36/250
203/202 [==============================] - 81s 400ms/step - loss: 0.0501 - accuracy: 0.9831 - val_loss: 0.0399 - val_accuracy: 0.9280
Epoch 00036: val_accuracy did not improve from 0.95288
Epoch 37/250
203/202 [==============================] - 81s 398ms/step - loss: 0.0472 - accuracy: 0.9821 - val_loss: 0.5838 - val_accuracy: 0.8822
Epoch 00037: val_accuracy did not improve from 0.95288
Epoch 38/250
203/202 [==============================] - 81s 399ms/step - loss: 0.0515 - accuracy: 0.9818 - val_loss: 0.0826 - val_accuracy: 0.9267
Epoch 00038: val_accuracy did not improve from 0.95288
Epoch 39/250
203/202 [==============================] - 81s 398ms/step - loss: 0.0405 - accuracy: 0.9843 - val_loss: 1.7181 - val_accuracy: 0.8665
Epoch 00039: val_accuracy did not improve from 0.95288
Epoch 40/250
203/202 [==============================] - 81s 399ms/step - loss: 0.0584 - accuracy: 0.9769 - val_loss: 1.3113e-06 - val_accuracy: 0.9411
Epoch 00040: val_accuracy did not improve from 0.95288
Epoch 41/250
203/202 [==============================] - 81s 400ms/step - loss: 0.0619 - accuracy: 0.9804 - val_loss: 0.0695 - val_accuracy: 0.9228
Epoch 00041: val_accuracy did not improve from 0.95288
Epoch 42/250
203/202 [==============================] - 81s 398ms/step - loss: 0.0430 - accuracy: 0.9831 - val_loss: 0.2748 - val_accuracy: 0.9332
Epoch 00042: val_accuracy did not improve from 0.95288
Epoch 43/250
203/202 [==============================] - 81s 400ms/step - loss: 0.0507 - accuracy: 0.9807 - val_loss: 0.1305 - val_accuracy: 0.9332
Epoch 00043: val_accuracy did not improve from 0.95288
Epoch 44/250
203/202 [==============================] - 81s 401ms/step - loss: 0.0645 - accuracy: 0.9769 - val_loss: 0.0019 - val_accuracy: 0.9346
Epoch 00044: val_accuracy did not improve from 0.95288
Epoch 45/250
203/202 [==============================] - 81s 398ms/step - loss: 0.0555 - accuracy: 0.9814 - val_loss: 0.0130 - val_accuracy: 0.9503
Epoch 00045: val_accuracy did not improve from 0.95288
Epoch 46/250
203/202 [==============================] - 81s 399ms/step - loss: 0.0773 - accuracy: 0.9730 - val_loss: 0.3762 - val_accuracy: 0.9188
Epoch 00046: val_accuracy did not improve from 0.95288
Epoch 47/250
203/202 [==============================] - 81s 401ms/step - loss: 0.0433 - accuracy: 0.9851 - val_loss: 0.0486 - val_accuracy: 0.9437
Epoch 00047: val_accuracy did not improve from 0.95288
Epoch 48/250
203/202 [==============================] - 81s 401ms/step - loss: 0.0451 - accuracy: 0.9823 - val_loss: 1.5608e-04 - val_accuracy: 0.9411
Epoch 00048: val_accuracy did not improve from 0.95288
Epoch 49/250
203/202 [==============================] - 81s 400ms/step - loss: 0.0397 - accuracy: 0.9858 - val_loss: 1.6093e-06 - val_accuracy: 0.9411
Epoch 00049: val_accuracy did not improve from 0.95288
Epoch 50/250
203/202 [==============================] - 81s 399ms/step - loss: 0.0314 - accuracy: 0.9884 - val_loss: 0.0460 - val_accuracy: 0.9188
Epoch 00050: val_accuracy did not improve from 0.95288
Epoch 51/250
203/202 [==============================] - 81s 401ms/step - loss: 0.0303 - accuracy: 0.9889 - val_loss: 0.0705 - val_accuracy: 0.9490
Epoch 00051: val_accuracy did not improve from 0.95288
Epoch 52/250
203/202 [==============================] - 81s 401ms/step - loss: 0.0353 - accuracy: 0.9875 - val_loss: 0.0782 - val_accuracy: 0.9319
Epoch 00052: val_accuracy did not improve from 0.95288
Epoch 53/250
203/202 [==============================] - 82s 402ms/step - loss: 0.0438 - accuracy: 0.9857 - val_loss: 0.0019 - val_accuracy: 0.9503
Epoch 00053: val_accuracy did not improve from 0.95288
Epoch 54/250
203/202 [==============================] - 82s 402ms/step - loss: 0.0387 - accuracy: 0.9860 - val_loss: 0.0238 - val_accuracy: 0.9437
Epoch 00054: val_accuracy did not improve from 0.95288
Epoch 55/250
203/202 [==============================] - 81s 401ms/step - loss: 0.0858 - accuracy: 0.9721 - val_loss: 1.9845 - val_accuracy: 0.9463
Epoch 00055: val_accuracy did not improve from 0.95288
Epoch 56/250
203/202 [==============================] - 81s 401ms/step - loss: 0.0443 - accuracy: 0.9844 - val_loss: 1.9682 - val_accuracy: 0.9542
Epoch 00056: val_accuracy improved from 0.95288 to 0.95419, saving model to best_model_generated_v4.h5
Epoch 57/250
203/202 [==============================] - 82s 403ms/step - loss: 0.0371 - accuracy: 0.9860 - val_loss: 0.0225 - val_accuracy: 0.9346
Epoch 00057: val_accuracy did not improve from 0.95419
Epoch 58/250
203/202 [==============================] - 81s 400ms/step - loss: 0.0395 - accuracy: 0.9854 - val_loss: 2.9802e-07 - val_accuracy: 0.9071
Epoch 00058: val_accuracy did not improve from 0.95419
Epoch 59/250
203/202 [==============================] - 81s 400ms/step - loss: 0.0350 - accuracy: 0.9872 - val_loss: 0.0274 - val_accuracy: 0.9254
Epoch 00059: val_accuracy did not improve from 0.95419
Epoch 60/250
203/202 [==============================] - 81s 400ms/step - loss: 0.0717 - accuracy: 0.9730 - val_loss: 2.1893e-04 - val_accuracy: 0.9476
Epoch 00060: val_accuracy did not improve from 0.95419
Epoch 61/250
203/202 [==============================] - 81s 400ms/step - loss: 0.0936 - accuracy: 0.9729 - val_loss: 0.0029 - val_accuracy: 0.9463
Epoch 00061: val_accuracy did not improve from 0.95419
Epoch 62/250
203/202 [==============================] - 81s 400ms/step - loss: 0.0486 - accuracy: 0.9829 - val_loss: 2.7458 - val_accuracy: 0.9136
Epoch 00062: val_accuracy did not improve from 0.95419
Epoch 63/250
203/202 [==============================] - 81s 399ms/step - loss: 0.0563 - accuracy: 0.9794 - val_loss: 0.0599 - val_accuracy: 0.8979
Epoch 00063: val_accuracy did not improve from 0.95419
Epoch 64/250
203/202 [==============================] - 81s 399ms/step - loss: 0.0404 - accuracy: 0.9834 - val_loss: 0.7682 - val_accuracy: 0.9476
Epoch 00064: val_accuracy did not improve from 0.95419
Epoch 65/250
203/202 [==============================] - 81s 400ms/step - loss: 0.0342 - accuracy: 0.9877 - val_loss: 6.4002e-04 - val_accuracy: 0.9490
Epoch 00065: val_accuracy did not improve from 0.95419
Epoch 66/250
203/202 [==============================] - 81s 399ms/step - loss: 0.0306 - accuracy: 0.9892 - val_loss: 0.7088 - val_accuracy: 0.9215
Epoch 00066: val_accuracy did not improve from 0.95419
Epoch 67/250
203/202 [==============================] - 81s 400ms/step - loss: 0.0458 - accuracy: 0.9840 - val_loss: 0.0059 - val_accuracy: 0.9018
Epoch 00067: val_accuracy did not improve from 0.95419
Epoch 68/250
203/202 [==============================] - 81s 400ms/step - loss: 0.0383 - accuracy: 0.9849 - val_loss: 0.0014 - val_accuracy: 0.9372
Epoch 00068: val_accuracy did not improve from 0.95419
Epoch 69/250
203/202 [==============================] - 81s 399ms/step - loss: 0.0441 - accuracy: 0.9820 - val_loss: 9.9974e-05 - val_accuracy: 0.9555
Epoch 00069: val_accuracy improved from 0.95419 to 0.95550, saving model to best_model_generated_v4.h5
Epoch 70/250
203/202 [==============================] - 81s 401ms/step - loss: 0.0349 - accuracy: 0.9854 - val_loss: 0.0011 - val_accuracy: 0.9581
Epoch 00070: val_accuracy improved from 0.95550 to 0.95812, saving model to best_model_generated_v4.h5
Epoch 71/250
203/202 [==============================] - 82s 402ms/step - loss: 0.0313 - accuracy: 0.9878 - val_loss: 5.8078e-05 - val_accuracy: 0.9372
Epoch 00071: val_accuracy did not improve from 0.95812
Epoch 72/250
203/202 [==============================] - 81s 401ms/step - loss: 0.0386 - accuracy: 0.9860 - val_loss: 0.0089 - val_accuracy: 0.8979
Epoch 00072: val_accuracy did not improve from 0.95812
Epoch 73/250
203/202 [==============================] - 81s 401ms/step - loss: 0.0775 - accuracy: 0.9752 - val_loss: 5.0393e-04 - val_accuracy: 0.9476
Epoch 00073: val_accuracy did not improve from 0.95812
Epoch 74/250
203/202 [==============================] - 82s 402ms/step - loss: 0.0391 - accuracy: 0.9858 - val_loss: 3.8743e-06 - val_accuracy: 0.9372
Epoch 00074: val_accuracy did not improve from 0.95812
Epoch 75/250
203/202 [==============================] - 81s 401ms/step - loss: 0.0369 - accuracy: 0.9863 - val_loss: 0.0018 - val_accuracy: 0.9503
Epoch 00075: val_accuracy did not improve from 0.95812
Epoch 76/250
203/202 [==============================] - 81s 401ms/step - loss: 0.0382 - accuracy: 0.9857 - val_loss: 1.1623e-05 - val_accuracy: 0.9490
Epoch 00076: val_accuracy did not improve from 0.95812
Epoch 00076: early stopping
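The log's "improved / did not improve / early stopping" messages are characteristic of Keras `ModelCheckpoint` and `EarlyStopping` callbacks monitoring `val_accuracy`. The bookkeeping they perform can be sketched in plain Python; note the patience value of 6 is inferred from the gap between the epoch-70 best and the epoch-76 stop, and the function name `track` is illustrative, not from the original code:

```python
# Minimal re-implementation of the checkpoint/early-stopping bookkeeping
# visible in the log: record an improvement whenever val_accuracy exceeds
# the best seen so far, and stop after `patience` epochs without one.
def track(val_accuracies, patience=6):
    best = float("-inf")   # matches "improved from -inf" at epoch 1
    wait = 0
    events = []
    for epoch, acc in enumerate(val_accuracies, start=1):
        if acc > best:
            events.append((epoch, "improved", best, acc))
            best, wait = acc, 0
        else:
            events.append((epoch, "no improvement", best, acc))
            wait += 1
            if wait >= patience:
                events.append((epoch, "early stopping", best, acc))
                break
    return events

# First five val_accuracy values from the log above:
events = track([0.9228, 0.9162, 0.9058, 0.8796, 0.9254])
print(events[0][1])  # improved
print(events[4][1])  # improved
```

This reproduces the pattern in the log: improvements at epochs 1 and 5, no improvement in between, and (over the full run) a stop at epoch 76 once six epochs pass without beating the 0.95812 best from epoch 70.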