out.txt
2019-08-13 16:35:12.376172: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcuda.so.1
2019-08-13 16:35:12.475075: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-08-13 16:35:12.476144: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.62
pciBusID: 0000:01:00.0
2019-08-13 16:35:12.484260: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
2019-08-13 16:35:12.499802: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0
2019-08-13 16:35:12.511995: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10.0
2019-08-13 16:35:12.524804: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10.0
2019-08-13 16:35:12.540300: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10.0
2019-08-13 16:35:12.557628: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10.0
2019-08-13 16:35:12.580362: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2019-08-13 16:35:12.580700: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-08-13 16:35:12.583210: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-08-13 16:35:12.585449: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2019-08-13 16:35:12.586302: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-08-13 16:35:12.622353: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3600000000 Hz
2019-08-13 16:35:12.622981: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55762f62fea0 executing computations on platform Host. Devices:
2019-08-13 16:35:12.622999: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): <undefined>, <undefined>
2019-08-13 16:35:12.710195: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-08-13 16:35:12.710855: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x557632ab6020 executing computations on platform CUDA. Devices:
2019-08-13 16:35:12.710873: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): GeForce GTX 1080 Ti, Compute Capability 6.1
2019-08-13 16:35:12.711000: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-08-13 16:35:12.711567: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.62
pciBusID: 0000:01:00.0
2019-08-13 16:35:12.711597: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
2019-08-13 16:35:12.711610: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0
2019-08-13 16:35:12.711620: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10.0
2019-08-13 16:35:12.711638: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10.0
2019-08-13 16:35:12.711649: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10.0
2019-08-13 16:35:12.711660: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10.0
2019-08-13 16:35:12.711671: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2019-08-13 16:35:12.711709: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-08-13 16:35:12.712288: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-08-13 16:35:12.712835: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2019-08-13 16:35:12.712859: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
2019-08-13 16:35:12.714002: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-08-13 16:35:12.714012: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0
2019-08-13 16:35:12.714017: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N
2019-08-13 16:35:12.714314: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-08-13 16:35:12.714967: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-08-13 16:35:12.715535: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10479 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
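[Note: the line above shows TensorFlow pre-allocating essentially the whole card (10479 MB of the 1080 Ti's 11 GB), which is the TF 1.14 default. The launching script is not part of this log; as a minimal sketch, memory could instead be allocated on demand by installing a ConfigProto-backed session before any ops are built:

import tensorflow as tf

config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True  # grow GPU memory on demand instead of pre-allocating
tf.compat.v1.keras.backend.set_session(tf.compat.v1.Session(config=config))
]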
WARNING: Logging before flag parsing goes to stderr.
W0813 16:35:13.208417 139659744106304 deprecation.py:323] From /home/lijiawei/anaconda3/lib/python3.7/site-packages/tensorflow/python/data/util/random_seed.py:58: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
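[Note: the deprecation flagged above is about broadcast semantics: in TF 2.0, tf.where follows np.where's broadcasting rules, so the two branch arguments need not have identical shapes. An illustrative sketch (the np.where call runs anywhere; the tf.where form requires TF >= 2.0):

import numpy as np

cond = np.array([True, False, True])
x = np.array([1.0, 2.0, 3.0])
np.where(cond, x, 0.0)    # scalar fallback broadcasts -> array([1., 0., 3.])
# tf.where(cond, x, 0.0)  # same result and broadcast rule under TF 2.x
]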
2019-08-13 16:35:18.119142: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0
2019-08-13 16:35:18.266270: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2019-08-13 16:35:23.636124: W tensorflow/core/framework/model.cc:475] Failed to find a tunable parameter that would decrease the output time. This means that the autotuning optimization got stuck in a local maximum. The optimization attempt will be aborted.
Model: "deep_sea"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1d (Conv1D) multiple 10240
_________________________________________________________________
max_pooling1d (MaxPooling1D) multiple 0
_________________________________________________________________
dropout (Dropout) multiple 0
_________________________________________________________________
conv1d_1 (Conv1D) multiple 1228800
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 multiple 0
_________________________________________________________________
dropout_1 (Dropout) multiple 0
_________________________________________________________________
conv1d_2 (Conv1D) multiple 3686400
_________________________________________________________________
dropout_2 (Dropout) multiple 0
_________________________________________________________________
flatten (Flatten) multiple 0
_________________________________________________________________
dense (Dense) multiple 55944000
_________________________________________________________________
dense_1 (Dense) multiple 850075
=================================================================
Total params: 61,719,515
Trainable params: 61,719,515
Non-trainable params: 0
_________________________________________________________________
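[Note: the parameter counts above are enough to reconstruct the layer shapes: 10240 = 4*8*320, 1228800 = 320*8*480, 3686400 = 480*8*960, 55944000 = 60480*925, and 850075 = 925*919. These match a DeepSEA-style model over 1000-bp one-hot DNA (4 channels) only with kernel width 8, 'same' padding, pool size 4, and use_bias=False throughout. A hedged Sequential sketch follows; the original is evidently a subclassed model (hence the 'multiple' output shapes), and the activations and dropout rates are assumptions taken from the DeepSEA paper, not recorded in this log:

import tensorflow as tf
from tensorflow.keras import layers

def build_deep_sea(seq_len=1000, n_targets=919):
    # Hyperparameters inferred from the summary's parameter counts; ReLU
    # activations and dropout rates are assumed, not shown in the log.
    return tf.keras.Sequential([
        layers.Conv1D(320, 8, padding='same', use_bias=False,
                      activation='relu', input_shape=(seq_len, 4)),    # 10,240 params
        layers.MaxPooling1D(4, padding='same'),                        # 1000 -> 250
        layers.Dropout(0.2),
        layers.Conv1D(480, 8, padding='same', use_bias=False,
                      activation='relu'),                              # 1,228,800 params
        layers.MaxPooling1D(4, padding='same'),                        # 250 -> 63
        layers.Dropout(0.2),
        layers.Conv1D(960, 8, padding='same', use_bias=False,
                      activation='relu'),                              # 3,686,400 params
        layers.Dropout(0.5),
        layers.Flatten(),                                              # 63 * 960 = 60,480
        layers.Dense(925, use_bias=False, activation='relu'),          # 55,944,000 params
        layers.Dense(n_targets, use_bias=False, activation='sigmoid')  # 850,075 params
    ], name='deep_sea')

Summing these gives the logged total of 61,719,515 trainable parameters.]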
Epoch 1/60
68750/68750 - 4446s - loss: 0.0849 - val_loss: 0.0785
Epoch 2/60
68750/68750 - 4447s - loss: 0.0808 - val_loss: 0.0757
Epoch 3/60
68750/68750 - 4451s - loss: 0.0793 - val_loss: 0.0742
Epoch 4/60
68750/68750 - 4450s - loss: 0.0775 - val_loss: 0.0705
Epoch 5/60
68750/68750 - 4452s - loss: 0.0744 - val_loss: 0.0666
Epoch 6/60
68750/68750 - 4453s - loss: 0.0722 - val_loss: 0.0643
Epoch 7/60
68750/68750 - 4450s - loss: 0.0710 - val_loss: 0.0636
Epoch 8/60
68750/68750 - 4451s - loss: 0.0703 - val_loss: 0.0630
Epoch 9/60
68750/68750 - 4450s - loss: 0.0698 - val_loss: 0.0625
Epoch 10/60
68750/68750 - 4457s - loss: 0.0693 - val_loss: 0.0621
Epoch 11/60
68750/68750 - 4459s - loss: 0.0690 - val_loss: 0.0619
Epoch 12/60
68750/68750 - 4458s - loss: 0.0686 - val_loss: 0.0612
Epoch 13/60
68750/68750 - 4454s - loss: 0.0683 - val_loss: 0.0610
Epoch 14/60
68750/68750 - 4471s - loss: 0.0681 - val_loss: 0.0609
Epoch 15/60
68750/68750 - 4464s - loss: 0.0678 - val_loss: 0.0611
Epoch 16/60
68750/68750 - 4468s - loss: 0.0676 - val_loss: 0.0604
Epoch 17/60
68750/68750 - 4470s - loss: 0.0674 - val_loss: 0.0602
Epoch 18/60
68750/68750 - 4469s - loss: 0.0672 - val_loss: 0.0603
Epoch 19/60
68750/68750 - 4469s - loss: 0.0671 - val_loss: 0.0597
Epoch 20/60
68750/68750 - 4469s - loss: 0.0669 - val_loss: 0.0603
Epoch 21/60
68750/68750 - 4467s - loss: 0.0667 - val_loss: 0.0598
Epoch 22/60
68750/68750 - 4455s - loss: 0.0666 - val_loss: 0.0599
Epoch 23/60
68750/68750 - 4480s - loss: 0.0664 - val_loss: 0.0596
Epoch 24/60
68750/68750 - 4480s - loss: 0.0663 - val_loss: 0.0594
Epoch 25/60
68750/68750 - 4480s - loss: 0.0661 - val_loss: 0.0592
Epoch 26/60
68750/68750 - 4481s - loss: 0.0660 - val_loss: 0.0589
Epoch 27/60
68750/68750 - 4482s - loss: 0.0658 - val_loss: 0.0589
Epoch 28/60
68750/68750 - 4479s - loss: 0.0657 - val_loss: 0.0588
Epoch 29/60
68750/68750 - 4481s - loss: 0.0655 - val_loss: 0.0584
Epoch 30/60
68750/68750 - 4480s - loss: 0.0654 - val_loss: 0.0584
Epoch 31/60
68750/68750 - 4480s - loss: 0.0653 - val_loss: 0.0585
Epoch 32/60
68750/68750 - 4459s - loss: 0.0652 - val_loss: 0.0586
Epoch 33/60
68750/68750 - 4482s - loss: 0.0651 - val_loss: 0.0580
Epoch 34/60
68750/68750 - 4459s - loss: 0.0650 - val_loss: 0.0577
Epoch 35/60
68750/68750 - 4481s - loss: 0.0649 - val_loss: 0.0579
Epoch 36/60
68750/68750 - 4481s - loss: 0.0648 - val_loss: 0.0579
Epoch 37/60
68750/68750 - 4481s - loss: 0.0647 - val_loss: 0.0577
Epoch 38/60
68750/68750 - 4459s - loss: 0.0646 - val_loss: 0.0579
Epoch 39/60
68750/68750 - 4481s - loss: 0.0645 - val_loss: 0.0576
Epoch 40/60
68750/68750 - 4481s - loss: 0.0644 - val_loss: 0.0578
Epoch 41/60
68750/68750 - 4481s - loss: 0.0643 - val_loss: 0.0575
Epoch 42/60
68750/68750 - 4482s - loss: 0.0643 - val_loss: 0.0573
Epoch 43/60
68750/68750 - 4482s - loss: 0.0642 - val_loss: 0.0573
Epoch 44/60
68750/68750 - 4480s - loss: 0.0641 - val_loss: 0.0572
Epoch 45/60
68750/68750 - 4479s - loss: 0.0641 - val_loss: 0.0572
Epoch 46/60
68750/68750 - 4481s - loss: 0.0640 - val_loss: 0.0572
Epoch 47/60
68750/68750 - 4481s - loss: 0.0639 - val_loss: 0.0574
Epoch 48/60
68750/68750 - 4458s - loss: 0.0639 - val_loss: 0.0574
Epoch 49/60
68750/68750 - 4474s - loss: 0.0638 - val_loss: 0.0579
Epoch 50/60
68750/68750 - 4471s - loss: 0.0638 - val_loss: 0.0573
Epoch 51/60
68750/68750 - 4475s - loss: 0.0637 - val_loss: 0.0571
Epoch 52/60
68750/68750 - 4481s - loss: 0.0637 - val_loss: 0.0572
Epoch 53/60
68750/68750 - 4481s - loss: 0.0636 - val_loss: 0.0570
Epoch 54/60
68750/68750 - 4482s - loss: 0.0635 - val_loss: 0.0569
Epoch 55/60
68750/68750 - 4461s - loss: 0.0635 - val_loss: 0.0576
Epoch 56/60
68750/68750 - 4482s - loss: 0.0634 - val_loss: 0.0567
Epoch 57/60
68750/68750 - 4482s - loss: 0.0634 - val_loss: 0.0564
Epoch 58/60
68750/68750 - 4480s - loss: 0.0633 - val_loss: 0.0570
Epoch 59/60
68750/68750 - 4460s - loss: 0.0633 - val_loss: 0.0567
Epoch 60/60
68750/68750 - 4479s - loss: 0.0633 - val_loss: 0.0566
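[Note: the per-epoch lines above are Keras verbose=2 output with 68750 steps per epoch, and the final line below is a plain print of History.history. A hypothetical driver consistent with both; train_ds and val_ds stand in for the actual input pipelines, which this log does not show:

history = model.fit(train_ds,
                    epochs=60,
                    steps_per_epoch=68750,
                    validation_data=val_ds,
                    verbose=2)           # one summary line per epoch, as above
print('history dict:', history.history)
]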
history dict: {'loss': [0.0849451608269323, 0.08080037057811564, 0.07929819229497151, 0.07751906542081724, 0.07443941302321173, 0.07218060060907494, 0.07101918425093998, 0.07029783823793585, 0.06977145769623193, 0.06934412133818323, 0.06896665472325954, 0.06863522322511131, 0.06833930801532485, 0.06807393153171648, 0.06783346461705186, 0.06762573275682601, 0.0674214681928808, 0.06722295720048926, 0.06705521684007211, 0.06688309726647355, 0.06672671703490345, 0.06657199896560474, 0.06641123510393229, 0.06625966560304165, 0.06612196978815577, 0.06595979910303246, 0.06580358250628818, 0.0656535134942965, 0.06552545294916087, 0.06540022441677072, 0.0652677051355351, 0.0651559421064095, 0.06506314042129299, 0.0649614969758283, 0.06485085233693773, 0.0647660781248862, 0.06466663256428458, 0.06456853301449256, 0.0645017734271017, 0.06443582862886515, 0.06434536902573976, 0.06429137461551211, 0.06422062827958303, 0.06414797819682143, 0.06408057575764982, 0.06401454469680787, 0.0639470008383285, 0.06389115880532698, 0.06381998443329875, 0.06376198734811761, 0.0637112146344781, 0.06365083505075086, 0.06359815109659325, 0.06354729375606233, 0.0635165003125505, 0.06344800009516152, 0.06340045302182436, 0.0633474468107657, 0.06331628389106556, 0.06327888237668709], 'val_loss': [0.07851645225286484, 0.07569628074765206, 0.07423979607224464, 0.0705151962339878, 0.06655762061476707, 0.06429833371937275, 0.06358585821092129, 0.06298362548649311, 0.06252534598112107, 0.062139869064092634, 0.06187531509995461, 0.06121611425280571, 0.06095437574386597, 0.06087301939725876, 0.06107571329176426, 0.0604016934633255, 0.060157693520188335, 0.06025254563987255, 0.05972477617859841, 0.0603002815246582, 0.05982316438853741, 0.059944200828671454, 0.059629833474755284, 0.059397377371788025, 0.05919339294731617, 0.05892674478888511, 0.05887921214103699, 0.05880398726463318, 0.05837863963842392, 0.05842508116364479, 0.05848709903657436, 0.05861999009549618, 0.05804419936239719, 0.05768404181301594, 0.05790799793601036, 0.057937047392129896, 0.057723737642168996, 0.057935383051633836, 0.05755616322159767, 0.05779138107597828, 0.057540866762399674, 0.05734662486612797, 0.05732364100217819, 0.05719672575592995, 0.05723265181481838, 0.057154532000422476, 0.057392926901578906, 0.05739798822999, 0.05790724550187588, 0.057334719583392146, 0.057062333419919016, 0.057170499101281164, 0.056954135701060296, 0.05694164411723614, 0.057640389189124105, 0.05670825763046741, 0.056439793393015865, 0.056969311282038686, 0.05666451118886471, 0.056634672731161115]}
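[Note: training loss falls monotonically from 0.0849 to 0.0633 over the 60 epochs, while validation loss drops from 0.0785 to 0.0566 and is still improving slowly at the end, with no clear sign of overfitting. The printed dict can be re-plotted directly; a minimal sketch, assuming it has been pasted back into a variable named history:

import matplotlib.pyplot as plt

epochs = range(1, len(history['loss']) + 1)
plt.plot(epochs, history['loss'], label='train loss')
plt.plot(epochs, history['val_loss'], label='val loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.savefig('deep_sea_loss.png', dpi=150)
]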