
Need Help - ValueError: Layer #37 (named "batch_normalization_34" in the current model) was found to correspond to layer conv2d_35 in the save file. However the new layer batch_normalization_34 expects 4 weights, but the saved weights have 2 elements. #30

Closed
chetancc opened this issue Jul 5, 2020 · 5 comments


chetancc commented Jul 5, 2020

Hello Sir,

I like your project very much and I am trying it on Google Colab by following this notebook: https://colab.research.google.com/drive/1NLUwupCBsB1HrpEmOIHeMgU63sus2LxP. I am attaching the video (output_00006.mp4) and audio (taunt.wav) files for reference. All of the steps execute successfully, but the last step produces the log below and no output file appears in the /content directory, even after refreshing the folder in Google Colab. Please let me know if I am missing something.

/content/LipGAN
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
[... the same FutureWarning repeats for the remaining quantized dtypes in tensorflow and tensorboard ...]
Using TensorFlow backend.
2000
Number of frames available for inference: 3841
(80, 328)
Length of mel chunks: 95
0% 0/1 [00:00<?, ?it/s]
0% 0/61 [00:00<?, ?it/s]
[... progress lines trimmed ...]
100% 61/61 [00:26<00:00, 2.30it/s]
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:74: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:174: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:181: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

2020-07-05 15:01:10.670458: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
2020-07-05 15:01:10.674180: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcuda.so.1
2020-07-05 15:01:10.674808: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-05 15:01:10.675569: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0xcd015b80 executing computations on platform CUDA. Devices:
2020-07-05 15:01:10.675601: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Tesla P100-PCIE-16GB, Compute Capability 6.0
2020-07-05 15:01:10.677270: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2000185000 Hz
2020-07-05 15:01:10.677454: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0xcd015800 executing computations on platform Host. Devices:
2020-07-05 15:01:10.677481: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): ,
2020-07-05 15:01:10.677688: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-05 15:01:10.678238: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: Tesla P100-PCIE-16GB major: 6 minor: 0 memoryClockRate(GHz): 1.3285
pciBusID: 0000:00:04.0
2020-07-05 15:01:10.678764: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
2020-07-05 15:01:10.682118: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0
2020-07-05 15:01:10.684604: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10.0
2020-07-05 15:01:10.685235: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10.0
2020-07-05 15:01:10.688999: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10.0
2020-07-05 15:01:10.690899: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10.0
2020-07-05 15:01:10.691003: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2020-07-05 15:01:10.691109: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-05 15:01:10.691678: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-05 15:01:10.692183: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2020-07-05 15:01:10.692252: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
2020-07-05 15:01:10.693576: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-07-05 15:01:10.693601: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0
2020-07-05 15:01:10.693612: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N
2020-07-05 15:01:10.693725: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-05 15:01:10.694282: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-05 15:01:10.694774: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:40] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
2020-07-05 15:01:10.694810: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 15059 MB memory) -> physical GPU (device: 0, name: Tesla P100-PCIE-16GB, pci bus id: 0000:00:04.0, compute capability: 6.0)
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:1834: The name tf.nn.fused_batch_norm is deprecated. Please use tf.compat.v1.nn.fused_batch_norm instead.


Layer (type) Output Shape Param # Connected to

input_face (InputLayer) (None, 96, 96, 6) 0


conv2d_1 (Conv2D) (None, 96, 96, 32) 9440 input_face[0][0]


batch_normalization_1 (BatchNor (None, 96, 96, 32) 128 conv2d_1[0][0]


activation_1 (Activation) (None, 96, 96, 32) 0 batch_normalization_1[0][0]


conv2d_2 (Conv2D) (None, 48, 48, 64) 51264 activation_1[0][0]


batch_normalization_2 (BatchNor (None, 48, 48, 64) 256 conv2d_2[0][0]


activation_2 (Activation) (None, 48, 48, 64) 0 batch_normalization_2[0][0]


conv2d_3 (Conv2D) (None, 48, 48, 64) 36928 activation_2[0][0]


batch_normalization_3 (BatchNor (None, 48, 48, 64) 256 conv2d_3[0][0]


activation_3 (Activation) (None, 48, 48, 64) 0 batch_normalization_3[0][0]


conv2d_4 (Conv2D) (None, 48, 48, 64) 36928 activation_3[0][0]


batch_normalization_4 (BatchNor (None, 48, 48, 64) 256 conv2d_4[0][0]


activation_4 (Activation) (None, 48, 48, 64) 0 batch_normalization_4[0][0]


add_1 (Add) (None, 48, 48, 64) 0 activation_4[0][0]
activation_2[0][0]


activation_5 (Activation) (None, 48, 48, 64) 0 add_1[0][0]


input_audio (InputLayer) (None, 80, 27, 1) 0


conv2d_5 (Conv2D) (None, 48, 48, 64) 36928 activation_5[0][0]


conv2d_27 (Conv2D) (None, 80, 27, 32) 320 input_audio[0][0]


batch_normalization_5 (BatchNor (None, 48, 48, 64) 256 conv2d_5[0][0]


batch_normalization_27 (BatchNo (None, 80, 27, 32) 128 conv2d_27[0][0]


activation_6 (Activation) (None, 48, 48, 64) 0 batch_normalization_5[0][0]


activation_36 (Activation) (None, 80, 27, 32) 0 batch_normalization_27[0][0]


conv2d_6 (Conv2D) (None, 48, 48, 64) 36928 activation_6[0][0]


conv2d_28 (Conv2D) (None, 80, 27, 32) 9248 activation_36[0][0]


batch_normalization_6 (BatchNor (None, 48, 48, 64) 256 conv2d_6[0][0]


batch_normalization_28 (BatchNo (None, 80, 27, 32) 128 conv2d_28[0][0]


activation_7 (Activation) (None, 48, 48, 64) 0 batch_normalization_6[0][0]


activation_37 (Activation) (None, 80, 27, 32) 0 batch_normalization_28[0][0]


add_2 (Add) (None, 48, 48, 64) 0 activation_7[0][0]
activation_5[0][0]


conv2d_29 (Conv2D) (None, 80, 27, 32) 9248 activation_37[0][0]


activation_8 (Activation) (None, 48, 48, 64) 0 add_2[0][0]


batch_normalization_29 (BatchNo (None, 80, 27, 32) 128 conv2d_29[0][0]


conv2d_7 (Conv2D) (None, 24, 24, 128) 73856 activation_8[0][0]


activation_38 (Activation) (None, 80, 27, 32) 0 batch_normalization_29[0][0]


batch_normalization_7 (BatchNor (None, 24, 24, 128) 512 conv2d_7[0][0]


add_10 (Add) (None, 80, 27, 32) 0 activation_38[0][0]
activation_36[0][0]


activation_9 (Activation) (None, 24, 24, 128) 0 batch_normalization_7[0][0]


activation_39 (Activation) (None, 80, 27, 32) 0 add_10[0][0]


conv2d_8 (Conv2D) (None, 24, 24, 128) 147584 activation_9[0][0]


conv2d_30 (Conv2D) (None, 80, 27, 32) 9248 activation_39[0][0]


batch_normalization_8 (BatchNor (None, 24, 24, 128) 512 conv2d_8[0][0]


batch_normalization_30 (BatchNo (None, 80, 27, 32) 128 conv2d_30[0][0]


activation_10 (Activation) (None, 24, 24, 128) 0 batch_normalization_8[0][0]


activation_40 (Activation) (None, 80, 27, 32) 0 batch_normalization_30[0][0]


conv2d_9 (Conv2D) (None, 24, 24, 128) 147584 activation_10[0][0]


conv2d_31 (Conv2D) (None, 80, 27, 32) 9248 activation_40[0][0]


batch_normalization_9 (BatchNor (None, 24, 24, 128) 512 conv2d_9[0][0]


batch_normalization_31 (BatchNo (None, 80, 27, 32) 128 conv2d_31[0][0]


activation_11 (Activation) (None, 24, 24, 128) 0 batch_normalization_9[0][0]


activation_41 (Activation) (None, 80, 27, 32) 0 batch_normalization_31[0][0]


add_3 (Add) (None, 24, 24, 128) 0 activation_11[0][0]
activation_9[0][0]


add_11 (Add) (None, 80, 27, 32) 0 activation_41[0][0]
activation_39[0][0]


activation_12 (Activation) (None, 24, 24, 128) 0 add_3[0][0]


activation_42 (Activation) (None, 80, 27, 32) 0 add_11[0][0]


conv2d_10 (Conv2D) (None, 24, 24, 128) 147584 activation_12[0][0]


conv2d_32 (Conv2D) (None, 27, 9, 64) 18496 activation_42[0][0]


batch_normalization_10 (BatchNo (None, 24, 24, 128) 512 conv2d_10[0][0]


batch_normalization_32 (BatchNo (None, 27, 9, 64) 256 conv2d_32[0][0]


activation_13 (Activation) (None, 24, 24, 128) 0 batch_normalization_10[0][0]


activation_43 (Activation) (None, 27, 9, 64) 0 batch_normalization_32[0][0]


conv2d_11 (Conv2D) (None, 24, 24, 128) 147584 activation_13[0][0]


conv2d_33 (Conv2D) (None, 27, 9, 64) 36928 activation_43[0][0]


batch_normalization_11 (BatchNo (None, 24, 24, 128) 512 conv2d_11[0][0]


batch_normalization_33 (BatchNo (None, 27, 9, 64) 256 conv2d_33[0][0]


activation_14 (Activation) (None, 24, 24, 128) 0 batch_normalization_11[0][0]


activation_44 (Activation) (None, 27, 9, 64) 0 batch_normalization_33[0][0]


add_4 (Add) (None, 24, 24, 128) 0 activation_14[0][0]
activation_12[0][0]


conv2d_34 (Conv2D) (None, 27, 9, 64) 36928 activation_44[0][0]


activation_15 (Activation) (None, 24, 24, 128) 0 add_4[0][0]


batch_normalization_34 (BatchNo (None, 27, 9, 64) 256 conv2d_34[0][0]


conv2d_12 (Conv2D) (None, 24, 24, 128) 147584 activation_15[0][0]


activation_45 (Activation) (None, 27, 9, 64) 0 batch_normalization_34[0][0]


batch_normalization_12 (BatchNo (None, 24, 24, 128) 512 conv2d_12[0][0]


add_12 (Add) (None, 27, 9, 64) 0 activation_45[0][0]
activation_43[0][0]


activation_16 (Activation) (None, 24, 24, 128) 0 batch_normalization_12[0][0]


activation_46 (Activation) (None, 27, 9, 64) 0 add_12[0][0]


conv2d_13 (Conv2D) (None, 24, 24, 128) 147584 activation_16[0][0]


conv2d_35 (Conv2D) (None, 27, 9, 64) 36928 activation_46[0][0]


batch_normalization_13 (BatchNo (None, 24, 24, 128) 512 conv2d_13[0][0]


batch_normalization_35 (BatchNo (None, 27, 9, 64) 256 conv2d_35[0][0]


activation_17 (Activation) (None, 24, 24, 128) 0 batch_normalization_13[0][0]


activation_47 (Activation) (None, 27, 9, 64) 0 batch_normalization_35[0][0]


add_5 (Add) (None, 24, 24, 128) 0 activation_17[0][0]
activation_15[0][0]


conv2d_36 (Conv2D) (None, 27, 9, 64) 36928 activation_47[0][0]


activation_18 (Activation) (None, 24, 24, 128) 0 add_5[0][0]


batch_normalization_36 (BatchNo (None, 27, 9, 64) 256 conv2d_36[0][0]


conv2d_14 (Conv2D) (None, 12, 12, 256) 295168 activation_18[0][0]


activation_48 (Activation) (None, 27, 9, 64) 0 batch_normalization_36[0][0]


batch_normalization_14 (BatchNo (None, 12, 12, 256) 1024 conv2d_14[0][0]


add_13 (Add) (None, 27, 9, 64) 0 activation_48[0][0]
activation_46[0][0]


activation_19 (Activation) (None, 12, 12, 256) 0 batch_normalization_14[0][0]


activation_49 (Activation) (None, 27, 9, 64) 0 add_13[0][0]


conv2d_15 (Conv2D) (None, 12, 12, 256) 590080 activation_19[0][0]


conv2d_37 (Conv2D) (None, 9, 9, 128) 73856 activation_49[0][0]


batch_normalization_15 (BatchNo (None, 12, 12, 256) 1024 conv2d_15[0][0]


batch_normalization_37 (BatchNo (None, 9, 9, 128) 512 conv2d_37[0][0]


activation_20 (Activation) (None, 12, 12, 256) 0 batch_normalization_15[0][0]


activation_50 (Activation) (None, 9, 9, 128) 0 batch_normalization_37[0][0]


conv2d_16 (Conv2D) (None, 12, 12, 256) 590080 activation_20[0][0]


conv2d_38 (Conv2D) (None, 9, 9, 128) 147584 activation_50[0][0]


batch_normalization_16 (BatchNo (None, 12, 12, 256) 1024 conv2d_16[0][0]


batch_normalization_38 (BatchNo (None, 9, 9, 128) 512 conv2d_38[0][0]


activation_21 (Activation) (None, 12, 12, 256) 0 batch_normalization_16[0][0]


activation_51 (Activation) (None, 9, 9, 128) 0 batch_normalization_38[0][0]


add_6 (Add) (None, 12, 12, 256) 0 activation_21[0][0]
activation_19[0][0]


conv2d_39 (Conv2D) (None, 9, 9, 128) 147584 activation_51[0][0]


activation_22 (Activation) (None, 12, 12, 256) 0 add_6[0][0]


batch_normalization_39 (BatchNo (None, 9, 9, 128) 512 conv2d_39[0][0]


conv2d_17 (Conv2D) (None, 12, 12, 256) 590080 activation_22[0][0]


activation_52 (Activation) (None, 9, 9, 128) 0 batch_normalization_39[0][0]


batch_normalization_17 (BatchNo (None, 12, 12, 256) 1024 conv2d_17[0][0]


add_14 (Add) (None, 9, 9, 128) 0 activation_52[0][0]
activation_50[0][0]


activation_23 (Activation) (None, 12, 12, 256) 0 batch_normalization_17[0][0]


activation_53 (Activation) (None, 9, 9, 128) 0 add_14[0][0]


conv2d_18 (Conv2D) (None, 12, 12, 256) 590080 activation_23[0][0]


conv2d_40 (Conv2D) (None, 9, 9, 128) 147584 activation_53[0][0]


batch_normalization_18 (BatchNo (None, 12, 12, 256) 1024 conv2d_18[0][0]


batch_normalization_40 (BatchNo (None, 9, 9, 128) 512 conv2d_40[0][0]


activation_24 (Activation) (None, 12, 12, 256) 0 batch_normalization_18[0][0]


activation_54 (Activation) (None, 9, 9, 128) 0 batch_normalization_40[0][0]


add_7 (Add) (None, 12, 12, 256) 0 activation_24[0][0]
activation_22[0][0]


conv2d_41 (Conv2D) (None, 9, 9, 128) 147584 activation_54[0][0]


activation_25 (Activation) (None, 12, 12, 256) 0 add_7[0][0]


batch_normalization_41 (BatchNo (None, 9, 9, 128) 512 conv2d_41[0][0]


conv2d_19 (Conv2D) (None, 6, 6, 512) 1180160 activation_25[0][0]


activation_55 (Activation) (None, 9, 9, 128) 0 batch_normalization_41[0][0]


batch_normalization_19 (BatchNo (None, 6, 6, 512) 2048 conv2d_19[0][0]


add_15 (Add) (None, 9, 9, 128) 0 activation_55[0][0]
activation_53[0][0]


activation_26 (Activation) (None, 6, 6, 512) 0 batch_normalization_19[0][0]


activation_56 (Activation) (None, 9, 9, 128) 0 add_15[0][0]


conv2d_20 (Conv2D) (None, 6, 6, 512) 2359808 activation_26[0][0]


conv2d_42 (Conv2D) (None, 3, 3, 256) 295168 activation_56[0][0]


batch_normalization_20 (BatchNo (None, 6, 6, 512) 2048 conv2d_20[0][0]


batch_normalization_42 (BatchNo (None, 3, 3, 256) 1024 conv2d_42[0][0]


activation_27 (Activation) (None, 6, 6, 512) 0 batch_normalization_20[0][0]


activation_57 (Activation) (None, 3, 3, 256) 0 batch_normalization_42[0][0]


conv2d_21 (Conv2D) (None, 6, 6, 512) 2359808 activation_27[0][0]


conv2d_43 (Conv2D) (None, 3, 3, 256) 590080 activation_57[0][0]


batch_normalization_21 (BatchNo (None, 6, 6, 512) 2048 conv2d_21[0][0]


batch_normalization_43 (BatchNo (None, 3, 3, 256) 1024 conv2d_43[0][0]


activation_28 (Activation) (None, 6, 6, 512) 0 batch_normalization_21[0][0]


activation_58 (Activation) (None, 3, 3, 256) 0 batch_normalization_43[0][0]


add_8 (Add) (None, 6, 6, 512) 0 activation_28[0][0]
activation_26[0][0]


conv2d_44 (Conv2D) (None, 3, 3, 256) 590080 activation_58[0][0]


activation_29 (Activation) (None, 6, 6, 512) 0 add_8[0][0]


batch_normalization_44 (BatchNo (None, 3, 3, 256) 1024 conv2d_44[0][0]


conv2d_22 (Conv2D) (None, 6, 6, 512) 2359808 activation_29[0][0]


activation_59 (Activation) (None, 3, 3, 256) 0 batch_normalization_44[0][0]


batch_normalization_22 (BatchNo (None, 6, 6, 512) 2048 conv2d_22[0][0]


add_16 (Add) (None, 3, 3, 256) 0 activation_59[0][0]
activation_57[0][0]


activation_30 (Activation) (None, 6, 6, 512) 0 batch_normalization_22[0][0]


activation_60 (Activation) (None, 3, 3, 256) 0 add_16[0][0]


conv2d_23 (Conv2D) (None, 6, 6, 512) 2359808 activation_30[0][0]


conv2d_45 (Conv2D) (None, 3, 3, 256) 590080 activation_60[0][0]


batch_normalization_23 (BatchNo (None, 6, 6, 512) 2048 conv2d_23[0][0]


batch_normalization_45 (BatchNo (None, 3, 3, 256) 1024 conv2d_45[0][0]


activation_31 (Activation) (None, 6, 6, 512) 0 batch_normalization_23[0][0]


activation_61 (Activation) (None, 3, 3, 256) 0 batch_normalization_45[0][0]


add_9 (Add) (None, 6, 6, 512) 0 activation_31[0][0]
activation_29[0][0]


conv2d_46 (Conv2D) (None, 3, 3, 256) 590080 activation_61[0][0]


activation_32 (Activation) (None, 6, 6, 512) 0 add_9[0][0]


batch_normalization_46 (BatchNo (None, 3, 3, 256) 1024 conv2d_46[0][0]


conv2d_24 (Conv2D) (None, 3, 3, 512) 2359808 activation_32[0][0]


activation_62 (Activation) (None, 3, 3, 256) 0 batch_normalization_46[0][0]


batch_normalization_24 (BatchNo (None, 3, 3, 512) 2048 conv2d_24[0][0]


add_17 (Add) (None, 3, 3, 256) 0 activation_62[0][0]
activation_60[0][0]


activation_33 (Activation) (None, 3, 3, 512) 0 batch_normalization_24[0][0]


activation_63 (Activation) (None, 3, 3, 256) 0 add_17[0][0]


conv2d_25 (Conv2D) (None, 1, 1, 512) 2359808 activation_33[0][0]


conv2d_47 (Conv2D) (None, 1, 1, 512) 1180160 activation_63[0][0]


batch_normalization_25 (BatchNo (None, 1, 1, 512) 2048 conv2d_25[0][0]


batch_normalization_47 (BatchNo (None, 1, 1, 512) 2048 conv2d_47[0][0]


activation_34 (Activation) (None, 1, 1, 512) 0 batch_normalization_25[0][0]


activation_64 (Activation) (None, 1, 1, 512) 0 batch_normalization_47[0][0]


conv2d_26 (Conv2D) (None, 1, 1, 512) 262656 activation_34[0][0]


conv2d_48 (Conv2D) (None, 1, 1, 512) 262656 activation_64[0][0]


batch_normalization_26 (BatchNo (None, 1, 1, 512) 2048 conv2d_26[0][0]


batch_normalization_48 (BatchNo (None, 1, 1, 512) 2048 conv2d_48[0][0]


activation_35 (Activation) (None, 1, 1, 512) 0 batch_normalization_26[0][0]


activation_65 (Activation) (None, 1, 1, 512) 0 batch_normalization_48[0][0]


concatenate_1 (Concatenate) (None, 1, 1, 1024) 0 activation_35[0][0]
activation_65[0][0]


conv2d_transpose_1 (Conv2DTrans (None, 3, 3, 512) 4719104 concatenate_1[0][0]


batch_normalization_49 (BatchNo (None, 3, 3, 512) 2048 conv2d_transpose_1[0][0]


activation_66 (Activation) (None, 3, 3, 512) 0 batch_normalization_49[0][0]


concatenate_2 (Concatenate) (None, 3, 3, 1024) 0 activation_33[0][0]
activation_66[0][0]


conv2d_transpose_2 (Conv2DTrans (None, 6, 6, 512) 4719104 concatenate_2[0][0]


batch_normalization_50 (BatchNo (None, 6, 6, 512) 2048 conv2d_transpose_2[0][0]


activation_67 (Activation) (None, 6, 6, 512) 0 batch_normalization_50[0][0]


conv2d_49 (Conv2D) (None, 6, 6, 512) 2359808 activation_67[0][0]


batch_normalization_51 (BatchNo (None, 6, 6, 512) 2048 conv2d_49[0][0]


activation_68 (Activation) (None, 6, 6, 512) 0 batch_normalization_51[0][0]


conv2d_50 (Conv2D) (None, 6, 6, 512) 2359808 activation_68[0][0]


batch_normalization_52 (BatchNo (None, 6, 6, 512) 2048 conv2d_50[0][0]


activation_69 (Activation) (None, 6, 6, 512) 0 batch_normalization_52[0][0]


add_18 (Add) (None, 6, 6, 512) 0 activation_69[0][0]
activation_67[0][0]


activation_70 (Activation) (None, 6, 6, 512) 0 add_18[0][0]


conv2d_51 (Conv2D) (None, 6, 6, 512) 2359808 activation_70[0][0]


batch_normalization_53 (BatchNo (None, 6, 6, 512) 2048 conv2d_51[0][0]


activation_71 (Activation) (None, 6, 6, 512) 0 batch_normalization_53[0][0]


conv2d_52 (Conv2D) (None, 6, 6, 512) 2359808 activation_71[0][0]


batch_normalization_54 (BatchNo (None, 6, 6, 512) 2048 conv2d_52[0][0]


activation_72 (Activation) (None, 6, 6, 512) 0 batch_normalization_54[0][0]


add_19 (Add) (None, 6, 6, 512) 0 activation_72[0][0]
activation_70[0][0]


activation_73 (Activation) (None, 6, 6, 512) 0 add_19[0][0]


concatenate_3 (Concatenate) (None, 6, 6, 1024) 0 activation_32[0][0]
activation_73[0][0]


conv2d_transpose_3 (Conv2DTrans (None, 12, 12, 256) 2359552 concatenate_3[0][0]


batch_normalization_55 (BatchNo (None, 12, 12, 256) 1024 conv2d_transpose_3[0][0]


activation_74 (Activation) (None, 12, 12, 256) 0 batch_normalization_55[0][0]


conv2d_53 (Conv2D) (None, 12, 12, 256) 590080 activation_74[0][0]


batch_normalization_56 (BatchNo (None, 12, 12, 256) 1024 conv2d_53[0][0]


activation_75 (Activation) (None, 12, 12, 256) 0 batch_normalization_56[0][0]


conv2d_54 (Conv2D) (None, 12, 12, 256) 590080 activation_75[0][0]


batch_normalization_57 (BatchNo (None, 12, 12, 256) 1024 conv2d_54[0][0]


activation_76 (Activation) (None, 12, 12, 256) 0 batch_normalization_57[0][0]


add_20 (Add) (None, 12, 12, 256) 0 activation_76[0][0]
activation_74[0][0]


activation_77 (Activation) (None, 12, 12, 256) 0 add_20[0][0]


conv2d_55 (Conv2D) (None, 12, 12, 256) 590080 activation_77[0][0]


batch_normalization_58 (BatchNo (None, 12, 12, 256) 1024 conv2d_55[0][0]


activation_78 (Activation) (None, 12, 12, 256) 0 batch_normalization_58[0][0]


conv2d_56 (Conv2D) (None, 12, 12, 256) 590080 activation_78[0][0]


batch_normalization_59 (BatchNo (None, 12, 12, 256) 1024 conv2d_56[0][0]


activation_79 (Activation) (None, 12, 12, 256) 0 batch_normalization_59[0][0]


add_21 (Add) (None, 12, 12, 256) 0 activation_79[0][0]
activation_77[0][0]


activation_80 (Activation) (None, 12, 12, 256) 0 add_21[0][0]


concatenate_4 (Concatenate) (None, 12, 12, 512) 0 activation_25[0][0]
activation_80[0][0]


conv2d_transpose_4 (Conv2DTrans (None, 24, 24, 128) 589952 concatenate_4[0][0]


batch_normalization_60 (BatchNo (None, 24, 24, 128) 512 conv2d_transpose_4[0][0]


activation_81 (Activation) (None, 24, 24, 128) 0 batch_normalization_60[0][0]


conv2d_57 (Conv2D) (None, 24, 24, 128) 147584 activation_81[0][0]


batch_normalization_61 (BatchNo (None, 24, 24, 128) 512 conv2d_57[0][0]


activation_82 (Activation) (None, 24, 24, 128) 0 batch_normalization_61[0][0]


conv2d_58 (Conv2D) (None, 24, 24, 128) 147584 activation_82[0][0]


batch_normalization_62 (BatchNo (None, 24, 24, 128) 512 conv2d_58[0][0]


activation_83 (Activation) (None, 24, 24, 128) 0 batch_normalization_62[0][0]


add_22 (Add) (None, 24, 24, 128) 0 activation_83[0][0]
activation_81[0][0]


activation_84 (Activation) (None, 24, 24, 128) 0 add_22[0][0]


conv2d_59 (Conv2D) (None, 24, 24, 128) 147584 activation_84[0][0]


batch_normalization_63 (BatchNo (None, 24, 24, 128) 512 conv2d_59[0][0]


activation_85 (Activation) (None, 24, 24, 128) 0 batch_normalization_63[0][0]


conv2d_60 (Conv2D) (None, 24, 24, 128) 147584 activation_85[0][0]


batch_normalization_64 (BatchNo (None, 24, 24, 128) 512 conv2d_60[0][0]


activation_86 (Activation) (None, 24, 24, 128) 0 batch_normalization_64[0][0]


add_23 (Add) (None, 24, 24, 128) 0 activation_86[0][0]
activation_84[0][0]


activation_87 (Activation) (None, 24, 24, 128) 0 add_23[0][0]


concatenate_5 (Concatenate) (None, 24, 24, 256) 0 activation_18[0][0]
activation_87[0][0]


conv2d_transpose_5 (Conv2DTrans (None, 48, 48, 64) 147520 concatenate_5[0][0]


batch_normalization_65 (BatchNo (None, 48, 48, 64) 256 conv2d_transpose_5[0][0]


activation_88 (Activation) (None, 48, 48, 64) 0 batch_normalization_65[0][0]


conv2d_61 (Conv2D) (None, 48, 48, 64) 36928 activation_88[0][0]


batch_normalization_66 (BatchNo (None, 48, 48, 64) 256 conv2d_61[0][0]


activation_89 (Activation) (None, 48, 48, 64) 0 batch_normalization_66[0][0]


conv2d_62 (Conv2D) (None, 48, 48, 64) 36928 activation_89[0][0]


batch_normalization_67 (BatchNo (None, 48, 48, 64) 256 conv2d_62[0][0]


activation_90 (Activation) (None, 48, 48, 64) 0 batch_normalization_67[0][0]


add_24 (Add) (None, 48, 48, 64) 0 activation_90[0][0]
activation_88[0][0]


activation_91 (Activation) (None, 48, 48, 64) 0 add_24[0][0]


conv2d_63 (Conv2D) (None, 48, 48, 64) 36928 activation_91[0][0]


batch_normalization_68 (BatchNo (None, 48, 48, 64) 256 conv2d_63[0][0]


activation_92 (Activation) (None, 48, 48, 64) 0 batch_normalization_68[0][0]


conv2d_64 (Conv2D) (None, 48, 48, 64) 36928 activation_92[0][0]


batch_normalization_69 (BatchNo (None, 48, 48, 64) 256 conv2d_64[0][0]


activation_93 (Activation) (None, 48, 48, 64) 0 batch_normalization_69[0][0]


add_25 (Add) (None, 48, 48, 64) 0 activation_93[0][0]
activation_91[0][0]


activation_94 (Activation) (None, 48, 48, 64) 0 add_25[0][0]


concatenate_6 (Concatenate) (None, 48, 48, 128) 0 activation_8[0][0]
activation_94[0][0]


conv2d_transpose_6 (Conv2DTrans (None, 96, 96, 32) 36896 concatenate_6[0][0]


batch_normalization_70 (BatchNo (None, 96, 96, 32) 128 conv2d_transpose_6[0][0]


activation_95 (Activation) (None, 96, 96, 32) 0 batch_normalization_70[0][0]


concatenate_7 (Concatenate) (None, 96, 96, 64) 0 activation_1[0][0]
activation_95[0][0]


conv2d_65 (Conv2D) (None, 96, 96, 16) 9232 concatenate_7[0][0]


batch_normalization_71 (BatchNo (None, 96, 96, 16) 64 conv2d_65[0][0]


activation_96 (Activation) (None, 96, 96, 16) 0 batch_normalization_71[0][0]


conv2d_66 (Conv2D) (None, 96, 96, 16) 2320 activation_96[0][0]


batch_normalization_72 (BatchNo (None, 96, 96, 16) 64 conv2d_66[0][0]


activation_97 (Activation) (None, 96, 96, 16) 0 batch_normalization_72[0][0]


conv2d_67 (Conv2D) (None, 96, 96, 3) 51 activation_97[0][0]


prediction (Activation) (None, 96, 96, 3) 0 conv2d_67[0][0]

Total params: 49,573,971
Trainable params: 49,543,123
Non-trainable params: 30,848


WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/optimizers.py:790: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.

Model Created
Traceback (most recent call last):
File "batch_inference.py", line 217, in <module>
main()
File "batch_inference.py", line 193, in main
model.load_weights(args.checkpoint_path)
File "/usr/local/lib/python3.6/dist-packages/keras/engine/network.py", line 1166, in load_weights
f, self.layers, reshape=reshape)
File "/usr/local/lib/python3.6/dist-packages/keras/engine/saving.py", line 1056, in load_weights_from_hdf5_group
' elements.')
ValueError: Layer #37 (named "batch_normalization_34" in the current model) was found to correspond to layer conv2d_35 in the save file. However the new layer batch_normalization_34 expects 4 weights, but the saved weights have 2 elements.

@prajwalkr
Collaborator

Could you please tell us your TensorFlow and Keras versions? Please ensure they are the same as the ones mentioned in requirements.txt.
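For context on why a version mismatch can surface as this exact error: Keras' `load_weights` (without `by_name=True`) pairs saved weight groups with model layers positionally, so if the save-time and load-time architectures drift apart, a BatchNormalization layer (4 weights) can line up against a saved Conv2D group (2 weights). The sketch below is purely illustrative, not the actual Keras source; the layer names and weight counts are assumptions chosen to mirror the reported message:

```python
# Hypothetical sketch of positional weight pairing going wrong when the
# architectures differ between save and load (e.g. after a Keras/TF upgrade).
saved = [("conv2d_34", 2), ("conv2d_35", 2)]                  # kernel + bias each
current = [("conv2d_34", 2), ("batch_normalization_34", 4)]   # gamma, beta, mean, var

def check_pairing(saved_groups, model_layers):
    """Pair saved weight groups with model layers by position and
    report any count mismatches, Keras-style."""
    errors = []
    for i, ((s_name, s_n), (m_name, m_n)) in enumerate(zip(saved_groups, model_layers)):
        if s_n != m_n:
            errors.append(
                f'Layer #{i} (named "{m_name}") corresponds to {s_name} in the '
                f"save file, but expects {m_n} weights; the saved weights have {s_n} elements."
            )
    return errors

print(check_pairing(saved, current)[0])
```

With matching architectures the pairing would line up cleanly, which is why pinning the exact TensorFlow/Keras versions usually resolves it.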

@chetancc
Author

chetancc commented Jul 5, 2020

Sure sir,
Based on the Google Colab notebook (https://colab.research.google.com/drive/1NLUwupCBsB1HrpEmOIHeMgU63sus2LxP#scrollTo=27x3sF9grFsH)
I was using:
tensorflow-gpu==1.14.0
Keras==2.2.4

As per requirements.txt, I will use:
tensorflow-gpu==1.8.0
Keras==2.2.4

I will keep you posted on the progress.
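One way to catch this kind of skew before running inference is to compare the running environment against the pins. The sketch below is only illustrative: the `installed` dict stands in for real `tf.__version__` / `keras.__version__` lookups, and its values are assumed examples matching this thread:

```python
# Minimal sketch: compare environment versions against requirements pins.
# "installed" is a hard-coded stand-in for querying the live runtime.
pins = {"tensorflow-gpu": "1.8.0", "Keras": "2.2.4"}         # from requirements.txt
installed = {"tensorflow-gpu": "1.14.0", "Keras": "2.2.4"}   # assumed Colab defaults

def mismatches(pins, installed):
    """Return {package: (pinned, installed)} for every package whose
    installed version differs from the pinned one."""
    return {
        pkg: (want, installed.get(pkg))
        for pkg, want in pins.items()
        if installed.get(pkg) != want
    }

print(mismatches(pins, installed))
```

Running such a check up front turns a cryptic weight-loading ValueError into an explicit "wrong tensorflow-gpu version" message.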

@chetancc
Author

chetancc commented Jul 6, 2020

Hello Sir,
Hope you are doing fine. In the existing Colab notebook, under "# Fixing tf version issue", I changed
!pip install tensorflow-gpu==1.14.0
to
!pip install tensorflow-gpu==1.8.0
and ran all the steps.
When it executes
import tensorflow as tf
I receive the log below.
Please help, or provide me an updated Google Colab notebook.


ImportError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py in <module>()
57
---> 58 from tensorflow.python.pywrap_tensorflow_internal import *
59 from tensorflow.python.pywrap_tensorflow_internal import __version__

ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory

During handling of the above exception, another exception occurred:

ImportError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py in <module>()
72 for some common reasons and solutions. Include the entire stack trace
73 above this error message when asking for help.""" % traceback.format_exc()
---> 74 raise ImportError(msg)
75
76 # pylint: enable=wildcard-import,g-import-not-at-top,unused-import,line-too-long

ImportError: Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "/usr/lib/python3.6/imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "/usr/lib/python3.6/imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory

Failed to load the native TensorFlow runtime.

See https://www.tensorflow.org/install/install_sources#common_installation_problems

for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.


NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.

To view examples of installing some common dependencies, click the
"Open Examples" button below.

@Rudrabha
Owner

Rudrabha commented Jul 6, 2020

This is a CUDA error. Please check your CUDA version: you will need CUDA 9.0 to run that TensorFlow version (tensorflow-gpu==1.8.0).
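For reference, each tensorflow-gpu 1.x release was built against one specific CUDA toolkit, so `libcublas.so.9.0: cannot open shared object file` means the runtime's CUDA does not match the wheel. A minimal lookup sketch; the entries come from TensorFlow's published tested build configurations and are worth double-checking against the official install docs:

```python
# Rough mapping of tensorflow-gpu releases to the CUDA toolkit they were
# built against (taken from TensorFlow's tested build configurations;
# treat individual entries as assumptions to verify).
CUDA_FOR_TF = {
    "1.8.0": "9.0",
    "1.12.0": "9.0",
    "1.14.0": "10.0",
}

def required_cuda(tf_version):
    """Return the CUDA toolkit version a given tensorflow-gpu wheel expects."""
    return CUDA_FOR_TF.get(tf_version, "unknown")

print(required_cuda("1.8.0"))
```

So after downgrading to tensorflow-gpu==1.8.0, a Colab runtime that ships CUDA 10.x will fail to find libcublas.so.9.0 exactly as in the log above.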

@chetancc
Author

chetancc commented Jul 6, 2020

Thanks Sir, I will get back to you.

@chetancc chetancc closed this as completed Jul 6, 2020