
TypeError: Tensors in list passed to 'values' of 'ConcatV2' Op have types [float32, float16] that don't all match. #8

Closed
king398 opened this issue Jan 28, 2021 · 18 comments

Comments

@king398 commented Jan 28, 2021

Thank you for this implementation of ViT in TensorFlow. But whenever I try to use it, I get the error:
`TypeError: Tensors in list passed to 'values' of 'ConcatV2' Op have types [float32, float16] that don't all match.`
I have tried setting all the variables to float32, but that does not work either. Please help me; my code is attached below.
VIT.zip

@faustomorales (Owner)

Hi! To help diagnose this issue, please share the following:

  • A minimal code snippet that results in this error. Keeping the reproducing code snippet short makes it easier for volunteer contributors to get started with debugging.
  • The full traceback from the error.
  • The versions for Python and TensorFlow.

@king398 (Author) commented Jan 28, 2021

The code is attached in VIT.zip. The traceback for the error:
```
2021-01-28 10:06:21.774441: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation.
WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation.
WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation.
WARNING:root:Limited tf.summary API due to missing TensorBoard installation.
2021-01-28 10:06:27.314537: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-01-28 10:06:27.317058: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library nvcuda.dll
2021-01-28 10:06:27.357692: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce RTX 3070 computeCapability: 8.6
coreClock: 1.725GHz coreCount: 46 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 417.29GiB/s
2021-01-28 10:06:27.358537: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
2021-01-28 10:06:27.369766: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublas64_11.dll
2021-01-28 10:06:27.370196: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublasLt64_11.dll
2021-01-28 10:06:27.375969: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cufft64_10.dll
2021-01-28 10:06:27.378850: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library curand64_10.dll
2021-01-28 10:06:27.390353: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusolver64_10.dll
2021-01-28 10:06:27.395070: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusparse64_11.dll
2021-01-28 10:06:27.398422: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudnn64_8.dll
2021-01-28 10:06:27.398911: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0
2021-01-28 10:06:27.399382: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
INFO:tensorflow:Mixed precision compatibility check (mixed_float16): OK
Your GPU will likely run quickly with dtype policy mixed_float16 as it has compute capability of at least 7.0. Your GPU: GeForce RTX 3070, compute capability 8.6
INFO:tensorflow:Mixed precision compatibility check (mixed_float16): OK
Your GPU will likely run quickly with dtype policy mixed_float16 as it has compute capability of at least 7.0. Your GPU: GeForce RTX 3070, compute capability 8.6
WARNING:tensorflow:From F:\anaconda\envs\tf\lib\site-packages\tensorflow\python\keras\mixed_precision\loss_scale.py:56: DynamicLossScale.__init__ (from tensorflow.python.training.experimental.loss_scale) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras.mixed_precision.LossScaleOptimizer instead. LossScaleOptimizer now has all the functionality of DynamicLossScale
WARNING:tensorflow:From F:\anaconda\envs\tf\lib\site-packages\tensorflow\python\keras\mixed_precision\loss_scale.py:56: DynamicLossScale.__init__ (from tensorflow.python.training.experimental.loss_scale) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras.mixed_precision.LossScaleOptimizer instead. LossScaleOptimizer now has all the functionality of DynamicLossScale
2021-01-28 10:06:27.400916: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-01-28 10:06:27.402236: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce RTX 3070 computeCapability: 8.6
coreClock: 1.725GHz coreCount: 46 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 417.29GiB/s
2021-01-28 10:06:27.403451: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
2021-01-28 10:06:27.403909: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublas64_11.dll
2021-01-28 10:06:27.404304: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublasLt64_11.dll
2021-01-28 10:06:27.404704: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cufft64_10.dll
2021-01-28 10:06:27.405096: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library curand64_10.dll
2021-01-28 10:06:27.405510: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusolver64_10.dll
2021-01-28 10:06:27.405925: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusparse64_11.dll
2021-01-28 10:06:27.406322: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudnn64_8.dll
2021-01-28 10:06:27.406754: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0
2021-01-28 10:06:28.047884: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-01-28 10:06:28.048396: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267] 0
2021-01-28 10:06:28.048649: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1280] 0: N
2021-01-28 10:06:28.049105: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1406] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6589 MB memory) -> physical GPU (device: 0, name: GeForce RTX 3070, pci bus id: 0000:01:00.0, compute capability: 8.6)
2021-01-28 10:06:28.051167: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
Traceback (most recent call last):
  File "F:\anaconda\envs\tf\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 438, in _apply_op_helper
    as_ref=input_arg.is_ref)
  File "F:\anaconda\envs\tf\lib\site-packages\tensorflow\python\framework\ops.py", line 1608, in internal_convert_n_to_tensor
    ctx=ctx))
  File "F:\anaconda\envs\tf\lib\site-packages\tensorflow\python\profiler\trace.py", line 163, in wrapped
    return func(*args, **kwargs)
  File "F:\anaconda\envs\tf\lib\site-packages\tensorflow\python\framework\ops.py", line 1509, in convert_to_tensor
    (dtype.name, value.dtype.name, value))
ValueError: Tensor conversion requested dtype float32 for Tensor with dtype float16: <tf.Tensor 'Placeholder:0' shape=(None, 144, 1024) dtype=float16>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "F:\anaconda\envs\tf\lib\site-packages\IPython\core\interactiveshell.py", line 3418, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-2-60aca5ddbb4e>", line 1, in <module>
    runfile('F:/Pycharm_projects/Kaggle Cassava/notebooks/VIT.py', wdir='F:/Pycharm_projects/Kaggle Cassava/notebooks')
  File "F:\Pycharm professional\PyCharm 2020.2.3\plugins\python\helpers\pydev\_pydev_bundle\pydev_umd.py", line 197, in runfile
    pydev_imports.execfile(filename, global_vars, local_vars)  # execute the script
  File "F:\Pycharm professional\PyCharm 2020.2.3\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "F:/Pycharm_projects/Kaggle Cassava/notebooks/VIT.py", line 33, in <module>
    classes=5
  File "F:\anaconda\envs\tf\lib\site-packages\vit_keras\vit.py", line 250, in vit_l32
    representation_size=1024 if weights == "imagenet21k" else None,
  File "F:\anaconda\envs\tf\lib\site-packages\vit_keras\vit.py", line 77, in build_model
    y = layers.ClassToken(name="class_token")(y)
  File "F:\anaconda\envs\tf\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 952, in __call__
    input_list)
  File "F:\anaconda\envs\tf\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 1091, in _functional_construction_call
    inputs, input_masks, args, kwargs)
  File "F:\anaconda\envs\tf\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 822, in _keras_tensor_symbolic_call
    return self._infer_output_signature(inputs, args, kwargs, input_masks)
  File "F:\anaconda\envs\tf\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 863, in _infer_output_signature
    outputs = call_fn(inputs, *args, **kwargs)
  File "F:\anaconda\envs\tf\lib\site-packages\vit_keras\layers.py", line 22, in call
    return tf.concat([cls_broadcasted, inputs], 1)
  File "F:\anaconda\envs\tf\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
    return target(*args, **kwargs)
  File "F:\anaconda\envs\tf\lib\site-packages\tensorflow\python\ops\array_ops.py", line 1677, in concat
    return gen_array_ops.concat_v2(values=values, axis=axis, name=name)
  File "F:\anaconda\envs\tf\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 1207, in concat_v2
    "ConcatV2", values=values, axis=axis, name=name)
  File "F:\anaconda\envs\tf\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 466, in _apply_op_helper
    raise TypeError("%s that don't all match." % prefix)
TypeError: Tensors in list passed to 'values' of 'ConcatV2' Op have types [float32, float16] that don't all match.
```
Python Version -- 3.7.9
Tensorflow Version -- 2.4.1
Keras Version -- 2.4.3
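
For context on where this comes from: the failing op is the `tf.concat` inside `ClassToken.call` (see `vit_keras/layers.py` in the traceback), which concatenates a float32 class token with inputs that are float16 under the `mixed_float16` policy. A standalone sketch of the same dtype clash (shapes mirror the `(None, 144, 1024)` placeholder in the traceback; this is illustrative only, not the actual vit_keras code):

```python
import tensorflow as tf

# Illustrative only: a float32 "class token" concatenated with float16
# "patch embeddings", as happens inside ClassToken under mixed_float16.
cls_token = tf.zeros((1, 1, 1024), dtype=tf.float32)
patches = tf.zeros((1, 144, 1024), dtype=tf.float16)

try:
    tf.concat([cls_token, patches], axis=1)
except Exception as err:
    # Raises because the dtypes in the list passed to ConcatV2 do not match.
    print(type(err).__name__, err)
```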

@faustomorales (Owner)

Please provide a minimal code snippet, not an entire project. This will make it easier for the volunteer community to help.

@king398 (Author) commented Jan 28, 2021

Oh, sorry for that. I will try to do that.

@king398 (Author) commented Jan 28, 2021

VIT.zip
I will provide the dataset link in a few minutes.

@king398 (Author) commented Jan 28, 2021

@king398 (Author) commented Jan 28, 2021

Is it OK?

@faustomorales (Owner)

A minimal code sample is typically no more than 10-20 lines of standalone code. Volunteer contributors and maintainers rarely have time to download datasets and set up projects like this. Please see other closed issues for examples of good minimal snippets like this one.

@king398 (Author) commented Jan 28, 2021

Please look at it.

@king398 (Author) commented Jan 28, 2021

This is the smallest code sample I could produce that replicates the error.
VIT.zip

@faustomorales (Owner)

Please share the minimal code as an inline snippet in this issue instead of as a ZIP file. It is hard to comment on the contents of a ZIP file.

@king398 (Author) commented Jan 28, 2021

Oh, sorry:

```python
import tensorflow as tf
from tensorflow.keras.mixed_precision import experimental as mixed_precision
from vit_keras import vit
from keras.datasets import mnist

# Enable mixed precision globally (this is what triggers the error).
policy = mixed_precision.Policy('mixed_float16')
mixed_precision.set_policy(policy)

(x_train, y_train), (x_test, y_test) = mnist.load_data()

x_train = tf.keras.utils.normalize(x_train, axis=1)
x_test = tf.keras.utils.normalize(x_test, axis=1)

base_model = vit.vit_l32(
    activation=None,
    pretrained=False,
    include_top=False,
    pretrained_top=False,
)
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.Flatten(),
    # Keep the output head in float32, as recommended for mixed precision.
    tf.keras.layers.Dense(5, activation='softmax', dtype='float32'),
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(),
    metrics=['categorical_accuracy'],
)
model.fit(x=x_train, y=y_train, batch_size=32, epochs=20, shuffle=True,
          validation_data=(x_test, y_test))
```

@king398 (Author) commented Jan 28, 2021

Will this work?

@faustomorales (Owner)

It may take time for someone to debug and work on making mixed precision function properly, but this should be enough to get started. For now, I suggest trying to work without mixed precision.
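
If it helps in the meantime, one way to run without mixed precision is simply to leave the dtype policy at its default, or to reset it explicitly. A minimal sketch, assuming the same TF 2.4 experimental mixed precision API used in the snippet above:

```python
from tensorflow.keras.mixed_precision import experimental as mixed_precision

# Keep all computation in float32 instead of mixed_float16.
mixed_precision.set_policy(mixed_precision.Policy('float32'))
```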

@king398 (Author) commented Jan 28, 2021

Even without mixed precision, I am getting this error.

@king398 (Author) commented Jan 28, 2021

The code sample I have given here works, but the code sample I shared earlier does not.

@king398 (Author) commented Jan 28, 2021

It is working now, but mixed precision support would be awesome.

king398 closed this as completed Jan 28, 2021
@faustomorales (Owner)

Mixed precision support is now available thanks to https://github.com/Shiro-LK (see #11 (comment)).
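
For anyone stuck on an older version of vit_keras: the usual way to make a class-token layer safe under mixed precision is to cast the broadcast token to the dtype of the incoming activations before the concat. The sketch below only illustrates that idea and is not the exact code merged for #11:

```python
import tensorflow as tf

class ClassToken(tf.keras.layers.Layer):
    """Prepends a learnable class token to a sequence of patch embeddings."""

    def build(self, input_shape):
        self.hidden_size = input_shape[-1]
        # Keras keeps the weight itself in float32 under mixed precision.
        self.cls = self.add_weight(
            name="cls",
            shape=(1, 1, self.hidden_size),
            initializer="zeros",
            trainable=True,
        )

    def call(self, inputs):
        batch_size = tf.shape(inputs)[0]
        # Cast the broadcast token to the inputs' dtype (float16 under
        # mixed_float16) so that tf.concat sees matching dtypes.
        cls_broadcasted = tf.cast(
            tf.broadcast_to(self.cls, [batch_size, 1, self.hidden_size]),
            dtype=inputs.dtype,
        )
        return tf.concat([cls_broadcasted, inputs], axis=1)
```

Casting at read time keeps the weight in float32 for the optimizer while the forward pass runs in the compute dtype.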
