Tensorflow warning #348

Closed

ManelAl opened this issue Mar 4, 2024 · 4 comments

Comments

ManelAl commented Mar 4, 2024

When executing the autoencoder training notebook https://github.com/NVlabs/sionna/blob/main/examples/Autoencoder.ipynb, the model training generates TensorFlow warnings: "WARNING:tensorflow:You are casting an input of type complex64 to an incompatible dtype float32. This will discard the imaginary part and may not be what you intended."

The warnings are suppressed by the tf.get_logger().setLevel('ERROR') line, which I removed.
Does that mean the training results are wrong because the imaginary part is always discarded due to casting issues?
How can I debug this to find out at which point in the code the casting problem happens?

SebastianCa (Collaborator) commented

Indeed, there are some TensorFlow warnings related to casting from tf.complex64 to tf.float32. However, this should not impact the result of your training.

In your example, the warning comes from the normalization in the custom constellation object. This is currently done by:

energy_sqrt = tf.cast(tf.sqrt(energy), tf.complex64)

I assume that the TensorFlow warning relates to the missing imaginary part in the backward path during gradient computation. However, from the context we know the imaginary component is zero anyhow. A workaround is to use the following normalization:

energy_sqrt = tf.complex(tf.sqrt(energy), tf.constant(0., tf.float32))
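
For context, a minimal self-contained sketch of such a normalization (hypothetical helper and variable names, not the exact Sionna implementation) could look like this:

import tensorflow as tf

def normalize(points):
    # points: complex64 tensor of constellation points
    energy = tf.reduce_mean(tf.square(tf.abs(points)))  # real-valued float32

    # Original variant: casting a real tensor to complex64 presumably triggers
    # the warning in the backward pass, where the (zero) imaginary part of the
    # gradient is discarded again:
    # energy_sqrt = tf.cast(tf.sqrt(energy), tf.complex64)

    # Workaround: build the complex scalar explicitly with a zero imaginary part
    energy_sqrt = tf.complex(tf.sqrt(energy), tf.constant(0., tf.float32))
    return points / energy_sqrt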

This will be fixed in the next release of Sionna.

ManelAl (Author) commented Mar 6, 2024

Hello, thank you for the reply.
I tried replacing the constellation normalization with the suggested code energy_sqrt = tf.complex(tf.sqrt(energy), tf.constant(0., tf.float32)), but I am still getting the same warnings.

Is it possible to debug the backward gradient computation step by step to determine the exact step at which this casting problem happens?

Thank you

SebastianCa (Collaborator) commented

One way is to use tf.stop_gradient() to identify the exact step.
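
For illustration, a hypothetical way to apply this (toy tensors, not the notebook code): wrap a suspect intermediate tensor in tf.stop_gradient() and re-run; if the warning disappears, the cast happens on that tensor's gradient path.

import tensorflow as tf

x = tf.Variable([1.0, 2.0])  # float32 trainable variable
with tf.GradientTape() as tape:
    c = tf.complex(x, tf.zeros_like(x))   # complex64 intermediate
    # c = tf.stop_gradient(c)             # uncomment to cut this gradient path
    loss = tf.reduce_sum(tf.abs(c) ** 2)  # real-valued loss
grad = tape.gradient(loss, x)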

Do you see the same warning if you set normalize=False in the Constellation object?

ManelAl (Author) commented Mar 8, 2024

You are right, I do not get any warnings when setting normalize=False. And after changing the normalization to
energy_sqrt = tf.complex(tf.sqrt(energy), tf.constant(0., tf.float32))
the warnings are reduced to appearing only once instead of at every training iteration:

WARNING:tensorflow:You are casting an input of type complex64 to an incompatible dtype float32. This will discard the imaginary part and may not be what you intended.
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1709881166.749682 3665897 device_compiler.h:186] Compiled cluster using XLA! This line is logged at most once for the lifetime of the process.

And I don't see any warnings at all when removing @tf.function(jit_compile=True).
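
For reference, a minimal sketch of that toggle (a generic training step with assumed names, not the notebook's exact code):

import tensorflow as tf

# Remove jit_compile=True (or the whole decorator) to disable XLA compilation
# and compare which warnings appear.
@tf.function(jit_compile=True)
def train_step(model, optimizer, batch):
    with tf.GradientTape() as tape:
        loss = model(batch)  # assumed: the model returns a scalar loss
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss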

Thank you!

ManelAl closed this as completed Mar 8, 2024