
[BUG] tensorflow autograph with finite diff and M1 Mac kills the kernel #4953

Closed
1 task done
albi3ro opened this issue Dec 15, 2023 · 4 comments · Fixed by #4961
Labels
bug 🐛 Something isn't working

Comments

@albi3ro
Contributor

albi3ro commented Dec 15, 2023

Expected behavior

I expect to get the gradient, or at least an error message that tells me what I'm doing wrong.

Actual behavior

bus error python untitled.py

Very helpful traceback there.

Additional information

The kernel gets killed with finite-diff or SPSA, but works fine with parameter-shift (see the sketch after the source code below).

Source code

import pennylane as qml
import numpy as np

import tensorflow as tf

dev = qml.device('default.qubit', wires=1)

@tf.function
@qml.qnode(dev, diff_method="finite-diff")
def circuit(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.PauliZ(0))

x = tf.Variable(0.1, dtype=tf.float64)
# On an M1 Mac, this gradient computation crashes the process with a bus error.
with tf.GradientTape() as tape:
    y = circuit(x)
tape.gradient(y, x)
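For comparison, a minimal sketch of the configuration reported to work, reusing dev and x from the snippet above; only the diff_method changes (circuit_ps is just an illustrative name):

@tf.function
@qml.qnode(dev, diff_method="parameter-shift")
def circuit_ps(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.PauliZ(0))

with tf.GradientTape() as tape_ps:
    y_ps = circuit_ps(x)
print(tape_ps.gradient(y_ps, x))  # works fine per the report above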

Tracebacks

No response

System information

M1 Mac
tensorflow 2.14.1

PL master

Existing GitHub issues

  • I have searched existing GitHub issues to make sure the issue does not already exist.
@minhtriet
Contributor

minhtriet commented Dec 18, 2023

It seems to me that tensorflow/python/eager/imperative_grad.py calls pywrap_tfe.TFE_Py_TapeGradient, which accesses a memory address it isn't supposed to. I guess the next step is to create an MRE and raise it with the TensorFlow team?

If you are fine with that, I would be happy to take the issue; otherwise, let's discuss more.

@albi3ro
Contributor Author

albi3ro commented Dec 18, 2023

I've managed to track the problem down to this line:

coeffs = np.linalg.solve(A, b)

Why that would be causing a problem, I have no idea.
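For context, the kind of linear system such a line solves can be sketched from the standard finite-difference stencil derivation (a reconstruction of the general technique, not necessarily PennyLane's exact construction). The resulting A and b happen to match the minimal example further down:

import numpy as np

# First-derivative stencil over shift points s_i: find coefficients c_i with
# sum_i c_i * s_i**k = delta_{k,1} for k = 0 .. len(shifts) - 1.
shifts = np.array([0.0, 1.0])  # forward difference
A = shifts[np.newaxis, :] ** np.arange(len(shifts))[:, np.newaxis]
b = np.zeros(len(shifts))
b[1] = 1.0
print(A)                      # [[1. 1.], [0. 1.]]
print(np.linalg.solve(A, b))  # [-1.  1.], i.e. f'(x) ≈ (f(x + h) - f(x)) / h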

@albi3ro
Copy link
Contributor Author

albi3ro commented Dec 19, 2023

And it seems to be fixed by replacing np.linalg.solve with scipy.linalg.solve. 😕
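A minimal sketch of that workaround, using the same A and b as the stencil system above (the actual fix landed in #4961):

import numpy as np
import scipy.linalg

A = np.array([[1.0, 1.0], [0.0, 1.0]])
b = np.array([0.0, 1.0])

# coeffs = np.linalg.solve(A, b)   # triggers the bus error on M1 when traced
coeffs = scipy.linalg.solve(A, b)  # computes [-1.  1.] without crashing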

@albi3ro
Contributor Author

albi3ro commented Dec 19, 2023

And a minimal example of the issue:


import numpy as np
import tensorflow as tf

@tf.py_function(Tout=tf.float32)
def py_log_huber(x, m):
    print('Running with eager execution.')
    A = np.array([[1.0, 1.0], [0.0, 1.0]])
    b = np.array([0.0, 1.0])
    # This solve is the call that dies with a bus error when the function
    # runs from inside tf.function on an M1 Mac.
    print(np.linalg.solve(A, b))
    return m**2

x = tf.constant(1.0)
m = tf.constant(2.0)

print(py_log_huber(x, m).numpy())

@tf.function
def tf_wrapper(x):
    print('Tracing.')
    m = tf.constant(2.0)
    return py_log_huber(x, m)

print(tf_wrapper(x).numpy())
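For reference, on a machine where the bug does not trigger, both calls print 4.0 (the eager call also prints the solve result [-1.  1.]); on the M1 setup above, presumably only the tf.function-traced call crashes.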

albi3ro added a commit that referenced this issue Dec 19, 2023
Fixes #4953  [sc-52180]

This problem does seem to be specific to M1's, but this does seem to fix it.
mudit2812 pushed a commit that referenced this issue Jan 19, 2024
Fixes #4953  [sc-52180]

This problem does seem to be specific to M1's, but this does seem to fix it.