
Remove abs from DefaultQubit #2057

Merged: 7 commits into master on Dec 22, 2021
Conversation

dwierichs (Contributor) commented:

Context:
This PR implements the fix to #1125 proposed in this comment

Description of the Change:
Line 792 of pennylane/devices/default_qubit.py,

    prob = self.marginal_prob(self._abs(self._flatten(self._state)) ** 2, wires)

is replaced by

    flat_state = self._flatten(self._state)
    real_state = self._real(flat_state)
    imag_state = self._imag(flat_state)
    prob = self.marginal_prob(real_state ** 2 + imag_state ** 2, wires)

avoiding the use of abs, which does not have a well-defined derivative at 0.
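
For illustration, here is a minimal sketch (not part of the PR) of the underlying autodiff problem, written directly in autograd; for a complex amplitude z, the PR's rewrite real(z) ** 2 + imag(z) ** 2 plays the role of the smooth formulation below:

    import autograd.numpy as np
    from autograd import grad

    def prob_via_abs(x):
        # abs has no well-defined derivative at x = 0, so higher-order
        # gradients of abs(x) ** 2 can come out wrong exactly at 0
        return np.abs(x) ** 2

    def prob_smooth(x):
        # algebraically identical for real x, but smooth for autodiff;
        # for a complex amplitude z the analogous rewrite is
        # real(z) ** 2 + imag(z) ** 2, as in this PR
        return x ** 2

    second_deriv_abs = grad(grad(prob_via_abs))
    second_deriv_smooth = grad(grad(prob_smooth))

    # the true second derivative of x ** 2 is 2 everywhere
    print(second_deriv_abs(0.0))     # typically 0.0 or nan, depending on the abs rule
    print(second_deriv_smooth(0.0))  # 2.0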

Benefits:
Fixes #1125 in all interfaces.

Possible Drawbacks:
A small performance overhead (see the linked short discussion); a rough way to gauge it is sketched below.
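
If one wanted to quantify that overhead, a hypothetical micro-benchmark (not from the PR) could compare the two formulations on a random state vector:

    import timeit
    import numpy as np

    # random complex "state" of 20 qubits (2 ** 20 amplitudes)
    state = np.random.randn(2 ** 20) + 1j * np.random.randn(2 ** 20)

    t_abs = timeit.timeit(lambda: np.abs(state) ** 2, number=100)
    t_parts = timeit.timeit(lambda: np.real(state) ** 2 + np.imag(state) ** 2, number=100)

    print(f"abs ** 2:              {t_abs:.3f} s")
    print(f"real ** 2 + imag ** 2: {t_parts:.3f} s")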

Related GitHub Issues:
#1125

codecov bot commented on Dec 22, 2021:

Codecov Report

Merging #2057 (553e8c3) into master (20f1dd8) will increase coverage by 0.00%.
The diff coverage is 100.00%.


@@           Coverage Diff           @@
##           master    #2057   +/-   ##
=======================================
  Coverage   99.17%   99.17%           
=======================================
  Files         225      225           
  Lines       17284    17291    +7     
=======================================
+ Hits        17142    17149    +7     
  Misses        142      142           
Impacted Files                                 Coverage   Δ
pennylane/devices/default_qubit.py             100.00%    <100.00%> (ø)
pennylane/devices/default_qubit_autograd.py    100.00%    <100.00%> (ø)
pennylane/devices/default_qubit_jax.py          96.25%    <100.00%> (+0.04%) ⬆️
pennylane/devices/default_qubit_tf.py           90.16%    <100.00%> (+0.16%) ⬆️
pennylane/devices/default_qubit_torch.py        92.07%    <100.00%> (+0.07%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

josh146 (Member) left a comment:

That was super quick, thanks @dwierichs! I agree, this is good to have fixed even with the minor overhead.

pennylane/devices/default_qubit.py (review thread resolved)
@@ -92,6 +92,7 @@ class DefaultQubitAutograd(DefaultQubit):
     _transpose = staticmethod(np.transpose)
     _tensordot = staticmethod(np.tensordot)
     _conj = staticmethod(np.conj)
+    _real = staticmethod(np.real)
A Member commented on this line:

Minor, but we are in the process of slowly migrating the devices to qml.math :) Not an issue here though!
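
For reference, a hedged sketch (not the code merged here) of what an interface-agnostic version could look like with qml.math, assuming real, imag, and reshape dispatch through autoray as usual:

    import pennylane as qml

    def abs_squared(state):
        # elementwise |amplitude| ** 2 without calling abs, dispatched to
        # whichever framework (autograd, TF, Torch, JAX) holds `state`
        flat = qml.math.reshape(state, (-1,))
        return qml.math.real(flat) ** 2 + qml.math.imag(flat) ** 2

The merged change instead keeps the existing per-device staticmethod pattern (self._real, self._imag) shown in the diff above.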

Resolved (outdated) review threads:
tests/devices/test_default_qubit_autograd.py
tests/devices/test_default_qubit_jax.py
tests/devices/test_default_qubit_tf.py
tests/devices/test_default_qubit_torch.py
Co-authored-by: Josh Izaac <josh146@gmail.com>
dwierichs (Contributor, Author) commented:

Thanks for the fast review! And very good idea to make the tests stable against possible default changes. :)

@dwierichs dwierichs merged commit deffcee into master Dec 22, 2021
@dwierichs dwierichs deleted the remove-abs-from-defaultqubit branch December 22, 2021 13:46

Successfully merging this pull request may close these issues.

Incorrect value of gradients at 0 after taking gradient twice (#1125)
2 participants