
[unitaryHACK] Create a Pytorch simulator #1225 #1360

Merged
merged 210 commits into PennyLaneAI:master on Aug 27, 2021

Conversation

@Slimane33 (Contributor) commented May 24, 2021

Context:
Create a quantum simulator with PyTorch.

Description of the Change:
A new device `default.qubit.torch` is created. It allows all quantum operations and measurements to be performed within the PyTorch workflow.

Benefits:
It allows end-to-end GPU computation and integration of quantum circuits with the torch interface of PennyLane. A fully working example can be found here: https://colab.research.google.com/drive/1Xb_-l3TIOZhbDw6K9jO34oDgDqzs3o7V?usp=sharing
All the gates implemented for the TensorFlow device have been reimplemented and appear to work.
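As an illustration of what end-to-end differentiability buys, here is a minimal hand-rolled two-qubit statevector sketch in plain torch (not this PR's implementation; all names here are illustrative). The key point is that the expectation value and its gradient both stay inside PyTorch's autograd:

```python
import torch

C = torch.complex128

def rx(theta):
    # Build RX(theta) from graph-connected scalars (torch.stack, not
    # torch.tensor) so the gradient can flow back to theta.
    c = torch.cos(theta / 2).to(C)
    s = (-1j * torch.sin(theta / 2)).to(C)
    return torch.stack([torch.stack([c, s]), torch.stack([s, c])])

I2 = torch.eye(2, dtype=C)
CNOT = torch.tensor([[1, 0, 0, 0], [0, 1, 0, 0],
                     [0, 0, 0, 1], [0, 0, 1, 0]], dtype=C)
Z0 = torch.kron(torch.tensor([[1, 0], [0, -1]], dtype=C), I2)  # Z on wire 0

theta = torch.tensor(0.3, dtype=torch.float64, requires_grad=True)
state = torch.zeros(4, dtype=C)
state[0] = 1.0                                   # |00>
state = CNOT @ (torch.kron(rx(theta), I2) @ state)
expval = (state.conj() @ Z0 @ state).real        # <Z0> = cos(theta)
expval.backward()                                # theta.grad = -sin(theta)
```

On a CUDA machine the same code runs on GPU by moving the tensors with `.to("cuda")`, which is the "end-to-end GPU computation" benefit described above.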

Remaining work:

  1. The plugin is not fully tested. When I run `pl-device-test --device default.qubit.torch --shots None` I get the following error:
```
/Users/slimane/Desktop/pennylane/pennylane/devices/tests/test_wires.py:65:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
pennylane/qnode.py:555: in __call__
    res = self.qtape.execute(device=self.device)
pennylane/tape/tape.py:1264: in execute
    return self._execute(params, device=device)
//anaconda3/lib/python3.7/site-packages/autograd/tracer.py:48: in f_wrapped
    return f_raw(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <AutogradQuantumTape: wires=[-1, -2], params=0>, params = (), device = <DefaultQubitTorch device (wires=2, shots=None) at 0x1a5fb4e278>

    @autograd.extend.primitive
    def _execute(self, params, device):
        # unwrap all NumPy scalar arrays to Python literals
        params = [p.item() if p.shape == tuple() else p for p in params]
        params = autograd.builtins.tuple(params)

        # unwrap constant parameters
        self._all_params_unwrapped = [
            p.numpy() if isinstance(p, np.tensor) else p for p in self._all_parameter_values
        ]

        # evaluate the tape
        self.set_parameters(self._all_params_unwrapped, trainable_only=False)
        res = self.execute_device(params, device=device)
        self.set_parameters(self._all_parameter_values, trainable_only=False)

        if self.is_sampled:
            return res

>       if res.dtype == np.dtype("object"):
E       TypeError: Cannot interpret 'torch.float64' as a data type

pennylane/interfaces/autograd.py:171: TypeError
```

Since this seems to be a fundamental incompatibility between `torch.dtype` and `numpy.dtype`, I don't understand why I am still able to run the circuit normally.
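The failing line compares a torch dtype against a NumPy dtype; on the library versions in use here, NumPy then tries to parse `torch.float64` as a dtype and raises. A minimal sketch of a guard that sidesteps the comparison (`is_object_array` is a hypothetical helper name, not PennyLane code):

```python
import numpy as np
import torch

def is_object_array(res):
    # Hypothetical guard: only compare dtypes when `res` really is a
    # NumPy array. Comparing torch_tensor.dtype == np.dtype("object")
    # directly is what produced the TypeError in the traceback above.
    return isinstance(res, np.ndarray) and res.dtype == np.dtype("object")

print(is_object_array(np.array([None, 1], dtype=object)))   # True
print(is_object_array(torch.ones(2, dtype=torch.float64)))  # False
```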

  2. Writing the appropriate unit tests in `test_default_qubit_torch.py`

  3. Checking formatting

Possible drawbacks:
More maintenance: PyTorch operations differ more from NumPy's than TensorFlow's do.
Autograd in PyTorch is very sensitive. For example, this formulation of the RZ gate works:

```python
def RZ(theta, device=None):
    theta = torch.as_tensor(theta, dtype=C_DTYPE, device=device)
    p = torch.exp(-0.5j * theta)
    return (p * torch.tensor([1, 0], dtype=torch.complex128)
            + torch.conj(p) * torch.tensor([0, 1], dtype=torch.complex128))
```

while this one doesn't:

```python
def RZ(theta, device=None):
    theta = torch.as_tensor(theta, dtype=C_DTYPE, device=device)
    p = torch.exp(-0.5j * theta)
    return torch.tensor([p, torch.conj(p)], dtype=torch.complex128)
```
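This sensitivity is standard PyTorch behavior rather than something specific to the device: `torch.tensor(...)` copies its arguments into a fresh leaf tensor, detaching them from the autograd graph, while `torch.stack` composes existing graph nodes. A minimal sketch of the distinction:

```python
import torch

theta = torch.tensor(0.6, dtype=torch.float64, requires_grad=True)
p = torch.exp(-0.5j * theta)          # graph-connected complex scalar

# torch.stack keeps p and conj(p) in the graph, so gradients reach theta;
# torch.tensor([p, torch.conj(p)]) would copy the values and cut the graph.
diag = torch.stack([p, torch.conj(p)])
loss = diag.real.sum()                # 2 * cos(theta / 2)
loss.backward()                       # theta.grad = -sin(theta / 2)
```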

Related GitHub Issues:
#1225

This is a collective contribution from: @arshpreetsingh, @PCesteban, @artm88, @charmerDark, @mkasmi, @Slimane33

@josh146 josh146 added unitaryhack Dedicated issue for Unitary Fund open-source hackathon WIP 🚧 Work-in-progress labels May 24, 2021
@josh146 (Member) commented May 24, 2021

Thanks @Slimane33! Looking forward to reviewing this, simply let us know via a comment in the PR when this is ready for review (or, alternatively, if you have any questions).

> The plugin is not fully tested. When I run `pl-device-test --device default.qubit.torch --shots None` I get the following error

Hmm, it seems that the Autograd interface is not compatible with the PyTorch device. This might be because the device test suite is defaulting to diff_method="parameter-shift", interface="autograd".

Instead, we would like to use diff_method="backprop", interface="torch". Perhaps the device test suite needs to be updated to support this?

> More maintenance: PyTorch operations differ more from NumPy's than TensorFlow's do. Autograd in PyTorch is very sensitive. For example, this formulation of the RZ gate works

In this particular example @Slimane33, could you do the following?

```python
return torch.diag([p, torch.conj(p)])
```

@Slimane33 (Contributor, Author) commented

> Instead, we would like to use diff_method="backprop", interface="torch". Perhaps the device test suite needs to be updated to support this?

How can we change that? The test suite works for `default.qubit.tf`, which uses `diff_method="backprop", interface="tf"`.

> `return torch.diag([p, torch.conj(p)])`

It does not work because `torch.diag` returns a 2D tensor given a 1D tensor. The following line doesn't work either:

```python
return torch.diag(torch.tensor([[p, 0], [0, torch.conj(p)]], dtype=C_DTYPE, device=device))
```
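One formulation that does yield the full 2×2 matrix while keeping the autograd graph intact combines the two suggestions: `torch.stack` builds a graph-connected 1D diagonal, and `torch.diag` of a 1D tensor promotes it to the 2D diagonal matrix. A sketch, not the PR's final code (`C_DTYPE` is assumed here to be `torch.complex128`):

```python
import torch

C_DTYPE = torch.complex128  # assumption: mirrors the device's C_DTYPE

def rz_matrix(theta, device=None):
    theta = torch.as_tensor(theta, dtype=C_DTYPE, device=device)
    p = torch.exp(-0.5j * theta)
    # stack keeps p and conj(p) in the graph; diag promotes the
    # 1D diagonal to the 2x2 gate matrix.
    return torch.diag(torch.stack([p, torch.conj(p)]))

M = rz_matrix(0.4)  # [[exp(-0.2j), 0], [0, exp(+0.2j)]]
```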

@albi3ro (Contributor) commented Aug 26, 2021

@josh146 @antalszava Ready to override and merge from my side.

@josh146 josh146 merged commit 40aaeb6 into PennyLaneAI:master Aug 27, 2021
@co9olguy (Member) commented Aug 27, 2021

🎉
Congrats @PCesteban @arshpreetsingh @Slimane33!

@PCesteban PCesteban deleted the pytorch-device branch August 27, 2021 15:53
@PCesteban PCesteban restored the pytorch-device branch August 27, 2021 15:53
@arshpreetsingh (Contributor) commented

> Congrats @PCesteban @arshpreetsingh!

Thanks @co9olguy! And great efforts from @Slimane33 as well.
