
[BUG] Qiskit devices don't support vmap parameter broadcasting #5240

Closed
1 task done
lauracappelli opened this issue Feb 21, 2024 · 3 comments · Fixed by #5286
Labels
bug 🐛 Something isn't working

Comments

@lauracappelli

Expected behavior

I'm trying to use the Qiskit plugin in a neural network defined in JAX. One of the network layers is a quantum circuit written in PennyLane and called with the vmap function. I have posted a simplified version of my code (useful for reproducibility) in the Xanadu Discussion Forum at this link. I expect the circuit to be called for each element of the input, with the result containing the values from all of the calls.
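
For reference, the semantics I expect from vmap (a minimal sketch in plain JAX, no quantum circuit involved) are that mapping a function over a batched input is equivalent to stacking per-element calls:

import jax
import jax.numpy as jnp

def f(x):
    return x ** 2

xs = jnp.array([0.5, 0.6, 0.7])

# These two should agree: vmap maps f over the leading axis of xs.
batched = jax.vmap(f)(xs)
stacked = jnp.stack([f(x) for x in xs])
print(batched, stacked)  # [0.25 0.36 0.49] [0.25 0.36 0.49]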

Actual behavior

The initialization of the training state fails with the error:
jaxlib.xla_extension.XlaRuntimeError: INTERNAL: Generated function failed: CpuCallback error: CircuitError: "Invalid param type <class 'list'> for gate ry."

Additional information

As written by Christina in the forum mentioned above, your handling of vmap currently assumes that the device natively supports parameter broadcasting, which is only true for a limited subset of devices.
A more minimal example of the problem is:

dev = qml.device('lightning.qubit', wires=10)

@qml.qnode(dev)
def circuit(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.PauliZ(0))

jax.vmap(circuit)(jax.numpy.array([0.5, 0.6, 0.7]))
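
For comparison, the same pattern does work on a device with native parameter broadcasting (default.qubit, as noted above); since ⟨Z⟩ after RX(x) is cos(x), the expected output is roughly [0.8776, 0.8253, 0.7648]:

import jax
import jax.numpy
import pennylane as qml

# Same circuit on default.qubit, which natively supports parameter
# broadcasting; here vmap succeeds and returns cos(x) per batch element.
dev = qml.device('default.qubit', wires=10)

@qml.qnode(dev)
def circuit(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.PauliZ(0))

print(jax.vmap(circuit)(jax.numpy.array([0.5, 0.6, 0.7])))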

Source code

import psutil, optax
from typing import Callable
import numpy as np
from datetime import datetime as dt
import pennylane as qml
import jax, jax.numpy as jnp
from flax.training import train_state
from flax import linen as nn
from jax import config
config.update("jax_enable_x64", True)

n_node = 50
n_edge = 80
n_train = 5
n_valid = 2
hid_dim = 4
n_layers = 3
n_qubits = 4

def create_train_state(model, key, graph):
    params = model.init(key, graph)['params'] 
    optimizer = optax.adam(learning_rate=0.01)
    return train_state.TrainState.create(apply_fn=model.apply, params=params, tx=optimizer)

def rescale01(X):
    return (X-np.min(X))/(np.max(X)-np.min(X))

# @qml.qnode(qml.device("default.qubit.jax", wires=n_qubits), interface="jax-python", diff_method="backprop")
@qml.qnode(qml.device('qiskit.aer', wires=n_qubits), interface="jax")
def circuit(iec_params, pqc_params, n_qubits, n_layers):
    for i in range(n_qubits):
        qml.RY(iec_params[i], wires=i)
    w_iter = -1
    for i in range(n_qubits):
        w_iter = w_iter + 1
        qml.RY(pqc_params[w_iter], wires=i)
    for _ in range(n_layers):
        qml.Barrier()
        for i in range(n_qubits):
            qml.CZ(wires=[(n_qubits-2-i)%n_qubits, (n_qubits-1-i)%n_qubits])
        for i in range(n_qubits):
            w_iter = w_iter + 1
            qml.RY(pqc_params[w_iter], wires=i)
    exp_vals = [qml.expval(qml.PauliZ(position)) for position in range(n_qubits)]
    return tuple(exp_vals)

class QLayer(nn.Module):
    my_circuit: Callable
    num_params: int
    n_layers: int
    n_qubits: int

    def init_params(self, key: jnp.ndarray):
        return jnp.ones(self.n_qubits*(self.n_layers+1))
                   
    @nn.compact
    def __call__(self, X):
        qparams = self.param('qparams', self.init_params)
        circuit_vmap = jax.vmap(self.my_circuit, in_axes=(0, None, None, None))
        return circuit_vmap(X, qparams, self.n_qubits, self.n_layers)

class QEdgeNet(nn.Module):
    Qlayer: nn.Module
    @nn.compact
    def __call__(self, X, Ri, Ro):
        bo = jnp.tensordot(Ro, X, axes=([0],[0]))
        bi = jnp.tensordot(Ri, X, axes=([0],[0]))
        B = jnp.concatenate([bo, bi], axis = 1)
        I = nn.Dense(n_qubits)(B)
        I = nn.relu(I)
        I = rescale01(I) * jnp.pi
        Q = self.Qlayer(I)
        Q = jnp.asarray(Q).transpose(1,0)
        O = nn.Dense(1)(Q)
        O = nn.sigmoid(O)
        return O
    
class QNodeNet(nn.Module):  
    Qlayer: nn.Module
    @nn.compact
    def __call__(self, X, e, Ri, Ro):
        bo = jnp.tensordot(Ro, X, axes=([0],[0]))
        bi = jnp.tensordot(Ri, X, axes=([0],[0]))
        Rwo = Ro * e[:,0]
        Rwi = Ri * e[:,0]
        mi = jnp.tensordot(Rwi, bo, axes=([1],[0]))
        mo = jnp.tensordot(Rwo, bi, axes=([1],[0]))
        M = jnp.concatenate([mi, mo, X], axis=1)
        I = nn.Dense(n_qubits)(M)
        I = nn.relu(I)
        I = rescale01(I) * np.pi
        Q = self.Qlayer(I)
        Q = jnp.asarray(Q).transpose(1,0)
        O = nn.Dense(hid_dim)(Q)
        O = nn.relu(O)
        return O
    
class QGNN(nn.Module):
    EdgeLayer: nn.Module
    NodeLayer: nn.Module
    @nn.compact
    def __call__(self, graph_array):
        X, Ri, Ro = graph_array
        H = nn.Dense(hid_dim)(X)       
        H = nn.relu(H)
        H = jnp.concatenate([H, X], axis=1)
        for i in range(3):
            e = self.EdgeLayer(H, Ri, Ro)
            H = self.NodeLayer(H, e, Ri, Ro)
            H = jnp.concatenate([H, X], axis=1)
        H = self.EdgeLayer(H, Ri, Ro)
        H = jnp.squeeze(H, axis=1)
        return H

def generate_graph(nNode, nEdge, key):
    subk = jax.random.split(key, num=5)
    r = jax.random.randint(subk[4], (1,), -9, 9)
    nNode = int(nNode + nNode / 100 * r[0])
    nEdge = int(nEdge + nEdge / 100 * r[0])
    return (jax.random.normal(subk[0], (nNode, 3), dtype=np.float32),
            jax.random.randint(subk[1], (nNode, nEdge), 0, 2).astype(np.float32),
            jax.random.randint(subk[2], (nNode, nEdge), 0, 2).astype(np.float32),
            jax.random.randint(subk[3], (nEdge,), 0, 2))

def generate_random_dataset(key):
    dataset_dim = n_train+n_valid
    dataset = []
    subkeys = jax.random.split(key, dataset_dim)
    for i in range(dataset_dim):
        dataset.append(generate_graph(n_node, n_edge, subkeys[i]))
    return dataset
 
if __name__ == "__main__":
    # Initialize dataset
    process = psutil.Process()
    key = jax.random.PRNGKey(0)
    dataset = generate_random_dataset(key)
    train_list = [i for i in range(n_train)]
    valid_list = [i+n_train for i in range(n_valid)]
    
    # Initialize model
    print('[{}] Dataset loaded'.format(dt.now()))
    model = QGNN(QEdgeNet(QLayer(circuit,n_qubits*(n_layers+1), n_layers, n_qubits)),
                 QNodeNet(QLayer(circuit,n_qubits*(n_layers+1), n_layers, n_qubits)))
    X, Ri, Ro, y = dataset[train_list[0]]
    key, init_key = jax.random.split(key)
    state = create_train_state(model, init_key, (X, Ri, Ro))

Tracebacks

jax.errors.SimplifiedTraceback: For simplicity, JAX has removed its internal frames from the traceback of the following exception. Set JAX_TRACEBACK_FILTERING=off to include these.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/lcappell/qgnn-tracking/qiskit-test.py", line 159, in <module>
    state = create_train_state(model, init_key, (X, Ri, Ro))
  File "/home/lcappell/qgnn-tracking/qiskit-test.py", line 21, in create_train_state
    params = model.init(key, graph)['params'] 
  File "/home/lcappell/qgnn-tracking/qiskit-test.py", line 119, in __call__
    e = self.EdgeLayer(H, Ri, Ro)
  File "/home/lcappell/qgnn-tracking/qiskit-test.py", line 73, in __call__
    Q = self.Qlayer(I)
  File "/home/lcappell/qgnn-tracking/qiskit-test.py", line 60, in __call__
    return circuit_vmap(X, qparams, self.n_qubits, self.n_layers)
  File "/home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/pennylane/qnode.py", line 1027, in __call__
    res = qml.execute(
  File "/home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/pennylane/interfaces/execution.py", line 736, in execute
    results = ml_boundary_execute(
  File "/home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/pennylane/interfaces/jax_jit.py", line 278, in execute
    return _execute_bwd(
  File "/home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/pennylane/interfaces/jax_jit.py", line 363, in _execute_bwd
    return execute_wrapper(params)
  File "/home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/pennylane/interfaces/jax_jit.py", line 324, in execute_wrapper
    return jax.pure_callback(wrapper, shape_dtype_structs, inner_params, vectorized=True)
  File "/home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/callback.py", line 334, in pure_callback_api
    return pure_callback(
  File "/home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/callback.py", line 265, in pure_callback
    out_flat = pure_callback_p.bind(
  File "/home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/callback.py", line 107, in pure_callback_batching_rule
    outvals = pure_callback_p.bind(
jaxlib.xla_extension.XlaRuntimeError: INTERNAL: Generated function failed: CpuCallback error: CircuitError: "Invalid param type <class 'list'> for gate ry."

At:
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/qiskit/circuit/gate.py(245): validate_parameter
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/qiskit/circuit/instruction.py(285): params
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/qiskit/circuit/instruction.py(106): __init__
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/qiskit/circuit/gate.py(45): __init__
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/qiskit/circuit/library/standard_gates/ry.py(56): __init__
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/pennylane_qiskit/qiskit_device.py(365): apply_operations
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/pennylane_qiskit/qiskit_device.py(295): create_circuit_object
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/pennylane_qiskit/qiskit_device.py(484): compile_circuits
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/pennylane_qiskit/qiskit_device.py(495): batch_execute
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/contextlib.py(79): inner
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/pennylane/interfaces/execution.py(371): wrapper
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/pennylane/interfaces/execution.py(249): inner_execute
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/pennylane/interfaces/execution.py(588): inner_execute_with_empty_jac
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/pennylane/interfaces/jax_jit.py(307): wrapper
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/callback.py(258): _flat_callback
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/callback.py(52): pure_callback_impl
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/callback.py(188): _callback
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/interpreters/mlir.py(2327): _wrapped_callback
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/interpreters/pxla.py(1145): __call__
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/profiler.py(334): wrapper
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/pjit.py(1178): _pjit_call_impl_python
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/pjit.py(1222): call_impl_cache_miss
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/pjit.py(1238): _pjit_call_impl
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/core.py(893): process_primitive
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/core.py(405): bind_with_trace
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/core.py(2682): bind
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/pjit.py(166): _python_pjit_helper
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/pjit.py(255): cache_miss
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/traceback_util.py(177): reraise_with_filtered_traceback
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/dispatch.py(87): apply_primitive
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/core.py(893): process_primitive
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/core.py(405): bind_with_trace
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/core.py(402): bind
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/callback.py(107): pure_callback_batching_rule
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/interpreters/batching.py(433): process_primitive
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/core.py(405): bind_with_trace
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/core.py(402): bind
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/callback.py(265): pure_callback
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/callback.py(334): pure_callback_api
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/pennylane/interfaces/jax_jit.py(324): execute_wrapper
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/linear_util.py(191): call_wrapped
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/core.py(907): process_custom_jvp_call
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/custom_derivatives.py(359): bind
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/interpreters/batching.py(529): process_custom_jvp_call
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/custom_derivatives.py(359): bind
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/custom_derivatives.py(257): __call__
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/traceback_util.py(177): reraise_with_filtered_traceback
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/pennylane/interfaces/jax_jit.py(363): _execute_bwd
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/pennylane/interfaces/jax_jit.py(278): execute
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/pennylane/interfaces/execution.py(736): execute
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/pennylane/qnode.py(1027): __call__
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/linear_util.py(191): call_wrapped
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/api.py(1258): vmap_f
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/traceback_util.py(177): reraise_with_filtered_traceback
  /home/lcappell/qgnn-tracking/qiskit-test.py(60): __call__
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/flax/linen/module.py(1101): _call_wrapped_method
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/flax/linen/module.py(584): wrapped_module_method
  /home/lcappell/qgnn-tracking/qiskit-test.py(73): __call__
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/flax/linen/module.py(1101): _call_wrapped_method
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/flax/linen/module.py(584): wrapped_module_method
  /home/lcappell/qgnn-tracking/qiskit-test.py(119): __call__
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/flax/linen/module.py(1101): _call_wrapped_method
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/flax/linen/module.py(584): wrapped_module_method
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/flax/linen/module.py(2637): scope_fn
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/flax/core/scope.py(1080): wrapper
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/flax/core/scope.py(1116): wrapper
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/flax/linen/module.py(1977): init_with_output
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/traceback_util.py(177): reraise_with_filtered_traceback
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/flax/linen/module.py(2083): init
  /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages/jax/_src/traceback_util.py(177): reraise_with_filtered_traceback
  /home/lcappell/qgnn-tracking/qiskit-test.py(21): create_train_state
  /home/lcappell/qgnn-tracking/qiskit-test.py(159): <module>

System information

Name: PennyLane
Version: 0.33.1
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/PennyLaneAI/pennylane
Author: 
Author-email: 
License: Apache License 2.0
Location: /home/lcappell/.conda/envs/flax-gpu/lib/python3.9/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane-Lightning, PennyLane-qiskit

Platform info:           Linux-3.10.0-1160.88.1.el7.x86_64-x86_64-with-glibc2.17
Python version:          3.9.18
Numpy version:           1.24.3
Scipy version:           1.11.1
Installed devices:
- default.gaussian (PennyLane-0.33.1)
- default.mixed (PennyLane-0.33.1)
- default.qubit (PennyLane-0.33.1)
- default.qubit.autograd (PennyLane-0.33.1)
- default.qubit.jax (PennyLane-0.33.1)
- default.qubit.legacy (PennyLane-0.33.1)
- default.qubit.tf (PennyLane-0.33.1)
- default.qubit.torch (PennyLane-0.33.1)
- default.qutrit (PennyLane-0.33.1)
- null.qubit (PennyLane-0.33.1)
- lightning.qubit (PennyLane-Lightning-0.33.1)
- qiskit.aer (PennyLane-qiskit-0.34.0)
- qiskit.basicaer (PennyLane-qiskit-0.34.0)
- qiskit.ibmq (PennyLane-qiskit-0.34.0)
- qiskit.ibmq.circuit_runner (PennyLane-qiskit-0.34.0)
- qiskit.ibmq.sampler (PennyLane-qiskit-0.34.0)
- qiskit.remote (PennyLane-qiskit-0.34.0)

Existing GitHub issues

  • I have searched existing GitHub issues to make sure the issue does not already exist.
@lauracappelli lauracappelli added the bug 🐛 Something isn't working label Feb 21, 2024
@albi3ro
Contributor

albi3ro commented Feb 21, 2024

Thanks for opening this issue @lauracappelli. We'll try to get a fix in for the next release, coming out on March 5th.

The problematic line of code is this:

out = jax.pure_callback(pure_callback_wrapper, shape_dtype_structs, params, vectorized=True)

This works for default.qubit because it does actually support native broadcasting. Most devices don't, and instead use qml.transforms.broadcast_expand during the preprocessing step (Device.preprocess or Device.batch_transform). The problem is that JAX is trying to add a batch dimension after we have already handled any native broadcasting.
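
As a rough sketch of what broadcast_expand does (assuming the QuantumScript construction below matches the installed version), it splits one broadcasted tape into a batch of unbatched tapes plus a post-processing function:

import numpy as np
import pennylane as qml

# A tape carrying a batched RX parameter (batch_size 3)...
tape = qml.tape.QuantumScript(
    [qml.RX(np.array([0.5, 0.6, 0.7]), wires=0)],
    [qml.expval(qml.PauliZ(0))],
)

# ...is expanded into one unbatched tape per batch element, plus a
# function that reassembles the individual results afterwards.
tapes, post_fn = qml.transforms.broadcast_expand(tape)
print(len(tapes))  # 3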

In the short term, we can just update the above line to:

    out = jax.pure_callback(pure_callback_wrapper, shape_dtype_structs, params, vectorized=device.name == "default.qubit")

Though in the longer term, we should rethink jax.vmap and where we handle parameter broadcasting in PennyLane.
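
In the meantime, a possible user-side workaround (just a sketch, not the eventual fix) is to avoid jax.vmap over the QNode and stack explicit per-element calls instead:

import jax.numpy as jnp

# Sketch: replace jax.vmap(circuit)(xs) with an explicit loop, so each
# device execution only ever sees an unbatched parameter.
def batched_circuit(xs):
    return jnp.stack([circuit(x) for x in xs])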

@josh146
Member

josh146 commented Feb 22, 2024

@albi3ro would this bug also impact lightning.qubit?

@albi3ro
Contributor

albi3ro commented Feb 22, 2024

@josh146 Yes it does.

Minimal non-working example:

dev = qml.device('lightning.qubit', wires=10)

@qml.qnode(dev)
def circuit(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.PauliZ(0))

jax.vmap(circuit)(jax.numpy.array([0.5, 0.6, 0.7]))

albi3ro added a commit that referenced this issue Apr 8, 2024
…ng (#5286)

Fixes #5240 [sc-57137] [sc-57848] Fixes #5289

Basically when we set `vectorized=True` inside the `pure_callback` call,
we assumed that the device natively supports broadcasting. And then we
only tested with devices that did indeed natively support parameter
broadcasting.

This problem was made worse by the fact that our `vmap` tests included a
`Hamiltonian` expectation value, which caused us to skip many of the test
cases that we really should have been testing. So I got rid of the
`Hamiltonian` from the test so we could actually test more situations.

I also added more `lightning.qubit` tests to the test configuration.
That forced one or two other changes.

The major problem with `jax.vmap` is that it adds a parameter-broadcasting
dimension *after* we have already done all of our preprocessing and broken
up any parameter broadcasting.
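
As a toy illustration of what the `vectorized` flag controls (plain JAX, independent of PennyLane internals): with `vectorized=False`, JAX invokes the host callback once per batch element under vmap, so the callback never sees a batch dimension.

import jax, jax.numpy as jnp
import numpy as np

def host_fn(x):
    # Host-side function; with vectorized=False it only ever receives
    # unbatched inputs, even when f is vmapped.
    return np.cos(x)

def f(x):
    return jax.pure_callback(
        host_fn, jax.ShapeDtypeStruct(x.shape, x.dtype), x, vectorized=False
    )

print(jax.vmap(f)(jnp.array([0.5, 0.6, 0.7])))  # ~[0.8776 0.8253 0.7648]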

---------

Co-authored-by: Josh Izaac <josh146@gmail.com>
Co-authored-by: Nathan Killoran <co9olguy@users.noreply.github.com>
Co-authored-by: Matthew Silverman <matthews@xanadu.ai>
Co-authored-by: Astral Cai <astral.cai@xanadu.ai>
Co-authored-by: Mikhail Andrenkov <mikhail@xanadu.ai>
Co-authored-by: Korbinian Kottmann <43949391+Qottmann@users.noreply.github.com>
Co-authored-by: Thomas R. Bromley <49409390+trbromley@users.noreply.github.com>
Co-authored-by: Isaac De Vlugt <34751083+isaacdevlugt@users.noreply.github.com>
Co-authored-by: Isaac De Vlugt <isaacdevlugt@gmail.com>
Co-authored-by: Pietropaolo Frisoni <pietropaolo.frisoni@xanadu.ai>
Co-authored-by: soranjh <40344468+soranjh@users.noreply.github.com>
Co-authored-by: Mudit Pandey <mudit.pandey@xanadu.ai>