When using cir.loss_db(wires=[i], inputs=param) with a trainable PyTorch parameter, DeepQuantum converts the dB value into the internal loss theta during circuit construction.
If the circuit is cached and reused across optimization steps, this converted theta keeps the original autograd graph from build time. On the second backward pass, PyTorch raises:
RuntimeError: Trying to backward through the graph a second time
because the graph attached to the stored theta was freed during the first backward pass.
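The mechanism can be reproduced outside the library with a few lines of plain PyTorch. This is a minimal sketch only: db_to_theta is a hypothetical stand-in for DeepQuantum's internal dB-to-theta conversion (the exact formula here is an assumption), and the forward pass is a placeholder scalar; what matters is that theta is computed once and cached.

```python
import torch

def db_to_theta(db):
    # Hypothetical dB -> theta conversion: transmittance T = 10**(-dB/10),
    # modeled as a beamsplitter angle with cos(theta)**2 = T. The formula
    # DeepQuantum actually uses may differ; only the caching matters here.
    return torch.arccos(torch.sqrt(10 ** (-db / 10)))

param = torch.nn.Parameter(torch.tensor(1.0))  # trainable loss in dB

# "Build time": theta is computed once and cached, so it carries the
# autograd graph created here across all later optimization steps.
cached_theta = db_to_theta(param)

opt = torch.optim.SGD([param], lr=0.1)
for step in range(2):
    opt.zero_grad()
    out = torch.cos(cached_theta) ** 2  # placeholder forward pass
    out.backward()  # step 2 fails: the graph behind cached_theta was
                    # freed by the first backward
    opt.step()
```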
Expected behavior:
loss_db(inputs=param) should remain usable with trainable parameters in a cached circuit, just as cir.bs(..., inputs=param) and cir.loss(..., inputs=param) are.
Possible fix:
Store the dB parameter itself and convert it to theta during forward, or refresh the internal theta from the current inputs before each forward pass; see the sketch below.
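A minimal sketch of the first option in plain PyTorch, reusing the hypothetical db_to_theta conversion from above: the gate keeps the dB tensor and re-derives theta inside forward, so every optimization step builds a fresh autograd graph and repeated backward passes succeed.

```python
import torch

def db_to_theta(db):
    # Same hypothetical dB -> theta conversion as in the sketch above.
    return torch.arccos(torch.sqrt(10 ** (-db / 10)))

class LossDB(torch.nn.Module):
    # Sketch of a loss gate that keeps the dB parameter and converts
    # to theta on every forward instead of once at construction.
    def __init__(self, inputs):
        super().__init__()
        self.db = inputs  # store the dB parameter itself

    def forward(self):
        theta = db_to_theta(self.db)  # fresh autograd graph per call
        return torch.cos(theta) ** 2  # placeholder transmission

param = torch.nn.Parameter(torch.tensor(1.0))
gate = LossDB(param)
opt = torch.optim.SGD([param], lr=0.1)
for step in range(2):
    opt.zero_grad()
    gate().backward()  # succeeds on every step
    opt.step()
```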
