
loss_db(inputs=param) / loss_t stores a stale autograd graph in cached circuits #161

@Jooyuza

Description


When using cir.loss_db(wires=[i], inputs=param) with a trainable PyTorch parameter, DeepQuantum converts the dB value into the internal loss theta during circuit construction.

If the circuit is cached and reused across optimization steps, this converted theta keeps the original autograd graph from build time. On the second backward pass, PyTorch raises:

RuntimeError: Trying to backward through the graph a second time

because the graph attached to the stored theta was already freed by the first backward pass.
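A minimal sketch of the failure pattern. The `QumodeCircuit` constructor arguments and the objective below are placeholder assumptions for illustration; `loss_db` is called exactly as in this report:

```python
import torch
import deepquantum as dq

db = torch.nn.Parameter(torch.tensor(0.5))  # trainable loss in dB

# Build (and thereby cache) the circuit once. loss_db converts db -> theta
# here, so the stored theta carries this build step's autograd graph.
cir = dq.QumodeCircuit(nmode=1, init_state=[1], cutoff=4)  # assumed args
cir.loss_db(wires=[0], inputs=db)

opt = torch.optim.Adam([db], lr=0.1)
for step in range(2):
    opt.zero_grad()
    state = cir()                    # reuses the stale theta from build time
    loss = state.abs().pow(2).sum()  # placeholder objective
    loss.backward()  # second iteration raises: "Trying to backward
                     # through the graph a second time"
    opt.step()
```

As a workaround, rebuilding the circuit inside the training loop avoids the error, at the cost of repeating circuit construction on every step.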

Expected behavior:
loss_db(inputs=param) should remain usable with trainable parameters in a cached circuit, similar to cir.bs(..., inputs=param) or cir.loss(..., inputs=param).

Possible fix:
Store the dB parameter itself and convert it to theta during forward, or refresh the internal theta from the current inputs before each forward pass (see the sketch below).
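A rough sketch of the first variant. The class and method names are hypothetical, and the dB-to-theta convention (power transmissivity t = 10^(-dB/10) with t = cos²(theta)) is an assumption, not a statement of DeepQuantum's actual internals:

```python
import torch

class LossDB(torch.nn.Module):
    def __init__(self, inputs: torch.Tensor):
        super().__init__()
        self.db = inputs  # store the dB parameter itself, unconverted

    def _theta(self) -> torch.Tensor:
        # Recomputed from the current dB value on every forward.
        # Assumed convention: t = 10**(-db/10) and t = cos(theta)**2.
        t = 10 ** (-self.db / 10)
        return torch.arccos(torch.sqrt(t))

    def forward(self, state):
        theta = self._theta()  # joins each step's fresh autograd graph
        # ... apply the loss channel parameterized by theta ...
        return state
```

Because theta is derived inside forward, every optimization step builds a fresh graph from the current dB value, so repeated backward passes never touch an already-freed graph.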



Labels: bug (Something isn't working)
