Hi there @MC-E, thank you for your great work and for sharing the code! I have a question about the compressed-sensing case:
As you mention in the paper: "Note that in the task of compressive sensing, the degradation matrix A is exactly known, i.e., the sampling matrix Φ. Thus, we directly use Φ to calculate the gradient." However, in the code, you make A a trainable parameter instead:
https://github.com/MC-E/Deep-Generalized-Unfolding-Networks-for-Image-Restoration/blob/bae845c2612d0df56a479020d59896441168d07a/Compressive-Sensing/DGUNet.py#L384C1-L391C30
Here self.Phi (referred to as A) is learnable during training. This confuses me, because in compressed sensing we assume the only information available is y and A, with no access to the raw image X_0. But since A is a learnable parameter, y is essentially a learned linear transformation of the real X_0, which means all the information in X_0 is available as input to the model. Ultimately, the model is learning to produce \hat{X} (output) given the real image X_0 (input), which is somewhat equivalent to the problem of recovering X_0 given X_0.
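To make this concrete, here is a minimal sketch of the setting I am describing (plain NumPy with made-up sizes; the repo itself uses PyTorch, where Phi would be an nn.Parameter):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up sizes: a 33x33 block flattened to n pixels, sampled down to
# m measurements (~10% sampling ratio), as is common in CS experiments.
n, m = 1089, 109

# Stand-in for the trainable self.Phi: during training this matrix is a
# free parameter updated by backprop, not a fixed, known operator.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

x_gt = rng.standard_normal(n)   # ground-truth signal X_0
y = Phi @ x_gt                  # measurement computed FROM X_0 via the learnable Phi

# The reconstruction network then receives only y (e.g. the
# back-projection Phi^T y as an initial estimate), never X_0 directly:
x_init = Phi.T @ y
```

The point of contention is the line `y = Phi @ x_gt`: when Phi is trainable, the measurement itself depends on learned parameters rather than on a fixed, known sampling operator.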
Conversely, if you treat A as unknown (learnable), then the process of forming the measurement y (the computation of Phix in the code) should not involve the learnable parameter A.
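For reference, the gradient step the paper quote refers to needs only Φ and y. A minimal sketch (plain NumPy, made-up sizes, plain gradient descent on the data term, without the learned proximal modules of the actual network):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 64, 16                               # made-up sizes
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = rng.standard_normal(n)
y = Phi @ x_true                            # the given measurement

rho = 0.1                                   # step size (learned per stage in unfolding nets)
x = Phi.T @ y                               # initial estimate from y alone

# Gradient descent on the data term 0.5 * ||Phi @ x - y||^2:
#   x <- x - rho * Phi^T (Phi x - y)
# Note that only Phi and y appear here; X_0 is never needed.
for _ in range(100):
    x = x - rho * Phi.T @ (Phi @ x - y)

# The data residual shrinks; recovering the null-space component of x
# is the job of the learned prior / proximal modules.
residual = np.linalg.norm(Phi @ x - y)
```

Because m < n the system is underdetermined, which is exactly why the unfolding network interleaves this gradient step with learned proximal stages.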
Moreover, at test time the input to the model is the real image (call it X_0):
https://github.com/MC-E/Deep-Generalized-Unfolding-Networks-for-Image-Restoration/blob/bae845c2612d0df56a479020d59896441168d07a/Compressive-Sensing/train.py#L231C1-L238C62
and the measurement y is obtained as y = AX_0, where A holds the trained parameters. I don't understand this setting: at test time we assume the only information available is y and A (if the degradation model is known), yet here the model's input is the real, raw test image.
Please correct me if I have misunderstood anything, and I apologize in advance if I missed something that is already explained in the paper or code. Thank you so much; I look forward to your reply!
Sorry for the late reply. A is learnable: it captures the degradation matrix from the training data, as in [1] and [2].
[1] Deep Memory-Augmented Proximal Unrolling Network for Compressive Sensing
[2] Optimization-inspired compact deep compressive sensing
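Roughly, the setup in [1] and [2] looks like the following toy sketch (plain NumPy with made-up sizes; PCA stands in for the end-to-end learning of Φ used in those papers): the sampling matrix is adapted to the training data, then frozen, and at test time the ground-truth image is used only to synthesize y, while reconstruction sees only y and Φ.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 16, 4, 3

# Training signals confined to a k-dimensional subspace (a crude model
# of the structure in natural images).
B = rng.standard_normal((n, k))
X_train = B @ rng.standard_normal((k, 500))

# "Learned" sampling operator: PCA as a stand-in for end-to-end training.
# Its rows span the top-m directions of the training data.
U, _, _ = np.linalg.svd(X_train, full_matrices=False)
Phi_learned = U[:, :m].T                       # m x n, orthonormal rows

# Baseline: random Gaussian sampling with orthonormal rows.
Q, _ = np.linalg.qr(rng.standard_normal((n, m)))
Phi_rand = Q.T

# Test time: Phi is FIXED. The ground-truth image is used only to
# synthesize the measurement; reconstruction consumes only y and Phi.
x_test = B @ rng.standard_normal(k)

def reconstruct(Phi, x):
    y = Phi @ x             # simulated acquisition
    return Phi.T @ y        # simplest linear reconstruction (back-projection)

err_learned = np.linalg.norm(reconstruct(Phi_learned, x_test) - x_test)
err_rand = np.linalg.norm(reconstruct(Phi_rand, x_test) - x_test)
```

A data-adapted Φ reconstructs signals from the training distribution far better than a random one, which is the rationale for making the sampling matrix learnable.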