Hi, sorry for the disturbance again. A few points were still ambiguous to me after reading your paper:

Do you train the CLIP Controller and the Restoration Model separately, or do you train them at the same time?
I saw you introduce a learnable prompt at this line, which is smart. However, I noticed that you incorporate `prompt_embedding` via `t = t + prompt_embedding`. My question is: why do you integrate the degradation type into the time step instead of using cross attention, like `x = attn(x, context=image_context)`? A sketch of the two styles I mean is below.
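To make the question concrete, here is a minimal sketch of the two integration styles I'm contrasting (shapes and module names are mine, not from your repo):

```python
import torch
import torch.nn as nn

B, C, H, W = 2, 64, 32, 32
x = torch.randn(B, C, H, W)            # feature map inside the network
t_emb = torch.randn(B, C)              # timestep embedding
prompt_embedding = torch.randn(B, C)   # degradation prompt from the CLIP Controller

# Style 1 (what I read in your code): fold the degradation prompt into the
# timestep embedding, so every block that consumes t also sees the degradation type.
t_emb = t_emb + prompt_embedding

# Style 2 (what I'm asking about): inject the prompt via cross attention instead.
attn = nn.MultiheadAttention(embed_dim=C, num_heads=4, batch_first=True)
q = x.flatten(2).transpose(1, 2)       # (B, H*W, C): queries from image features
kv = prompt_embedding.unsqueeze(1)     # (B, 1, C): keys/values from the prompt
out, _ = attn(q, kv, kv)
x = x + out.transpose(1, 2).reshape(B, C, H, W)
```

My intuition is that cross attention would let each spatial location attend to the degradation context differently, while the additive route conditions all blocks uniformly through `t`, so I'm curious about the reasoning behind your choice.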
Since NAFNet has no time step, how did you integrate `prompt_embedding` into NAFNet?
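In case it helps to pin down what I'm asking: below is one mechanism I could imagine for a timestep-free network, purely my guess (the class and parameter names are hypothetical, not from your code). Is it something along these lines, or did you do it differently?

```python
import torch
import torch.nn as nn

# Hypothetical sketch: condition a timestep-free NAFNet-style block on the
# prompt via per-channel scale/shift (FiLM-style modulation) predicted
# from the prompt embedding. This is my guess, not your implementation.
class PromptModulatedBlock(nn.Module):
    def __init__(self, channels: int, prompt_dim: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.GELU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # map the prompt embedding to per-channel (scale, shift)
        self.to_film = nn.Linear(prompt_dim, 2 * channels)

    def forward(self, x: torch.Tensor, prompt: torch.Tensor) -> torch.Tensor:
        scale, shift = self.to_film(prompt).chunk(2, dim=-1)
        h = self.body(x)
        h = h * (1 + scale[:, :, None, None]) + shift[:, :, None, None]
        return x + h

block = PromptModulatedBlock(channels=64, prompt_dim=512)
y = block(torch.randn(2, 64, 32, 32), torch.randn(2, 512))
```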
These are things I couldn't find answers to in your paper (or perhaps I missed them); sorry for interrupting you. Thank you for your great work.