Hello, I noticed this fantastic work and have the following question:
In SAM, this work applies "a Softmax function on the channel dimension of T2 and chooses the second channel as the attention map". I wonder why the second channel is chosen.
I have already read the related paper "Edge-aware graph representation learning and reasoning for face parsing", but still have no idea about it. I would appreciate it if you could give me an answer!
Best regards
@wangtong627
Thanks for your issue. 1) This is the result of our parameter fine-tuning. You can also choose other channels, but you need to ensure that the parameters of the selected channel are updated. 2) We also followed previous work (Edge-aware Graph Representation Learning and Reasoning for Face Parsing, ECCV 2020).
Other channels are also feasible, as long as the weights of the selected channel are the ones being updated.
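For reference, the step being discussed can be sketched roughly as follows. This is a minimal illustration, not the authors' actual code: the tensor name `T2`, its 2-channel shape, and the spatial size are assumptions based on the quoted description.

```python
import torch

# Hypothetical sketch of the SAM attention-map step described above.
# Assumed: T2 is a 2-channel feature map of shape (batch, 2, H, W).
T2 = torch.randn(1, 2, 64, 64)

# Softmax over the channel dimension, so the two channels sum to 1
# at every spatial location, then keep channel index 1 (the "second" channel).
attn = torch.softmax(T2, dim=1)[:, 1]  # shape (1, 64, 64), values in (0, 1)
```

Because the two channels are complementary after the channel-wise softmax, either channel carries the same information up to `1 - attn`; as noted above, what matters is that the gradients flow through (i.e., update) the channel actually selected.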