The article states that, in every reverse attention module, A (the high-level feature maps) is first concatenated with B (the edge-attention) and then multiplied by C (the expanded weights), but the code shows A first multiplied by C and then concatenated with B. Which order did the author intend?
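To make the two readings concrete, here is a toy sketch using plain Python lists in place of tensors. The names A, B, and C follow the question's labels, not the repository's actual variables, and the 1-D shapes are purely illustrative; the point is only that the two operation orders are not equivalent (in the concat-first order, B is also scaled by C; in the multiply-first order, B is appended unscaled):

```python
# Hypothetical 1-D "feature maps" standing in for real tensors.
A = [1.0, 2.0]   # A: high-level feature map (toy values)
B = [0.5, 0.5]   # B: edge-attention map (toy values)
C = [3.0, 3.0]   # C: expanded reverse-attention weights (toy values)

def concat(x, y):
    # Channel-wise concatenation, modeled as list concatenation.
    return x + y

def mul(x, y):
    # Element-wise multiplication of two equal-length lists.
    return [a * b for a, b in zip(x, y)]

# Reading 1 (as written in the article): concatenate B first, then multiply.
# C must be tiled to match the doubled channel count after concatenation.
paper_order = mul(concat(A, B), concat(C, C))

# Reading 2 (as implemented in the code): multiply by C first, then concatenate B.
code_order = concat(mul(A, C), B)

print(paper_order)  # B's channels end up scaled by C
print(code_order)   # B's channels are appended unscaled
```

Since the B half of the output differs between the two orders, the results (and gradients) are genuinely different, which is why clarifying the intended order matters.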