
Some details in the paper #1

Closed
irfanICMLL opened this issue Mar 27, 2020 · 1 comment

Comments

@irfanICMLL

Thank you for sharing such great work!
I have some questions about the implementation details of this paper.

  1. What is the 'soft and hard propagation' in Table 6? Does it mean the propagation function used during inference? I am a little confused about the details of this table.
  2. Can you share the function you use to achieve image-feature alignment? Can I use an odd input size and `align_corners=True` to achieve the feature alignment?
  3. What is the result when the memory-augmented tracker is removed?

I would be so grateful if you could help me with these details. Looking forward to the code release.

@zlai0
Owner

zlai0 commented Jun 19, 2020

  1. Yes, that's right. "Hard" just means you apply an argmax to quantize the results. Specifically:

     ```python
     _output = model(rgb_0, anno_0, rgb_1, ref_index, i + 1)
     _output = F.interpolate(_output, (h, w), mode='bilinear')

     # Hard
     output = torch.argmax(_output, 1, keepdim=True).float()

     # Soft
     output = _output
     ```

  2. Just do `x = image.float()[:, :, ::4, ::4]`, because the centers of the CNN filters start from the top-left corner.

  3. About 59 (J&F mean). Refer to Table 5 of the paper.
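To illustrate point 1 without the full model, here is a toy NumPy sketch (a stand-in for the PyTorch snippet above, with a made-up probability tensor) of how "hard" propagation quantizes the "soft" per-class maps via an argmax over the channel dimension:

```python
import numpy as np

# Toy "soft" prediction: shape (N=1, C=2, H=2, W=2), one probability
# map per class. The values here are invented for illustration.
soft = np.array([[[[0.7, 0.2],
                   [0.4, 0.1]],
                  [[0.3, 0.8],
                   [0.6, 0.9]]]])

# "Hard": collapse the class maps to a single label map by taking the
# most likely class at each pixel (keepdims preserves the channel axis,
# like keepdim=True in torch.argmax).
hard = np.argmax(soft, axis=1, keepdims=True).astype(np.float32)

print(hard[0, 0])  # [[0. 1.]
                   #  [1. 1.]]
```

Soft propagation keeps the full distributions and so carries uncertainty forward; hard propagation commits to one label per pixel before the next step.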
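And for point 2, a small NumPy sketch (not the authors' code; the image tensor is a dummy) of why strided slicing aligns an image with a stride-4 feature map: the first sampled pixel is the top-left corner (0, 0), where the first filter center lands, and every 4th pixel after that lines up with the next feature location.

```python
import numpy as np

# Dummy image batch: (N=2, C=3, H=8, W=8).
image = np.arange(2 * 3 * 8 * 8, dtype=np.float32).reshape(2, 3, 8, 8)

# Subsample every 4th row/column, starting from the top-left corner,
# mirroring x = image.float()[:, :, ::4, ::4] in PyTorch.
x = image[:, :, ::4, ::4]

print(x.shape)  # (2, 3, 2, 2)
# The sampled spatial coordinates are (0, 0), (0, 4), (4, 0), (4, 4),
# matching a stride-4 feature grid anchored at the corner.
```

This avoids any interpolation, so there is no `align_corners` ambiguity at all: the correspondence between image pixels and feature cells is exact by construction.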
