
Depth head facelift #97

Merged: 21 commits merged into master from depth_head on Jul 23, 2020

Conversation

melisandeteng (Contributor) commented on Jul 6, 2020

In this PR, the following changes are made to the depth head:

  • the input depth is an npy array of log depth
  • log inverse depth images are logged in Comet
  • the loss is changed to a scale-invariant MSE loss (see the sketch after this list)
  • the loader and transforms are changed so that they can be applied to float tensors
  • image logging is changed
    If you want to check what the inputs would look like, npy arrays of log depth predictions from MegaDepth have been saved at /network/tmp1/ccai/data/munit_dataset/trainA_megadepth_resized
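
For reference, the scale-invariant MSE on log depth (Eigen et al., 2014) is a common formulation for this kind of loss; the sketch below shows one way to write it in PyTorch, assuming predictions and targets are already log-depth float tensors. The function name and the lam weight are illustrative, not necessarily what this PR uses.

import torch

def scale_invariant_mse(pred_log_depth, target_log_depth, lam=0.5):
    # d_i = log(pred_i) - log(target_i); inputs are already in log space
    d = pred_log_depth - target_log_depth
    n = d.numel()
    # mean squared error minus a scale-invariance term on the mean error
    return (d ** 2).sum() / n - lam * (d.sum() ** 2) / (n ** 2)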

@melisandeteng melisandeteng marked this pull request as draft July 6, 2020 13:28
@melisandeteng melisandeteng changed the title Depth head facelift [WIP] Depth head facelift Jul 6, 2020
@melisandeteng melisandeteng self-assigned this Jul 13, 2020
@melisandeteng melisandeteng marked this pull request as ready for review July 17, 2020 14:36
@melisandeteng melisandeteng changed the title [WIP] Depth head facelift Depth head facelift Jul 17, 2020
melisandeteng (Contributor, Author) commented:

@vict0rsch @alexrey88 @51N84D @tianyu-z @sashavor can you please review this? Thanks!

def get_normalized_depth_t(arr, domain):
def norm_tensor(t):
    t = t - torch.min(t)
    t /= torch.max(t)
Contributor

Please correct me if I am wrong. I think min-max normalization should be: (t - min) / (max - min)

Contributor Author

Since I do the operation in two steps, the min is 0 after t - torch.min(t).
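
A quick numeric check of this point (illustrative values): subtracting the min first makes the new minimum 0, so dividing by the new max is the same as dividing by (max - min) of the original tensor.

import torch

t = torch.tensor([2.0, 5.0, 11.0])

# two-step version, as in norm_tensor
two_step = t - torch.min(t)                                    # [0., 3., 9.]
two_step = two_step / torch.max(two_step)

# one-step min-max normalization
one_step = (t - torch.min(t)) / (torch.max(t) - torch.min(t))

assert torch.allclose(two_step, one_step)                      # both are [0.0000, 0.3333, 1.0000]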

def get_normalized_depth_t(arr, domain):
def norm_tensor(t):
    t = t - torch.min(t)
    t /= torch.max(t)
Contributor

Please correct me if I am wrong. I think min-max normalization should be: (t - min) / (max - min)

Contributor Author

See comment above.

@melisandeteng melisandeteng merged commit a5aad2d into master Jul 23, 2020
@vict0rsch vict0rsch deleted the depth_head branch November 2, 2020 23:58