
Evaluation on Synthetic-Composite Adobe Dataset #75

Open
yucornetto opened this issue Oct 5, 2020 · 0 comments


yucornetto commented Oct 5, 2020

Hi, thanks for the great work! The idea is really novel and the performance is amazing!

I have a question regarding the numbers reported in Table 1 of the paper. Sec 4.1 states: "We computed a trimap for each matte through a process of alpha matte thresholding and dilation as described in [36]." Does this mean that you generated the trimaps from the ground-truth alpha mattes instead of using the trimap images provided in the dataset?
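For reference, my understanding of "thresholding and dilation" is roughly the sketch below (this is my own guess at the procedure, not necessarily the exact one from [36]; the threshold values and dilation size are assumptions). It uses `scipy.ndimage` for the morphological dilation:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def make_trimap(alpha, kernel_size=15):
    """Generate a trimap from a GT alpha matte by thresholding and dilation.

    alpha: float array in [0, 1].
    Returns a uint8 trimap: 0 = background, 128 = unknown, 255 = foreground.
    """
    fg = alpha >= 1.0        # definite foreground: fully opaque pixels
    unknown = alpha > 0.0    # any pixel with partial or full opacity
    # Dilate so the unknown band extends past the matte boundary.
    struct = np.ones((kernel_size, kernel_size), dtype=bool)
    unknown = binary_dilation(unknown, structure=struct) & ~fg
    trimap = np.zeros(alpha.shape, dtype=np.uint8)
    trimap[fg] = 255
    trimap[unknown] = 128
    return trimap
```

If the trimaps were generated this way rather than taken from the dataset, the band width (here `kernel_size`) would also affect the reported scores, so it would be good to know the exact settings.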

Besides, Sec 4.1 also states: "We rescaled all images to 512 × 512 and measure the SAD and MSE error between the estimated and ground truth (GT) alpha mattes." Does this mean the model is evaluated on 512x512 images, or are only the scores computed on the resized images? (i.e., is inference run on the original images or on the resized ones?)
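To make sure I compute the metrics the same way, here is the convention I am assuming (SAD in thousands of pixel units, MSE as the per-pixel mean; whether the metrics are restricted to the unknown trimap region is another detail I am unsure about):

```python
import numpy as np

def sad_mse(pred, gt):
    """SAD and MSE between predicted and GT alpha mattes in [0, 1].

    SAD is conventionally reported divided by 1000 in matting papers;
    MSE is the mean squared per-pixel difference.
    """
    diff = pred.astype(np.float64) - gt.astype(np.float64)
    sad = np.abs(diff).sum() / 1000.0
    mse = (diff ** 2).mean()
    return sad, mse
```

Could you confirm whether this matches your evaluation (and whether it is applied over the whole image or only the unknown region)?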

I did not find any evaluation code for the Adobe dataset in this repository. Should I use the inference code directly to reproduce your results?

Thanks in advance!
