Higher resolutions #19
Thanks for your interest. Generally, when you feed a large image, e.g.
2000x1500, to the model, it resizes the image to 320x320, runs inference
to predict the salient object detection map, and then resizes that map
back to 2000x1500. The boundary gets blurred during that upsampling step.
If you instead change the model's input size from 320x320 to 2000x1500,
there is another issue: the model was trained on images resized to
320x320, and "salient" is a relative quantity. The model's performance
also depends on the ratio between the receptive fields' size and the
objects' actual scale, so performance may degrade in this case as well.
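The resize→predict→upsample pipeline described above can be sketched as follows. This is a minimal illustration, not the repository's actual code: `resize_nearest` is a simple nearest-neighbor stand-in for the bilinear resizing used in practice, and `dummy_model` is a hypothetical placeholder for the 320x320 saliency network.

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbor resize of a 2-D array (stand-in for the
    bilinear interpolation used in the real pipeline)."""
    in_h, in_w = img.shape
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return img[rows[:, None], cols[None, :]]

def dummy_model(x):
    """Hypothetical stand-in for the saliency network; here it
    just thresholds intensity to produce a binary map."""
    return (x > 0.5).astype(np.float32)

def predict_saliency(image):
    """Resize to 320x320, run the model, resize the map back to the
    original size. The final upsampling is where edges blur."""
    h, w = image.shape
    small = resize_nearest(image, 320, 320)
    mask_small = dummy_model(small)
    return resize_nearest(mask_small, h, w)

mask = predict_saliency(np.random.rand(1500, 2000).astype(np.float32))
print(mask.shape)  # (1500, 2000)
```

The key point is the last resize: a 320x320 map stretched back to 2000x1500 cannot contain detail finer than the low-resolution grid, which is why the mask boundaries look soft.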
Simply retraining the U-Net at a higher resolution may not give you better
results. As mentioned above, the receptive fields may also have to be
changed, which inevitably introduces large computational costs. For
salient object detection on high-resolution images, you can take a look
at this paper:
http://openaccess.thecvf.com/content_ICCV_2019/papers/Zeng_Towards_High-Resolution_Salient_Object_Detection_ICCV_2019_paper.pdf
In our experience, downsampling input images to half their original size
(e.g. 2000x1500 -> 1000x750) does not degrade the segmentation results
much. Further downsampling introduces more errors.
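The half-resolution preprocessing suggested above could look like the sketch below. The `halve` helper is an illustrative assumption, using 2x2 average pooling as a simple stand-in for bilinear resizing to half size; in practice you would use your image library's resize function.

```python
import numpy as np

def halve(img):
    """Downsample a 2-D array by 2x via 2x2 average pooling
    (simple stand-in for a bilinear resize to half size)."""
    h, w = img.shape
    h2, w2 = h - h % 2, w - w % 2  # trim odd row/column if present
    return img[:h2, :w2].reshape(h2 // 2, 2, w2 // 2, 2).mean(axis=(1, 3))

img = np.ones((1500, 2000), dtype=np.float32)
small = halve(img)
print(small.shape)  # (750, 1000)
```

Feeding the 1000x750 version (rather than the full 2000x1500 image) keeps the final upsampling ratio from 320x320 smaller, which limits the blurring while preserving most of the segmentation quality.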
Actually, high resolution salient object detection is a very challenging
yet important task. We are also planning to explore more in this area.
Best of luck.
…On Sat, May 16, 2020 at 4:07 PM mediarl ***@***.***> wrote:
First of all, thank you for the excellent work.
I tried running inference on some images with higher resolution (around
2000 x 1500), and the generated mask seems blurry at the edges.
Do you think it's because of the resolution of the images used for
training?
Do you think that by training the U-Net with higher-resolution images, I
would get better results?
Do you know if a dataset similar to DUTS exists with higher-resolution
images?
Thank you!
--
Xuebin Qin
PhD
Department of Computing Science
University of Alberta, Edmonton, AB, Canada
Homepage:https://webdocs.cs.ualberta.ca/~xuebin/