About the preparation of the DIS dataset and the next research topic. #23
Hi Akif,
Thanks for your interest. Yes, all the images are real natural images annotated by us manually. That is one of the reasons this paper took us more than a year. We have summarized some techniques for manual annotation, but the workload is still huge. For DIS V1.0, the average labeling time for each image is 0.5 hours, and some images took us up to 10 hours. In DIS V2.0 (unreleased), the images' complexities are more diversified, and some images take up to 70 hours to label (the peacock in our GitHub). We are trying to develop semi-automatic and self-supervised ways of producing highly accurate segmentations. We will probably prepare a tutorial on annotating these highly accurate masks and share it with the whole community later.
…On Tue, Aug 2, 2022 at 3:58 AM Akif Faruk Nane ***@***.***> wrote:
@xuebinqin <https://github.com/xuebinqin> Hi, is there any chance you could tell us a bit about how you prepared and annotated the data for V1 and V2?
For the last 10 days, I have been annotating some high-resolution data for my validation dataset. I have carefully annotated 66 images. I worked from pre-predicted masks and fixed them, so the process was easier for me. Although I did only a little of the work, the burden was huge.
So, how did you manage to generate or annotate the data? It doesn't look like artificial (or rendered) data to me.
Also, do you have any more ongoing research on the image segmentation topic? What's next?
Thank you!
--
Xuebin Qin
PhD
Department of Computing Science
University of Alberta, Edmonton, AB, Canada
Homepage: https://xuebinqin.github.io/
@xuebinqin Honestly, great work both in developing a new type of AI model (IS-NET) and in annotating all the data (DIS5K)! I am curious about your data annotation methods. We appreciate you offering these to the community. Thank you very much for your answer!