
Real world Example Feedbacks #44

Open
ynie opened this issue Jul 25, 2024 · 12 comments

Comments

@ynie

ynie commented Jul 25, 2024

Hey ZhengPeng, I would like to show you some examples where I think BiRefNet can improve on real-world photos. The algorithm currently works great, but I hope this feedback can make it even better. Thank you again for your hard work.

@ynie
Author

ynie commented Jul 25, 2024

Example:
[image: HBY-32380616 (8)]

General Use Heavy:
[image: e12a028733ae41039ba8fe5cf24764b3_5ac94f4aaee141ba997bda03c5fdad21]

Photoroom iOS:
[image: Photoroom_20240725_080107 2]

@ynie
Author

ynie commented Jul 25, 2024

Original:
[image: b98310a3818b65b040a1cb78b6df0195_2000x]

General Use Heavy:
[image: f2b48965c74f4fd8be24886121d73364_43f4a79478a64dfc815d1eca94412526]

Photoroom on iOS:
[image: Photoroom_20240724_180903]

@ZhengPeng7
Owner

Wow, thanks. I appreciate it! Improvements need to be made to the contour areas, where the predicted values seem unconfident, neither 0 nor 1. I will look deep into the reason for it. Again, many thanks :)

@ynie
Author

ynie commented Jul 25, 2024

No problem. I will keep this issue up to date with more examples as I find them.

@ZhengPeng7
Owner

Sure, I would love to see more typical samples, but I also hope it doesn't cost you too much time.
BTW, do you know of any open projects for the subject extraction used in the Photoroom app? I found this ICIP 2021 work from them, though its code seems incomplete.

I can use anything except their private datasets to improve the quality of BiRefNet.

@ynie
Author

ynie commented Jul 25, 2024

Not that I know of. Do you think this is a dataset issue?

@ynie
Author

ynie commented Jul 27, 2024

Hey @ZhengPeng7, I'm trying to send out a patch for the ComfyUI node. Which weights should I use for the examples above?
[image]

Or which one is the "General Use Heavy" model on Fal?

Thanks!

@ZhengPeng7
Owner

The largest version for general use is currently the best one for images in the wild (the first line).
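For anyone wiring this into a node, loading and running the general-use weights can look roughly like the sketch below. This is a minimal, unofficial sketch: the `ZhengPeng7/BiRefNet` Hugging Face model ID, the 1024x1024 preprocessing, and the file names are assumptions based on the usual Hugging Face usage pattern, not something stated in this thread.

```python
import torch
from PIL import Image
from torchvision import transforms
from transformers import AutoModelForImageSegmentation

# Assumed Hugging Face model ID for the general-use BiRefNet weights.
birefnet = AutoModelForImageSegmentation.from_pretrained(
    "ZhengPeng7/BiRefNet", trust_remote_code=True
)
birefnet.eval()

# Typical preprocessing: fixed-size resize plus ImageNet normalization.
transform = transforms.Compose([
    transforms.Resize((1024, 1024)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

image = Image.open("input.jpg").convert("RGB")
inputs = transform(image).unsqueeze(0)

with torch.no_grad():
    # The model returns multi-scale predictions; the final mask is usually the last one.
    pred = birefnet(inputs)[-1].sigmoid().cpu()

# Convert the probability map to an 8-bit mask and resize back to the input size.
mask = transforms.ToPILImage()((pred[0].squeeze() * 255).byte()).resize(image.size)
mask.save("mask.png")
```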

@ynie
Author

ynie commented Aug 6, 2024

Hello @ZhengPeng7, just checking in to see what I can help with on the basket example above. If you are actively working on it, do you have an ETA? Thank you so much.

@ZhengPeng7
Owner

ZhengPeng7 commented Aug 7, 2024

Hi, ynie, there were some mistakes in the previous training for this kind of task.
Dichotomous image segmentation is different from the image matting task (samples here): GT values are 0 or 1 in DIS vs. values in 0~1 in matting. That's why the segmentation of the hairs of your cute dog is not good enough. I want to use more matting data and increase the weight of the L1 loss, instead of only BCE + IoU, which are both used for the DIS task.
But the white boundary in the predicted results is still unsolved... In the latest version the results are better, but that bad phenomenon still exists; I'm still thinking about it. InSPyReNet seems very good in these regions, and I'm trying to learn from it.
[screenshot: 2024-08-07 11:19:47]
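To make the DIS-vs-matting distinction above concrete, a combined objective could look roughly like the sketch below. The loss weights and the soft-IoU formulation here are illustrative assumptions, not BiRefNet's actual training code.

```python
import torch
import torch.nn.functional as F

def iou_loss(pred, target, eps=1e-6):
    # Soft IoU on sigmoid probabilities, computed per image and averaged.
    inter = (pred * target).sum(dim=(2, 3))
    union = (pred + target - pred * target).sum(dim=(2, 3))
    return (1 - (inter + eps) / (union + eps)).mean()

def combined_loss(logits, target, w_bce=1.0, w_iou=1.0, w_l1=1.0):
    # Hypothetical weighting: BCE + IoU suit hard 0/1 GT (DIS-style),
    # while the L1 term rewards matching soft alpha values in 0~1 (matting-style).
    pred = logits.sigmoid()
    bce = F.binary_cross_entropy_with_logits(logits, target)
    iou = iou_loss(pred, target)
    l1 = F.l1_loss(pred, target)
    return w_bce * bce + w_iou * iou + w_l1 * l1
```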

@ynie
Author

ynie commented Aug 7, 2024

Hey ZhengPeng, thank you so much for the information. Will that improve the basket example above?

[image]

@ynie
Author

ynie commented Aug 7, 2024

Here are two more examples:

Example 1:

Original:
[image: 3080_R - 2000 x 1800 - 220dpi]

Photoroom:
[image: Photoroom_20240806_211020]

General Use Heavy:
[image: Landscape 2]

Example 2:

Original:
[image: 1141_TS - 1500x1400 - 180dpi]

Photoroom:
[image: Photoroom_20240806_210834]

General Use Heavy:
[image: Landscape 3]
