
Feature request: GFPGAN for fixing eyes #34

Open
tandefelt opened this issue Sep 13, 2022 · 5 comments

Comments

@tandefelt

Hi,
A great tool! I agree with all the prior feature requests, and wanted to respectfully add one:
GFPGAN integration, possibly for upscaling in general, but centrally for fixing eyes when generating humans.

Naturally, strange eyes can also be very effective, but control over how eyes look sounds like a very welcome addition. A small thing, yet eyes are windows to the soul.

@greggh

greggh commented Sep 13, 2022

Seconded. This would be a great help!

@harryshawk

Yes, this would be helpful. My goal is images that can be used in ads, etc. Humans have to look right!

@jemueller

This would be very good, yes. Thank you!

@trbutler

Perhaps the LStein fork of Stable Diffusion offers some help on this? Its web UI integrates GFPGAN and upscaling pretty well. Not sure how hard it would be to take its method and move it to a native macOS app.
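For reference, the face-restoration step being requested can be sketched with GFPGAN's own Python API. A minimal sketch, assuming the `gfpgan` and `opencv-python` packages are installed and the pretrained weights (e.g. `GFPGANv1.4.pth`) have been downloaded; the file names below are placeholders:

```python
def restore_faces(in_path, out_path, model_path="GFPGANv1.4.pth"):
    """Detect faces in a rendered image and restore them with GFPGAN.

    Imports are kept inside the function so this file still loads when
    gfpgan/opencv are not installed.
    """
    import cv2
    from gfpgan import GFPGANer

    # upscale=2 also enlarges the surrounding image; arch="clean" matches
    # the v1.3/v1.4 pretrained models.
    restorer = GFPGANer(model_path=model_path, upscale=2,
                        arch="clean", channel_multiplier=2)

    img = cv2.imread(in_path)
    # enhance() returns (cropped_faces, restored_faces, whole_image);
    # paste_back=True blends the restored faces back into the full frame.
    _, _, restored = restorer.enhance(img, has_aligned=False,
                                      only_center_face=False, paste_back=True)
    cv2.imwrite(out_path, restored)
```

Called as e.g. `restore_faces("sd_output.png", "sd_output_fixed.png")` after a render. A native app would run the same step as an optional post-process on each generated image.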

@tandefelt
Author

Have to rephrase, once again: an absolutely amazing tool, thank you so much! My friends and I here in Finland have been in a full "flow" state for days and days, a few nights until morning. Anyway, my sincere, humble wishlist (based on my 15 years of teaching design/tech/interface design master's theses at Parsons MFADT):

1) GFPGAN, especially eye fixing (eyes are, for the most part, blobby)
2) Way higher resolutions (fully understanding the training limit)
3) Maybe some super-resolution solution (now using Topaz tools)
4) Faster rendering speeds
5) Definitely multiple files output at once in a batch: once a great result is gotten, a great option to "cook" more
6) 2D-to-3D/depth-map output options :)
7) More overall core SD adjustment options

PS: I saw this one; it likely takes some fiddling to get it working with M1/M2: https://replicate.com/cjwbw/stable-diffusion-high-resolution. Anyway, thank you again. Super excited about this development; it literally blows the door wide open to a whole new world for tons of people. Best regards, Marko, Helsinki, Finland.


5 participants