Minor suggestions #1
For 2, I typed in my code as:

```python
has_nsfw_concepts = [len(res["bad_concepts"]) > 0 for res in result]
```
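For reference, here is a minimal self-contained sketch of that post-processing step. It assumes `result` is the safety checker's internal list of per-image dicts, each carrying a `bad_concepts` list of flagged concept IDs (key name per the diffusers implementation); the sample data is made up for illustration.

```python
# Sketch: deriving a per-image NSFW flag from the safety checker's
# internal per-image result dicts. The "bad_concepts" key follows the
# diffusers StableDiffusionSafetyChecker code; sample data is invented.
result = [
    {"bad_concepts": []},      # image 0: nothing flagged
    {"bad_concepts": [3, 7]},  # image 1: two concept IDs flagged
]

# An image is considered NSFW if at least one concept was flagged.
has_nsfw_concepts = [len(res["bad_concepts"]) > 0 for res in result]
print(has_nsfw_concepts)  # [False, True]
```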
The problem is that the network can't exactly predict R15, R16, R17, and R18. The current safety checker is trained for R15 detection, and there is no guarantee it works well for R18 detection. For now, I need more test results, or to train another model.
In the end, I reused some image-feature-extraction code from another one of my projects to build a similar tool, which reports the IDs of the bad concepts:
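A small helper in that spirit might look like the following. It assumes access to the same per-image result dicts as above, which the stock checker does not expose (it only returns a boolean per image); the function name and the sample data are mine, not from the repository.

```python
def report_bad_concept_ids(result):
    """Return, per image, the list of flagged concept IDs.

    `result` is assumed to be the safety checker's internal list of
    per-image dicts with a "bad_concepts" key (a hypothetical access
    point; the stock checker only surfaces a boolean per image).
    """
    return [res["bad_concepts"] for res in result]

# Example with made-up data:
print(report_bad_concept_ids([{"bad_concepts": [3, 17]}, {"bad_concepts": []}]))
# [[3, 17], []]
```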
By the way, I have found the semantics behind the "bad concepts". References:
Thanks for sharing! In conclusion, the safety checker works well on R15, and I would recommend nsfw_model for R18, which I'm currently using in another project.
I wanted to try the official code at https://huggingface.co/CompVis/stable-diffusion-safety-checker with:
But I noticed that this requires an additional input, `clip_input`, in the code below. So I was a bit confused by this argument (possibly a text input?) for now... and then discovered your repository right afterwards.
Note
Edit: This explains the two arguments: https://github.com/huggingface/diffusers/blob/d486f0e84669447b178569ad499eeb86c739b99e/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L505-L517
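Based on the linked pipeline code, the two arguments can be produced roughly as follows. This is an untested sketch: the CLIP processor checkpoint ID is an assumption (pipelines normally load the feature extractor from the model repo itself), and running it requires `torch`, `transformers`, `diffusers`, and a model download.

```python
def run_safety_checker(pil_images):
    """Sketch of calling the safety checker standalone.

    Mirrors the linked diffusers pipeline code: `clip_input` is the
    pixel-value tensor from a CLIP image processor (not text), and
    `images` are the raw images to be blanked out if flagged.
    """
    # Imports kept inside the function: heavy, optional dependencies.
    import numpy as np
    from transformers import CLIPImageProcessor
    from diffusers.pipelines.stable_diffusion.safety_checker import (
        StableDiffusionSafetyChecker,
    )

    checker = StableDiffusionSafetyChecker.from_pretrained(
        "CompVis/stable-diffusion-safety-checker"
    )
    # Assumption: any CLIP image-processor config works here.
    feature_extractor = CLIPImageProcessor.from_pretrained(
        "openai/clip-vit-base-patch32"
    )

    clip_input = feature_extractor(pil_images, return_tensors="pt").pixel_values
    np_images = [np.array(img) for img in pil_images]

    # Returns (possibly blacked-out images, per-image NSFW booleans).
    checked_images, has_nsfw = checker(images=np_images, clip_input=clip_input)
    return checked_images, has_nsfw
```

So `clip_input` is an image embedding input, not a text argument, which resolves the confusion above.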
The code runs fine, but I would have a few suggestions:
Thank you for your attention!