Hello! I'm currently researching options for a pre-trained model. I'm working on a personal project to fine-tune a text-to-image model specifically for D&D character portraits, and I'm curious whether you've received any feedback or run any experiments on fine-tuning DALL-E Mini for a niche task.
I've noticed, both from using DALL-E Mini and from your report, that the current model doesn't handle human faces particularly well. Are you aware of anyone experimenting with fine-tuning it to generate better faces?
More generally, do you have any data or observations from fine-tuning DALL-E Mini on small datasets or in few-shot settings? If not, that would make an awesome future report. I've been amazed at how well NLP models like GPT-J fine-tune on datasets of 20-100 examples. I wouldn't expect the same from image models, but I'm curious where the threshold lies.
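For concreteness, here's a minimal sketch of the kind of fine-tuning step I have in mind, assuming DALL-E Mini's `DalleBart` exposes a HuggingFace-style Flax API (the model name, batch keys, and hyperparameters below are illustrative, not from the repo's training scripts):

```python
import jax
import optax

# Assumption: DalleBart is the BART-style seq2seq model from the
# dalle-mini repo, mapping text tokens to VQGAN image tokens.
from dalle_mini import DalleBart

model = DalleBart.from_pretrained("dalle-mini/dalle-mini")  # illustrative name
params = model.params

# A low learning rate is a common choice when tuning on tens to
# hundreds of examples, to limit catastrophic forgetting.
optimizer = optax.adamw(learning_rate=1e-5, weight_decay=1e-2)
opt_state = optimizer.init(params)

def loss_fn(params, batch, dropout_rng):
    # batch["labels"] holds VQGAN-encoded image token ids;
    # decoder_input_ids are the labels shifted right, as in BART.
    logits = model(
        input_ids=batch["input_ids"],            # tokenized captions
        attention_mask=batch["attention_mask"],
        decoder_input_ids=batch["decoder_input_ids"],
        params=params,
        dropout_rng=dropout_rng,
        train=True,
    ).logits
    loss = optax.softmax_cross_entropy_with_integer_labels(
        logits, batch["labels"]
    )
    return loss.mean()

@jax.jit
def train_step(params, opt_state, batch, dropout_rng):
    loss, grads = jax.value_and_grad(loss_fn)(params, batch, dropout_rng)
    updates, opt_state = optimizer.update(grads, opt_state, params)
    params = optax.apply_updates(params, updates)
    return params, opt_state, loss
```

If something roughly like this is viable at the 20-100 example scale, I'd love to know what you've observed about how quickly the model overfits.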