ComfyUI: Problems with Animations #124442
Replies: 2 comments
-
Thanks for posting in the GitHub Community, @Scharwunzel ! We’ve moved your post to our Programming Help 🧑💻 category, which is more appropriate for this type of discussion. Please review our guidelines about the Programming Help category for more information.
-
Hi @Scharwunzel, Thanks for being a part of the GitHub Community, we're glad you're here! If you're looking for help with this specific topic, you might want to try asking somewhere that focuses more on ComfyUI / Stable Diffusion, rather than Programming Help. It's possible that another GitHub user has run into this same issue and can help, but the GitHub Community Discussions focus primarily on topics related to GitHub itself or collaboration on project development and ideas. We want to make sure you're getting the best support you can, but this space may not be the right place for this particular topic. Best of luck! P.S. I've removed some images that may not be the most appropriate for this space. Thanks for understanding!
-
Topic area: Question
Hi everyone,
I've been familiarizing myself with ComfyUI for a few weeks and would now like to create my own animations. I have tried various combinations of AnimateDiff, ControlNet, and IPAdapter, but I run into problems with each of them.
The first problem is that I always need a video or a sequence of frames up front in order to generate an animation smoothly. That can't be the intended way; I actually want to create an animation from a single photo. Is there a trick I haven't found yet?
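For reference, the workaround I have tried so far is simply duplicating the one photo into a batch of identical frames before feeding it into the sampler, since several of these workflows expect an image batch rather than a single image. A minimal sketch of that idea (the function name and default frame count are just illustrative, not part of any ComfyUI node):

```python
from PIL import Image
import numpy as np

def photo_to_frame_batch(img: Image.Image, n_frames: int = 16) -> np.ndarray:
    """Repeat a single photo into a (n_frames, H, W, 3) uint8 batch.

    This is only a stand-in for workflows that require a frame sequence
    as input when all you have is one still image.
    """
    arr = np.asarray(img.convert("RGB"))          # (H, W, 3)
    return np.repeat(arr[None, ...], n_frames, axis=0)
```

This obviously produces a static "video"; the hope is that the motion module then adds the movement on top.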
But when I try to create a new animation from a video using the nodes mentioned above, the prompts are hardly taken into account: a brunette woman does not become blonde, and a white top does not become a red one.
The quality is not convincing either. The image simply looks less realistic, even though I choose realistic models. Without AnimateDiff I do get realistic images, but then the frames are completely inconsistent with one another and not usable as video.
I even use the SUPIR upscaler to improve the frames, but that only repairs the worst artifacts. The results in the screenshots have already been processed with it.
To create VR-ready animations, I remove the background and replace it with green. This also only works partially. In this workflow I chose a simple rembg node; I have also experimented extensively with YOLO and other segmentation approaches, but none of them are reliable. Is there perhaps a better way to do this?
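In case it helps to see what I mean by "insert green": after background removal I flatten the RGBA cutout onto a solid green background. A minimal sketch of that compositing step with Pillow (the function name is my own; rembg itself just produces the RGBA cutout):

```python
from PIL import Image

def composite_on_green(fg: Image.Image, green=(0, 255, 0)) -> Image.Image:
    """Flatten an RGBA cutout (e.g. rembg output) onto a solid green
    background, returning an RGB frame ready for keying later."""
    fg = fg.convert("RGBA")
    bg = Image.new("RGBA", fg.size, green + (255,))
    # alpha_composite places fg over bg using fg's alpha channel
    return Image.alpha_composite(bg, fg).convert("RGB")
```

The quality of the result stands or falls with the alpha mask, which is exactly where rembg and YOLO let me down.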
So what am I doing wrong? Could one of my knowledgeable GitHub friends help me? I would be very grateful for any help.
Here are a few screenshots of my workflow: