
3 major red flags that make me suspicious this is fake #10

Open · FurkanGozukara opened this issue Dec 2, 2023 · 8 comments

Comments

@FurkanGozukara commented Dec 2, 2023

1. All women, and all attractive.

2. The consistency is too good; we don't have anything close, even for video-to-animation.

3. And of course, no code has been released.

By the way, "fake" means it doesn't work as advertised.

@ReEnMikki

What goal would they achieve by doing this? 🤔

@ghost commented Dec 2, 2023

Point 1 is standard advertising fare, and on point 2, the consistency isn't perfect and has some of the qualities of current Stable Diffusion models. This model, its code, and its results are most definitely real. Also, people's clearly visible impatience and insatiable hunger to make porn with this is so absurdly hilarious to me that I am enjoying every day it doesn't come out. Everybody needs to chill out.

In the meantime, https://github.com/showlab/MotionDirector just came out with their code and models, as well as a trainer.

@ggenny commented Dec 2, 2023

The article doesn't seem fake to me, and it doesn't seem like they have invented anything new; in the end, they made a variant of ControlNet by adding the missing temporal information (the general idea is sketched below).
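For context, that recipe is the same one AnimateDiff-style motion modules use: keep the pretrained per-frame (spatial) layers and interleave new attention layers that operate across the frame axis. A rough, hypothetical PyTorch sketch of such a temporal layer, purely illustrative and not the paper's actual code:

```python
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Self-attention over the frame axis, applied independently at each
    spatial location. Hypothetical AnimateDiff-style motion module sketch,
    not code from the AnimateAnyone paper."""
    def __init__(self, channels: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, channels, height, width)
        b, f, c, h, w = x.shape
        # Fold every spatial position into the batch so attention runs
        # purely along the frame dimension.
        seq = x.permute(0, 3, 4, 1, 2).reshape(b * h * w, f, c)
        q = self.norm(seq)
        out, _ = self.attn(q, q, q, need_weights=False)
        # Residual connection: the block starts out close to identity, so
        # the pretrained spatial layers keep working while temporal
        # consistency is learned.
        seq = seq + out
        return seq.reshape(b, h, w, f, c).permute(0, 3, 4, 1, 2)
```

In a full model, a layer like this would typically be inserted after each spatial block of the UNet, so the pretrained image weights can be reused unchanged.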

@FurkanGozukara (Author)

> Point 1 is standard advertising fare, and on point 2, the consistency isn't perfect … In the meantime, https://github.com/showlab/MotionDirector just came out with their code and models, as well as a trainer.

Thanks, that looks like Stable Video, but a slightly worse version.

However, the videos claimed for AnimateAnyone are on another level. I bet they will never release the code or weights.

@nickknyc commented Dec 3, 2023

> Point 1 is standard advertising fare, and on point 2, the consistency isn't perfect … In the meantime, https://github.com/showlab/MotionDirector just came out with their code and models, as well as a trainer.

Too right... thanks for the MotionDirector tip.

@ShawnFumo commented Dec 3, 2023

> Point 1 is standard advertising fare, and on point 2, the consistency isn't perfect … In the meantime, https://github.com/showlab/MotionDirector just came out with their code and models, as well as a trainer.

> Thanks, that looks like Stable Video, but a slightly worse version … I bet they will never release the code or weights.

I mean, while it's good enough to be shocking at first glance, there are still plenty of problems with it. Take the animation of the lady with the necklace: you can see how the necklace is stuck to her body instead of swinging freely. Even the slower animations have some artifacts if you look closely (often with hands, or weirdness with the eyes), and the faster animations have a ton of artifacts. And all the examples are centered bodies on static backgrounds. That can certainly still be useful in itself, but it isn't clear how easy it will be to make this work with more advanced animations involving background movement, zooming, panning, etc. I'm sure people will figure that out eventually, but at first they might need to resort to the techniques you'd usually use to composite greenscreen footage of real people onto virtual sets, in order to combine different kinds of animation into one video.

The paper is pretty in-depth and shows how it is built on top of aspects of Stable Diffusion and AnimateDiff, even initializing parts of the network with those weights (roughly the pattern sketched below). At least one person on Twitter (who, judging from their other posts, seems to know what they're doing) mentioned it was detailed enough that they'll reproduce it in code themselves if it isn't released officially.
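(Side note: "initializing parts with those weights" is normally just a partial state-dict load. A minimal sketch of the pattern, assuming a hypothetical `build_video_unet()` that wraps an SD UNet with extra temporal modules; the checkpoint filename is illustrative:)

```python
import torch

# Hypothetical: the original Stable Diffusion UNet layers plus newly added
# temporal modules (randomly initialized at this point).
model = build_video_unet()

# Pretrained spatial weights; the filename here is illustrative.
pretrained = torch.load("sd15_unet.pth", map_location="cpu")

# strict=False copies every tensor whose name matches the checkpoint and
# leaves the new temporal-module tensors at their fresh initialization.
missing, unexpected = model.load_state_dict(pretrained, strict=False)
print(f"{len(missing)} tensors left at init, {len(unexpected)} ignored")
```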

We don't know how much cherry-picking there is in the sample videos, or how well it generalizes to different characters and motions overall (I'm guessing it'd blow up if you tried to have someone do a handstand). But I see no reason to think it is "fake", even in the sense of wildly overpromising. In the paper's conclusion they mention that there can still be artifacts, and that there can be more problems with areas of the character not visible in the original image (notice only a few of the videos show the character's back). They're also coming from a different research perspective: they point out, for example, that their approach is slower than other non-diffusion methods. So this may seem amazing from our SD perspective, but if you look at the two earlier techniques it is compared against in several of the videos, this looks more like an incremental quality increase over those, while also being slower.

Also, why would they even bother making a GitHub repo if they didn't intend to eventually release at least the code?

@JuvenileLocksmith

> Point 1 is standard advertising fare, and on point 2, the consistency isn't perfect … In the meantime, https://github.com/showlab/MotionDirector just came out with their code and models, as well as a trainer.

> Thanks, that looks like Stable Video, but a slightly worse version … I bet they will never release the code or weights.

Why would it behave any other way, assuming it has been implemented in the apparent manner? It seems plausible that it would work, no?

@FurkanGozukara (Author)

OK, let me add something here.

Recently I made an auto installer for another repo just like this one: https://github.com/magic-research

And guess what about the demo results, haha :)

I bet this will be the same.

Whoever wants to test it: I made an auto installer for you in magic-research/magic-animate#44
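(For anyone wondering what an "auto installer" amounts to: it typically just scripts the clone-and-install steps. A rough, hypothetical Python sketch of the idea, not the actual installer from magic-research/magic-animate#44, assuming the repo ships a requirements.txt:)

```python
import subprocess
import sys

# Illustrative target repo; see the linked issue for the real installer.
REPO = "https://github.com/magic-research/magic-animate"

# Clone the repo and install its Python dependencies into the current env.
subprocess.run(["git", "clone", REPO, "magic-animate"], check=True)
subprocess.run([sys.executable, "-m", "pip", "install", "-r",
                "magic-animate/requirements.txt"], check=True)
# Model weights usually have to be downloaded separately per the README.
```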
