
Anime DAIN and 2/3-dupe #4

Open
CyFeng16 opened this issue May 18, 2020 · 4 comments
Labels
help wanted Extra attention is needed

Comments

@CyFeng16
Owner

BrokenSilence's thinking is mainly correct, and I agree with them. I did some homework recently, so first let's talk about why anime uses 2-dupe or 3-dupe.

Hand-painting anime is expensive, and the typical (Japanese) animation studio accepts the 2-dupe trade-off, which means the 24fps animation we see is actually 12fps.

The first to use 3-dupe (search for Tezuka Osamu) did so to further reduce painting costs (storyboards) and keep up with broadcast schedules, dropping the actual frame rate further to 8fps. So the question we have to think about is: is enhancing an 8fps anime to 60 actual performance frames per second the same problem we expected?

Anyway, time continuity is not a big issue for me. Just set the DAIN rate as follows:

Let us make an assumption:

  • Input: 3-dupe anime (24fps)
  • Needed output: no-dupe anime (60fps)

A 3-dupe 24fps source has 24/3 = 8 unique frames per second, so the DAIN rate works out to 60/8 = 7.5, and we round it up to 8. Then we evenly cut out 30 frames from the generated frames (60 * (8 - 7.5)). We solve the problem with only a slight flaw.
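The rate arithmetic can be sketched like this (a minimal sketch; the rounding choice and the per-second surplus bookkeeping are my own assumptions, not code from the repository):

```python
import math

def dain_rate(input_fps, dupe_factor, target_fps):
    """Work out the DAIN interpolation rate for a duped anime source.

    A 3-dupe 24fps source really has 24/3 = 8 unique drawings per
    second, so reaching 60fps needs a rate of 60/8 = 7.5, rounded up
    to 8. Interpolating 8 unique fps at rate 8 yields 64fps, so the
    surplus frames per second must then be dropped evenly.
    """
    unique_fps = input_fps / dupe_factor
    exact_rate = target_fps / unique_fps
    rate = math.ceil(exact_rate)                    # DAIN wants an integer rate
    surplus_per_second = unique_fps * rate - target_fps
    return rate, surplus_per_second

rate, surplus = dain_rate(24, 3, 60)
print(rate, surplus)  # 8 4.0
```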

Any discussion is welcome.

@CyFeng16 CyFeng16 added the help wanted Extra attention is needed label May 18, 2020
@re2cc
Contributor

re2cc commented May 18, 2020

I honestly did not know about that; well, I guess you can always learn something new.

The idea of enhancing 8fps animation all the way to 60fps is a bit alarming... it does not look realistic at all.
It would be great if the model could generate frames that efficiently, but I honestly doubt that, in its current state, it can do that kind of generation.
Maybe it would be more feasible to aim for 30 "real" fps?
I really do not know whether that would actually be an improvement, or whether it would be better to try for the full 60 frames.

Either way, any improvement or advance is something we can celebrate and applaud.

@Zotikus1001

Zotikus1001 commented May 19, 2020

Anime is tricky to interpolate. If something moves too far from one frame to the next, it won't look good.
However, it can yield great results depending on the source material. Here's an example: http://palatina.feste-ip.net:10401/f/9c309add73e14e948a36/

My thoughts were like this:
Interpolate the source 24fps anime video back into the same 24fps, removing and interpolating only the dupes. Within the same video there can be 2, 3, or even 4 dupes in a row; it's not consistent.

So, we would need to detect how many dupes there are between every two distinct frames it finds,
and adjust the time step accordingly:

  • For 1 dupe: ts = 0.5
  • For 2 dupes: ts = 1/3 ≈ 0.333...
  • For 3 dupes: ts = 0.25
  • For 4 dupes: ts = 0.2
    etc. (in general, ts = 1 / (dupes + 1))
frames = sorted(os.listdir(folder_x))   # frame filenames, in playback order

j = 0
while j < len(frames) - 1:
    compare_frame = frames[j]
    dupes = 0
    diff_frame = None

    # scan forward until we find a genuinely different frame
    for i in range(1, len(frames) - j):
        similarity = check_similarity(compare_frame, frames[j + i])
        if similarity < 99:
            diff_frame = frames[j + i]
            break
        dupes += 1

    if diff_frame is None:   # only trailing dupes left
        break

    # 1 dupe -> ts 0.5, 2 dupes -> 1/3, 3 dupes -> 0.25, 4 dupes -> 0.2, ...
    ts = 1.0 / (dupes + 1)
    interpolate_frames(compare_frame, diff_frame, ts)

    j += dupes + 1           # skip past the dupes we just replaced

That's my logic on this.
Something like that. :P

Now, after we have the 24fps anime video without any dupes, we can finally interpolate it normally to 2x, 4x, etc.

And here's a way to compare frame similarity that gives me pretty good, consistent results:
https://pastebin.com/iJ6VAK3k
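The linked pastebin is not reproduced here, but as a rough illustration (my own metric, not necessarily the linked one), a similarity percentage can be computed from the mean absolute pixel difference:

```python
def check_similarity(frame_a, frame_b):
    """Similarity in percent between two equal-size grayscale frames,
    given as flat sequences of 0-255 pixel values.

    100.0 means pixel-identical; anything under ~99 would be treated as
    a genuinely different frame in the dupe-detection loop above.
    """
    if len(frame_a) != len(frame_b):
        return 0.0
    # mean absolute difference, normalized from the 0..255 range
    mad = sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)
    return 100.0 * (1.0 - mad / 255.0)

print(check_similarity([10, 20, 30], [10, 20, 30]))  # 100.0
```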

@CyFeng16 CyFeng16 closed this as completed Jun 4, 2020
@Zotikus1001

Hello again,
So, while trying to implement what I said above, I realized automated frame-dupe detection is useless,
because most of the time the dupes aren't 100% identical: some dupes shift the background, others move certain characters on screen while holding the rest, etc.

So the only real way to do this is to manually extract the frames and manually delete all the frames we think are dupes.

After deleting all the dupes, I made a simple Python script that finds how many dupes I deleted for each frame and renames the frame from "000001.png" to "000001_1.png":

_1 for 1 dupe
_2 for 2 dupes
_3 for 3 dupes
_4 for 4 dupes

4 dupes in a row is almost impossible to find in anime; in my test, the most I found in a row was 2.

Here is the script: https://pastebin.com/VEGZhZYz
I'm sorry for the spaghetti code; I'm still new to Python.
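The pastebin script itself isn't reproduced here; below is a minimal sketch of the same idea (my own reconstruction, assuming the surviving frames keep their original six-digit numbering, so the gap between consecutive surviving numbers equals the number of dupes deleted between them):

```python
import os
import re

def tag_dupe_counts(kept_dir):
    """Rename surviving frames 'NNNNNN.png' -> 'NNNNNN_k.png', where k is
    how many consecutive dupes were deleted right after that frame
    (inferred from the gap in the original numbering).
    """
    names = sorted(n for n in os.listdir(kept_dir)
                   if re.fullmatch(r"\d{6}\.png", n))
    numbers = [int(n[:-4]) for n in names]
    for name, num, nxt in zip(names, numbers, numbers[1:]):
        dupes = nxt - num - 1
        if dupes > 0:  # e.g. 000001.png followed by 000003.png -> _1
            os.rename(os.path.join(kept_dir, name),
                      os.path.join(kept_dir, f"{name[:-4]}_{dupes}.png"))
```

The last surviving frame is left untouched, since there is no following frame to infer a gap from.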

Now simply extract a zip file containing all the frames, ready for interpolation, into the Input folder
and run inference_dain. Based on the _1, _2, etc. suffix in each frame's name, it changes the time_step accordingly for every frame.

The output will be 24fps like the original video, but with all the dupes gone and the interpolated frames in their place.

Now that we have a 24fps video with no dupes, we can interpolate it normally, like any other video, to 2x or 4x.

You can see the changes I made yesterday on my fork to be able to make this work.

Everything works great as long as it only needs to interpolate 1 dupe each time.

But when it tries to interpolate the first 2 dupes, I get this error:

 24% 354/1460 [05:27<52:55,  2.87s/it]--------Interpolating 1 Missing Frame--------
 24% 355/1460 [05:30<52:46,  2.87s/it]--------Interpolating 1 Missing Frame--------
 25% 358/1460 [05:33<37:09,  2.02s/it]--------Interpolating 1 Missing Frame--------
 25% 361/1460 [05:36<30:04,  1.64s/it]--------Interpolating 2 Missing Frames--------
Traceback (most recent call last):
  File "vfi_helper.py", line 243, in <module>
    input_dir=args.src, output_dir=args.dst, time_step=args.time_step,
  File "vfi_helper.py", line 66, in continue_frames_insertion_helper
    time_step=newtimestep,
  File "vfi_helper.py", line 120, in frames_insertion_helper
    y_0[-1][0::2, 0::2, :] = ym_0_0[i]
IndexError: list index out of range

I guess I still need to change something in the fork, but I don't know what.

This is the colab command I used:
!python3 inference_dain.py -ts 0.5 -ifps 23.976 -aff True

@CyFeng16 CyFeng16 reopened this Jun 18, 2020
@CyFeng16
Owner Author

@Brokensilence
Sorry, mate; as you may have noticed, I am a little busy these days (blame the pandemic). TBH, I'm kind of jealous that you have time to delve into something. :)
If you want to put something awesome in this repository, don't hesitate to email me, and you're welcome to join the collaborators' team.
