Add frame blending option on top of the interpolated frames? #137

Closed · deama opened this issue Apr 6, 2021 · 6 comments


deama commented Apr 6, 2021

Could a feature be added wherein the next frame is put on top of the interpolated frame, so there's an afterimage of the next frame?


deama commented Apr 6, 2021

OK, so I started playing around and managed to add my own modification to give it an afterimage effect.
At around line 220, where it says `output = make_inference(I0, I1, args.exp)`,
put the following right after that line:

    # Blend each interpolated frame with the source frame to create the afterimage
    carry = output
    output = []
    alpha = 0.2       # weight of the source frame (the afterimage)
    beta = 1 - alpha  # weight of the interpolated frame
    for mid in carry:
        # (C, H, W) float tensor in [0, 1] -> uint8 (H, W, C) image, cropped to frame size
        mid = (mid[0] * 255.).byte().cpu().numpy().transpose(1, 2, 0)
        interp = mid[:h, :w]
        # Blend in BGR for cv2.addWeighted, flip back to RGB, and rebuild the tensor
        blended = cv2.addWeighted(frame[:, :, ::-1], alpha, interp[:, :, ::-1], beta, 0)[:, :, ::-1].copy()
        output.append(torch.from_numpy(np.transpose(blended, (2, 0, 1)))
                      .to(device, non_blocking=True).unsqueeze(0).float() / 255.)
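A note on the `[:, :, ::-1]` slices in the snippet: they reverse the channel axis (RGB to BGR and back) for OpenCV. Since the weighted sum applies the same weights to every channel, the two flips cancel out; a quick NumPy check with dummy data (the array names here are mine, just for illustration):

```python
import numpy as np

# Dummy 2x2 "RGB" image with distinct per-channel values
img = np.arange(12, dtype=np.uint8).reshape(2, 2, 3)

bgr = img[:, :, ::-1]        # reverse the channel axis (RGB -> BGR)
roundtrip = bgr[:, :, ::-1]  # reverse again (BGR -> RGB)

print(np.array_equal(roundtrip, img))  # True: the two flips cancel out
```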

You can adjust the alpha value to tweak the transparency of the interpolation: higher values make the interpolated frame less visible and the afterimage more visible.
I've done some tests, and this should make the interpolation less erratic in certain choppy scenes (anime).
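For reference, `cv2.addWeighted(a, alpha, b, beta, 0)` is just a per-pixel weighted sum `alpha*a + beta*b`, rounded and saturated back to uint8. A minimal NumPy-only sketch of the same blend (the function name and dummy frames are mine, not from the patch):

```python
import numpy as np

def blend_afterimage(frame, interp, alpha=0.2):
    """alpha * frame + (1 - alpha) * interp, like cv2.addWeighted with gamma=0."""
    out = frame.astype(np.float32) * alpha + interp.astype(np.float32) * (1.0 - alpha)
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)

# Dummy 2x2 RGB frames: a black source frame and a white interpolated frame
frame = np.zeros((2, 2, 3), dtype=np.uint8)
interp = np.full((2, 2, 3), 255, dtype=np.uint8)

blended = blend_afterimage(frame, interp, alpha=0.2)
print(blended[0, 0, 0])  # 204, i.e. 0.8 * 255 rounded
```

With alpha = 0.2 the result is 80% interpolated frame and 20% source frame, which is why higher alpha values make the afterimage stronger.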


n00mkrad commented Apr 6, 2021

Don't really see the point - Why interpolate something and then hide the interpolated frames under blended frames?


deama commented Apr 6, 2021

> Don't really see the point - Why interpolate something and then hide the interpolated frames under blended frames?

It doesn't hide it; it creates a kind of slight afterimage. The interpolated frame is still there, just slightly transparent, and you can also see a bit of the next frame. I noticed this helps with how interpolated frames look during snappy movements, like in anime. Sometimes thin lines become erratic when they get interpolated, and this helps with that too.


BakaBTZ commented Apr 22, 2021

> OK, so I started playing around and managed to add my own modification to give it an afterimage effect. […]

Hey there, great addition! I'm pretty new to coding in general; would you mind telling me which file I have to change the code in?


deama commented Apr 23, 2021

> Hey there, great addition! I'm pretty new to coding in general; would you mind telling me which file I have to change the code in?

`inference_video.py`

hzwer (owner) commented Nov 12, 2021

I think we implemented a proper processing method in #207.
