Can this be used to encode to allow for later playback? #55
Comments
Bump. I tried using mpv's record-file option, but it looks like it just outputs the original input resolution, not the upscaled post-shader resolution the player displays. Does anyone know of any tools for rendering videos with shaders? One I found that might work is https://github.com/polyfloyd/shady (is there a meta tag for issues?) |
There will be a Python/TensorFlow version coming soon. You will be able to use ffmpeg-python to encode video if you wish (even if it goes against the nature of Anime4K). |
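To make the ffmpeg-python route concrete, here is a minimal sketch that decodes frames to raw RGB, runs them through a hypothetical `upscale_frame` callback (standing in for a trained model), and re-encodes the result; the helper name, codec, and pixel formats are assumptions, not part of Anime4K, and audio handling is omitted:

```python
# Minimal sketch, assuming ffmpeg-python and numpy are installed; upscale_frame
# is a hypothetical callback (e.g. a trained model) that maps an (h, w, 3)
# uint8 frame to an (h*scale, w*scale, 3) frame. Audio is not carried over.
import numpy as np
import ffmpeg

def encode_upscaled(in_file, out_file, upscale_frame, scale=2):
    # Probe the source for its dimensions and frame rate.
    info = ffmpeg.probe(in_file)
    video = next(s for s in info['streams'] if s['codec_type'] == 'video')
    w, h = int(video['width']), int(video['height'])

    # Decoder process: source file -> raw RGB frames on stdout.
    decoder = (
        ffmpeg.input(in_file)
        .output('pipe:', format='rawvideo', pix_fmt='rgb24')
        .run_async(pipe_stdout=True)
    )
    # Encoder process: raw upscaled RGB frames on stdin -> H.264 file.
    encoder = (
        ffmpeg.input('pipe:', format='rawvideo', pix_fmt='rgb24',
                     s=f'{w * scale}x{h * scale}',
                     framerate=video['r_frame_rate'])
        .output(out_file, pix_fmt='yuv420p', vcodec='libx264')
        .overwrite_output()
        .run_async(pipe_stdin=True)
    )

    frame_size = w * h * 3  # bytes per rgb24 frame
    while True:
        raw = decoder.stdout.read(frame_size)
        if len(raw) < frame_size:
            break
        frame = np.frombuffer(raw, np.uint8).reshape(h, w, 3)
        encoder.stdin.write(upscale_frame(frame).astype(np.uint8).tobytes())

    encoder.stdin.close()
    decoder.wait()
    encoder.wait()
```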
Awesome! Yeah, I know it's against the whole point, but if I want to watch on my iPad, etc. (using Plex), I don't know of any way to apply the shader live on those platforms. |
Thank you @bloc97 appreciate it :) |
Are there any updates on this yet? |
I can't promise a deadline due to the current circumstances, but I think I should at least be able to start working on a TensorFlow/Python port now that v3.0 is released. Edit: On a different note, you should be able to achieve similar, if not better, results using the already available AviSynth filters and the numerous waifu2x encoders out there. But of course, they are pretty slow and not suitable for real-time use. |
(The following content was translated from Chinese to English using DeepL) |
Any update, @bloc97? I'm sure you're busy and I don't want to bug you, but I thought it would be good to check in a year later and see if there's any update. Thank you! |
If I remember correctly, it is already possible to encode video using mpv on Linux after applying shaders. I had planned to release the TensorFlow model much earlier, but I had no time and my training machine broke. I only recently obtained a personal machine for training and am in the process of updating the code to TensorFlow 2.5 (the old code was for tf 1.15). |
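Along those lines, here is a hedged sketch of what mpv-based encoding could look like when driven from Python; the flags assume a recent mpv build with encoding support, the `gpu` video filter is documented as experimental, and the shader filename is only an example:

```python
# Hedged sketch (not an official workflow): calling mpv's encoding mode from
# Python to bake GLSL shaders into a new file. The gpu video filter is
# experimental, and the shader filename below is illustrative only.
import subprocess

subprocess.run([
    "mpv", "input.mkv",
    "--glsl-shaders=Anime4K_Upscale_CNN_x2_M.glsl",  # illustrative shader name
    "--vf=gpu=w=3840:h=2160",  # render through the GPU pipeline at the target size
    "--o=output.mkv",          # --o switches mpv into encoding mode
    "--ovc=libx264",           # video codec for the output (audio options omitted)
], check=True)
```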
I have a 3090, 64 GB of RAM, and an i7-9770k; happy to train the model if you need. |
@arces I really appreciate your offer! My machine is working, albeit not as powerful as yours... However, even the biggest Anime4K network is very lightweight (under 20k parameters) and trains on pretty much anything. Training time is also very short thanks to SOTA training methods such as adaptive momentum, cyclical learning rates, and super-convergence. The UL version trains successfully in less than 2 hours, and the M version converges in less than 10 minutes. Edit: I'm working on cleaning up the code and allowing it to be run within a standard tf-gpu docker container. |
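For anyone curious, a loose illustration of the cyclical learning rate / super-convergence idea in tf.keras follows; the rates and epoch counts are assumptions for demonstration, not the values actually used to train Anime4K:

```python
# Sketch of a one-cycle learning-rate schedule in the spirit of
# super-convergence; lr_min, lr_max, and total_epochs are assumed values.
import math
import tensorflow as tf

def one_cycle_lr(epoch, total_epochs=60, lr_min=1e-4, lr_max=1e-2):
    # Cosine ramp up over the first half of training, cosine decay over the second.
    t = epoch / max(total_epochs - 1, 1)
    if t < 0.5:
        return lr_min + (lr_max - lr_min) * (1 - math.cos(math.pi * t * 2)) / 2
    return lr_min + (lr_max - lr_min) * (1 + math.cos(math.pi * (t - 0.5) * 2)) / 2

callback = tf.keras.callbacks.LearningRateScheduler(lambda epoch: one_cycle_lr(epoch))
# Pairing this with an Adam-style optimizer in model.fit(..., callbacks=[callback])
# loosely covers the "adaptive momentum + cyclical LR" combination mentioned above.
```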
Hi @bloc97, thanks for your excellent work. I tried it and it is really fast. I want to train the same model on my own dataset. Could you please release the model structure of the M version? The model structure alone is enough. Thanks! |
@barneymaydance The M version is tiny. It's simply five 3x3 Conv2D layers (4 features wide) with CReLU activation followed by a Depth to Space upsampling layer. The network predicts residuals instead of the full image, just like in VDSR (CVPR16). |
@barneymaydance Here's the snippet of code that I used:

```python
import tensorflow as tf

def SR2Model(input_channels=1, features=4, block_depth=4):
    # Height and width are left dynamic; channels default to 1 (luma only).
    input_shape = [None, None, input_channels]
    input_lr = tf.keras.layers.Input(shape=input_shape)
    # Bilinear 2x upsample of the input, used as the base for the residual.
    upsampled_lr = tf.keras.layers.UpSampling2D(size=(2, 2), interpolation='bilinear')(input_lr)
    x = input_lr
    for i in range(block_depth):
        x = tf.keras.layers.Conv2D(features, (3, 3), padding='same', kernel_initializer='he_normal')(x)
        x = tf.nn.crelu(x)  # CReLU doubles the channel count to 2*features
    # Predict 4 sub-pixel channels per output channel, near-zero initialized...
    x = tf.keras.layers.Conv2D(input_channels*4, (3, 3), padding='same',
                               kernel_initializer=tf.keras.initializers.RandomNormal(mean=0.0, stddev=0.001))(x)
    # ...then rearrange them into a 2x-resolution residual image.
    x = tf.nn.depth_to_space(x, 2)
    # Residual learning as in VDSR: add the residual to the bilinear upsample.
    x = tf.keras.layers.Add()([x, upsampled_lr])
    model = tf.keras.models.Model(input_lr, x)
    model.summary(line_length=150)
    return model
```
|
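A hypothetical usage example for the snippet above (the input shape and normalization are assumptions):

```python
import numpy as np

# Build the 2x model for a single (luma) channel.
model = SR2Model(input_channels=1, features=4, block_depth=4)

# Stand-in for a batch of low-resolution frames, values in [0, 1].
lr_batch = np.random.rand(1, 270, 480, 1).astype('float32')
sr_batch = model.predict(lr_batch)  # -> shape (1, 540, 960, 1)
```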
Will the Python version for encoding be available for Windows? |
**Is your feature request related to a problem? Please describe.**
A way to encode the video so it can be viewed later on something like Plex.
**Describe the solution you'd like**
A way to encode the video into a 4K-resolution video file that can be viewed later or in other players.
**Describe alternatives you've considered**
I have used it and it works great, but it only works in players that I don't normally use or sometimes can't use.
**Additional context**
A script, or even a workaround such as using a video player that currently works with the plugin, to encode and save the video file. I know you can do some encoding with VLC.