
duralava

duralava is a neural network which can simulate a lava lamp in an infinite loop.

Example videos

These are not recordings of a real lava lamp but "fake" ones generated by duralava. (They might take some time to load.)

Sample animations: out_180, out_170 and out_160, three lava lamp sequences generated by the duralava neural network.

Novelty

duralava can

  • learn a physical process (a lava lamp).
  • generate an arbitrarily long output sequence without diverging, even after hours (tens of thousands of frames).

How it works

Generative Adversarial Networks (GANs) can learn to generate new samples of data. For example, a GAN can be trained to output images of a lava lamp that look as real as possible. To accomplish this, the GAN receives an input vector of normally distributed noise; for duralava this vector has length 64. Based on this random noise vector it generates a lava lamp image. The random vector thus encodes the state of the lava lamp.
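A minimal sketch of such a generator, assuming TensorFlow/Keras (the framework and layer sizes are illustrative assumptions, not necessarily the repository's actual architecture):

import tensorflow as tf

# Hypothetical generator: maps a 64-element noise vector to a 64x64 RGB frame.
def make_generator(latent_dim=64):
    return tf.keras.Sequential([
        tf.keras.layers.Dense(8 * 8 * 128, activation="relu", input_shape=(latent_dim,)),
        tf.keras.layers.Reshape((8, 8, 128)),
        tf.keras.layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu"),  # 16x16
        tf.keras.layers.Conv2DTranspose(32, 4, strides=2, padding="same", activation="relu"),  # 32x32
        tf.keras.layers.Conv2DTranspose(3, 4, strides=2, padding="same", activation="tanh"),   # 64x64x3
    ])

z = tf.random.normal([1, 64])   # the state of the lava lamp
frame = make_generator()(z)     # one generated lava lamp image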

During training, the GAN is presented with both real images of a lava lamp and fake ones, and the generator learns to make the fake ones look as real as possible.
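In code, the standard (non-saturating) GAN losses look roughly like this; the exact loss formulation used for training may differ:

import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def discriminator_loss(real_logits, fake_logits):
    # Real frames should be classified as 1, generated frames as 0.
    return (bce(tf.ones_like(real_logits), real_logits)
            + bce(tf.zeros_like(fake_logits), fake_logits))

def generator_loss(fake_logits):
    # The generator is rewarded when its fakes are classified as real.
    return bce(tf.ones_like(fake_logits), fake_logits)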

For a lava lamp, a whole sequence of images has to be created. This sequence should in principle be infinite, since a lava lamp can run forever. Thus the GAN should learn to output an arbitrarily long sequence of lava lamp images as a video. This is achieved with a recurrent neural network (RNN): the RNN takes the 64-element noise vector of time step t and outputs the 64-element noise vector for time step t+1.
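A sketch of this state transition, assuming a GRU cell (the repository's actual recurrent architecture may differ):

import tensorflow as tf

cell = tf.keras.layers.GRUCell(64)
to_state = tf.keras.layers.Dense(64)  # project the hidden output back to a 64-element vector

z = tf.random.normal([1, 64])         # initial state, drawn from N(0, 1)
hidden = [tf.zeros([1, 64])]          # GRU hidden state

states = []
for t in range(300):                  # arbitrarily many steps
    out, hidden = cell(z, hidden)
    z = to_state(out)                 # noise vector for time step t+1
    states.append(z)                  # each z is fed to the generator to render one frame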

The tricky part is making sure that the state of the lava lamp (the 64-element random noise vector) remains stable. Over time, the distribution of the vector could drift away from a normal distribution, for example toward a mean of 10 and a standard deviation of 52. In that case the output images would no longer be correct, since the GAN was trained to expect a normally distributed input vector. To solve this problem, I make sure during training that the output of the RNN stays normally distributed. This is accomplished by adding penalization terms to the training loss which discourage the noise from diverging from the normal distribution.
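One simple form such penalties can take is moment matching: pushing the mean of the state toward 0 and its standard deviation toward 1. This is a sketch of the idea, not necessarily the exact terms used in training:

import tensorflow as tf

def normality_penalty(z):
    # Penalize deviations from zero mean and unit standard deviation,
    # so z stays a valid input for the generator.
    mean = tf.reduce_mean(z)
    std = tf.math.reduce_std(z)
    return tf.square(mean) + tf.square(std - 1.0)

The penalty is then added to the training loss with a weighting factor.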

Learning

To train a new model, run

python learn.py

Around 10 GB of combined CPU and GPU memory are required. I used Python 3.9.7 and the pip requirements listed in requirements.txt.

Live mode

To generate an output video live use some of the trained weights like this:

python learn.py --weights logs/20220104-213105/weights.180 --mode live

Generating an output video

To generate an output video as an APNG animation file use some of the trained weights like this:

python learn.py --weights logs/20220104-213105/weights.180 --mode video

An APNG named out.png will be created in the current directory. For creating APNGs from a trained neural network, you need to have ffmpeg installed.
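For reference, assembling an APNG from individual frames manually would look roughly like this (the frame naming pattern is hypothetical; -plays 0 makes the animation loop forever):

ffmpeg -framerate 30 -i frame_%04d.png -plays 0 -f apng out.png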

Low-hanging fruit

I trained on a MacBook Air with an M1 SoC with 16 GB of shared memory for CPU and GPU. Thus, memory was the limiting factor in my experiments.

With more memory, one could

  • Increase the resolution (currently 64x64 pixels)
  • Increase the training sequence length (currently 20)
  • Increase the batch size (currently 32)
  • Increase the size of the recurrent neural networks that model the evolution of the lava lamp over time

Dataset

lavalamp.mov contains more than an hour of lava lamp footage at 30 fps and can be freely used for any purpose. The frames directory contains the individual frames of the video, scaled to 64x64 pixels, which I used for training the model.
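The frames can be reproduced from the video with ffmpeg, for example (the output naming is illustrative):

ffmpeg -i lavalamp.mov -vf scale=64:64 frames/%06d.png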
