TensorFlow Demo for Mathematicians and Physicists
TensorFlow demo for Harvard Machine Learning Supergroup. The goal is to get you up and running with rather bare-bones code for three different tasks.
Code written by Jordan Hoffmann (mainly Demo 2 + 3, flat-folding code for Demo 1) and Shruti Mishra (Demo 1)
Given a sheet that has been folded, can a computer tell how many times it was folded? This is relatively straightforward for small crease numbers (code here), but as the number of folds grows, the specific locations of the creases can greatly affect how distinguishable the patterns are. Below, we show a few examples of creased sheets at different fold numbers.
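The fold-counting task above is a standard image-classification setup. As a rough sketch (not the demo's actual architecture), a small CNN with a softmax over possible fold counts could look like the following; the 64x64 input size, the maximum of 8 folds, and the layer widths are all illustrative assumptions:

```python
# Hypothetical sketch of a small CNN that classifies a creased sheet
# by its number of folds. Image size, fold-count range, and layer
# widths are illustrative assumptions, not the demo's actual settings.
import tensorflow as tf

NUM_FOLDS = 8   # assumed maximum fold count, so 8 output classes
IMG_SIZE = 64   # assumed input resolution

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_FOLDS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# One softmax probability vector per input image:
probs = model(tf.zeros([2, IMG_SIZE, IMG_SIZE, 1]))
```

Training would then just be `model.fit` on (image, fold-count) pairs.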
Here, I wanted to cook up a slightly more complicated example that uses a side stream in addition to the typical input. So, in this demo, we tackle a PDE-related problem. We are solving:
Subject to: and: . We solve it on an irregular geometry, and then try to predict the total amplitude within a small region of the solution domain some time later. Specifically, we randomly set X, Y, and Z for each run, but we store these values and pass them to the network as a side stream.
The goal: given the solution at t=6, the location of the original pulse, and its amplitude, say something about the solution at t=6.28. How different do the two look? Below, we plot the solution at both times.
Not totally the same! In this demo, we train a neural network to make predictions, and I tried to use a more typical coding style, like that of a larger project. To make the task quantitative, we predict the summed amplitude in the lower-right quadrant of the second image. Training the network on 5000 examples, a small sample of which are in data_small.zip, we get the results below:
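The side-stream idea above fits naturally into the Keras functional API: one input for the solution field and one for the stored (X, Y, Z) vector, merged before the regression head. This is a minimal sketch under assumed sizes (a 64x64 single-channel field, a 3-vector side stream), not the demo's actual configuration:

```python
# Minimal sketch of a two-input ("side stream") network. The 64x64
# field size and layer widths are assumptions for illustration; the
# side stream carries the pulse location and amplitude (X, Y, Z).
import tensorflow as tf

field = tf.keras.layers.Input(shape=(64, 64, 1), name="solution_t6")
side = tf.keras.layers.Input(shape=(3,), name="pulse_xyz")

# Convolutional trunk over the solution field.
x = tf.keras.layers.Conv2D(16, 3, activation="relu")(field)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(32, 3, activation="relu")(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)

# Merge the image features with the side-stream vector.
merged = tf.keras.layers.Concatenate()([x, side])
h = tf.keras.layers.Dense(64, activation="relu")(merged)
out = tf.keras.layers.Dense(1, name="summed_amplitude")(h)

model = tf.keras.Model(inputs=[field, side], outputs=out)
model.compile(optimizer="adam", loss="mse")

# A batch of 4 dummy examples produces 4 scalar predictions:
pred = model([tf.zeros([4, 64, 64, 1]), tf.zeros([4, 3])])
```

The single linear output unit matches the scalar regression target (summed amplitude in the lower-right quadrant).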
Demo problem: recoloring an image given colored training frames, using a lava lamp from this video for training:
Can we take an image like the one on the left, separate the two lava lamps, and then accurately recolor one of them when fed video of the other side? Note: I ended up using the one on the right for training.
To do this, we try a network that uses conv2d_transpose layers. In the figure below, we show the input (grayscale), the result from early in training, somewhere near the middle of training, and the end of training. At the bottom, we show the target image.
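As a rough sketch of the idea (assumed sizes and channel widths, not the demo's actual network), an encoder-decoder can map a grayscale frame to an RGB recoloring, with Conv2DTranspose layers doing the upsampling back to the input resolution:

```python
# Hypothetical encoder-decoder for grayscale -> RGB recoloring. The
# 64x64 frame size and channel widths are illustrative assumptions.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    # Encoder: strided convolutions halve the spatial resolution twice.
    tf.keras.layers.Conv2D(32, 3, strides=2, padding="same",
                           activation="relu"),
    tf.keras.layers.Conv2D(64, 3, strides=2, padding="same",
                           activation="relu"),
    # Decoder: transposed convolutions upsample back to 64x64.
    tf.keras.layers.Conv2DTranspose(64, 3, strides=2, padding="same",
                                    activation="relu"),
    tf.keras.layers.Conv2DTranspose(32, 3, strides=2, padding="same",
                                    activation="relu"),
    # Three output channels; sigmoid keeps RGB values in [0, 1].
    tf.keras.layers.Conv2D(3, 3, padding="same", activation="sigmoid"),
])
model.compile(optimizer="adam", loss="mse")

# One grayscale frame in, one RGB frame of the same resolution out:
rgb = model(tf.zeros([1, 64, 64, 1]))
```

Trained with an MSE loss against the colored target frames, the network's outputs progress from blurry early in training toward the target, as in the figure above.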