diff --git a/_assets/blogposts/2019-03-05-dp-vs-rl/bptt.png b/_assets/blogposts/2019-03-05-dp-vs-rl/bptt.png
new file mode 100644
index 00000000..d11f1b26
Binary files /dev/null and b/_assets/blogposts/2019-03-05-dp-vs-rl/bptt.png differ
diff --git a/_assets/blogposts/2019-03-05-dp-vs-rl/cartpole-flow.png b/_assets/blogposts/2019-03-05-dp-vs-rl/cartpole-flow.png
new file mode 100644
index 00000000..dcda8ce0
Binary files /dev/null and b/_assets/blogposts/2019-03-05-dp-vs-rl/cartpole-flow.png differ
diff --git a/_assets/blogposts/2019-03-05-dp-vs-rl/cartpole.gif b/_assets/blogposts/2019-03-05-dp-vs-rl/cartpole.gif
new file mode 100644
index 00000000..90833bf1
Binary files /dev/null and b/_assets/blogposts/2019-03-05-dp-vs-rl/cartpole.gif differ
diff --git a/_assets/blogposts/2019-03-05-dp-vs-rl/pendulum-dp.gif b/_assets/blogposts/2019-03-05-dp-vs-rl/pendulum-dp.gif
new file mode 100644
index 00000000..1c22143f
Binary files /dev/null and b/_assets/blogposts/2019-03-05-dp-vs-rl/pendulum-dp.gif differ
diff --git a/_assets/blogposts/2019-03-05-dp-vs-rl/pendulum-training.gif b/_assets/blogposts/2019-03-05-dp-vs-rl/pendulum-training.gif
new file mode 100644
index 00000000..dccbf1a8
Binary files /dev/null and b/_assets/blogposts/2019-03-05-dp-vs-rl/pendulum-training.gif differ
diff --git a/_assets/blogposts/2019-03-05-dp-vs-rl/trebuchet-basic.gif b/_assets/blogposts/2019-03-05-dp-vs-rl/trebuchet-basic.gif
new file mode 100644
index 00000000..21476ec2
Binary files /dev/null and b/_assets/blogposts/2019-03-05-dp-vs-rl/trebuchet-basic.gif differ
diff --git a/_assets/blogposts/2019-03-05-dp-vs-rl/trebuchet-flow.png b/_assets/blogposts/2019-03-05-dp-vs-rl/trebuchet-flow.png
new file mode 100644
index 00000000..e577bc91
Binary files /dev/null and b/_assets/blogposts/2019-03-05-dp-vs-rl/trebuchet-flow.png differ
diff --git a/_assets/blogposts/2019-03-05-dp-vs-rl/trebuchet-hit.gif b/_assets/blogposts/2019-03-05-dp-vs-rl/trebuchet-hit.gif
new file mode 100644
index 00000000..cf453f1f
Binary files /dev/null and b/_assets/blogposts/2019-03-05-dp-vs-rl/trebuchet-hit.gif differ
diff --git a/_assets/blogposts/2019-03-05-dp-vs-rl/trebuchet-miss.gif b/_assets/blogposts/2019-03-05-dp-vs-rl/trebuchet-miss.gif
new file mode 100644
index 00000000..b6c97ab9
Binary files /dev/null and b/_assets/blogposts/2019-03-05-dp-vs-rl/trebuchet-miss.gif differ
diff --git a/_assets/blogposts/2019-03-05-dp-vs-rl/trebuchet-wind.gif b/_assets/blogposts/2019-03-05-dp-vs-rl/trebuchet-wind.gif
new file mode 100644
index 00000000..fb838e15
Binary files /dev/null and b/_assets/blogposts/2019-03-05-dp-vs-rl/trebuchet-wind.gif differ
diff --git a/_assets/blogposts/2020-06-29-acclerating-flux-torch/combined_benchmarks_2.png b/_assets/blogposts/2020-06-29-acclerating-flux-torch/combined_benchmarks_2.png
new file mode 100644
index 00000000..17d0a0b3
Binary files /dev/null and b/_assets/blogposts/2020-06-29-acclerating-flux-torch/combined_benchmarks_2.png differ
diff --git a/_assets/blogposts/2020-06-29-acclerating-flux-torch/resnet101.png b/_assets/blogposts/2020-06-29-acclerating-flux-torch/resnet101.png
new file mode 100644
index 00000000..2a27e6ea
Binary files /dev/null and b/_assets/blogposts/2020-06-29-acclerating-flux-torch/resnet101.png differ
diff --git a/_assets/blogposts/2020-12-20-Flux3D/bm_metrics.png b/_assets/blogposts/2020-12-20-Flux3D/bm_metrics.png
new file mode 100644
index 00000000..1a878528
Binary files /dev/null and b/_assets/blogposts/2020-12-20-Flux3D/bm_metrics.png differ
diff --git a/_assets/blogposts/2020-12-20-Flux3D/bm_pcloud.png b/_assets/blogposts/2020-12-20-Flux3D/bm_pcloud.png
new file mode 100644
index 00000000..6d4d0cec
Binary files /dev/null and b/_assets/blogposts/2020-12-20-Flux3D/bm_pcloud.png differ
diff --git a/_assets/blogposts/2020-12-20-Flux3D/bm_trimesh.png b/_assets/blogposts/2020-12-20-Flux3D/bm_trimesh.png
new file mode 100644
index 00000000..5f2ffe21
Binary files /dev/null and b/_assets/blogposts/2020-12-20-Flux3D/bm_trimesh.png differ
diff --git a/_assets/blogposts/2020-12-20-Flux3D/fitmesh_anim.gif b/_assets/blogposts/2020-12-20-Flux3D/fitmesh_anim.gif
new file mode 100644
index 00000000..ba01ece5
Binary files /dev/null and b/_assets/blogposts/2020-12-20-Flux3D/fitmesh_anim.gif differ
diff --git a/_assets/blogposts/2020-12-20-Flux3D/visualize.png b/_assets/blogposts/2020-12-20-Flux3D/visualize.png
new file mode 100644
index 00000000..1c1ae731
Binary files /dev/null and b/_assets/blogposts/2020-12-20-Flux3D/visualize.png differ
diff --git a/_assets/blogposts/2020-12-20-Flux3D/visualize_anim.gif b/_assets/blogposts/2020-12-20-Flux3D/visualize_anim.gif
new file mode 100644
index 00000000..b86a0525
Binary files /dev/null and b/_assets/blogposts/2020-12-20-Flux3D/visualize_anim.gif differ
diff --git a/_assets/blogposts/2021-12-1-flux-numfocus/flux.png b/_assets/blogposts/2021-12-1-flux-numfocus/flux.png
new file mode 100644
index 00000000..c54befa0
Binary files /dev/null and b/_assets/blogposts/2021-12-1-flux-numfocus/flux.png differ
diff --git a/_assets/blogposts/2021-12-1-flux-numfocus/flux_numfocus.png b/_assets/blogposts/2021-12-1-flux-numfocus/flux_numfocus.png
new file mode 100644
index 00000000..e43b32a7
Binary files /dev/null and b/_assets/blogposts/2021-12-1-flux-numfocus/flux_numfocus.png differ
diff --git a/_assets/tutorialposts/2021-10-08-dcgan-mnist/cat_gan.png b/_assets/tutorialposts/2021-10-08-dcgan-mnist/cat_gan.png
new file mode 100644
index 00000000..a3b59c5a
Binary files /dev/null and b/_assets/tutorialposts/2021-10-08-dcgan-mnist/cat_gan.png differ
diff --git a/_assets/tutorialposts/2021-10-08-dcgan-mnist/output.gif b/_assets/tutorialposts/2021-10-08-dcgan-mnist/output.gif
new file mode 100644
index 00000000..33435edb
Binary files /dev/null and b/_assets/tutorialposts/2021-10-08-dcgan-mnist/output.gif differ
diff --git a/_layout/navbar.html b/_layout/navbar.html
index 9c1692a2..f0ee46ec 100644
--- a/_layout/navbar.html
+++ b/_layout/navbar.html
@@ -13,7 +13,7 @@
Getting Started
- Docs
+ Docs
Blog
diff --git a/blogposts/2019-03-05-dp-vs-rl.md b/blogposts/2019-03-05-dp-vs-rl.md
index 4491d292..bf89897f 100755
--- a/blogposts/2019-03-05-dp-vs-rl.md
+++ b/blogposts/2019-03-05-dp-vs-rl.md
@@ -10,7 +10,7 @@ We've discussed the idea of [differentiable programming](https://fluxml.ai/2019/
Differentiation is what makes deep learning tick; given a function $y = f(x)$ we use the gradient $\frac{dy}{dx}$ to figure out how a change in $x$ will affect $y$. Despite the mathematical clothing, gradients are actually a very general and intuitive concept. Forget the formulas you had to stare at in school; let's do something more fun, like throwing stuff.
-
+
When we throw things with a trebuchet, our $x$ represents a setting (say, the size of the counterweight, or the angle of release), and $y$ is the distance the projectile travels before landing. If you're trying to aim, the gradient tells you something very useful – whether a change in aim will increase or decrease the distance. To maximise distance, just follow the gradient.
@@ -31,19 +31,19 @@ Now we have that, let's do something interesting with it.
A simple way to use this is to aim the trebuchet at a target, using gradients to fine-tune the angle of release; this kind of thing is common under the name of _parameter estimation_, and we've [covered examples like it before](https://julialang.org/blog/2019/01/fluxdiffeq). We can make things more interesting by going meta: instead of aiming the trebuchet given a single target, we'll optimise a neural network that can aim it given _any_ target. Here's how it works: the neural net takes two inputs, the target distance in metres and the current wind speed. The network spits out trebuchet settings (the mass of the counterweight and the angle of release) that get fed into the simulator, which calculates the achieved distance. We then compare to our target, and _backpropagate through the entire chain_, end to end, to adjust the weights of the network. Our "dataset" is a randomly chosen set of targets and wind speeds.
-
+
A nice property of this simple model is that training it is _fast_, because we've expressed exactly what we want from the model in a fully differentiable way. Initially it looks like this:
-
+
After about five minutes of training (on a single core of my laptop's CPU), it looks like this:
-
+
If you want to try pushing it, turn up the wind speed:
-
+
It's only off by 16cm, or about 0.3%.
@@ -55,7 +55,7 @@ This is about the simplest possible control problem, which we use mainly for ill
A more recognisable control problem is [CartPole](https://gym.openai.com/envs/CartPole-v0/), the "hello world" for reinforcement learning. The task is to learn to balance an upright pole by nudging its base left or right. Our setup is broadly similar to the trebuchet case: a [Julia implementation](https://github.com/tejank10/Gym.jl) means we can directly treat the reward produced by the environment as a loss. ∂P allows us to switch seamlessly from model-free to model-based RL.
-
+
The astute reader may notice a snag. The action space for cartpole – nudge left or right – is discrete, and therefore not differentiable. We solve this by introducing a _differentiable discretisation_, defined [like so](https://github.com/FluxML/model-zoo/blob/cdda5cad3e87b216fa67069a5ca84a3016f2a604/games/differentiable-programming/cartpole/DiffRL.jl#L32):
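The idea can be sketched in a few lines. This is a minimal illustration of a straight-through discretisation, not the exact definition linked above; it assumes Zygote for the custom adjoint:

```julia
# Minimal sketch of a differentiable discretisation (a straight-through
# estimator). The forward pass makes a hard, non-differentiable choice...
using Zygote

discretise(x::Real) = x >= 0 ? 1.0 : -1.0

# ...while the backward pass lets the gradient through unchanged, as if
# `discretise` were the identity function.
Zygote.@adjoint discretise(x::Real) = discretise(x), Δ -> (Δ,)

gradient(x -> discretise(x) * 3, 0.2)   # → (3.0,) rather than (0.0,)
```

Without the adjoint, the gradient would be zero almost everywhere and learning would stall; with it, the upstream signal still tells the network which direction to move.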
@@ -74,22 +74,22 @@ In other words, we force the gradient to behave as if $f$ were the identity func
The results speak for themselves. Where RL methods need to train for hundreds of episodes before solving the problem, the ∂P model only needs around 5 episodes to win conclusively.
-
+
## The Pendulum & Backprop through Time
An important aim for RL is to handle _delayed reward_, when an action doesn't help us until several steps in the future. ∂P allows this too, and in a very familiar way: when the environment is differentiable, we can actually train the agent using backpropagation through time, just like a recurrent net! In this case the environmental state becomes the "hidden state" that changes between time steps.
-
+
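The recurrent-net analogy above can be made concrete. This is a hypothetical sketch, not the model-zoo code: `step` is a stand-in for a real differentiable simulator, and the one-parameter linear "policy" stands in for the agent network.

```julia
# Sketch of backprop through time over a differentiable environment.
using Zygote

# Toy environment: the state decays towards zero and is pushed by the action;
# the per-step cost penalises distance from the target state 1.0 and effort.
step(state, action) = (0.9 * state + 0.1 * action,
                       (state - 1.0)^2 + 0.01 * action^2)

function rollout_loss(θ, state, T)
    total = 0.0
    for t in 1:T
        action = θ * state              # a one-parameter linear "policy"
        state, cost = step(state, action)  # state plays the RNN "hidden state" role
        total += cost
    end
    return total
end

# Gradients flow back through every time step of the rollout.
gradient(θ -> rollout_loss(θ, 0.5, 20), 0.0)
```

Because the whole rollout is one differentiable program, a delayed cost at step twenty still produces a gradient for the action taken at step one.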
To demonstrate this technique we looked at the [pendulum](https://github.com/openai/gym/wiki/Pendulum-v0) environment, where the task is to swing a pendulum until it stands upright, keeping it balanced with minimal effort. This is hard for RL models; after around 20 episodes of training the problem is solved, but often the route to a solution is visibly sub-optimal. In contrast, BPTT can beat the [RL leaderboard](https://github.com/openai/gym/wiki/Leaderboard#pendulum-v0) in _a single episode of training_. It's instructive to actually watch this episode unfold; at the beginning of the recording the strategy is random, and the model improves over time. The pace of learning is almost alarming.
-
+
Despite only experiencing a single episode, the model generalises well to handle any initial angle, and has something pretty close to the optimal strategy. When restarted, the model looks more like this.
-
+
This is just the beginning; we'll get the real wins applying ∂P to environments that are too hard for RL to work with at all, where rich simulations and models already exist (as in much of engineering and the sciences), and where interpretability is an important factor (as in medicine).
diff --git a/blogposts/2020-06-29-acclerating-flux-torch.md b/blogposts/2020-06-29-acclerating-flux-torch.md
index 28c3d432..9144ce75 100755
--- a/blogposts/2020-06-29-acclerating-flux-torch.md
+++ b/blogposts/2020-06-29-acclerating-flux-torch.md
@@ -12,8 +12,8 @@ For popular object detection models - ResNet50, ResNet101 and VGG19 - we compare
~~~
-
-
+
+
~~~
diff --git a/blogposts/2020-12-20-Flux3D.md b/blogposts/2020-12-20-Flux3D.md
index 07cf95a0..ef719dba 100644
--- a/blogposts/2020-12-20-Flux3D.md
+++ b/blogposts/2020-12-20-Flux3D.md
@@ -12,7 +12,7 @@ Performing 3D vision tasks involves preparing datasets to fit a certain represent
~~~
-
+
~~~
@@ -36,9 +36,9 @@ Kaolin is a popular 3D vision library based on PyTorch. Flux3D.jl is overall fas
~~~
-
-
-
+
+
+
~~~
@@ -158,7 +158,7 @@ Additionally, 3D structures and all relevant transforms, as well as metrics, are
~~~
-
+
~~~
@@ -183,7 +183,7 @@ julia> vbox(
~~~
-
+
~~~
diff --git a/blogposts/2021-12-1-flux-numfocus.md b/blogposts/2021-12-1-flux-numfocus.md
index 83819a55..23a99983 100644
--- a/blogposts/2021-12-1-flux-numfocus.md
+++ b/blogposts/2021-12-1-flux-numfocus.md
@@ -6,7 +6,7 @@ author = "Dhairya Gandhi, Logan Kilpatrick"
~~~
-
+
~~~
diff --git a/tutorialposts/2021-10-08-dcgan-mnist.md b/tutorialposts/2021-10-08-dcgan-mnist.md
index 55533cf0..0eefb66a 100644
--- a/tutorialposts/2021-10-08-dcgan-mnist.md
+++ b/tutorialposts/2021-10-08-dcgan-mnist.md
@@ -11,7 +11,7 @@ This is a beginner level tutorial for generating images of handwritten digits us
A GAN is composed of two sub-models, the **generator** and the **discriminator**, acting against one another. The generator can be considered as an artist who draws (generates) new images that look real, whereas the discriminator is a critic who learns to tell real images apart from fakes.
-
+
The GAN starts with a generator and discriminator which have very little or no idea about the underlying data. During training, the generator progressively becomes better at creating images that look real, while the discriminator becomes better at telling them apart. The process reaches equilibrium when the discriminator can no longer distinguish real images from fakes.
@@ -25,7 +25,7 @@ This tutorial demonstrates the process of training a DC-GAN on the [MNIST datase
~~~
-
+
~~~
@@ -361,7 +361,7 @@ save("./output.gif", gif_mat)
```
-
+
## Resources & References