Differentiable Programming Handbook

Differentiable implementations of common computer science algorithms.

Trying out the TensorFlow 2.0 GradientTape during the pandemic.

Motivation

These algorithms can be thought of as over-engineered loss functions. They are meant to be used in conjunction with deep learning networks. However, unlike loss functions, they don't need to be attached to the end of the network. As demonstrated in the Bubble Sort example, they can also be interleaved with the graph of the network.

All algorithms in this repository follow these rules:

  1. It must be deterministic in the forward pass - Although these algorithms are intended to be used in conjunction with deep neural networks, there must be a very clear boundary between code with learnable parameters and code without learnable parameters. Therefore, no stochasticity is allowed, either at run time or during the generation of code that will be executed at run time.
  2. It must be lossless in the forward pass - The algorithms must behave identically to their classical counterparts when the inputs are discrete. When non-discrete inputs are passed, they should produce well-behaved, interpretable, and continuous output.
  3. It must have well-defined gradients in the backward pass - The algorithms should have well-defined gradients with respect to at least one of their inputs (see the sketch after this list).
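
To make these rules concrete, below is a minimal sketch of a single differentiable compare-and-swap step, the building block of a soft bubble sort, written against the TensorFlow 2.0 GradientTape. The `soft_swap` helper and its `sharpness` parameter are illustrative assumptions, not this repository's actual API, and the sigmoid gate satisfies the lossless rule only in the limit of large sharpness.

```python
import tensorflow as tf

def soft_swap(a, b, sharpness=10.0):
    # Hypothetical helper, not the repo's API: a differentiable
    # relaxation of "swap if a > b". The sigmoid gate is deterministic
    # (rule 1) and approaches a hard 0/1 decision as sharpness grows,
    # so the forward pass approaches the classical swap (rule 2 holds
    # only in the limit) while gradients stay finite everywhere (rule 3).
    gate = tf.sigmoid(sharpness * (a - b))   # ~1 when a > b, ~0 when a < b
    lo = gate * b + (1.0 - gate) * a         # soft minimum of (a, b)
    hi = gate * a + (1.0 - gate) * b         # soft maximum of (a, b)
    return lo, hi

x = tf.Variable([3.0, 1.0])
with tf.GradientTape() as tape:
    lo, hi = soft_swap(x[0], x[1])
    loss = hi - lo                 # any downstream objective works here
grad = tape.gradient(loss, x)
print(lo.numpy(), hi.numpy())      # ~1.0, ~3.0 for this input
print(grad.numpy())                # finite gradients w.r.t. both inputs
```

Because the gate never saturates exactly, gradients flow to both inputs, which is what allows such a step to be interleaved with a network's graph rather than sitting only at its end.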

Contents