litcoderr/AutoGradCpp

AutoGrad

Auto Gradient Computation Framework (Pytorch Mock Version)

Usage

Example code can be found in the repository:

  1. Tensor node test
  2. Matrix test

Tensor

  • A node in the computation graph.
  • Destructible by default when the graph's memory is flushed recursively.

Initialize

```cpp
#include <Tensor/Tensor.hpp>

Tensor<double>& my_tensor1 = *new Tensor<double>();            // default value 0
Tensor<double>& my_tensor2 = *new Tensor<double>(3.2);         // value only; name defaults to ""
Tensor<double>& my_tensor3 = *new Tensor<double>(3.2, "Data"); // value and custom name
```

Generating Dynamic Graph

```cpp
// The ADD, SUBTRACT, and MULTIPLY operators automatically build the dynamic graph
Tensor<double>& result = my_tensor1 + my_tensor2 - my_tensor3 * my_tensor4; // my_tensor4 initialized like the others
```

Back Propagation

```cpp
result.backward(); // EASY. Check each tensor's .grad member for its gradient value
```

Flush Memory

```cpp
// head_node_of_graph is the final Tensor of a computation graph
head_node_of_graph.flush(); // recursively flushes all destructible child nodes
```

Variable

  • Inherits from Tensor.
  • Not destructible by default when memory is flushed recursively (used for weights).

Initialize

```cpp
#include <Tensor/Tensor.hpp>

Variable<double>& my_tensor1 = *new Variable<double>();               // default value 0
Variable<double>& my_tensor2 = *new Variable<double>(3.2);            // value only; name defaults to ""
Variable<double>& my_tensor3 = *new Variable<double>(3.2, "Weights"); // value and custom name
```

WeightMap

  • Contains weight tensors in a map
  • Tensors can be retrieved by std::string key

Initialize

```cpp
#include <WeightMap.hpp>
#include <Tensor/Tensor.hpp>

// Initialize weights
Variable<double>& weight1 = *new Variable<double>(1.1, "W1");
Variable<double>& weight2 = *new Variable<double>(1.2, "W2");
Variable<double>& weight3 = *new Variable<double>(1.3, "W3");

// Initialize Weight Map
std::vector<Tensor<double>*> weight_list = {&weight1, &weight2, &weight3};  // first make a vector of Tensor<T>*
WeightMap<double>& weightMap = *new WeightMap<double>(weight_list);         // initialize by passing in the weight list
```

Get Tensor

```cpp
Tensor<double>& weight_i_want = weightMap.getTensor("W1");  // use the std::string key to retrieve a reference
```

Optimizer

  • Takes a WeightMap pointer and a learning rate.

Initialize

```cpp
#include <Optimizer.hpp>
#include <WeightMap.hpp>
#include <Tensor/Tensor.hpp>

WeightMap<double>& weightMap = *new WeightMap<double>(weight_list);  // weight_list built as shown above
double learning_rate = 0.0001;

Optimizer<double> optim(&weightMap, learning_rate);
```

Update Weights

```cpp
optim.step();  // EASY. Updates every weight in the WeightMap
```

Loss

  • A Loss instance computes the loss between the model output and the target.

Initialize

```cpp
#include <Loss.hpp>

Loss<double> lossModule;
```

Forward Propagate Loss

```cpp
// model_output: the graph's output Tensor; target: the ground-truth Tensor
Tensor<double>& loss = lossModule.forward(model_output, target);  // pass the model output and target to forward()
```
