
Sharp-Grad

Sharp-Grad is a lightweight automatic differentiation (autograd) engine written in C#. It supports basic arithmetic operations and neural network components, making it suitable for educational purposes today and, eventually, for research.

Getting Started

Below is an example of how to use Sharp-Grad for automatic differentiation and neural network operations.

```csharp
using System;
using SharpGrad;
using SharpGrad.DifEngine;
using SharpGrad.NN;

class Program
{
    static void Main(string[] args)
    {
        // Define values
        Value<float> a = new Value<float>(1.5f, "a");
        Value<float> b = new Value<float>(2.0f, "b");
        Value<float> c = new Value<float>(6.0f, "c");

        // Perform operations
        Value<float> d = (a + b * c);
        Value<float> e = d / new Value<float>(2.0f, "2");
        Value<float> f = e.Pow(new Value<float>(2.0f, "2"));
        Value<float> g = f.ReLU();

        // Backpropagation
        g.Grad = 1.0f;
        g.Backpropagate();

        // Output gradients
        Console.WriteLine($"Gradient of a: {a.Grad}");
        Console.WriteLine($"Gradient of b: {b.Grad}");
        Console.WriteLine($"Gradient of c: {c.Grad}");

        // More operations
        Value<float> j = new Value<float>(0.5f, "j");
        Value<float> k = j.Tanh();
        Value<float> l = k.Sigmoid();
        Value<float> m = l.LeakyReLU(1.0f);

        // Backpropagation
        m.Grad = 1.0f;
        m.Backpropagate();

        // Output gradient and data
        Console.WriteLine($"Gradient of j: {j.Grad}");
        Console.WriteLine($"Data of m: {m.Data}");
    }
}
```
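
If you want to verify the gradients that Sharp-Grad reports, a quick way is a numerical check that does not use the library at all. The sketch below (the `GradientCheck` class and `G` method are our own names, not part of Sharp-Grad) recomputes the same expression in plain C# and approximates its gradients with central finite differences; run it as a separate console program so the two `Main` methods do not clash.

```csharp
using System;

// Standalone sanity check (no Sharp-Grad dependency): recompute
// g = ReLU(((a + b*c) / 2)^2) and estimate its gradients with central
// finite differences, for comparison with the autograd output above.
class GradientCheck
{
    // The same scalar expression that the example builds as a graph.
    static double G(double a, double b, double c)
    {
        double d = a + b * c;
        double e = d / 2.0;
        double f = e * e;
        return Math.Max(f, 0.0); // ReLU
    }

    static void Main()
    {
        const double h = 1e-5;
        double a = 1.5, b = 2.0, c = 6.0;

        // Central difference: dG/dx ≈ (G(x + h) - G(x - h)) / (2h)
        double gradA = (G(a + h, b, c) - G(a - h, b, c)) / (2 * h);
        double gradB = (G(a, b + h, c) - G(a, b - h, c)) / (2 * h);
        double gradC = (G(a, b, c + h) - G(a, b, c - h)) / (2 * h);

        Console.WriteLine($"Numeric gradient of a: {gradA}"); // expected ≈ 6.75
        Console.WriteLine($"Numeric gradient of b: {gradB}"); // expected ≈ 40.5
        Console.WriteLine($"Numeric gradient of c: {gradC}"); // expected ≈ 13.5
    }
}
```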

Explanation

  1. Value Initialization:

    • Values a, b, and c are initialized with 1.5, 2.0, and 6.0 respectively.
  2. Operations:

    • Arithmetic operations are performed to create a computation graph.
    • d is computed as $a + b \times c$.
    • e is computed as $d / 2$.
    • f is computed as $e^2$.
    • g is computed using the ReLU activation function on f.
  3. Backpropagation:

    • Gradients are computed by setting the gradient of g to 1.0 and calling Backpropagate; the resulting values are derived by hand after this list.
  4. Additional Operations:

    • j is initialized with 0.5.
    • k is computed using the Tanh activation function on j.
    • l is computed using the Sigmoid activation function on k.
    • m is computed using the LeakyReLU activation function on l.
  5. Backpropagation and Outputs:

    • Gradients for j are computed by setting the gradient of m to 1.0 and calling Backpropagate.
    • The gradients and data are printed to the console.
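
For reference, the gradients printed in this walkthrough follow directly from the chain rule (a hand derivation, assuming the usual definitions of ReLU, tanh, sigmoid, and LeakyReLU; since l > 0, the LeakyReLU step acts as the identity):

```math
\begin{aligned}
d &= a + bc = 13.5,\qquad e = \tfrac{d}{2} = 6.75,\qquad f = e^2 = 45.5625,\qquad g = \mathrm{ReLU}(f) = f,\\
\frac{\partial g}{\partial d} &= 1 \cdot 2e \cdot \tfrac12 = 6.75,\qquad
\frac{\partial g}{\partial a} = 6.75,\qquad
\frac{\partial g}{\partial b} = 6.75\,c = 40.5,\qquad
\frac{\partial g}{\partial c} = 6.75\,b = 13.5,\\
\frac{\partial m}{\partial j} &= 1 \cdot \sigma(k)\bigl(1 - \sigma(k)\bigr)\bigl(1 - \tanh^2 j\bigr) \approx 0.237 \cdot 0.786 \approx 0.186.
\end{aligned}
```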

Contributing

If you wish to contribute to Sharp-Grad, please follow the guidelines below:

  1. Fork the repository.
  2. Create a new branch for your feature or bugfix.
  3. Commit your changes and push to the branch.
  4. Create a pull request detailing your changes.

License

Sharp-Grad is licensed under the MIT License. See the LICENSE file for more details.


### References

- [Sharp-Grad GitHub Repository](https://github.com/Leonardo16AM/Sharp-Grad)
