This repository has been archived by the owner on Apr 4, 2024. It is now read-only.

Initialization Functions

Benny Nottonson edited this page Dec 29, 2023 · 5 revisions

Initializer functions set the starting values of weights and biases in a neural network. Proper initialization can significantly affect training dynamics and the final performance of the model. This page documents the initializer functions provided by this Mojo machine learning library.

Provided Functions

Function glorot_normal (Glorot Normal Initialization)

  • Function Signature:
    fn glorot_normal(tensor: Tensor) raises -> Tensor
  • Brief Explanation: Initializes the elements of the input tensor with random values sampled from a Glorot normal distribution. Glorot normal initialization is suitable for sigmoid and hyperbolic tangent (tanh) activation functions.
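The Glorot (Xavier) normal scheme draws from a zero-mean normal distribution with standard deviation sqrt(2 / (fan_in + fan_out)). A minimal NumPy sketch of that math (the function name and 2-D shape handling are illustrative, not this library's API):

```python
import numpy as np

def glorot_normal(shape, rng=None):
    """Sketch of Glorot (Xavier) normal initialization for a 2-D weight matrix."""
    rng = rng or np.random.default_rng(0)
    fan_in, fan_out = shape
    # Variance is scaled so activations keep roughly unit magnitude
    # in both the forward and backward passes
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, std, size=shape)

w = glorot_normal((256, 128))
```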

Function glorot_uniform (Glorot Uniform Initialization)

  • Function Signature:
    fn glorot_uniform(tensor: Tensor) raises -> Tensor
  • Brief Explanation: Initializes the elements of the input tensor with random values sampled from a Glorot uniform distribution. Like its normal counterpart, Glorot uniform initialization is suited to sigmoid and hyperbolic tangent (tanh) activation functions.
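The uniform variant samples from U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)), which gives the same variance as the normal variant. A hedged NumPy sketch (names are illustrative, not this library's API):

```python
import numpy as np

def glorot_uniform(shape, rng=None):
    """Sketch of Glorot (Xavier) uniform initialization."""
    rng = rng or np.random.default_rng(0)
    fan_in, fan_out = shape
    # limit chosen so Var = 2 / (fan_in + fan_out), matching glorot_normal
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=shape)

w = glorot_uniform((256, 128))
```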

Function he_normal (He Normal Initialization)

  • Function Signature:
    fn he_normal(tensor: Tensor) raises -> Tensor
  • Brief Explanation: Initializes the elements of the input tensor with random values sampled from a He normal distribution. He normal initialization is commonly used with ReLU and its variants.
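He normal scales only by fan-in: std = sqrt(2 / fan_in), compensating for ReLU zeroing roughly half its inputs. A NumPy sketch of the formula (illustrative names, not this library's API):

```python
import numpy as np

def he_normal(shape, rng=None):
    """Sketch of He (Kaiming) normal initialization."""
    rng = rng or np.random.default_rng(0)
    fan_in = shape[0]
    # Factor of 2 offsets the variance lost when ReLU clips negatives
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, std, size=shape)

w = he_normal((512, 256))
```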

Function he_uniform (He Uniform Initialization)

  • Function Signature:
    fn he_uniform(tensor: Tensor) raises -> Tensor
  • Brief Explanation: Initializes the elements of the input tensor with random values sampled from a He uniform distribution. He uniform initialization is another option suitable for ReLU and its variants.
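The uniform counterpart uses limit = sqrt(6 / fan_in), which yields the same variance as He normal. A hedged NumPy sketch:

```python
import numpy as np

def he_uniform(shape, rng=None):
    """Sketch of He (Kaiming) uniform initialization."""
    rng = rng or np.random.default_rng(0)
    fan_in = shape[0]
    # limit chosen so Var = 2 / fan_in, matching he_normal
    limit = np.sqrt(6.0 / fan_in)
    return rng.uniform(-limit, limit, size=shape)

w = he_uniform((512, 256))
```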

Function identity

  • Function Signature:
    fn identity(tensor: Tensor) raises -> Tensor
  • Brief Explanation: Initializes the input tensor as an identity matrix. This is commonly used for initializing recurrent neural network (RNN) weights.
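An identity initialization is ones on the diagonal and zeros elsewhere, so the matrix initially passes hidden state through unchanged — the property that makes it attractive for recurrent weights. Sketched in NumPy (assuming a square matrix; how the library handles non-square tensors is not stated here):

```python
import numpy as np

def identity_init(n):
    # Square identity: ones on the diagonal, zeros everywhere else,
    # so w @ x == x at the start of training
    return np.eye(n, dtype=np.float32)

w = identity_init(4)
```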

Function lecun_normal (LeCun Normal Initialization)

  • Function Signature:
    fn lecun_normal(tensor: Tensor) raises -> Tensor
  • Brief Explanation: Initializes the elements of the input tensor with random values sampled from a LeCun normal distribution. LeCun normal initialization is suitable for sigmoid and hyperbolic tangent (tanh) activation functions.
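LeCun normal uses std = sqrt(1 / fan_in), preserving input variance through the layer. A NumPy sketch of the math (illustrative names, not this library's API):

```python
import numpy as np

def lecun_normal(shape, rng=None):
    """Sketch of LeCun normal initialization."""
    rng = rng or np.random.default_rng(0)
    fan_in = shape[0]
    # Var = 1 / fan_in keeps pre-activation variance near that of the input
    std = np.sqrt(1.0 / fan_in)
    return rng.normal(0.0, std, size=shape)

w = lecun_normal((400, 100))
```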

Function lecun_uniform (LeCun Uniform Initialization)

  • Function Signature:
    fn lecun_uniform(tensor: Tensor) raises -> Tensor
  • Brief Explanation: Initializes the elements of the input tensor with random values sampled from a LeCun uniform distribution. It uses the same variance scaling as lecun_normal and is suited to the same activation functions.
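The uniform variant uses limit = sqrt(3 / fan_in), which again matches the normal variant's variance of 1 / fan_in. A hedged NumPy sketch:

```python
import numpy as np

def lecun_uniform(shape, rng=None):
    """Sketch of LeCun uniform initialization."""
    rng = rng or np.random.default_rng(0)
    fan_in = shape[0]
    # U(-limit, limit) has Var = limit**2 / 3 = 1 / fan_in
    limit = np.sqrt(3.0 / fan_in)
    return rng.uniform(-limit, limit, size=shape)

w = lecun_uniform((400, 100))
```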

Function ones

  • Function Signature:
    fn ones(tensor: Tensor) raises -> Tensor
  • Brief Explanation: Initializes all elements of the input tensor with a value of one. This is useful for bias initialization.

Function random_normal (Random Normal Initialization)

  • Function Signature:
    fn random_normal(tensor: Tensor) raises -> Tensor
  • Brief Explanation: Initializes the elements of the input tensor with random values sampled from a normal distribution.
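With no scaling applied, this is a plain draw from a normal distribution; a standard normal (mean 0, std 1) is assumed below, since the signature takes no distribution parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
# Unscaled normal init: mean 0, standard deviation 1 (assumed defaults)
t = rng.normal(0.0, 1.0, size=(1000,))
```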

Function random_uniform (Random Uniform Initialization)

  • Function Signature:
    fn random_uniform(tensor: Tensor, min: Float32, max: Float32) raises -> Tensor
  • Brief Explanation: Initializes the elements of the input tensor with random values sampled from a uniform distribution within the specified range.
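This one takes the range explicitly, so every element lands between min and max. Sketched in NumPy (parameter names mirror the signature above; the half-open interval is NumPy's convention, the library's may differ):

```python
import numpy as np

def random_uniform(shape, low, high, rng=None):
    # Uniform init on the half-open interval [low, high)
    rng = rng or np.random.default_rng(0)
    return rng.uniform(low, high, size=shape)

t = random_uniform((64, 64), -0.05, 0.05)
```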

Function truncated_normal (Truncated Normal Initialization)

  • Function Signature:
    fn truncated_normal(tensor: Tensor) raises -> Tensor
  • Brief Explanation: Initializes the elements of the input tensor with random values sampled from a truncated normal distribution.
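A truncated normal discards outliers so no weight starts far from the mean. The sketch below redraws anything beyond two standard deviations — the convention TensorFlow/Keras use, assumed here rather than confirmed from this library's source:

```python
import numpy as np

def truncated_normal(shape, std=1.0, rng=None):
    # Redraw any sample farther than 2 standard deviations from the mean
    # (assumed truncation bound; other libraries use the same convention)
    rng = rng or np.random.default_rng(0)
    x = rng.normal(0.0, std, size=shape)
    out_of_range = np.abs(x) > 2.0 * std
    while out_of_range.any():
        x[out_of_range] = rng.normal(0.0, std, size=int(out_of_range.sum()))
        out_of_range = np.abs(x) > 2.0 * std
    return x

t = truncated_normal((100, 100), std=0.05)
```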

Function zeros

  • Function Signature:
    fn zeros(tensor: Tensor) raises -> Tensor
  • Brief Explanation: Initializes all elements of the input tensor with a value of zero. This is often used for bias initialization.

Function constant

  • Function Signature:
    fn constant(tensor: Tensor, value: Float32) raises -> Tensor
  • Brief Explanation: Initializes all elements of the input tensor with a constant value.
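Constant fill generalizes ones and zeros above: those are simply the value 1.0 and 0.0 cases. A NumPy sketch:

```python
import numpy as np

def constant(shape, value):
    # Fill every element with the same value; ones() and zeros()
    # are the special cases value=1.0 and value=0.0
    return np.full(shape, value, dtype=np.float32)

b = constant((10,), 0.1)
```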

Function _custom_fill

  • Function Signature:
    fn _custom_fill(tensor: Tensor, values: DynamicVector[Float32]) raises -> Tensor
  • Brief Explanation: Initializes the input tensor using custom values provided in the form of a dynamic vector of float32. This allows for a user-defined initialization strategy.
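Conceptually this copies a flat list of user-supplied values into the tensor. A NumPy sketch of that idea (the size check and reshape behavior are assumptions; the library may handle mismatches differently):

```python
import numpy as np

def custom_fill(shape, values):
    # Copy user-provided values into a tensor of the requested shape;
    # the number of values must match the tensor's element count
    arr = np.asarray(values, dtype=np.float32)
    assert arr.size == np.prod(shape), "value count must match tensor size"
    return arr.reshape(shape)

t = custom_fill((2, 2), [1.0, 2.0, 3.0, 4.0])
```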