Revision History

  • Updated Would there be a problem computing the gradient of cross entropy loss for the ReLU neuron? (markdown)
    @ZicolinPower committed Apr 30, 2023 (41d2884)
  • Updated Would there be a problem computing the gradient of cross entropy loss for the ReLU neuron? (markdown)
    @ZicolinPower committed Apr 30, 2023 (1913eb8)
  • Updated Would there be a problem computing the gradient of cross entropy loss for the ReLU neuron? (markdown)
    @ZicolinPower committed Apr 30, 2023 (959b723)
  • Created Would there be a problem computing the gradient of cross-entropy loss for the ReLU neuron? (markdown)
    @ZicolinPower committed Apr 30, 2023 (5051cc2)
  • Updated Neural Network with ReLU and Sigmoid Activations (markdown)
    @ZicolinPower committed Apr 30, 2023 (fc4bb91)
  • Created Neural Network with ReLU and Sigmoid Activations (markdown)
    @ZicolinPower committed Apr 30, 2023 (dfc0909)
  • Created Power of linear neural networks (markdown)
    @ZicolinPower committed Apr 30, 2023 (018fa24)
  • Updated Let $G ∼ N(0, σ^{2})$ be a Gaussian random variable. Let $Y = ReLU(G)$. Calculate $E[Y]$ and $Var[Y]$. (markdown)
    @ZicolinPower committed Apr 30, 2023 (58b5be1)
  • Created Let $G ∼ N(0, σ^{2})$ be a Gaussian random variable. Let $Y = ReLU(G)$. Calculate $E[Y]$ and $Var[Y]$. (markdown)
    @ZicolinPower committed Apr 30, 2023 (c2257fc)
  • Initial Home page
    @ZicolinPower committed Apr 30, 2023 (ce27d50)
  • Initial Home page
    @ZicolinPower committed Apr 30, 2023 (dcefa0f)