From-scratch DNN with SGD, Momentum, NAG, RMSprop, Adam, Nadam optimizers; Xavier/He weight initialization; ReLU/Sigmoid/Tanh activations; cross-entropy & MSE loss; early stopping; WandB logging
machine-learning deep-learning neural-network optimization classification gradient-descent backpropagation adam rmsprop cross-entropy fashion-mnist relu weights-and-biases xavier-initialization
Updated Mar 17, 2025 - Python
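The Adam update listed in the description can be sketched as follows. This is a minimal NumPy illustration of the standard Adam rule (moment estimates with bias correction), not the repository's actual API; the function name and default hyperparameters are illustrative.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a single parameter array.

    w: parameters, grad: gradient of the loss w.r.t. w,
    m/v: running first/second moment estimates, t: step count (from 1).
    """
    m = beta1 * m + (1 - beta1) * grad        # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment (uncentered variance) estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction for m
    v_hat = v / (1 - beta2 ** t)              # bias correction for v
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```

For example, iterating this step on the gradient of f(w) = w² drives w toward the minimum at 0.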