
Adam vs SGD - On Kaggle's Titanic Dataset

A 3-layer neural network with SGD and Adam optimizers built with numpy.

Introduction

This is a response to Siraj Raval's Coding Challenge to implement the Adam optimization strategy. In this notebook, we build a 3-layer neural network with numpy for the Kaggle Titanic dataset and compare the performance of standard stochastic gradient descent (SGD) against Adam.
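For context, here is a minimal numpy sketch of the two update rules being compared. It is illustrative only: the notebook applies these updates inside the 3-layer network, while the hyperparameter values below are just the common defaults from the Adam paper, and the toy objective is a placeholder.

```python
import numpy as np

def sgd_update(w, grad, lr=0.01):
    # Vanilla stochastic gradient descent: step against the gradient.
    return w - lr * grad

def adam_update(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # Adam (Kingma & Ba): exponential moving averages of the gradient and its
    # square, bias-corrected, then a per-parameter adaptive step.
    m = beta1 * m + (1 - beta1) * grad        # first moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Toy demonstration: minimize f(w) = ||w||^2 with both optimizers.
w_sgd = np.array([1.0, -2.0, 3.0])
w_adam = w_sgd.copy()
m = np.zeros_like(w_adam)
v = np.zeros_like(w_adam)
for t in range(1, 101):
    w_sgd = sgd_update(w_sgd, 2 * w_sgd)
    w_adam, m, v = adam_update(w_adam, 2 * w_adam, m, v, t)
print("SGD :", w_sgd)
print("Adam:", w_adam)
```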

Requirements

  • numpy
  • pandas
  • matplotlib

Usage

Run jupyter notebook from a Python 3 conda environment with the packages above installed, then open the notebook.

With reference to:

  1. Adam: A Method for Stochastic Optimization by Diederik P. Kingma and Jimmy Ba
  2. CS231n: Neural Networks by Andrej Karpathy
  3. Optimizing Gradient Descent by Sebastian Ruder
