"Meanwhile, hardware limitations originally made it difficult to train neural networks, which require more data and computing power than most machine learning methods. However, improvements in processor speed and the advent of specialized hardware such as graphics processing units (GPUs) have made training neural nets much faster. Massively parallel computing has also dramatically reduced training times: for instance, Google's [AlphaZero](https://en.wikipedia.org/wiki/AlphaZero) used 5,000 tensor processing units in parallel and reached superhuman play after just a few hours of training.\n",
"*Talk about recent achievements of NNs*\n",
"In this workshop, we will be using [Keras](https://keras.io) to build our own neural networks. Keras is a high-level interface for building ANNs, with a library such as [TensorFlow](https://www.tensorflow.org) serving as the backend.\n",
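As a taste of what this looks like, here is a minimal sketch of defining and compiling a small network with Keras. This assumes TensorFlow 2.x (where Keras ships as `tensorflow.keras`); the layer sizes and 784-dimensional input are illustrative placeholders, not tied to any particular dataset.

```python
# Minimal sketch: build and compile a small ANN with Keras.
# Assumes TensorFlow 2.x; sizes here are illustrative placeholders.
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(784,)),                      # e.g. a flattened 28x28 image
    keras.layers.Dense(32, activation="relu"),      # one hidden layer
    keras.layers.Dense(10, activation="softmax"),   # ten-way classification output
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()   # prints a table of layers and parameter counts
```

Calling `model.summary()` is a quick sanity check that the layers and parameter counts are what you expect before spending time on training.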
"### Note: you will need to run the following code cell every time you restart this notebook"
"You could apply a plain feedforward neural network to this dataset, and you'd get decent results. \n",
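Such a baseline might look like the sketch below. It assumes image inputs flattened into 784-length vectors (e.g. 28×28 grayscale); random synthetic arrays stand in for the real dataset here, so the accuracy it reaches is meaningless — the point is only the shape of the pipeline.

```python
# Hedged sketch of a plain feedforward baseline.
# Synthetic random arrays stand in for the real dataset.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
x_train = rng.random((256, 784)).astype("float32")            # placeholder inputs
y_train = keras.utils.to_categorical(rng.integers(0, 10, 256), 10)

model = keras.Sequential([
    keras.Input(shape=(784,)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=32, verbose=0)

preds = model.predict(x_train[:1], verbose=0)   # one row of class probabilities
```

Flattening the image into a vector discards its spatial layout, which is exactly the limitation that motivates convolutional networks.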
"*Intro to convolutional nets*"