Together we live-code (in a RISE slideshow) a fully-connected neural net from scratch in numpy, initially training it with a plain for-loop to demonstrate core concepts, and finally codifying it as a Scikit-learn-style classifier with which one can fit & predict on one's own data. To close, I walk through a toy example that logistic regression can't properly classify, but which our NN can.
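As a concrete stand-in for that closing example (the actual notebook code may differ), the sketch below builds XOR-style data that no linear boundary can separate: scikit-learn's LogisticRegression stays near chance accuracy, while a small hidden-layer net (here MLPClassifier, used only as a proxy for the from-scratch NN the talk builds) fits it nearly perfectly.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# XOR-style data: the label depends on the *interaction* of the two features,
# so the two classes are not linearly separable.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)

linear = LogisticRegression().fit(X, y)
net = MLPClassifier(hidden_layer_sizes=(16,), solver="lbfgs",
                    max_iter=5000, random_state=0).fit(X, y)

print("logistic regression accuracy:", linear.score(X, y))  # typically near chance (~0.5)
print("small neural net accuracy:   ", net.score(X, y))     # typically close to 1.0
```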
Neural networks and deep learning are fundamental to modern machine learning, yet they often appear scarier than they really are. Many users of Scikit-learn et al. can apply ML techniques (perhaps including deep learning) through these tools, but do not always fully "grok" what happens beneath the surface. Other, more engineering-oriented practitioners are put off entirely by the seeming complexity of DL. I walk through a live-coding practicum (in a RISE Jupyter Notebook slideshow) in which I implement a feed-forward, fully-connected neural net in numpy, initially training it via a for-loop to demonstrate core concepts, and finally codifying the NN as a Scikit-learn-style classifier with which one can fit & predict on one's own data.
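To give a flavor of what the practicum builds toward, here is a minimal sketch (not the talk's actual notebook code) of a one-hidden-layer numpy net trained with a plain for-loop and wrapped behind a Scikit-learn-style fit/predict interface; the class name, hyperparameters, and activation choices are illustrative assumptions.

```python
import numpy as np

class TwoLayerNNClassifier:
    """Minimal from-scratch feed-forward net (one hidden layer) with a
    Scikit-learn-style fit/predict interface for binary labels 0/1."""

    def __init__(self, n_hidden=8, learning_rate=0.1, n_iter=2000, random_state=0):
        self.n_hidden = n_hidden
        self.learning_rate = learning_rate
        self.n_iter = n_iter
        self.random_state = random_state

    @staticmethod
    def _sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def fit(self, X, y):
        X = np.asarray(X, dtype=float)
        y = np.asarray(y, dtype=float).reshape(-1, 1)
        rng = np.random.default_rng(self.random_state)

        # Small random weights, zero biases.
        self.W1_ = rng.normal(scale=0.5, size=(X.shape[1], self.n_hidden))
        self.b1_ = np.zeros((1, self.n_hidden))
        self.W2_ = rng.normal(scale=0.5, size=(self.n_hidden, 1))
        self.b2_ = np.zeros((1, 1))

        for _ in range(self.n_iter):  # the plain for-loop training demo
            # Forward pass.
            hidden = np.tanh(X @ self.W1_ + self.b1_)
            probs = self._sigmoid(hidden @ self.W2_ + self.b2_)

            # Backward pass: gradients of the mean binary cross-entropy loss.
            d_out = (probs - y) / len(X)
            d_W2 = hidden.T @ d_out
            d_b2 = d_out.sum(axis=0, keepdims=True)
            d_hidden = (d_out @ self.W2_.T) * (1.0 - hidden ** 2)  # tanh'(a) = 1 - tanh(a)^2
            d_W1 = X.T @ d_hidden
            d_b1 = d_hidden.sum(axis=0, keepdims=True)

            # Gradient-descent update.
            self.W2_ -= self.learning_rate * d_W2
            self.b2_ -= self.learning_rate * d_b2
            self.W1_ -= self.learning_rate * d_W1
            self.b1_ -= self.learning_rate * d_b1
        return self

    def predict_proba(self, X):
        hidden = np.tanh(np.asarray(X, dtype=float) @ self.W1_ + self.b1_)
        return self._sigmoid(hidden @ self.W2_ + self.b2_).ravel()

    def predict(self, X):
        return (self.predict_proba(X) >= 0.5).astype(int)
```

Once wrapped this way, the net is used exactly like any other classifier: `TwoLayerNNClassifier().fit(X_train, y_train).predict(X_test)`.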
The focus of this talk is the practicum of implementing one's own NN algorithm, though I also review the most important mathematical and theoretical components of NNs to ground the practicum for attendees. The mathematical review covers what gradients are, how they relate to derivatives, and how backpropagation works at a high level. This talk does not include a formal derivation of the various loss functions used in NNs, nor does it require mastery of calculus. Attendees will leave the talk with a better understanding of deep learning through iterative optimization, as well as their own template for a from-scratch neural net in Python, should they feel this would enrich their understanding.
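For reference, the level of math the review targets is roughly the gradient-descent update rule and the layer-by-layer chain rule that backpropagation applies; the notation below is illustrative, not taken verbatim from the talk.

```latex
% Gradient descent: step each parameter against the gradient of the loss L,
% scaled by a learning rate \eta.
\theta \leftarrow \theta - \eta \, \nabla_{\theta} L(\theta)

% Backpropagation is the chain rule applied layer by layer; e.g. for a
% first-layer weight matrix W^{(1)} feeding hidden activations h, which in
% turn feed the prediction \hat{y}:
\frac{\partial L}{\partial W^{(1)}}
  = \frac{\partial L}{\partial \hat{y}} \cdot
    \frac{\partial \hat{y}}{\partial h} \cdot
    \frac{\partial h}{\partial W^{(1)}}
```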