Thursday 1:30 PM–3:00 PM in Central Park East 6501a (6th fl)

Deep Learning from Scratch using Python

Seth Weidman

Audience level:
Intermediate

Description

Many people use TensorFlow and Keras to build cool Deep Learning-based applications, but few understand what is really going on under the hood, and even fewer understand the math behind why the training process works. In this workshop, we will build Deep Neural Nets from Scratch using Python, illustrate that these nets can solve complex problems as we'd expect, and cover the math that explains why they work.

Abstract

New applications of neural nets are constantly being conceived and built using libraries like TensorFlow and Keras. However, few people building these applications understand how neural nets work under the hood, and even fewer understand the math that explains why they work. Many tutorials explain some of these details, but none both explain the math and connect it to concrete code. In this tutorial, we'll work carefully through how to build Deep Neural Nets from Scratch using Python.

This tutorial will be split into several parts:

  1. Review linear and logistic regression in terms of gradient descent, introducing the process of a) feeding data through a model, b) computing a loss, and then c) updating parameters to reduce the loss using partial derivatives (a minimal sketch of this loop follows this list).
  2. Show how neural nets are a natural extension of this: like logistic regression, they are nested functions, simply with many more functions nested within them; their weights are updated in the same way as in logistic regression.
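
As a rough preview of step 1, here is a minimal sketch of that three-part loop as logistic regression fit by batch gradient descent in NumPy. The function and variable names are illustrative, not the workshop's actual code:

    import numpy as np

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    def train_logistic_regression(X, y, lr=0.1, n_iter=1000):
        """Logistic regression fit by batch gradient descent."""
        n_samples, n_features = X.shape
        w = np.zeros(n_features)
        b = 0.0
        for _ in range(n_iter):
            # a) Feed the data forward through the model.
            p = sigmoid(X @ w + b)
            # b) Compute a loss (binary cross-entropy); tracked here
            #    only to make step b explicit.
            loss = -np.mean(y * np.log(p + 1e-12)
                            + (1 - y) * np.log(1 - p + 1e-12))
            # c) Update parameters using partial derivatives of the loss.
            w -= lr * (X.T @ (p - y)) / n_samples
            b -= lr * np.mean(p - y)
        return w, b

    # Illustrative usage on a toy linearly separable problem.
    X = np.random.randn(200, 2)
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    w, b = train_logistic_regression(X, y)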

The second half of the workshop will transition from a function-based method of building simple neural nets to a class-based method of building Deep Neural Networks. This section will involve:

  1. Understanding neural nets as objects that pass input forward from layer to layer to make predictions and pass gradients backward from layer to layer to update the weights.
  2. Coding up neural nets as classes that contain lists of "layers" (which are objects of classes of their own); see the sketch after this list.
  3. Coding some "tricks"--momentum, learning rate decay, dropout, and more--that can help neural nets learn better and/or faster; a sketch of a few of these follows as well.
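
For concreteness, here is a minimal sketch of the class-based design from steps 1 and 2, assuming fully connected layers with sigmoid activations and a mean squared error loss. The class and method names are illustrative, not the workshop's actual API:

    import numpy as np

    class Dense:
        """A fully connected layer with a sigmoid activation."""
        def __init__(self, n_in, n_out):
            self.W = np.random.randn(n_in, n_out) * 0.1
            self.b = np.zeros(n_out)

        def forward(self, X):
            self.X = X
            self.out = 1 / (1 + np.exp(-(X @ self.W + self.b)))
            return self.out

        def backward(self, grad, lr):
            # Chain rule: first through the sigmoid, then through X @ W + b.
            grad = grad * self.out * (1 - self.out)
            grad_X = grad @ self.W.T          # gradient for the previous layer
            self.W -= lr * (self.X.T @ grad)  # update this layer's weights
            self.b -= lr * grad.sum(axis=0)
            return grad_X

    class NeuralNet:
        """A neural net is just a list of layers."""
        def __init__(self, layers):
            self.layers = layers

        def forward(self, X):
            # Pass input forward from layer to layer to make predictions.
            for layer in self.layers:
                X = layer.forward(X)
            return X

        def backward(self, grad, lr):
            # Pass gradients backward from layer to layer to update weights.
            for layer in reversed(self.layers):
                grad = layer.backward(grad, lr)

    # Illustrative usage on a toy problem that needs a hidden layer.
    net = NeuralNet([Dense(2, 8), Dense(8, 1)])
    X = np.random.randn(64, 2)
    y = ((X[:, 0] * X[:, 1]) > 0).astype(float).reshape(-1, 1)
    for _ in range(2000):
        preds = net.forward(X)
        grad = 2 * (preds - y) / len(y)   # gradient of mean squared error
        net.backward(grad, lr=0.5)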
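
And a brief sketch of a few of the "tricks" from step 3--momentum, learning rate decay, and (inverted) dropout--again with illustrative names and standalone functions rather than the workshop's actual code:

    import numpy as np

    def momentum_step(param, grad, velocity, lr=0.01, mu=0.9):
        # Momentum: accumulate an exponentially weighted "velocity" of past
        # gradients and step along it, which smooths out noisy updates.
        velocity = mu * velocity - lr * grad
        return param + velocity, velocity

    def decayed_lr(lr0, epoch, decay=0.95):
        # Learning rate decay: shrink the step size as training proceeds.
        return lr0 * decay ** epoch

    def dropout(activations, keep_prob=0.8, training=True):
        # Inverted dropout: randomly zero units during training and rescale
        # the survivors, so no change is needed at prediction time.
        if not training:
            return activations
        mask = (np.random.rand(*activations.shape) < keep_prob) / keep_prob
        return activations * mask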
