Saturday 3:30 PM–4:00 PM in C01

Transfer Learning Using Tensorflow and Keras

Amita Kapoor

Audience level:
Intermediate

Description

Humans have a great ability to generalize: we can efficiently apply the knowledge we learned in classrooms to real-world problems. Transfer learning provides a similar capability to artificial neural networks. This talk introduces "Transfer Learning" and shows how it can be used to reduce training times across different problem domains, illustrated using Keras with a TensorFlow backend.

Abstract

The talk introduces the technique of "Transfer Learning", described by Andrew Ng, a leading expert in machine learning, as the "next driver of ML commercial success". In the last decade, deep neural architectures trained with supervised learning have been the major source of machine learning's success. These deep neural networks have two key requirements: very large labelled datasets and computationally efficient hardware (GPUs). While such large datasets exist for some tasks and domains, in most cases the data are proprietary or expensive to obtain. Transfer Learning, the technique of reusing models pre-trained on one problem domain for another, makes it possible to use DNNs even when the target dataset is small. The outline of the talk is:

Transfer learning

What is Transfer Learning?

Why Transfer Learning?

Applications of Transfer Learning

Learning from Simulation

Adapting to new problem domains

Transfer Learning Scenarios

Pre-trained models as feature extractors

Fine tuning the pre-trained model

Partially training pre-trained models
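The three scenarios above can be sketched by toggling Keras' `trainable` flags on a pre-trained base. This is a minimal sketch, assuming a VGG16 base from `tf.keras` and a hypothetical 10-class target task; `weights=None` is used only so the sketch builds offline (in practice you would pass `weights="imagenet"` to load the pre-trained weights):

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Build the convolutional base without its ImageNet classifier head.
# (weights=None here to avoid the weight download; use weights="imagenet"
# for actual transfer learning.)
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))

# Scenario 1 -- fixed feature extractor: freeze the whole base
#               (base.trainable = False).
# Scenario 2 -- fine-tuning: leave base.trainable = True and train the
#               whole network with a low learning rate.
# Scenario 3 -- partial training (executed below): freeze everything
#               except the last convolutional block.
for layer in base.layers:
    layer.trainable = layer.name.startswith("block5")

# In every scenario a fresh classifier head is trained on top of the base.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(10, activation="softmax"),  # hypothetical 10-class target
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
```

Only the new head and the unfrozen `block5` layers contribute trainable weights, so gradient updates leave the earlier, more generic feature detectors untouched.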

Applying Transfer Learning using Keras

Using VGG16 pre-trained model for MNIST data

Using Xception pre-trained model to identify dog breeds
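A hedged sketch of the MNIST feature-extraction step: VGG16 expects 3-channel inputs of at least 32x32 pixels, so the 28x28 grayscale digits must be replicated across channels and resized first. Random arrays stand in for MNIST images so the sketch runs without downloading the dataset, and `weights=None` likewise skips the ImageNet weight download (use `weights="imagenet"` for real transfer learning):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import VGG16

x = np.random.rand(4, 28, 28).astype("float32")  # stand-in for MNIST digits
x = np.stack([x, x, x], axis=-1)                 # grayscale -> 3 channels
x = tf.image.resize(x, (32, 32)).numpy()         # 28x28 -> 32x32

base = VGG16(weights=None, include_top=False, input_shape=(32, 32, 3))
base.trainable = False                           # frozen feature extractor

# Each digit is mapped to a 512-dimensional feature vector, on which a
# small classifier can then be trained.
features = base.predict(x, verbose=0)            # shape (4, 1, 1, 512)
```

The same pattern applies to the dog-breed example, swapping in `tf.keras.applications.Xception` (which expects at least 71x71 inputs) as the frozen base.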

Further Research

One Shot Learning

Zero Shot Learning
