Recurrent Neural Networks extend deep learning to many problems involving sequential data, such as translation, captioning, summarization, and time-series prediction and classification. This talk will give a brief overview of how Recurrent Neural Networks work before showing you how to create and train them in Python.
Recurrent Neural Networks are a powerful class of models that extend neural networks to sequential data. They have recently become a key tool in the Deep Learning community for achieving state-of-the-art performance on tasks such as captioning, translation, and summarization. This talk will provide a brief introduction to the terminology of recurrent neural networks and then focus on how to create and train them in Python. I will show non-trivial network implementations using the most popular Python deep learning libraries (Keras, Lasagne, Blocks, Chainer) and compare their performance and extensibility.
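To give a flavor of what "create and train" looks like in one of these libraries, here is a minimal sketch of an LSTM-based sequence classifier using the Keras Sequential API. The data, shapes, and hyperparameters are illustrative placeholders and not taken from the talk itself.

# Minimal sketch: an LSTM classifier over synthetic integer sequences,
# using the Keras Sequential API. All sizes are illustrative placeholders.
import numpy as np
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

# Synthetic data: 1000 sequences of 20 token ids, with binary labels.
x_train = np.random.randint(0, 5000, size=(1000, 20))
y_train = np.random.randint(0, 2, size=(1000,))

model = Sequential([
    Embedding(input_dim=5000, output_dim=32),  # map token ids to dense vectors
    LSTM(32),                                  # recurrent layer reads the sequence
    Dense(1, activation="sigmoid"),            # binary prediction from final state
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=32, epochs=2)

The other libraries covered in the talk (Lasagne, Blocks, Chainer) express the same recurrent building blocks with different APIs and levels of abstraction, which is exactly the comparison the talk walks through.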