Saturday 14:30–15:15 in LG6

How distributed representations make chatbots work (at least a bit)

Nils Hammerla

Audience level:
Intermediate

Description

Chatbots are all the rage right now, like it or not. In this talk I want to take a look under the hood and show you how simple it is today to incorporate sophisticated language understanding into your applications. The tool of choice is distributed representations, also known as word vectors, which allow us to answer the most crucial question: which of these words mean similar things?

Abstract

It is hard to go anywhere on the web these days without encountering chatbots and other natural language interfaces. But how do these bots actually understand what you say? It turns out it can be boiled down to a simple recipe: you need to know which words mean similar things! This sounds straightforward, but efficient ways of doing this, namely distributed representations, have only emerged in the past few years of machine learning research. They have immense potential and we are only beginning to realise what we can do with them.
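To make the "which words mean similar things" recipe concrete, here is a minimal sketch of comparing word vectors by cosine similarity. The three-dimensional vectors below are made up for illustration; real embeddings such as word2vec or GloVe are typically 100-300 dimensions and learned from large text corpora.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: close to 1.0 means
    # the words point in a similar direction, close to 0.0 means unrelated.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical toy "word vectors" -- not from a real model.
vectors = {
    "doctor": np.array([0.9, 0.8, 0.1]),
    "nurse":  np.array([0.8, 0.9, 0.2]),
    "banana": np.array([0.1, 0.2, 0.9]),
}

print(cosine_similarity(vectors["doctor"], vectors["nurse"]))   # high (~0.99)
print(cosine_similarity(vectors["doctor"], vectors["banana"]))  # low (~0.30)
```

With a pretrained model in place of the toy dictionary, the same two-line comparison is all it takes to rank candidate words or user utterances by meaning.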

In this talk I want to outline how distributed representations are used at Babylon Health, and how everyone can incorporate sophisticated language understanding into their applications with just a few lines of Python. Furthermore, I will give a glimpse of the research we are doing, some of which we just published in a paper at ICLR.
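As a taste of the "few lines of Python" flavour, here is a hedged sketch of the classic word-vector analogy trick (king - man + woman lands near queen). The toy vectors are invented for illustration; a real application would load pretrained embeddings instead (for example via gensim's KeyedVectors).

```python
import numpy as np

# Hypothetical toy vectors standing in for a pretrained embedding model.
vectors = {
    "king":  np.array([0.9, 0.9, 0.1]),
    "queen": np.array([0.9, 0.1, 0.9]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def most_similar(target, exclude):
    # Return the vocabulary word whose vector is closest (by cosine) to
    # `target`, skipping the words used to build the query.
    best, best_sim = None, -1.0
    for word, vec in vectors.items():
        if word in exclude:
            continue
        sim = np.dot(target, vec) / (np.linalg.norm(target) * np.linalg.norm(vec))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

# "king" - "man" + "woman" should land near "queen".
query = vectors["king"] - vectors["man"] + vectors["woman"]
print(most_similar(query, exclude={"king", "man", "woman"}))  # queen
```

The point is not the toy arithmetic but the shape of the code: once the vectors exist, semantic queries are a dot product and a loop.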

A simple outline of the talk:

- What are distributed representations and how do they work?
- What are they good for, and what are they bad at?
- Some examples of how we use them at Babylon Health
- Some interesting results from our research

I will do my best to give practical examples throughout the talk and will provide Python snippets in an accompanying repository, which should make this talk accessible to a wide audience!