Friday November 12, 15:50 – 16:25 in Auditorium

Optimal on Paper, Broken in Reality

Vincent D. Warmerdam

Prior knowledge:
Previous knowledge expected
machine learning, grid search, embeddings

Summary

GridSearch can be really distracting. It sounds like a good idea, but does that "optimal" metric really reflect reality? And if it doesn't, how bad is that?

In this talk, I will share some anecdotes about state-of-the-art models, and about benchmarking in general, that will hopefully make you rethink your methodology.

Description

In particular, I'll discuss:

  • a general method to calm down optimistic claims of optimal results
  • how easy it is to draw the wrong lesson when running a grid-search (see the sketch after this list)
  • how data is a much better proxy for interpretation than hyperparameters
  • how easy it is to find wrong labels in public datasets
  • why sentiment models are generally a bit strange
  • tricks to deal with some of these issues
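
To make the grid-search point concrete, here is a minimal sketch (an illustration added to this listing, not the speaker's code): it runs scikit-learn's GridSearchCV on a synthetic dataset and prints each setting's mean cross-validation score alongside its standard deviation. The dataset, model, and parameter grid are assumptions chosen purely for illustration.

    # Hypothetical sketch: compare the "optimal" grid-search result
    # against the spread of the alternatives.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV

    # Toy data and parameter grid, for illustration only.
    X, y = make_classification(n_samples=500, n_features=20, random_state=0)
    grid = GridSearchCV(
        LogisticRegression(max_iter=1000),
        param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
        cv=5,
    )
    grid.fit(X, y)

    means = grid.cv_results_["mean_test_score"]
    stds = grid.cv_results_["std_test_score"]
    best = grid.best_index_

    # Print each setting's mean CV score with its spread across folds.
    for params, mean, std in zip(grid.cv_results_["params"], means, stds):
        marker = " <- best" if mean == means[best] else ""
        print(f"{params}: {mean:.3f} +/- {std:.3f}{marker}")

When the "best" mean sits within one standard deviation of its neighbours, the grid-search ranking is weaker evidence than the word "optimal" suggests.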

There will also be a demo and an announcement of a new Python project.