Let's face it, NLP has become somewhat boring: just take BERT (or some other Sesame Street friend) and fine-tune it for your task. But what if you have multiple tasks to deal with? Or even worse, what if you need to support multiple languages?
This talk will show you how you can deal with that using Adapters -- small neural nets capable of big things when combined with large pre-trained language models!
Large pre-trained language models (such as BERT and other Sesame Street friends) have been ruling the world of NLP in the past year. One of their great advantages is that a single pre-trained model can be fine-tuned for many tasks (intent classification, similarity prediction, or even question answering). But what do you do if you need to deal with multiple tasks at the same time, not to mention multiple languages? Do you just end up with multiple large models for multiple tasks? Or perhaps even one for each language/task combination?
This talk will show you how Adapters, a very simple addition to already pre-trained language models, can help us deal with this situation while saving a ton of training time and simplifying deployment, since the pre-trained model's weights stay fixed (as opposed to the full fine-tuning scenario).
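To give a flavour of what "a very simple addition" means: an adapter is typically a small bottleneck network with a residual connection, inserted into the layers of the frozen pre-trained model. The sketch below is a minimal, framework-free illustration in NumPy (all dimensions, names, and the initialization scheme are illustrative, loosely following the Houlsby et al. adapter design -- it is not the API of any particular library):

```python
import numpy as np

rng = np.random.default_rng(0)

hidden_dim = 768      # hidden size of the (frozen) pre-trained model, e.g. BERT-base
bottleneck_dim = 64   # adapter bottleneck -- only these weights get trained per task

# Trainable adapter parameters (tiny compared to the full model)
W_down = rng.normal(0, 0.02, size=(hidden_dim, bottleneck_dim))
b_down = np.zeros(bottleneck_dim)
W_up = rng.normal(0, 0.02, size=(bottleneck_dim, hidden_dim))
b_up = np.zeros(hidden_dim)

def adapter(h):
    """Down-project, apply a non-linearity, up-project, add the residual."""
    z = np.maximum(h @ W_down + b_down, 0.0)  # ReLU bottleneck
    return h + (z @ W_up + b_up)              # residual keeps the frozen output intact

h = rng.normal(size=(4, hidden_dim))  # e.g. 4 token representations from a frozen layer
out = adapter(h)
print(out.shape)  # (4, 768) -- same shape, so it slots between existing layers

# Rough parameter count: 2 * 768 * 64 + 64 + 768, i.e. under 100k per adapter,
# versus ~110M parameters for the whole of BERT-base.
n_params = W_down.size + b_down.size + W_up.size + b_up.size
print(n_params)
```

Because only the adapter weights differ between tasks (or languages), supporting a new task means shipping on the order of 100k extra parameters rather than a full copy of the model.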
The presentation is a great fit for anyone interested in NLP, especially when multiple tasks and/or languages need to be considered. Every attendee will walk away with a better understanding of the current challenges in this space and a high-level overview of one particular solution (Adapters), along with a playbook for leveraging it right after the presentation ends.
You can find the slides at https://mareksuppa.com/