Project Nirvana: A Podcast Summariser

Pranav Kompally

Prior knowledge:
No previous knowledge expected


Project Nirvana is an abstractive summariser built on the Pegasus backbone. In practice, I fine-tuned over 10 different versions of Pegasus (available on HuggingFace) on five to six datasets. The talk discusses the challenges of finding the right models and datasets, the speech-to-text tooling, and the various hurdles of deploying the whole system entirely on open-source technology.
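One hurdle with podcast transcripts in particular is length: Pegasus variants accept roughly 1,024 input tokens, far less than an hour-long episode produces. A common workaround (an illustration here, not necessarily the exact approach used in the project) is to split the transcript into overlapping windows and summarise each chunk; a minimal sketch, using word count as a rough proxy for tokens:

```python
def chunk_transcript(text, max_words=400, overlap=50):
    """Split a long transcript into overlapping word-window chunks small
    enough for a summariser's input limit. Requires overlap < max_words.
    Overlap preserves context across chunk boundaries."""
    words = text.split()
    step = max_words - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break  # last window already reached the end of the transcript
    return chunks
```

Each chunk can then be fed to the model independently, and the per-chunk summaries concatenated (or summarised once more) to produce the final episode summary.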


The talk revolves around how transformer models such as Pegasus can be cleverly fine-tuned on publicly available datasets in Google Colab with limited resources, yet achieve near state-of-the-art results, beating them in a few cases. In simple terms, the main aim of the talk is to give insight into how we came up with the idea of building a podcast summariser, and the challenges we faced dealing with audio-to-text conversion and finding the right, diverse datasets. We'll then discuss how quantisation helped us deploy the model with its performance intact while cutting the heavy memory consumption that transformers are known for. A live demo will be shown, and a personalised link will be shared in real time for attendees to play with and explore.
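The memory saving from quantisation comes from storing weights in 8-bit integers instead of 32-bit floats, a 4x reduction. A framework-independent sketch of the core idea behind post-training symmetric int8 quantisation (an illustration of the technique, not the project's actual deployment code, which would typically use a library routine):

```python
def quantize_int8(weights):
    """Map float weights onto the int8 range [-127, 127] with a single
    shared scale, so that each weight w is approximated by q * scale.
    Assumes at least one weight is nonzero."""
    scale = max(abs(w) for w in weights) / 127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [v * scale for v in q]
```

The rounding error per weight is bounded by half the scale, which is why accuracy typically survives quantisation while the stored model shrinks to a quarter of its original size.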