Since the ImageNet competition was won by a deep neural network in 2012, Deep Learning has dominated the fields of machine learning and AI. However, Deep Learning still faces many unsolved problems. In this talk, we will discuss what capabilities Deep Learning lacks, and how to make Deep Learning more robust by combining it with Symbolic AI.
The talk is mainly inspired by Gary Marcus, a well-known critic of Deep Learning. The works by Gary Marcus that we will cover in the talk include: - Deep Learning: A Critical Appraisal - Rebooting AI: Building Artificial Intelligence We Can Trust - The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence
To understand the core problems of Deep Learning, we will introduce the concept of Deep Learning, and provide some examples that illustrate where Deep Learning fails (e.g., GPT-2 or GPT-3). The three main drawbacks of (pure) Deep Learning we want to focus on are: - Greediness: Deep Learning often requires a massive amount of data - Opaqueness: Human-style explanation of Deep Learning systems is hard - Brittleness: Even powerful Deep Learning systems are often easily fooled.
After examining the issues of Deep Learning, we will explore a road to improving it: Neural-Symbolic AI methods. Through some examples, we will see the power of Deep Learning combined with the reasoning capabilities and cognitive models provided by Symbolic AI.
蘇嘉冠 (Su JiaKuan) is an AI engineer at a startup. In past years, he focused on Deep Learning and its related applications. Now, he is a confused guy who wants to escape the restrictions of the Deep Learning paradigm.