There is a long-running debate over whether predictions made by machine learning algorithms can be trusted, and nearly everyone holds some stance on it. That stance deeply shapes how we choose to develop AI. If we look more closely, however, there are two major problems that demand top priority, and how we address them could dictate how the AI systems around us are deployed.
Over the past five years, public interest in machine learning has grown dramatically, and understandably so, given the remarkable things we have been able to achieve with deep learning for image classification, image segmentation, and, yes, GANs. However, two big problems sit at the heart of these technological advancements. First, can we explain the predictions these machines are making? And second, are they actually learning, or are they blindly pattern-matching?
In this talk, we will discuss why we need to think about interpretability, briefly touching on both sides of the story: the current limitations and failures in executing our existing prediction algorithms at scale, and the interpretable systems we currently have in place, along with a possible benchmark recommendation that could be a first step toward ensuring uniformity and transparency across the field. In the second half, we will turn to learning systems, with a quick recap of progress on AutoML agents, leaving the listener with a clear visual path to possible next steps and open problems in the field of learning systems.