In this talk, I will attempt to demystify the core ideas behind graph deep learning, using lots of pictures and a minimum of equations.
This talk will follow a four-part structure. First, we will introduce graphs and how they can be represented as arrays. Second, we will walk through what message passing is, and how it, too, has a linear algebra interpretation. Third, we will see how we can embed the message passing operation inside a neural network, thus giving us a message passing neural network, and how other network architectures arise from the same idea. Finally, we will walk through learning tasks that involve graphs. In bullet point form:

- Graphs and their array representations
- Message passing and its linear algebra interpretation
- Message passing neural networks and related architectures
- Learning tasks that involve graphs
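To give you a taste of the first two parts, here is a minimal numpy sketch of the array view of a graph and the linear algebra view of message passing. The particular graph, feature matrix, and weight matrix here are made up for illustration; they are not taken from the talk's examples.

```python
import numpy as np

# A small undirected graph on 4 nodes as an adjacency matrix:
# A[i, j] == 1 means nodes i and j share an edge.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
])

# One feature vector per node (4 nodes, 3 features each).
X = np.random.default_rng(0).normal(size=(4, 3))

# Message passing as linear algebra: multiplying by the adjacency
# matrix sums each node's neighbours' features into that node.
messages = A @ X

# A bare-bones "GNN layer" in the same spirit: message passing
# followed by a learned linear transform W and a nonlinearity.
W = np.random.default_rng(1).normal(size=(3, 3))
H = np.tanh(messages @ W)
print(H.shape)  # (4, 3): a new 3-dimensional embedding per node
```

Everything in the talk builds on this one idea: neighbourhood aggregation is just a matrix multiply away.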
The goal here is to demystify what goes on behind the scenes in graph neural network layers... and by doing so, free you from the confines of black-box neural network layer APIs. If you're already experienced with neural networks but have not yet written a GNN, this talk should give you enough ideas to implement your own GNN layers. For everyone else, you should walk away feeling a little better informed about the guts behind the hype!