- Prior knowledge:
  - No previous knowledge expected

In this talk, I will attempt to demystify the core ideas behind graph deep learning with lots of pictures and a minimum number of equations.

This talk follows a four-part structure. Firstly, we will introduce graphs and how they can be represented as arrays. Then, we will walk through what message passing is, and how it, too, has a linear algebra interpretation. Thirdly, we will see how we can embed the message passing operation inside a neural network, giving us a message passing neural network (MPNN); we'll also see how other graph neural network architectures arise as variations on the same theme. Finally, we will walk through learning tasks that involve graphs. In bullet point form:

- Graphs, networks, and their array representations
  - Introduction to graphs
  - How graphs can be represented as arrays
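
As a small taste of this first part: the array representation in question is the adjacency matrix. Here is a minimal sketch (the use of NumPy and this particular toy graph are my illustration, not something fixed by the talk):

```python
import numpy as np

# A small undirected graph on 4 nodes, connected in a cycle: 0-1, 1-2, 2-3, 3-0.
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
n_nodes = 4

# Adjacency matrix A: A[i, j] = 1 if an edge connects nodes i and j, else 0.
A = np.zeros((n_nodes, n_nodes), dtype=int)
for i, j in edges:
    A[i, j] = 1
    A[j, i] = 1  # undirected graph, so A is symmetric

# A is now:
# [[0 1 0 1]
#  [1 0 1 0]
#  [0 1 0 1]
#  [1 0 1 0]]
```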

- Message passing
  - Definition of the message passing operation
  - Message passing operators beyond the adjacency matrix
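
The linear algebra interpretation mentioned above fits in a few lines: one round of message passing, where each node sums its neighbours' feature vectors, is just a matrix multiplication with the adjacency matrix. A sketch in NumPy (the graph and features are my own toy example):

```python
import numpy as np

# The 4-node cycle graph from before, as an adjacency matrix.
A = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
])

# Node features: one 2-dimensional feature vector per node.
X = np.arange(8.0).reshape(4, 2)  # rows: [0,1], [2,3], [4,5], [6,7]

# One round of message passing: each node receives the sum of its
# neighbours' features. In linear algebra terms, a matrix multiplication.
messages = A @ X
# messages[0] == X[1] + X[3] == [8., 10.]
```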

- Embedding message passing in neural networks
  - How MPNNs, graph Laplacian networks, and graph attention networks are variations on a theme
  - The link between message passing and convolution
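
To preview the third part: once message passing is a matrix multiplication, embedding it in a neural network layer means sandwiching it with a learned weight matrix and a nonlinearity. A hedged sketch, assuming sum aggregation with self-loops and a ReLU (one common variation among many; the weights here are random stand-ins for learned parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

# The 4-node cycle graph again.
A = np.array([
    [0., 1., 0., 1.],
    [1., 0., 1., 0.],
    [0., 1., 0., 1.],
    [1., 0., 1., 0.],
])
X = rng.normal(size=(4, 2))  # input node features
W = rng.normal(size=(2, 3))  # weight matrix; in a real network, learned

# One GNN layer: add self-loops so each node keeps its own features,
# aggregate neighbour features, transform with W, apply a nonlinearity.
A_hat = A + np.eye(4)
H = np.maximum(0.0, A_hat @ X @ W)  # ReLU; H holds new node embeddings
```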

- Learning tasks that involve graphs
  - Graph-level learning
  - Node label prediction
  - Edge presence/absence prediction
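
These three tasks differ mainly in what you do with the node embeddings that message passing produces: classify each row for node labels, pool the rows for graph-level predictions, or score pairs of rows for edge prediction. A sketch of the latter two (sum pooling and a sigmoid of a dot product are my illustrative choices, not the only options):

```python
import numpy as np

# Suppose H holds per-node embeddings produced by message passing layers.
H = np.array([
    [1.0, 0.5],
    [0.2, 1.3],
    [0.9, 0.1],
])

# Graph-level learning: pool node embeddings into one vector per graph,
# which can then be fed to a standard classifier or regressor.
graph_embedding = H.sum(axis=0)

# Edge presence/absence: score a candidate edge (i, j) by the dot product
# of the two node embeddings, squashed into a probability.
def edge_score(i, j):
    return 1.0 / (1.0 + np.exp(-H[i] @ H[j]))
```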

The goal here is to demystify what goes on behind the scenes in graph neural network layers... and, by doing so, free you from the confines of black-box neural network layer APIs. If you're already experienced with neural networks but have never written a GNN before, this talk should give you enough ideas to implement your own GNN layers. For everyone else, you should walk away feeling a little better informed about the guts behind the hype!