Saturday October 30 5:00 PM – Saturday October 30 5:30 PM in Talks II

An attempt at demystifying graph deep learning

Eric Ma

Prior knowledge:
No previous knowledge expected

Summary

In this talk, I will attempt to demystify the core ideas behind graph deep learning with lots of pictures and a minimum of equations.

Description

This talk will follow a four-part structure. First, we will introduce graphs and how they can be represented as arrays. Second, we will walk through what message passing is and how it, too, has a linear algebra interpretation. Third, we will see how we can embed the message passing operation inside a neural network, giving us a message passing neural network, and how other graph network architectures arise as variations on that theme. Finally, we will walk through the learning tasks that involve graphs. In bullet point form (each part is illustrated with a short code sketch after this outline):

  1. Graphs, networks, and their array representations
    • Introduction to graphs
    • How graphs can be represented as arrays
  2. Message passing
    • Definition of the message passing operation
    • Message passing operators beyond the adjacency matrix
  3. Embedding message passing in neural networks
    • How MPNNs, graph Laplacian networks, and graph attention networks are variations on a theme
    • The link between message passing and convolution
  4. Learning tasks that involve graphs
    • Graph-level learning
    • Node label prediction
    • Edge presence/absence prediction
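
To make the outline concrete, here are a few short sketches in Python with NumPy. They are my own minimal illustrations of the ideas above, not code from the talk, and each sketch builds on the one before it. First, a small undirected graph and its array representation, the adjacency matrix:

    import numpy as np

    # A small undirected graph on 4 nodes, written as an edge list.
    edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

    # Its adjacency matrix: A[i, j] == 1 iff nodes i and j share an edge.
    n_nodes = 4
    A = np.zeros((n_nodes, n_nodes))
    for i, j in edges:
        A[i, j] = 1
        A[j, i] = 1  # undirected graph, so the matrix is symmetric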
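
Next, message passing. With node features stacked in a matrix X (one row per node), one round of message passing, in which each node sums its neighbours' features, is just a matrix multiplication. Degree-scaling the adjacency matrix is one simple example of an operator beyond the raw adjacency matrix:

    # Node feature matrix: one row of 8 features per node.
    X = np.random.randn(n_nodes, 8)

    # One round of message passing: each node sums its neighbours' features.
    messages = A @ X

    # An operator beyond the raw adjacency matrix: scaling each row by the
    # node's degree turns the neighbourhood sum into a neighbourhood mean.
    D_inv = np.diag(1.0 / A.sum(axis=1))
    mean_messages = D_inv @ A @ X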
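
Third, embedding message passing inside a neural network. A minimal sketch of one such layer, assuming the common aggregate-transform-activate pattern; the weight shape and the choice of ReLU here are illustrative assumptions, not a prescription:

    def message_passing_layer(A, X, W):
        """One message passing layer: aggregate, transform, activate."""
        aggregated = A @ X                  # message passing step
        transformed = aggregated @ W        # learned linear transform
        return np.maximum(transformed, 0)   # ReLU nonlinearity

    # Illustrative weights: map 8 input features to 16 hidden features.
    W = np.random.randn(8, 16)
    H = message_passing_layer(A, X, W)      # node embeddings, shape (4, 16)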
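
Finally, the three families of learning tasks differ mainly in where the prediction head attaches: pool node embeddings into a single vector for graph-level learning, keep one embedding per node for node label prediction, and score pairs of node embeddings for edge presence/absence prediction. The sum pooling and dot-product scoring below are simple illustrative choices:

    # Graph-level learning: pool node embeddings into one graph vector.
    graph_embedding = H.sum(axis=0)     # shape (16,)

    # Node label prediction: one score per node.
    w_node = np.random.randn(16)
    node_scores = H @ w_node            # shape (4,)

    # Edge presence/absence prediction: score a node pair, here by a
    # dot product of the two nodes' embeddings.
    edge_score = H[0] @ H[3]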

The goal here is to demystify what goes on behind the scenes in graph neural network layers... and, by doing so, to free you from the confines of black-box neural network layer APIs. If you're already experienced with neural networks but have not written a GNN before, this talk should give you enough ideas to implement your own GNN layers. For everyone else, you should walk away feeling a little better informed about the guts behind the hype!