Graph Machine Learning

We presented **GraphGPS** about a year ago, and it is pleasing to see many ICML papers building on our framework and expanding graph transformer (GT) capabilities even further.

**Exphormer** by Shirzad, Velingker, Venkatachalam et al. adds a missing piece of graph-motivated sparse attention to GTs: instead of BigBird or Performer (originally designed for sequences), Exphormer's attention builds on 1-hop edges, virtual nodes (connected to all nodes in a graph), and the neat idea of expander edges. Expander graphs have constant degree and are shown to approximate fully connected graphs. With all components combined, attention costs *O(V + E)* instead of *O(V²)*. This allows Exphormer to outperform GraphGPS almost everywhere and to scale to really large graphs of up to 160k nodes. Amazing work, with every chance of becoming the standard sparse attention mechanism in GTs (a small sketch of the sparse pattern appears at the end of this section).

Concurrently with graph transformers, expander graphs can already be used to enhance the performance of any MPNN architecture, as shown in Expander Graph Propagation by *Deac, Lackenby, and Veličković*.

In a similar vein, Cai et al. show that MPNNs with virtual nodes can approximate linear Performer-like attention, such that even classic GCN and GatedGCN equipped with virtual nodes reach pretty much SOTA performance on long-range graph tasks (we released the LRGB benchmark last year exactly for measuring the long-range capabilities of GNNs and GTs). A minimal virtual-node sketch also appears below.

[Read the full post on Towards Data Science](https://towardsdatascience.com/graph-machine-learning-icml-2023-9b5e4306a1cc)
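
To make the Exphormer idea concrete, here is a minimal sketch (not the authors' code) of how the three ingredients combine into a sparse attention pattern: local 1-hop edges, constant-degree expander edges, and a virtual node connected to everything. The function name `exphormer_attention_pairs` and the use of a random regular graph as the expander are illustrative assumptions; the attention computation itself is omitted.

```python
# Sketch of an Exphormer-style sparse attention pattern:
# (i) the graph's own 1-hop edges, (ii) a constant-degree expander graph,
# and (iii) a virtual node attending to/from every real node.
import networkx as nx

def exphormer_attention_pairs(num_nodes, graph_edges, expander_degree=4, seed=0):
    pairs = set()

    # (i) local attention along the original 1-hop edges (both directions)
    for u, v in graph_edges:
        pairs.add((u, v))
        pairs.add((v, u))

    # (ii) expander edges: a random d-regular graph is an expander with
    # high probability, giving long-range shortcuts at constant degree
    expander = nx.random_regular_graph(expander_degree, num_nodes, seed=seed)
    for u, v in expander.edges():
        pairs.add((u, v))
        pairs.add((v, u))

    # (iii) one virtual node (index num_nodes) connected to every real node
    virtual = num_nodes
    for v in range(num_nodes):
        pairs.add((virtual, v))
        pairs.add((v, virtual))

    return pairs  # |pairs| grows as O(V + E), not O(V^2)

# Toy usage: a 6-node cycle graph
edges = {(i, (i + 1) % 6) for i in range(6)}
print(len(exphormer_attention_pairs(6, edges)))
```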
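
And the virtual-node trick discussed by Cai et al. amounts to a small graph augmentation applied before running any MPNN. Below is a self-contained sketch in plain PyTorch; `add_virtual_node` is a hypothetical helper written for illustration, not part of any library, and the MPNN itself is left out.

```python
# Minimal sketch of the virtual-node augmentation: add one extra node
# connected to every real node, giving a plain MPNN (e.g. GCN) a global
# information pathway akin to linear attention.
import torch

def add_virtual_node(edge_index, num_nodes):
    """edge_index: [2, E] tensor of directed edges; returns (augmented edges, new node count)."""
    vn = num_nodes  # index of the new virtual node
    nodes = torch.arange(num_nodes)
    vn_col = torch.full((num_nodes,), vn)
    # edges real -> virtual and virtual -> real, appended to the originals
    extra = torch.cat([
        torch.stack([nodes, vn_col]),   # real -> virtual
        torch.stack([vn_col, nodes]),   # virtual -> real
    ], dim=1)
    return torch.cat([edge_index, extra], dim=1), num_nodes + 1

# Toy usage: a 4-node path graph
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
aug_edges, n = add_virtual_node(edge_index, 4)
print(aug_edges.shape, n)  # torch.Size([2, 11]) 5
```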