This paper introduces a novel sheaf neural network architecture that operates directly on symmetric positive definite (SPD) manifolds to learn second-order geometric representations. By leveraging the Lie group structure of SPD manifolds, the method enables well-posed sheaf operators without Euclidean projection, allowing for the propagation of matrix-valued features. The proposed SPD-Sheaf network demonstrates superior expressiveness compared to Euclidean sheaves and achieves state-of-the-art performance on several MoleculeNet benchmarks, exhibiting robustness to increasing network depth.
SPD-SheafNets operate directly on matrix-valued features, capturing relationships between directions to learn richer geometric representations than standard GNNs, and achieve SOTA on molecular property prediction.
Graph neural networks face two fundamental challenges rooted in the linear structure of Euclidean vector spaces: (1) current architectures represent geometry through vectors (directions, gradients), yet many tasks require matrix-valued representations that capture relationships between directions, such as how atomic orientations covary in a molecule. These second-order representations are naturally captured by points on the manifold of symmetric positive definite (SPD) matrices. (2) Standard message passing applies shared transformations across edges. Sheaf neural networks address this via edge-specific transformations, but existing formulations remain confined to vector spaces and therefore cannot propagate matrix-valued features. We address both challenges by developing the first sheaf neural network that operates natively on the SPD manifold. Our key insight is that the SPD manifold admits a Lie group structure, enabling well-posed analogs of sheaf operators without projecting to Euclidean space. Theoretically, we prove that SPD-valued sheaves are strictly more expressive than Euclidean sheaves: they admit consistent configurations (global sections) that vector-valued sheaves cannot represent, which translates directly into richer learned representations. Empirically, our sheaf convolution transforms effectively rank-1 directional inputs into full-rank matrices encoding local geometric structure. Our dual-stream architecture achieves SOTA on 6/7 MoleculeNet benchmarks, with the sheaf framework providing consistent robustness to depth.
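To make the "no Euclidean projection" point concrete, here is a minimal NumPy sketch of one sheaf-style propagation step with SPD-valued node features. All names (`random_spd`, `A_uv`, the averaging aggregation) are illustrative assumptions, not the paper's actual operators; the sketch only demonstrates the underlying algebraic fact that an edge-specific congruence action X ↦ A X Aᵀ by an invertible A keeps features on the SPD manifold, so matrix-valued messages can be transported and aggregated without ever leaving it.

```python
import numpy as np

def random_spd(n, rng):
    """Sample a random SPD matrix as B @ B.T plus a small ridge."""
    B = rng.standard_normal((n, n))
    return B @ B.T + 1e-3 * np.eye(n)

def is_spd(X, tol=1e-8):
    """Check symmetry and positive definiteness via eigenvalues."""
    return bool(np.allclose(X, X.T) and np.linalg.eigvalsh(X).min() > tol)

rng = np.random.default_rng(0)
d = 3

# Two SPD node features and one edge-specific map (hypothetical stand-in
# for a learned restriction map; shifted by 2*I so it is invertible here).
X_u, X_v = random_spd(d, rng), random_spd(d, rng)
A_uv = rng.standard_normal((d, d)) + 2 * np.eye(d)

# Transport X_v along the edge by congruence, then blend with X_u.
# A convex combination of SPD matrices is SPD, so this toy aggregation
# (a stand-in for the paper's sheaf convolution) stays on the manifold.
transported = A_uv @ X_v @ A_uv.T
X_u_new = 0.5 * (X_u + transported)

assert is_spd(transported) and is_spd(X_u_new)
```

The key property used above: for invertible A, congruence preserves both symmetry and positive definiteness, which is what makes edge-wise transport of matrix-valued features well-posed, whereas a generic linear map on the flattened matrix would not.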