Quantizing equivariant GNNs no longer has to break symmetry: GAQ matches FP32 accuracy with W4A8 models, delivering a 2.39x speedup and a 4x memory reduction while cutting equivariance error by 30x.
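For readers unfamiliar with the W4A8 shorthand (4-bit weights, 8-bit activations), here is a minimal sketch of generic symmetric per-tensor quantization at those bit widths. This is illustrative only: the function names and NumPy setup are assumptions, and it does not implement GAQ or its equivariance-preserving machinery.

```python
import numpy as np

def quantize_symmetric(x, bits):
    """Symmetric per-tensor quantization to signed `bits`-bit integers."""
    qmax = 2 ** (bits - 1) - 1              # 7 for 4-bit, 127 for 8-bit
    scale = np.max(np.abs(x)) / qmax        # map the largest magnitude to qmax
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q, scale

def dequantize(q, scale):
    return q * scale

# W4A8: weights quantized to 4 bits, activations to 8 bits
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)   # weight matrix
a = rng.standard_normal(4).astype(np.float32)        # activation vector

qw, sw = quantize_symmetric(w, bits=4)
qa, sa = quantize_symmetric(a, bits=8)

# Integer-domain matmul, then a single rescale back to floating point
y = (qw @ qa) * (sw * sa)
err = np.max(np.abs(y - w @ a))             # small but nonzero rounding error
```

The appeal of the scheme is that the inner product runs entirely on low-bit integers; the two scales are folded into one multiply at the end. The open problem the blurb refers to is that this rounding generally does not commute with symmetry-group actions on equivariant features, which is what GAQ addresses.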
Human-robot collaboration gets a boost from a new hierarchical framework that uses decentralized MARL to explicitly integrate high-level reasoning with low-latency control.
A Transformer-based ranking model can boost e-commerce orders by 6.35% while halving latency, thanks to optimizations targeting feature sparsity and low label density.
A new open-vocabulary object detector outperforms existing models like Grounding DINO and T-Rex2 while using significantly less training data and eliminating manual data curation.