The paper introduces Nexusformer, a Transformer variant that replaces the linear Q/K/V projections with a novel Nexus-Rank layer, enabling nonlinear feature extraction and lossless structured growth. The architecture supports stable, inheritable scaling: new capacity can be injected without disrupting pretrained knowledge. Experiments show Nexusformer matches Tokenformer's perplexity with up to 41.5% less training compute during progressive scaling, and its growth dynamics are predictable via a derived geometric scaling law.
Forget training from scratch: Nexusformer lets you scale Transformers by nonlinearly expanding attention, inheriting knowledge and slashing compute by up to 41.5%.
Scaling Transformers typically necessitates training larger models from scratch, as standard architectures struggle to expand without discarding learned representations. We identify the primary bottleneck in the attention mechanism's linear projections, which strictly confine feature extraction to fixed-dimensional subspaces, limiting both expressivity and incremental capacity. To address this, we introduce Nexusformer, which replaces the linear $Q/K/V$ projections with a Nexus-Rank layer, a three-stage nonlinear mapping driven by dual activations in progressively higher-dimensional spaces. This design overcomes the linearity constraint and enables lossless structured growth: new capacity can be injected along two axes via zero-initialized blocks that preserve pretrained knowledge. Experiments on language modeling and reasoning benchmarks demonstrate that Nexusformer matches Tokenformer's perplexity using up to 41.5% less training compute during progressive scaling (240M to 440M parameters). Furthermore, our analysis of growth dynamics reveals that zero initialization induces a stable convergence trajectory, allowing us to derive a geometric scaling law that accurately predicts performance across expansion scales.
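The abstract specifies the Nexus-Rank layer only at a high level: three stages, two nonlinearities, and intermediate spaces of progressively higher dimension. A minimal PyTorch sketch under those assumptions follows; the class name `NexusRankSketch`, the stage widths, and the choice of GELU activations are illustrative guesses, not the paper's actual design.

```python
import torch
import torch.nn as nn

class NexusRankSketch(nn.Module):
    """Hedged sketch of a Nexus-Rank-style Q/K/V map: three linear stages
    with two ("dual") nonlinearities, lifting features through progressively
    wider spaces before projecting to the head dimension. Widths and
    activations here are assumptions, not the paper's specification."""

    def __init__(self, d_model: int, d_head: int, expand: int = 2):
        super().__init__()
        d_mid1 = expand * d_model   # first lift (assumed width)
        d_mid2 = expand * d_mid1    # second, higher-dimensional lift (assumed)
        self.stage1 = nn.Linear(d_model, d_mid1)
        self.stage2 = nn.Linear(d_mid1, d_mid2)
        self.stage3 = nn.Linear(d_mid2, d_head)
        self.act = nn.GELU()        # activation choice is an assumption

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model) -> (batch, seq, d_head)
        return self.stage3(self.act(self.stage2(self.act(self.stage1(x)))))
```

In place of a single linear projection, one such module would be instantiated per Q, K, and V stream; the nonlinear stages are what let feature extraction escape the fixed-dimensional subspace of a plain linear map.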
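The "lossless structured growth" idea can be illustrated on a single linear layer: appending zero-initialized rows and columns leaves the layer's outputs on existing features exactly unchanged at the moment of expansion. The helper below is a hedged sketch of that zero-init principle on one layer, not a reproduction of the paper's two-axis block scheme.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def grow_linear(layer: nn.Linear, new_in: int, new_out: int) -> nn.Linear:
    """Illustrative zero-initialized expansion of a linear layer along both
    axes. The pretrained weights occupy the top-left block; new rows and
    columns start at zero, so on inputs whose extra coordinates are zero
    the grown layer reproduces the original outputs exactly."""
    old_out, old_in = layer.weight.shape
    assert new_in >= old_in and new_out >= old_out
    grown = nn.Linear(new_in, new_out, bias=layer.bias is not None)
    grown.weight.zero_()
    grown.weight[:old_out, :old_in] = layer.weight  # preserve pretrained block
    if layer.bias is not None:
        grown.bias.zero_()
        grown.bias[:old_out] = layer.bias
    return grown
```

Because the new blocks start at zero, the grown network computes the same function as the pretrained one, so training resumes from the pretrained loss rather than from scratch; this is the mechanism behind "injecting capacity without disrupting pretrained knowledge."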
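The abstract does not state the closed form of the geometric scaling law. As a hedged illustration only, a "geometric" law over successive expansions might say the loss gap shrinks by a constant ratio per growth step; the symbols $\mathcal{L}_k$, $\mathcal{L}_\infty$, and $r$ below are placeholders of ours, not the paper's notation:

$$\mathcal{L}_k \;=\; \mathcal{L}_\infty + \left(\mathcal{L}_0 - \mathcal{L}_\infty\right) r^{k}, \qquad 0 < r < 1,$$

where $\mathcal{L}_k$ would denote the converged loss after the $k$-th zero-initialized expansion. A law of this shape would let one predict performance at larger expansion scales by fitting $r$ on early growth steps.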