This paper introduces two structured linear recurrent network (LRNN) architectures, Higher-order Linear Recurrent Units (H-LRU) and Block-Diagonal LRUs (BD-LRU), to improve state mixing and expressivity while maintaining computational efficiency. H-LRU generalizes first-order recurrence to higher orders, mixing multiple past states, while BD-LRU enables dense intra-block channel mixing. Experiments on synthetic sequence modeling and language modeling tasks demonstrate that BD-LRU matches or exceeds the performance of linear SSMs and LSTMs, and that H-LRU is the most parameter-efficient on a compression task, suggesting that the structure of state mixing is central to LRNN expressivity.
Forget scaling up: smarter state mixing in linear recurrent networks lets you match LSTMs and Mamba on sequence tasks, closing the expressivity gap without sacrificing efficiency.
Linear recurrent networks (LRNNs) and linear state space models (SSMs) promise computational and memory efficiency on long-sequence modeling tasks, yet their diagonal state transitions limit expressivity. Dense and nonlinear architectures (e.g., LSTMs), on the other hand, are provably more expressive but computationally costly. Here, we explore how expressivity in LRNNs can be increased via richer state mixing across time and channels while maintaining competitive efficiency. Specifically, we introduce two structured LRNN architectures: (i) Higher-order Linear Recurrent Units (H-LRU), which generalize first-order recurrence to higher orders, mixing multiple past states, and (ii) Block-Diagonal LRUs (BD-LRU), which enable dense intra-block channel mixing. Per-channel (H-LRU) or per-row (BD-LRU) L1-normalization of the selective gates stabilizes training and allows window/block sizes to be scaled. A parallel-scan implementation of the proposed architectures keeps throughput competitive with diagonal LRNNs for moderate orders (H-LRU) and block sizes (BD-LRU). On synthetic sequence modeling tasks, the performance of BD-LRU matches or exceeds that of linear SSMs (Mamba), low-rank LRNNs (DeltaNet), and LSTM baselines, while H-LRU is the most parameter-efficient on a compression task. Across both synthetic sequence modeling and language modeling, our results indicate that the structure of state mixing, rather than width alone, shapes the expressivity of LRNNs, offering a practical route to closing the efficiency-expressivity gap in linear sequence models.
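To make the two recurrence structures concrete, here is a minimal NumPy sketch of single-step updates for each. This is an illustrative reading of the abstract, not the paper's implementation: the function names (`hlru_step`, `bdlru_step`), the exact gate parameterization, and the use of absolute values before L1-normalization are all assumptions; the paper computes these updates with a parallel scan rather than step by step.

```python
import numpy as np

def hlru_step(past_states, gates, u_t):
    """H-LRU (sketch): order-K recurrence mixing the K most recent states.

    past_states: (K, D) stack of the K previous hidden states
    gates:       (K, D) selective gates (assumed input-dependent)
    u_t:         (D,)   current input projection
    """
    # Per-channel L1-normalization over the K lags, as described in the
    # abstract, keeps the recurrence stable as the order K grows.
    a = np.abs(gates) / (np.abs(gates).sum(axis=0, keepdims=True) + 1e-8)
    return (a * past_states).sum(axis=0) + u_t

def bdlru_step(h_prev, blocks, u_t, block_size):
    """BD-LRU (sketch): block-diagonal transition, dense mixing within blocks.

    h_prev: (D,)                       previous hidden state
    blocks: (D // block_size, bs, bs)  one dense transition matrix per block
    u_t:    (D,)                       current input projection
    """
    B, b = blocks.shape[0], block_size
    # Per-row L1-normalization keeps each block's transition contractive,
    # stabilizing training for larger block sizes.
    A = np.abs(blocks) / (np.abs(blocks).sum(axis=-1, keepdims=True) + 1e-8)
    h_blocks = h_prev.reshape(B, b)
    return np.einsum('bij,bj->bi', A, h_blocks).reshape(-1) + u_t
```

Setting `K = 1` in `hlru_step` recovers a standard diagonal (first-order) LRU, and `block_size = 1` in `bdlru_step` likewise degenerates to a diagonal transition, which is one way to see both architectures as strict generalizations of the diagonal LRNN.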