The paper introduces NerVE, a framework for analyzing the eigenspectrum dynamics of feed-forward networks (FFNs) in LLMs using metrics like Spectral Entropy, Participation Ratio, Eigenvalue Early Enrichment, and Jensen-Shannon divergence. It reveals that FFN nonlinearities reinject variance across eigenmodes, impacting latent dimension utilization, and that optimizer geometry significantly modulates this reinjection. Validated across various model scales, architectures, and optimizers, NerVE identifies stable spectral signatures correlating with generalization ability, offering insights for architectural and optimizer design.
LLM feed-forward networks have hidden spectral signatures that predict generalization and respond predictably to design choices, opening the door to more principled architecture and optimizer selection.
We introduce NerVE, a unified eigenspectral framework for understanding how feed-forward networks (FFNs) in large language models (LLMs) organize and regulate information flow in high-dimensional latent space. Although FFNs dominate the parameter budget, their high-dimensional dynamics remain poorly understood. NerVE addresses this gap through lightweight, memory-efficient tracking of eigenspectrum dynamics via four complementary metrics: Spectral Entropy (dispersion), Participation Ratio (effective dimensionality), Eigenvalue Early Enrichment (top-heaviness), and Jensen-Shannon divergence (distributional shifts). Our key insight is that FFN nonlinearities reinject variance across eigenmodes, fundamentally governing latent dimension utilization, and that optimizer geometry strongly modulates the extent of this reinjection. We validate NerVE across model scales and across diverse architectural and optimizer configurations, each of which shapes FFN dynamics in a distinct way: normalization schemes control variance flow; FFN weight geometries constrain the latent space; positional encodings and activation functions regulate information flow; and optimizer choices redistribute effective capacity across depth. Across these settings, NerVE consistently recovers stable spectral signatures that correlate with a model's generalization ability and respond predictably to design choices. The framework generalizes beyond Transformers to MLP-Mixer architectures, offering actionable guidance for architectural and optimizer selection beyond trial-and-error.
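The abstract's four metrics are all statistics of a normalized eigenvalue distribution, typically taken from the covariance of layer activations. The sketch below shows one common way to compute them in NumPy; the function names, the use of activation covariance, and the top-10% cutoff for Eigenvalue Early Enrichment are assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np

def spectral_metrics(X):
    """Eigenspectrum summary metrics for a batch of FFN activations
    X of shape (n_samples, d). Conventions here are standard but
    assumed; the paper's precise formulas may differ."""
    # Eigenvalues of the activation covariance, clipped to be
    # non-negative and normalized into a probability distribution.
    eig = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    eig = np.clip(eig, 0.0, None)
    p = eig / eig.sum()
    p_nz = p[p > 0]

    # Spectral Entropy: dispersion of variance across eigenmodes.
    entropy = -np.sum(p_nz * np.log(p_nz))

    # Participation Ratio: effective number of utilized dimensions
    # (equals d when all eigenvalues are equal, 1 when one dominates).
    pr = eig.sum() ** 2 / np.sum(eig ** 2)

    # Eigenvalue Early Enrichment (assumed form): fraction of total
    # variance captured by the top 10% of eigenmodes ("top-heaviness").
    k = max(1, int(0.1 * len(eig)))
    enrichment = np.sort(p)[::-1][:k].sum()

    return entropy, pr, enrichment

def js_divergence(p, q):
    """Jensen-Shannon divergence between two normalized eigenspectra,
    used to track distributional shifts across layers or checkpoints."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

For isotropic activations the Participation Ratio approaches the ambient dimension and the Spectral Entropy approaches log d, so deviations from those ceilings indicate concentrated (top-heavy) variance; the JS divergence is zero only when two spectra coincide.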