This paper benchmarks the computational and representational efficiency of the Mamba state space model (SSM) against the LLaMA Transformer on long-context dyadic therapy sessions. The study compares memory usage and inference speed from 512 to 8,192 tokens to assess computational efficiency, and analyzes hidden state dynamics and attention patterns to evaluate representational efficiency. Results identify specific conditions under which SSMs outperform Transformers in long-context applications, providing practical guidance for model selection.
Mamba SSMs can offer tangible advantages over LLaMA Transformers in long-context tasks, but the benefits depend heavily on the workload's specific computational and representational demands.
State Space Models (SSMs) have emerged as a promising alternative to Transformers for long-context sequence modeling, offering linear $O(N)$ computational complexity compared to the Transformer's quadratic $O(N^2)$ scaling. This paper presents a comprehensive benchmarking study comparing the Mamba SSM against the LLaMA Transformer on long-context sequences, using dyadic therapy sessions as a representative test case. We evaluate both architectures across two dimensions: (1) computational efficiency, where we measure memory usage and inference speed from 512 to 8,192 tokens, and (2) representational efficiency, where we analyze hidden state dynamics and attention patterns. Our findings provide actionable insights for practitioners working with long-context applications, establishing precise conditions under which SSMs offer advantages over Transformers.
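The complexity gap described above can be made concrete with a back-of-the-envelope cost model. The sketch below compares theoretical per-layer floating-point costs of self-attention ($O(N^2 d)$) against a selective SSM scan ($O(N d s)$ for a small state size $s$) across the paper's evaluated context lengths. The hidden dimension and state size are illustrative constants, not values taken from the paper:

```python
# Rough cost model: quadratic attention vs. linear SSM scan.
# Constants (d, state) are hypothetical, chosen only to illustrate scaling.

def attention_flops(n: int, d: int = 4096) -> int:
    """Approximate FLOPs for one self-attention layer over n tokens:
    score matrix (n x n x d) plus the weighted value sum."""
    return 2 * n * n * d

def ssm_flops(n: int, d: int = 4096, state: int = 16) -> int:
    """Approximate FLOPs for one selective-scan SSM layer:
    linear in sequence length n, with a small recurrent state."""
    return 2 * n * d * state

# Context lengths evaluated in the study: 512 to 8,192 tokens.
for n in [512, 1024, 2048, 4096, 8192]:
    ratio = attention_flops(n) / ssm_flops(n)
    print(f"n={n:5d}  attention/SSM cost ratio ~ {ratio:.0f}x")
```

Under this simplified model the ratio grows linearly with sequence length (here, $N/s$), which is the intuition behind benchmarking at progressively longer contexts: any constant-factor overhead in the SSM implementation is eventually dominated by the Transformer's quadratic term.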