The paper introduces STQuant, a distributed training framework that dynamically allocates precision for optimizer states across layers, state variables, and training steps to reduce the memory footprint of large multimodal model training. It addresses the challenges of numerical sensitivity and a combinatorial search space with a near-optimal factor-selection strategy and a dynamic transition decision algorithm. Experiments on GPT-2 and ViT show that STQuant reduces optimizer-state memory by 84.4%, achieving an average bit-width of 5.1 bits with minimal computational overhead.
Forget fixed-precision quantization: STQuant slashes optimizer-state memory by 84.4% in large model training by dynamically adapting bit-widths across layers and training steps.
Quantization is an effective way to reduce the memory cost of large-scale model training. However, most existing methods adopt fixed-precision policies, ignoring the fact that optimizer-state distributions vary significantly across layers and training steps; such uniform designs often introduce noticeable accuracy degradation. To move beyond fixed quantization, we propose STQuant, a distributed training framework that reduces the memory footprint of optimizer states via dynamic precision allocation across layers, state variables, and training steps, while maintaining model quality. Naively applying dynamic quantization during training is challenging for two reasons. First, optimizer states are numerically sensitive, and quantization noise can destabilize training. Second, jointly considering multiple states and layers induces a large combinatorial search space. STQuant addresses these challenges with two key techniques: (1) a provably near-optimal factor-selection strategy that identifies the most influential factors for precision adaptation, and (2) a dynamic transition decision algorithm that reduces the search cost from exponential to linear complexity. Experiments on GPT-2 and ViT show that, compared with existing solutions, STQuant reduces optimizer-state memory by 84.4%, achieving an average bit-width as low as 5.1 bits. Moreover, STQuant incurs only O(N/K) computational overhead and requires O(1) extra space.
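To make the idea concrete, below is a minimal sketch of per-layer dynamic-precision quantization of optimizer states. It is not STQuant's implementation: the uniform quantizer, the greedy per-layer bit selection (which stands in for the paper's transition decision and illustrates why deciding each layer independently is linear rather than exponential in the number of layers), and all names (`quantize`, `choose_bits`, `ERROR_BUDGET`) are illustrative assumptions.

```python
# Minimal sketch of dynamic-precision quantization for optimizer states.
# Everything here is an assumption for illustration: the uniform
# quantizer, the greedy per-layer bit selection, and all identifiers.
# It is NOT the paper's factor-selection or transition-decision algorithm.
import numpy as np

def quantize(x: np.ndarray, bits: int):
    """Uniformly quantize x to `bits` bits; return codes + dequant params."""
    lo, hi = float(x.min()), float(x.max())
    levels = (1 << bits) - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = np.round((x - lo) / scale).astype(np.uint32)
    return codes, scale, lo

def dequantize(codes: np.ndarray, scale: float, zero: float) -> np.ndarray:
    return codes.astype(np.float32) * scale + zero

def choose_bits(x: np.ndarray, budget: float, candidates=(2, 4, 8, 16)) -> int:
    """Greedy per-layer decision: pick the lowest bit-width whose
    reconstruction error stays under `budget`. Deciding each layer
    independently keeps the search linear in the number of layers,
    instead of exponential over the joint space of all bit-widths."""
    for bits in candidates:
        codes, scale, zero = quantize(x, bits)
        if np.abs(dequantize(codes, scale, zero) - x).max() <= budget:
            return bits
    return candidates[-1]

rng = np.random.default_rng(0)
# Stand-in Adam first moments whose spread differs across layers,
# mimicking the per-layer distribution variation the paper targets.
states = {
    f"layer{i}.exp_avg": rng.normal(scale=10.0 ** -i, size=4096).astype(np.float32)
    for i in range(3)
}
ERROR_BUDGET = 1e-3  # hypothetical per-element error tolerance
for name, moment in states.items():
    bits = choose_bits(moment, ERROR_BUDGET)
    print(f"{name}: chose {bits}-bit quantization")
```

Running the sketch shows layers with narrower state distributions settling on fewer bits, which is the intuition behind a non-uniform average bit-width such as the 5.1 bits reported in the abstract; the real system would additionally vary these decisions over training steps and pack the integer codes to realize the memory savings.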