FreeScale addresses computational under-utilization in distributed training of sequence recommendation models caused by stragglers and blocking communications. It balances load by carefully distributing input samples, overlaps prioritized embedding communications with computation, and resolves GPU resource contention using SM-Free communication. Experiments on real-world workloads with 256 H100 GPUs show up to 90.3% reduction in computational bubbles.
By balancing load across GPUs and overlapping prioritized embedding communication with computation, FreeScale cuts the computational bubbles (idle GPU cycles caused by stragglers and blocking communication) in distributed training of sequence recommendation models by up to 90.3%.
Modern industrial Deep Learning Recommendation Models typically extract user preferences by analyzing sequential interaction histories and generate predictions from these derived interests. The inherent heterogeneity in data characteristics frequently results in substantial under-utilization of computational resources during large-scale training, primarily in the form of computational bubbles caused by severe stragglers and slow blocking communications. This paper introduces FreeScale, a solution designed to (1) mitigate the straggler problem through carefully load-balanced distribution of input samples; (2) minimize blocking communication by overlapping prioritized embedding communications with computation; and (3) resolve GPU resource contention between overlapped computation and communication through SM-Free communication. Empirical evaluation demonstrates that FreeScale achieves up to 90.3% reduction in computational bubbles on real-world workloads running on 256 H100 GPUs.
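The straggler mitigation described above hinges on distributing input samples so that each GPU receives roughly equal work per step. The sketch below illustrates one common way to achieve this, greedy longest-processing-time partitioning of variable-length sequence samples across ranks; the cost model (compute proportional to sequence length) and all names are illustrative assumptions, not FreeScale's actual load-balancing algorithm.

```python
import heapq
from typing import List, Sequence, Tuple


def balance_samples(seq_lengths: Sequence[int], num_ranks: int) -> List[List[int]]:
    """Greedily assign sample indices to ranks so per-rank cost is roughly equal.

    Assumes per-sample compute cost grows with sequence length; this is an
    illustrative stand-in for FreeScale's load-balancing policy, not the
    paper's actual algorithm.
    """
    # Min-heap of (accumulated cost, rank id); the cheapest rank gets the next sample.
    heap: List[Tuple[int, int]] = [(0, r) for r in range(num_ranks)]
    heapq.heapify(heap)
    assignment: List[List[int]] = [[] for _ in range(num_ranks)]

    # Longest-processing-time first: place the most expensive samples early
    # so later, cheaper samples can even out the per-rank totals.
    for idx in sorted(range(len(seq_lengths)), key=lambda i: -seq_lengths[i]):
        cost, rank = heapq.heappop(heap)
        assignment[rank].append(idx)
        heapq.heappush(heap, (cost + seq_lengths[idx], rank))
    return assignment


if __name__ == "__main__":
    lengths = [512, 64, 256, 1024, 128, 896, 32, 768]  # toy interaction-history lengths
    for rank, idxs in enumerate(balance_samples(lengths, num_ranks=4)):
        total = sum(lengths[i] for i in idxs)
        print(f"rank {rank}: samples {idxs}, total length {total}")
```

In practice the cost model would also account for embedding lookups and attention, whose cost is typically superlinear in sequence length, but the greedy balancing idea is the same.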