The paper introduces MaRI, a matrix re-parameterization framework to accelerate ranking model inference in large-scale recommendation systems without sacrificing accuracy. MaRI addresses redundancy in user-side computation within feature fusion matrix multiplication by structurally re-parameterizing the model. Experiments demonstrate that MaRI achieves lossless acceleration, complementing existing lightweighting and knowledge distillation techniques.
Achieve lossless acceleration of ranking models by structurally re-parameterizing feature fusion matrix multiplication, sidestepping the accuracy drop common in lightweighting and distillation.
Ranking models, i.e., coarse-ranking and fine-ranking models, serve as core components in large-scale recommendation systems, responsible for scoring massive item candidates based on user preferences. To meet the stringent latency requirements of online serving, structural lightweighting or knowledge distillation techniques are commonly employed for ranking model acceleration. However, these approaches typically lead to a non-negligible drop in accuracy. Notably, lossless acceleration through optimizing feature fusion matrix multiplication, particularly via structural re-parameterization, remains underexplored. In this paper, we propose MaRI, a novel Matrix Re-parameterized Inference framework, which serves as a complementary approach to existing techniques while accelerating ranking model inference without any accuracy loss. MaRI is motivated by the observation that user-side computation is redundant in feature fusion matrix multiplication, and we therefore adopt the philosophy of structural re-parameterization to alleviate such redundancy.
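The kind of user-side redundancy the abstract describes can be illustrated with a minimal NumPy sketch, assuming a common concatenate-then-multiply fusion layer (all shapes, names, and the split-weight trick here are illustrative assumptions, not the paper's actual formulation): because the user features are identical across every candidate item, the weight matrix can be split into user-side and item-side blocks so the user-side product is computed once per request rather than once per item.

```python
import numpy as np

rng = np.random.default_rng(0)
d_u, d_i, d_out, n_items = 64, 32, 16, 1000  # illustrative dimensions

u = rng.standard_normal(d_u)                  # one user's feature vector
items = rng.standard_normal((n_items, d_i))   # candidate item features
W = rng.standard_normal((d_u + d_i, d_out))   # fusion weight matrix

# Naive fusion: tile the user vector onto every item, then multiply.
# The product u @ W[:d_u] is implicitly recomputed n_items times.
naive = np.concatenate([np.tile(u, (n_items, 1)), items], axis=1) @ W

# Re-parameterized: split W into user-side and item-side blocks, since
# [u; i] @ W == u @ W_u + i @ W_i. The user-side term is computed once
# per request and reused across all candidate items.
W_u, W_i = W[:d_u], W[d_u:]
shared = u @ W_u                              # once per user
fast = shared + items @ W_i                   # reused for every item

print(np.allclose(naive, fast))               # the outputs match exactly
```

Because the split is an exact algebraic identity rather than an approximation, the scores are unchanged, which is consistent with the "lossless" claim; the saving grows with the number of scored candidates.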