The paper introduces W2T, a method to map LoRA updates to a canonical form using QR decomposition and SVD to resolve factorization ambiguity. This canonical representation allows training a Transformer to produce weight-space embeddings that capture the behavior and performance of the LoRA adapter. Experiments on language and vision LoRA collections demonstrate that W2T achieves strong results in attribute classification, performance prediction, and adapter retrieval, indicating that LoRA weights reliably encode model behavior.
LoRA weights already encode an adapter's behavior and performance: a novel canonicalization method reads this information directly from the weights, enabling accurate predictions without running the base model or accessing training data.
Each LoRA checkpoint compactly stores task-specific updates in low-rank weight matrices, offering an efficient way to adapt large language models to new tasks and domains. In principle, these weights already encode what the adapter does and how well it performs. In this paper, we ask whether this information can be read directly from the weights, without running the base model or accessing training data. A key obstacle is that a single LoRA update can be factorized in infinitely many ways. Without resolving this ambiguity, models trained on the factors may fit the particular factorization rather than the underlying update. To this end, we propose Weight2Token (W2T), which maps each LoRA update to a provably canonical form via QR decomposition followed by SVD, so that all equivalent factorizations share the same representation. The resulting components are then tokenized and processed by a Transformer to produce a weight-space embedding. Across language and vision LoRA collections, W2T achieves strong results on attribute classification, performance prediction, and adapter retrieval, demonstrating that LoRA weights reliably indicate model behavior once factorization ambiguity is removed. Code is available at https://github.com/xiaolonghan2000/Weight2Token.
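The core idea — that a LoRA update ΔW = BA admits infinitely many factorizations (BM)(M⁻¹A), and that QR followed by SVD collapses them into one canonical form — can be sketched numerically. The snippet below is an illustrative reconstruction of the canonicalization step, not the authors' exact implementation; the function name `canonicalize` and the sign-fixing convention are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 16, 12, 4

# A LoRA update is stored as two low-rank factors: delta_W = B @ A.
B = rng.standard_normal((d, r))
A = rng.standard_normal((r, k))

# Factorization ambiguity: for any invertible r x r matrix M,
# (B @ M) and (inv(M) @ A) encode exactly the same update.
M = rng.standard_normal((r, r)) + 4 * np.eye(r)  # well-conditioned, invertible
B2, A2 = B @ M, np.linalg.inv(M) @ A

def canonicalize(B, A):
    """Sketch of a QR-then-SVD canonical form for a LoRA factorization.

    QR-decompose each factor, take the SVD of the small r x r core,
    then absorb the rotations into the orthonormal bases. A sign
    convention removes the residual per-column sign ambiguity.
    """
    Qb, Rb = np.linalg.qr(B)             # B = Qb @ Rb,  Qb: d x r
    Qa, Ra = np.linalg.qr(A.T)           # A.T = Qa @ Ra, so A = Ra.T @ Qa.T
    U, s, Vt = np.linalg.svd(Rb @ Ra.T)  # SVD of the r x r core
    L, R = Qb @ U, Vt @ Qa.T             # left/right singular bases of delta_W
    # Fix signs: make the largest-magnitude entry of each left vector positive.
    signs = np.sign(L[np.abs(L).argmax(axis=0), np.arange(r)])
    return L * signs, s, R * signs[:, None]

U1, s1, V1 = canonicalize(B, A)
U2, s2, V2 = canonicalize(B2, A2)

# Both factorizations reconstruct the same update ...
assert np.allclose(U1 @ np.diag(s1) @ V1, B @ A)
# ... and map to the same canonical representation.
assert np.allclose(s1, s2)
assert np.allclose(U1, U2) and np.allclose(V1, V2)
```

Because the singular values of ΔW are invariant under refactorization, and the sign convention pins down each singular vector, equivalent factorizations produce identical inputs for the downstream Transformer encoder.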