A fine-tuned RoBERTa model with only 125M parameters can match the CVE-to-CWE classification accuracy of models 64x its size, demonstrating that strategic fine-tuning and careful data curation can close the gap between small and large language models on this task.