The authors pretrain a 3B-parameter language model, daVinci-LLM, on 8T tokens with full transparency to address the under-exploration of the pretraining phase. They introduce a "Data Darwinism" framework for data processing, built on an L0-L9 taxonomy spanning filtering to synthesis. Through 200+ ablations, they find that processing depth, domain-specific saturation dynamics, and compositional balance are critical factors in pretraining, and that evaluation protocol choices significantly shape our understanding of pretraining progress.
Pretraining isn't just about scaling data volume; daVinci-LLM's ablations reveal that data processing depth, domain-specific strategies, and compositional balance are equally critical for unlocking LLM capabilities.
The foundational pretraining phase determines a model's capability ceiling, since post-training rarely overcomes the foundations established during pretraining, yet it remains critically under-explored. This stems from a structural paradox: organizations with computational resources operate under commercial pressures that inhibit transparent disclosure, while academic institutions enjoy research freedom but lack pretraining-scale compute. daVinci-LLM occupies this unexplored intersection, combining industrial-scale resources with full research freedom to advance the science of pretraining. We adopt a fully-open paradigm that treats openness as scientific methodology, releasing complete data processing pipelines, full training processes, and systematic exploration results. Recognizing that the field lacks a systematic methodology for data processing, we employ the Data Darwinism framework, a principled L0-L9 taxonomy spanning filtering to synthesis. We train a 3B-parameter model from random initialization on 8T tokens using a two-stage adaptive curriculum that progressively shifts from foundational capabilities to reasoning-intensive enhancement. Through 200+ controlled ablations, we establish that: processing depth systematically enhances capabilities, making it a critical dimension alongside volume scaling; different domains exhibit distinct saturation dynamics, necessitating adaptive strategies ranging from proportion adjustments to format shifts; compositional balance enables targeted intensification while preventing performance collapse; and evaluation protocol choices shape our understanding of pretraining progress. By releasing the complete exploration process, we enable the community to build on our findings and systematic methodologies, forming cumulative scientific knowledge in pretraining.
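To make the level-ordered taxonomy idea concrete, here is a minimal sketch of a Data Darwinism-style pipeline in Python. The level names, functions, and thresholds below are hypothetical illustrations of a filtering-to-synthesis ordering, assumed for exposition; they are not the paper's released implementation.

```python
# Hypothetical sketch of an L0-L9-style pipeline: low levels filter
# (may drop a document), high levels synthesize (rewrite/augment it).
# All level definitions here are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Document:
    text: str
    domain: str

# A level is a transform: returns the (possibly rewritten) document,
# or None to drop it from the corpus.
Level = Callable[[Document], Optional[Document]]

def l0_dedup_exact(seen: set) -> Level:
    def apply(doc: Document) -> Optional[Document]:
        key = hash(doc.text)
        if key in seen:
            return None  # exact duplicate: drop
        seen.add(key)
        return doc
    return apply

def l3_quality_filter(min_len: int = 200) -> Level:
    def apply(doc: Document) -> Optional[Document]:
        return doc if len(doc.text) >= min_len else None
    return apply

def l8_synthesize_qa(rewrite: Callable[[str], str]) -> Level:
    # At the synthesis end, documents are rewritten (e.g., into
    # question-answer form) rather than merely kept or dropped.
    def apply(doc: Document) -> Optional[Document]:
        return Document(text=rewrite(doc.text), domain=doc.domain)
    return apply

def run_pipeline(docs, levels):
    for doc in docs:
        for level in levels:
            doc = level(doc)
            if doc is None:
                break  # filtered out at this level
        if doc is not None:
            yield doc

if __name__ == "__main__":
    levels = [l0_dedup_exact(set()),
              l3_quality_filter(min_len=20),
              l8_synthesize_qa(lambda t: f"Q: Summarize.\nA: {t}")]
    docs = [Document("A short but sufficiently long example text.", "web")]
    print(list(run_pipeline(docs, levels)))
```

Ordering the levels as composable transforms is what lets processing depth be ablated directly: truncating the list at L3 versus L8 yields corpora of different depth from the same raw input.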
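The two-stage adaptive curriculum and the compositional-balance constraint can likewise be sketched as a token-budget-dependent mixture schedule. The domain names, stage boundary, weights, and floor value below are assumptions for exposition, not the released training schedule.

```python
# Illustrative two-stage curriculum: stage 1 emphasizes broad
# foundational data, stage 2 shifts toward reasoning-intensive
# sources. All numbers here are assumed for exposition.
def mixture_weights(tokens_seen: float,
                    total_tokens: float = 8e12,
                    stage_boundary: float = 0.75) -> dict:
    """Return per-domain sampling weights at a point in training."""
    stage1 = {"web": 0.60, "books": 0.20, "code": 0.10, "math": 0.10}
    stage2 = {"web": 0.30, "books": 0.10, "code": 0.30, "math": 0.30}
    progress = tokens_seen / total_tokens
    weights = stage2 if progress >= stage_boundary else stage1
    # Compositional balance: enforce a minimum share per domain and
    # renormalize, so intensifying one domain cannot zero out another.
    floor = 0.05
    clipped = {d: max(w, floor) for d, w in weights.items()}
    total = sum(clipped.values())
    return {d: w / total for d, w in clipped.items()}

print(mixture_weights(tokens_seen=2e12))  # stage 1 mixture
print(mixture_weights(tokens_seen=7e12))  # stage 2 mixture
```

The per-domain floor is one simple way to operationalize "targeted intensification while preventing performance collapse": reasoning-heavy domains can be upweighted late in training without starving the foundational ones.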