OneComp is introduced as an open-source framework that automates post-training quantization of generative AI models, addressing the challenge of deploying large models under resource constraints. Given a model and target hardware, it automatically plans mixed-precision assignments and executes progressive quantization stages, treating the first quantized checkpoint as a deployable pivot so that quality improves monotonically as more compute is invested. The framework aims to bridge the gap between compression research and practical deployment through a reproducible, hardware-aware pipeline.
Automating the messy process of post-training quantization, OneComp lets you compress generative AI models with a single line of code.
Deploying foundation models is increasingly constrained by memory footprint, latency, and hardware cost. Post-training compression can mitigate these bottlenecks by reducing the precision of model parameters without significantly degrading performance; in practice, however, it remains challenging, as practitioners must navigate a fragmented landscape of quantization algorithms, precision budgets, data-driven calibration strategies, and hardware-dependent execution regimes. We present OneComp, an open-source compression framework that turns this expert workflow into a reproducible, resource-adaptive pipeline. Given a model identifier and the available hardware, OneComp automatically inspects the model, plans mixed-precision assignments, and executes progressive quantization stages, from layer-wise compression through block-wise refinement to global refinement. A key architectural choice is to treat the first quantized checkpoint as a deployable pivot, ensuring that each subsequent stage improves the same model and that quality increases monotonically as more compute is invested. By converting state-of-the-art compression research into an extensible, hardware-aware, open-source pipeline, OneComp bridges the gap between algorithmic innovation and production-grade model deployment.
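The pivot-checkpoint design described above can be sketched in code. This is a minimal illustrative sketch, not OneComp's actual API: all function names, stage names, and the `quality` proxy metric are assumptions introduced for illustration. The point it demonstrates is the architectural invariant: the first (cheapest) quantized checkpoint is already deployable, and each later stage refines that same checkpoint, so quality never regresses as more compute is spent.

```python
# Hypothetical sketch of a progressive quantization pipeline in the
# style the abstract describes. Names and quality numbers are invented
# for illustration; this is not the OneComp implementation.
from dataclasses import dataclass


@dataclass
class Checkpoint:
    stage: str
    quality: float  # proxy quality metric; higher is better


def layerwise_compress(model_id: str) -> Checkpoint:
    # Stage 1: cheap layer-wise quantization produces the first
    # checkpoint, which is immediately deployable (the "pivot").
    return Checkpoint(stage="layerwise", quality=0.90)


def blockwise_refine(ckpt: Checkpoint) -> Checkpoint:
    # Stage 2: block-wise refinement of the same checkpoint.
    # max(...) encodes the invariant that a stage never regresses.
    return Checkpoint(stage="blockwise", quality=max(ckpt.quality, 0.94))


def global_refine(ckpt: Checkpoint) -> Checkpoint:
    # Stage 3: global refinement, again monotone in quality.
    return Checkpoint(stage="global", quality=max(ckpt.quality, 0.97))


def compress(model_id: str, budget: int) -> Checkpoint:
    """Run as many refinement stages as the compute budget allows.

    budget=0 returns the pivot itself; larger budgets refine the
    same checkpoint, so quality is non-decreasing in budget.
    """
    ckpt = layerwise_compress(model_id)  # deployable pivot
    for stage in [blockwise_refine, global_refine][:budget]:
        ckpt = stage(ckpt)
    return ckpt
```

With this shape, a user can ship `compress(model_id, budget=0)` right away and later re-run with a larger budget, getting a strictly comparable, same-model checkpoint each time.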