Apriel-1.5-15B-Thinker, a 15B parameter multimodal model, achieves competitive performance through a three-stage training methodology involving depth upscaling, staged continual pre-training with synthetic data for enhanced visual reasoning, and high-quality text-only supervised fine-tuning with reasoning traces. The model attains a score of 52 on the Artificial Analysis Intelligence Index, matching DeepSeek-R1-0528, and performs comparably to Gemini-2.5-Flash and Claude Sonnet-3.7 on image benchmarks, demonstrating that targeted training can bridge capability gaps without relying on massive scale or reinforcement learning. This work highlights the effectiveness of data-centric continual pre-training for multimodal reasoning, particularly for organizations with limited computational resources.
Frontier-level multimodal reasoning is now within reach for organizations with limited infrastructure, thanks to a 15B parameter model that rivals much larger models through clever training design, not brute force scaling.
We present Apriel-1.5-15B-Thinker, a 15-billion parameter open-weights multimodal reasoning model that achieves frontier-level performance through training design rather than sheer scale. Starting from Pixtral-12B, we apply a progressive three-stage methodology: (1) depth upscaling to expand reasoning capacity without pretraining from scratch, (2) staged continual pre-training that first develops foundational text and vision understanding, then enhances visual reasoning through targeted synthetic data generation addressing spatial structure, compositional understanding, and fine-grained perception, and (3) high-quality text-only supervised fine-tuning on curated instruction-response pairs with explicit reasoning traces spanning mathematics, coding, science, and tool use. Notably, our model achieves competitive results without reinforcement learning or preference optimization, isolating the contribution of our data-centric continual pre-training approach. On the Artificial Analysis Intelligence Index, Apriel-1.5-15B-Thinker attains a score of 52, matching DeepSeek-R1-0528 despite requiring significantly fewer computational resources. Across ten image benchmarks, its performance is on average within five points of Gemini-2.5-Flash and Claude Sonnet-3.7, a key achievement for a model operating within single-GPU deployment constraints. Our results demonstrate that thoughtful mid-training design can close substantial capability gaps without massive scale, making frontier-level multimodal reasoning accessible to organizations with limited infrastructure. We release the model checkpoint, all training recipes, and evaluation protocols under the MIT license to advance open-source research.
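The depth-upscaling step in stage (1) grows an existing decoder stack rather than training a larger model from scratch: layers from the base model are duplicated to reach the target depth, and the enlarged network is then continually pre-trained. The sketch below illustrates one common variant of this idea, duplicating evenly spaced layers; the layer names and the evenly-spaced placement are illustrative assumptions, not Apriel's published recipe.

```python
def depth_upscale(layers, extra):
    """Grow a stack of n layers to n + extra by duplicating evenly
    spaced layers in place. In practice each duplicate starts from a
    copy of the source layer's weights, and the enlarged model is then
    continually pre-trained so the new layers specialize."""
    n = len(layers)
    if extra <= 0:
        return list(layers)
    # Pick `extra` evenly spaced source indices to duplicate.
    step = n / extra
    dup_indices = [int(i * step) for i in range(extra)]
    out = []
    for i, layer in enumerate(layers):
        out.append(layer)
        # Insert one duplicate for each time this index was selected.
        out.extend(layer for _ in range(dup_indices.count(i)))
    return out

# Hypothetical example: a 40-layer base stack grown by 8 layers.
base = [f"decoder_layer_{i}" for i in range(40)]
upscaled = depth_upscale(base, 8)
print(len(upscaled))  # 48
```

Because the duplicated layers initially compute the same function as their sources, the upscaled model starts close to the base model's behavior, which is what makes continual pre-training (rather than pretraining from scratch) sufficient to exploit the added capacity.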