The paper introduces DeepVision-103K, a large-scale dataset designed to improve the visual reasoning capabilities of Large Multimodal Models (LMMs) through Reinforcement Learning with Verifiable Rewards (RLVR). The dataset encompasses diverse K12 mathematical topics, knowledge points, and visual elements, addressing limitations in existing datasets regarding diversity and coverage. Experiments demonstrate that models trained on DeepVision-103K exhibit enhanced performance on multimodal mathematical benchmarks and improved generalization to broader multimodal reasoning tasks.
Forget small, curated datasets: DeepVision-103K unlocks stronger multimodal reasoning in LMMs via diverse, verifiable visual math problems.
Reinforcement Learning with Verifiable Rewards (RLVR) has proven effective at enhancing the visual reflection and reasoning capabilities of Large Multimodal Models (LMMs). However, existing datasets are predominantly built through small-scale manual construction or recombination of prior resources, which limits their diversity and coverage and thereby constrains further gains in model performance. To address this, we introduce DeepVision-103K, a comprehensive dataset for RLVR training that covers diverse K12 mathematical topics, extensive knowledge points, and rich visual elements. Models trained on DeepVision-103K achieve strong performance on multimodal mathematical benchmarks and generalize effectively to broader multimodal reasoning tasks. Further analysis reveals enhanced visual perception, reflection, and reasoning capabilities in trained models, validating DeepVision-103K's effectiveness for advancing multimodal reasoning. Data: https://huggingface.co/datasets/skylenage/DeepVision-103K
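The "verifiable" part of RLVR means each training problem carries a ground-truth answer that a deterministic rule can check, so the reward signal needs no learned reward model. A minimal sketch of such a rule-based reward check (the function name, the `\boxed{...}` convention, and the exact-match rule are illustrative assumptions, not details from the paper):

```python
import re

def verifiable_reward(response: str, ground_truth: str) -> float:
    """Return 1.0 if the model's final answer matches the reference, else 0.0.

    Illustrative sketch of rule-based answer checking for RLVR: extract
    the last \\boxed{...} expression from the model response and compare
    it, after whitespace stripping, against the stored reference answer.
    """
    matches = re.findall(r"\\boxed\{([^{}]*)\}", response)
    if not matches:
        # No final answer emitted: no reward.
        return 0.0
    predicted = matches[-1].strip()
    return 1.0 if predicted == ground_truth.strip() else 0.0
```

Real pipelines typically add symbolic or numeric equivalence checks (e.g. treating `1/2` and `0.5` as the same answer) rather than plain string matching; the binary reward above is the simplest verifiable variant.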