ChemVLR, a new chemical Vision-Language Model (VLM), is introduced to prioritize reasoning by explicitly identifying granular chemical descriptors, such as functional groups, before answering questions. A cross-modality reverse-engineering strategy and a filtering pipeline were used to create a large-scale reasoning-and-captioning dataset of 760k samples. Trained with a three-stage framework, ChemVLR achieves state-of-the-art performance on chemical visual understanding tasks, outperforming both proprietary and open-source models.
Chemical VLMs can achieve SOTA performance by prioritizing reasoning through fine-grained analysis of visual inputs, such as identifying functional groups, before generating answers.
While Vision-Language Models (VLMs) have demonstrated significant potential in chemical visual understanding, current models are predominantly optimized for direct visual question-answering tasks. This paradigm often results in "black-box" systems that fail to utilize the inherent capability of Large Language Models (LLMs) to infer underlying reaction mechanisms. In this work, we introduce ChemVLR, a chemical VLM designed to prioritize reasoning within the perception process. Unlike conventional chemical VLMs, ChemVLR analyzes visual inputs in a fine-grained manner by explicitly identifying granular chemical descriptors, such as functional groups, prior to generating answers. This approach ensures the production of explicit and interpretable reasoning paths for complex visual chemical problems. To facilitate this methodology, we implement a cross-modality reverse-engineering strategy, combined with a rigorous filtering pipeline, to curate a large-scale reasoning-and-captioning dataset comprising 760k high-quality samples across molecular and reaction tasks. Furthermore, we adopt a three-stage training framework that systematically builds model perception and reasoning capacity. Experiments demonstrate that ChemVLR achieves state-of-the-art (SOTA) performance, surpassing both leading proprietary models and domain-specific open-source baselines. We also provide comprehensive ablation studies to validate our training strategy and data generation designs. Code and model weights will be available at https://github.com/xxlllz/ChemVLR.
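The reasoning-first output format described above — identify granular descriptors, then reason over them, then answer — can be illustrated with a minimal sketch. The field names, record structure, and filter below are hypothetical illustrations (the paper's actual data schema and filtering criteria are not specified here); the toy filter only mimics, in spirit, a consistency check that keeps samples whose reasoning path is grounded in the identified descriptors.

```python
from dataclasses import dataclass


@dataclass
class ChemReasoningSample:
    """Hypothetical record for a reasoning-first chemical VQA sample."""
    question: str
    descriptors: list[str]  # fine-grained chemical descriptors, e.g. functional groups
    reasoning: str          # explicit reasoning path referencing the descriptors
    answer: str


def is_grounded(sample: ChemReasoningSample) -> bool:
    """Toy filter: keep only samples whose reasoning path mentions every
    identified descriptor, so the answer is traceable to the perception step."""
    text = sample.reasoning.lower()
    return all(d.lower() in text for d in sample.descriptors)


sample = ChemReasoningSample(
    question="Which functional group makes this molecule acidic?",
    descriptors=["carboxyl", "hydroxyl"],
    reasoning="The structure shows a carboxyl group and a hydroxyl group; "
              "the carboxyl O-H is the most acidic proton.",
    answer="the carboxyl group",
)
```

A sample whose reasoning skipped one of its listed descriptors would be rejected by such a filter, which is one plausible way a filtering pipeline could discard ungrounded reasoning traces.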