MOON3.0, a novel MLLM-based model, explicitly models fine-grained product attributes for e-commerce, addressing three challenges: attention dilution in long-context reasoning, the rigid-imitation limits of SFT, and the attenuation of fine-grained details during forward propagation. It combines a multi-head modality fusion module, a joint contrastive and reinforcement learning framework, and a fine-grained residual enhancement module. Experiments on a new large-scale benchmark, MBE3.0, and on public datasets show state-of-the-art zero-shot performance on downstream tasks.
E-commerce product understanding gets a boost: MOON3.0 leverages reasoning-aware multimodal learning to outperform existing methods in zero-shot tasks by explicitly modeling fine-grained attributes.
With the rapid growth of e-commerce, exploring general representations rather than task-specific ones has attracted increasing attention. Although recent multimodal large language models (MLLMs) have driven significant progress in product understanding, they are typically employed as feature extractors that implicitly encode product information into global embeddings, thereby limiting their ability to capture fine-grained attributes. Therefore, we argue that leveraging the reasoning capabilities of MLLMs to explicitly model fine-grained product attributes holds significant potential. Nevertheless, achieving this goal remains non-trivial due to several key challenges: (i) long-context reasoning tends to dilute the model's attention to salient information in the raw input; (ii) supervised fine-tuning (SFT) primarily encourages rigid imitation, limiting the exploration of effective reasoning strategies; and (iii) fine-grained details are progressively attenuated during forward propagation. To address these issues, we propose MOON3.0, the first reasoning-aware MLLM-based model for product representation learning. Our method (1) employs a multi-head modality fusion module to adaptively integrate raw signals; (2) incorporates a joint contrastive and reinforcement learning framework to autonomously explore more effective reasoning strategies; and (3) introduces a fine-grained residual enhancement module to progressively preserve local details throughout the network. Additionally, we release a large-scale multimodal e-commerce benchmark, MBE3.0. Experimentally, our model demonstrates state-of-the-art zero-shot performance across various downstream tasks on both our benchmark and public datasets.
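The abstract names the components but does not specify their implementations. As a rough illustration of the general idea behind adaptive multi-head modality fusion, the sketch below gates image and text features per head with a softmax over the two modalities; all function names, shapes, and the gating mechanism are assumptions for illustration, not MOON3.0's actual module:

```python
import numpy as np

rng = np.random.default_rng(0)

def multi_head_fusion(img, txt, n_heads=4):
    """Hypothetical multi-head modality fusion: each head computes a
    gate deciding how much weight to give image vs. text features.
    Illustrative sketch only -- not the paper's actual architecture."""
    d = img.shape[-1]
    assert d % n_heads == 0
    hd = d // n_heads
    img_h = img.reshape(n_heads, hd)          # split into heads
    txt_h = txt.reshape(n_heads, hd)
    # Per-head modality scores from a (randomly initialised) scoring vector;
    # in a trained model these parameters would be learned.
    w = rng.standard_normal((n_heads, hd))
    s_img = np.einsum("hd,hd->h", w, img_h)
    s_txt = np.einsum("hd,hd->h", w, txt_h)
    # Softmax over the two modalities within each head -> convex weights.
    scores = np.stack([s_img, s_txt])          # shape (2, n_heads)
    scores -= scores.max(axis=0, keepdims=True)
    p = np.exp(scores)
    p /= p.sum(axis=0, keepdims=True)
    # Per-head convex combination of the two modalities.
    fused_h = p[0, :, None] * img_h + p[1, :, None] * txt_h
    return fused_h.reshape(d)

img = rng.standard_normal(16)   # stand-in image embedding
txt = rng.standard_normal(16)   # stand-in text embedding
fused = multi_head_fusion(img, txt, n_heads=4)
print(fused.shape)  # (16,)
```

Because each head's weights come from a softmax, every fused element is a convex combination of the corresponding image and text features, so each head can lean toward whichever modality scores higher for that product.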