The paper introduces Attribute-Enhanced Fine-Grained Multi-Modal Representation Learning (AFMRL) to improve fine-grained semantic comprehension in e-commerce product retrieval. AFMRL leverages MLLMs to generate key product attributes from images and text, then uses these attributes both to guide contrastive learning (Attribute-Guided Contrastive Learning, AGCL) and to reinforce the MLLM's attribute generation (Retrieval-aware Attribute Reinforcement, RAR) using retrieval-performance feedback as a reward. Experiments on large-scale datasets show AFMRL achieves state-of-the-art performance on downstream retrieval tasks, demonstrating the effectiveness of generative models for fine-grained representation learning.
Forget generic image-text embeddings: teaching models to generate and reason about product *attributes* unlocks state-of-the-art e-commerce retrieval.
Multimodal representation learning is crucial for E-commerce tasks such as identical product retrieval. Large representation models (e.g., VLM2Vec) demonstrate strong multimodal understanding capabilities, yet they struggle with fine-grained semantic comprehension, which is essential for distinguishing highly similar items. To address this, we propose Attribute-Enhanced Fine-Grained Multi-Modal Representation Learning (AFMRL), which frames fine-grained product understanding as an attribute generation task. It leverages the generative power of Multimodal Large Language Models (MLLMs) to extract key attributes from product images and text, and enhances representation learning through a two-stage training framework: 1) Attribute-Guided Contrastive Learning (AGCL), where the key attributes generated by the MLLM are used during image-text contrastive training to identify hard samples and filter out noisy false negatives; 2) Retrieval-aware Attribute Reinforcement (RAR), where the retrieval-performance improvement of the representation model after attribute integration serves as a reward signal that strengthens the MLLM's attribute generation during multimodal fine-tuning. Extensive experiments on large-scale E-commerce datasets demonstrate that our method achieves state-of-the-art performance on multiple downstream retrieval tasks, validating the effectiveness of harnessing generative models to advance fine-grained representation learning.
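The AGCL stage described above can be sketched in a few lines. The code below is a minimal illustration, not the paper's implementation: `attribute_overlap`, the Jaccard threshold `fn_thresh`, and the exact masking rule are all assumptions. It shows the core idea of using MLLM-generated attribute sets to drop likely false negatives (in-batch items whose attributes nearly match the anchor's) from the InfoNCE denominator.

```python
import numpy as np

def attribute_overlap(attrs_a, attrs_b):
    # Jaccard overlap between two attribute sets (hypothetical helper;
    # the paper does not specify how attribute similarity is measured).
    a, b = set(attrs_a), set(attrs_b)
    return len(a & b) / max(len(a | b), 1)

def agcl_loss(img_emb, txt_emb, attrs, tau=0.07, fn_thresh=0.8):
    """Attribute-guided InfoNCE sketch: off-diagonal pairs whose generated
    attribute sets overlap heavily with the anchor's are treated as false
    negatives and masked out of the softmax denominator."""
    n = img_emb.shape[0]
    # Cosine-similarity logits between image and text embeddings.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / tau
    # Keep the diagonal (true positives); drop high-overlap negatives.
    keep = np.ones((n, n), dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and attribute_overlap(attrs[i], attrs[j]) >= fn_thresh:
                keep[i, j] = False
    masked = np.where(keep, logits, -1e9)
    # Numerically stable log-sum-exp over the surviving candidates.
    row_max = masked.max(axis=1, keepdims=True)
    log_z = np.log(np.exp(masked - row_max).sum(axis=1)) + row_max[:, 0]
    return float((-(np.diag(masked) - log_z)).mean())
```

Masking a near-duplicate negative shrinks the denominator, so the loss no longer penalizes the model for placing attribute-identical products close together; hard negatives (partial attribute overlap below the threshold) are kept and continue to drive fine-grained separation.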