The paper introduces MemeLens, a unified multilingual and multitask Vision Language Model (VLM) designed for comprehensive meme understanding across tasks and languages. To facilitate this, the authors consolidated and relabeled 38 public meme datasets into a shared taxonomy of 20 tasks covering harm, targets, intent, and affect. Experiments demonstrate that multimodal training is necessary for robust meme understanding and highlight the risk of over-specialization when models are fine-tuned on individual datasets.
Training VLMs on a unified, multilingual, multitask meme dataset reveals that robust meme understanding requires multimodal training and is highly sensitive to dataset-specific overfitting.
Memes are a dominant medium for online communication and manipulation because their meaning emerges from the interplay of embedded text, imagery, and cultural context. Existing meme research is fragmented across tasks (hate, misogyny, propaganda, sentiment, humour) and languages, which limits cross-domain generalization. To address this gap, we propose MemeLens, a unified multilingual and multitask explanation-enhanced Vision Language Model (VLM) for meme understanding. We consolidate 38 public meme datasets, filtering and mapping dataset-specific labels into a shared taxonomy of 20 tasks spanning harm, targets, figurative/pragmatic intent, and affect. We present a comprehensive empirical analysis across modeling paradigms, task categories, and datasets. Our findings suggest that robust meme understanding requires multimodal training, exhibits substantial variation across semantic categories, and remains sensitive to over-specialization when models are fine-tuned on individual datasets rather than trained in a unified setting. We will make the experimental resources and datasets publicly available to the community.