The Micro-Expression Grand Challenge (MEGC) 2026 introduces two novel tasks: Micro-Expression Video Question Answering (ME-VQA) and Micro-Expression Long-Video Question Answering (ME-LVQA), designed to evaluate how well multimodal large language models (MLLMs) and large vision-language models (LVLMs) understand and reason about micro-expressions. ME-VQA focuses on short video sequences, while ME-LVQA extends the challenge to long-duration videos, requiring temporal reasoning and the detection of subtle micro-expressions over extended periods. The challenge aims to leverage recent advances in MLLMs and LVLMs to enhance ME analysis, with results submitted to a public leaderboard.
Can multimodal LLMs spot the fleeting signs of suppressed emotion in video?
Facial micro-expressions (MEs) are involuntary facial movements that occur spontaneously when a person experiences an emotion but attempts to suppress or conceal its expression, typically in high-stakes situations. In recent years, substantial advances have been made in ME recognition, spotting, and generation. The emergence of multimodal large language models (MLLMs) and large vision-language models (LVLMs) offers promising new avenues for ME analysis through their powerful multimodal reasoning capabilities. The ME Grand Challenge (MEGC) 2026 introduces two tasks that reflect these evolving research directions: (1) ME video question answering (ME-VQA), which explores ME understanding through visual question answering on relatively short video sequences, leveraging MLLMs or LVLMs to address diverse question types related to MEs; and (2) ME long-video question answering (ME-LVQA), which extends VQA to long-duration video sequences in realistic settings, requiring models to handle temporal reasoning and detect subtle MEs across extended time periods. All participating algorithms must submit their results to a public leaderboard. More details are available at https://megc2026.github.io.