This paper explores various prompting strategies and ensemble methods using commercial LLMs for SemEval-2026 Task 5, which involves rating the plausibility of word senses in narratives. The authors experimented with zero-shot, Chain-of-Thought, and comparative prompting, finding that comparative prompting consistently improved performance. Ensembling LLM predictions further enhanced alignment with mean human judgments, placing the system 4th on the task leaderboard.
LLM ensembles, especially when combined with comparative prompting, align closely with mean human judgments on subjective semantic evaluation tasks involving substantial inter-annotator disagreement.
We describe our system for SemEval-2026 Task 5, which requires rating the plausibility of given word senses of homonyms in short stories on a 5-point Likert scale. Systems are evaluated by the unweighted average of accuracy (a prediction counts as correct if it falls within one standard deviation of the mean human judgment) and Spearman rank correlation. We explore three prompting strategies using multiple closed-source commercial LLMs: (i) a baseline zero-shot setup, (ii) Chain-of-Thought (CoT) style prompting with structured reasoning, and (iii) a comparative prompting strategy that evaluates candidate word senses simultaneously. Furthermore, to account for the substantial inter-annotator variation present in the gold labels, we propose an ensemble setup that averages model predictions. Our best official system, an ensemble of LLMs across all three prompting strategies, placed 4th on the competition leaderboard with 0.88 accuracy and 0.83 Spearman's rho (0.86 average). Post-competition experiments with additional models further improved this performance to 0.92 accuracy and 0.85 Spearman's rho (0.89 average). We find that comparative prompting consistently improved performance across model families, and that model ensembling significantly enhanced alignment with mean human judgments, suggesting that LLM ensembles are especially well suited for subjective semantic evaluation tasks involving multiple annotators.
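The ensembling step and the task metric described above can be sketched in a few lines. This is a minimal illustration, not the authors' released code: the function names and the example ratings are hypothetical, the ensemble is assumed to be a plain mean of per-model Likert ratings rounded back to the scale, and the score is assumed to be the unweighted average of within-one-SD accuracy and Spearman's rho against the mean human judgments.

```python
def ensemble_rating(model_scores):
    """Average per-model Likert ratings (1-5) for one item, rounded back
    to the nearest point on the scale (assumed ensembling scheme)."""
    return round(sum(model_scores) / len(model_scores))

def spearman_rho(x, y):
    """Spearman's rank correlation, computed as the Pearson correlation
    of ranks (no tie handling; for illustration only)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank + 1)
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

def task_score(preds, human_means, human_stds):
    """Unweighted average of (a) accuracy, counting a prediction correct
    when it lies within one SD of the mean human judgment, and
    (b) Spearman's rho between predictions and mean judgments."""
    hits = sum(abs(p - m) <= s for p, m, s in zip(preds, human_means, human_stds))
    accuracy = hits / len(preds)
    return (accuracy + spearman_rho(preds, human_means)) / 2
```

For example, three models rating one sense 5, 4, and 5 would yield an ensemble rating of 5. Averaging before rounding is what lets the ensemble land between individual models' integer ratings, which is plausibly why it tracks mean human judgments more closely than any single model.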