The paper introduces ProbCOPA, a dataset of 210 probabilistic inferences annotated with human likelihood judgments, to evaluate LLMs on open-ended reasoning. Comparing the human annotations with responses from eight state-of-the-art reasoning LLMs, the study finds that the models consistently fail to produce human-like probability distributions for these inferences. Analysis of LLM reasoning chains suggests the models share a common reasoning pattern that nonetheless diverges from human judgment.
LLMs may ace logic puzzles, but they still struggle to reason like humans when it comes to the messy, probabilistic inferences we make every day.
Human reasoning often involves drawing probabilistic conclusions from limited information. In its simplest form, this means making an inference that is not strictly entailed by a premise, but only likely given it. While reasoning LLMs have demonstrated strong performance on logical and mathematical tasks, their behavior on such open-ended, non-deterministic inferences remains largely unexplored. We introduce ProbCOPA, a dataset of 210 handcrafted probabilistic inferences in English, each annotated for inference likelihood by 25–30 human participants. We find that human responses are graded and varied, reflecting genuinely probabilistic judgments of the inferences in our dataset. Comparing these judgments with responses from eight state-of-the-art reasoning LLMs, we show that the models consistently fail to produce human-like distributions. Finally, analyzing LLM reasoning chains, we find evidence of a common pattern the models use to evaluate such inferences. Our findings reveal persistent differences between human and LLM reasoning, and underscore the need to evaluate reasoning beyond deterministic settings.
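To make the human-versus-model comparison concrete, here is a minimal sketch of one way a per-item comparison could be run. It assumes annotators and models both give likelihood ratings on a 0-1 scale and that the resulting distributions are compared with Jensen-Shannon distance; the metric, the rating scale, and all values below are illustrative assumptions, not the paper's actual protocol.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def to_distribution(ratings, bins=5):
    """Bucket per-item likelihood ratings (0-1) into a normalized histogram."""
    counts, _ = np.histogram(ratings, bins=bins, range=(0.0, 1.0))
    return counts / counts.sum()

# Hypothetical ratings for a single ProbCOPA-style item: one array of human
# annotator judgments, one of repeated model responses (illustrative only).
human_ratings = np.array([0.70, 0.80, 0.60, 0.90, 0.75, 0.65, 0.80, 0.55, 0.70, 0.85])
model_ratings = np.array([0.95, 0.90, 1.00, 0.90, 0.95, 1.00, 0.90, 0.95, 1.00, 0.90])

human_dist = to_distribution(human_ratings)
model_dist = to_distribution(model_ratings)

# Jensen-Shannon distance: 0 means identical distributions, 1 means maximally different.
print(f"JS distance: {jensenshannon(human_dist, model_dist):.3f}")
```

Under this kind of setup, a per-item score could then be averaged over all 210 inferences to summarize how far a model's distribution sits from the human one.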