Reasoning in LLMs isn't just for complex tasks: it can unlock surprisingly better recall of simple facts. But hallucinated reasoning steps can backfire, increasing overall hallucination.
LLMs like GPT-5 and Gemini-3 already "know" almost everything (95-98% factual encoding) but struggle to recall it, suggesting that future gains in factuality depend more on better memory retrieval than on simply scaling up.