Scale AI
LLMs can't reliably predict the outcomes of scientific experiments, and, more worryingly, they have no sense of when they're wrong. Human experts, by contrast, become far more accurate when they report high confidence.