LLM judges of disinformation risk are internally consistent, but consistently misaligned with actual human readers, raising serious questions about their validity as evaluation proxies.