LLMs can annotate election-related harmful social media content with agreement comparable to human annotators, achieving up to 0.90 recall on the speculation category, opening the door to scalable content moderation.