National University of Defense Technology, Key Laboratory of Cyberspace Security Situation Awareness and Evaluation
Uncover the hidden causes of LLM hallucinations by actively intervening on their internal reasoning, rather than passively observing their outputs.
LLMs are far more alike than you think: shared biases and failure modes mean that ensembling them is less effective than you'd hope.