Vision-language models can learn to correct their own systematic errors by explicitly modeling confusion patterns between similar categories, leading to a 50% reduction in misclassifications.
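A minimal sketch of the idea, assuming a standard confusion-matrix correction; the function names and the inversion-based adjustment are illustrative, not the paper's actual method:

```python
# Illustrative only: correct systematic confusions between look-alike
# classes using a confusion matrix estimated on held-out data.
import numpy as np

def estimate_confusion(probs, labels, n_classes):
    """C[i, j] = P(model predicts j | true class i), from validation data."""
    preds = probs.argmax(axis=1)
    C = np.zeros((n_classes, n_classes))
    for t, p in zip(labels, preds):
        C[t, p] += 1
    return C / C.sum(axis=1, keepdims=True).clip(min=1)

def corrected_probs(probs, C, eps=1e-6):
    """Invert the confusion process: if p_observed = p_true @ C, then
    p_true ~= p_observed @ inv(C). Regularize, clip, and renormalize."""
    C_inv = np.linalg.inv(C + eps * np.eye(C.shape[0]))
    adjusted = (probs @ C_inv).clip(min=0)
    return adjusted / adjusted.sum(axis=1, keepdims=True)
```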
MLLMs encode conflicting knowledge signals as linearly separable features in mid-to-late layers, revealing a distinct processing stage for conflict resolution.
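One way such separability is typically tested, sketched here with an assumed linear probe over per-layer activations (the activation extraction and labels are placeholders, not the paper's setup):

```python
# Illustrative only: fit a linear probe per layer and look for the
# mid-to-late layers where conflict vs. no-conflict becomes separable.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def probe_accuracy(layer_acts, conflict_labels):
    """layer_acts: (n_examples, d_model) hidden states from one layer.
    conflict_labels: 1 if the input carries conflicting knowledge, else 0."""
    probe = LogisticRegression(max_iter=1000)
    return cross_val_score(probe, layer_acts, conflict_labels, cv=5).mean()

# A sharp rise in probe accuracy across depth would mark the distinct
# conflict-resolution stage described above:
# accs = [probe_accuracy(acts[l], labels) for l in range(n_layers)]
```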
LLM factual errors aren't just about missing knowledge; sometimes the model internally *knows* the correct answer yet still outputs a wrong one.
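A hedged sketch of how the two error types might be separated; the probability threshold and the candidate-scoring interface are assumptions, not the paper's protocol:

```python
# Illustrative only: call an error "knows-but-wrong" if the model still
# assigns substantial probability to the correct answer it didn't output.
def classify_error(generated, correct, answer_probs, threshold=0.3):
    """answer_probs: model's probability over candidate answers (assumed
    available, e.g. by scoring each candidate as a continuation)."""
    if generated == correct:
        return "correct"
    if answer_probs.get(correct, 0.0) >= threshold:  # assumed cutoff
        return "knows-but-wrong"   # knowledge present, output wrong
    return "knowledge-missing"     # no evidence the model knows it
```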