LLM explanations are far more sensitive to the task being performed than to the context or the learned classes, highlighting a critical instability in current interpretability methods.