Uncovers the inner workings of transformer-based time-series classifiers, revealing how specific attention heads and timesteps causally drive correct classifications.
Chain-of-thought reasoning isn't just window dressing: swapping CoT-related features into smaller LLMs reveals a scale threshold above which it actually steers the model toward correct answers.