LLMs are far more alike than you might expect: shared biases and failure modes mean that ensembling them is less effective than you'd hope.
Distilling foundation models with a novel instance-aware contrastive loss yields smaller segmentation models that surprisingly outperform their larger teachers, even with limited labeled data.