Birla Institute of Technology and Science
Preference-based refinement with DPO can dramatically improve recall in polarization detection without requiring additional human annotation.
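The DPO refinement mentioned above centers on a simple preference loss: the policy is pushed to assign a larger log-probability margin to the preferred response than a frozen reference model does. A minimal sketch of that objective, assuming per-response summed log-probabilities are already available (the function name and inputs here are illustrative, not from the original work):

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Each argument is a (summed) log-probability of a full response
    under the trainable policy or the frozen reference model.
    """
    # Implicit reward margin: policy log-ratio minus reference log-ratio.
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    # Negative log-sigmoid of the scaled margin; minimized when the
    # policy favors the chosen response more than the reference does.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

At a zero margin the loss is exactly log 2, and it shrinks as the policy widens the preference gap relative to the reference, which is what lets preference pairs substitute for fresh human labels.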
LLMs hit a hard wall in algebraic reasoning, choking on problems with just 20-30 parallel branches regardless of model size, suggesting an architectural bottleneck, not just a capacity issue.