A confidence-based gating mechanism lets a 14B-parameter reward model outperform 70B-parameter models, establishing a new accuracy-efficiency Pareto frontier.
A single tokenizer, UniWeTok, now handles both high-fidelity image reconstruction and complex semantic understanding for multimodal LLMs, outperforming existing methods with far less training data.