HKUST (Guangzhou)
Semantic filtering with LLMs doesn't have to be a slow, linear slog: this new clustering-sampling-voting approach slashes LLM calls by up to 355x without sacrificing accuracy.
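The idea behind clustering-sampling-voting can be sketched in a few lines: cluster the items, query the LLM on only a small sample from each cluster, then propagate the majority vote to the whole cluster. This is a minimal illustration, not the paper's implementation; `llm_judge` is a hypothetical stand-in for a real LLM relevance call, and the key-function clustering stands in for embedding-based clustering.

```python
import random
from collections import defaultdict

def llm_judge(item: str) -> bool:
    # Hypothetical stand-in for an LLM relevance call (expensive in practice).
    return item.startswith("ml")

def cluster_sample_vote(items, cluster_key, samples_per_cluster=3, seed=0):
    """Sketch of clustering-sampling-voting semantic filtering.

    1. Cluster items (here via a simple key function, standing in for
       embedding-based clustering).
    2. Sample a few items per cluster and query the LLM only on those.
    3. Majority-vote the sampled labels and propagate to the whole cluster.
    """
    rng = random.Random(seed)
    clusters = defaultdict(list)
    for item in items:
        clusters[cluster_key(item)].append(item)

    kept, llm_calls = [], 0
    for members in clusters.values():
        sample = rng.sample(members, min(samples_per_cluster, len(members)))
        votes = [llm_judge(s) for s in sample]
        llm_calls += len(sample)
        if 2 * sum(votes) >= len(votes):  # majority of sample is relevant
            kept.extend(members)          # label the whole cluster at once
    return kept, llm_calls

items = [f"ml-paper-{i}" for i in range(50)] + [f"bio-paper-{i}" for i in range(50)]
kept, calls = cluster_sample_vote(items, cluster_key=lambda s: s.split("-")[0])
# 100 items filtered with only 6 LLM calls (3 per cluster), versus 100
# calls for a linear one-item-per-call pass.
```

The savings scale with cluster size: cost drops from one call per item to a few calls per cluster, which is where multipliers like the reported 355x come from when clusters are large and coherent.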