The Chinese University of Hong Kong
Semantic filtering with LLMs doesn't have to be a slow, linear slog: this new clustering-sampling-voting approach slashes LLM calls by up to 355x without sacrificing accuracy.
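The clustering-sampling-voting idea can be sketched as follows: instead of issuing one LLM call per item, group items by embedding, query the LLM on only a few representatives per group, and propagate the majority vote to the whole group. This is a minimal illustrative sketch, not the paper's implementation: `mock_llm_filter` stands in for a real LLM relevance call, and the length-based `embed` function is a toy placeholder for a real embedding model (both are assumptions).

```python
import random
from collections import Counter


def mock_llm_filter(text):
    # Hypothetical stand-in for a real LLM relevance judgment (assumption).
    return "machine learning" in text


def embed(text):
    # Toy 1-D "embedding" (assumption); a real system would use a text
    # embedding model here.
    return len(text)


def cluster_sample_vote(items, k=3, samples_per_cluster=3, seed=0):
    """Label items with far fewer LLM calls than one-per-item:
    1. cluster items by embedding,
    2. sample a few members from each cluster,
    3. ask the LLM only about the samples and vote the result
       onto the entire cluster."""
    rng = random.Random(seed)
    # 1. Cluster: assign each item to the nearest of k randomly
    #    chosen centroids (a stand-in for a real k-means step).
    centroids = rng.sample(items, k)
    clusters = {i: [] for i in range(k)}
    for item in items:
        best = min(range(k),
                   key=lambda i: abs(embed(item) - embed(centroids[i])))
        clusters[best].append(item)
    # 2 & 3. Sample, query the "LLM", and propagate the majority vote.
    labels, llm_calls = {}, 0
    for members in clusters.values():
        if not members:
            continue
        sample = rng.sample(members, min(samples_per_cluster, len(members)))
        votes = [mock_llm_filter(s) for s in sample]
        llm_calls += len(votes)
        majority = Counter(votes).most_common(1)[0][0]
        for m in members:
            labels[m] = majority
    return labels, llm_calls
```

With 30 items and 3 clusters of 3 samples each, at most 9 LLM calls label every item; the cost scales with the number of clusters rather than the corpus size, which is where the large call-count reduction comes from.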