Beijing Institute of Technology
Semantic filtering with LLMs doesn't have to be a slow, linear slog: this new clustering-sampling-voting approach slashes LLM calls by up to 355x without sacrificing accuracy.
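The blurb above names a clustering-sampling-voting pipeline but does not spell it out. A minimal sketch of the general idea follows, under assumptions of mine: items are grouped into clusters, only a few representatives per cluster are sent to the LLM, and a majority vote over those answers labels the whole cluster. The `llm_label` stub, the key-based clustering, and all names here are illustrative, not the paper's actual method.

```python
import random
from collections import Counter, defaultdict

def llm_label(item):
    # Stub standing in for a real LLM call; here it simply marks
    # items whose name starts with "ml" as relevant.
    return "relevant" if item.startswith("ml") else "irrelevant"

def cluster_by_key(items, key_fn):
    # Toy clustering: group items sharing the same key. A real system
    # would cluster embeddings instead.
    clusters = defaultdict(list)
    for item in items:
        clusters[key_fn(item)].append(item)
    return list(clusters.values())

def filter_with_voting(items, key_fn, samples_per_cluster=3, seed=0):
    # Label every item while calling the "LLM" only on a few
    # sampled representatives per cluster.
    rng = random.Random(seed)
    labels, calls = {}, 0
    for cluster in cluster_by_key(items, key_fn):
        sample = rng.sample(cluster, min(samples_per_cluster, len(cluster)))
        calls += len(sample)
        majority = Counter(llm_label(s) for s in sample).most_common(1)[0][0]
        for item in cluster:
            labels[item] = majority  # propagate the cluster's vote
    return labels, calls

items = [f"ml-paper-{i}" for i in range(50)] + [f"bio-paper-{i}" for i in range(50)]
labels, calls = filter_with_voting(items, key_fn=lambda s: s.split("-")[0])
print(calls)  # 6 calls to label 100 items
```

With two clusters of 50 items and 3 samples each, 100 items are labeled with 6 LLM calls; the reduction grows with cluster size, which is where savings on the order of the quoted 355x would come from.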