Search papers, labs, and topics across Lattice.
Fragmented medical data hurts MLLM performance: this paper shows how a hierarchical medical knowledge graph can be used to engineer training data that substantially improves MLLM accuracy on complex clinical tasks.
Forget hand-tuning layer configurations: LayerTracer reveals the precise layers where LLMs learn and break, paving the way for automated architecture optimization.
LLM-based digital twins can be made 50-90% more accurate with a lightweight calibration framework inspired by causal inference.
Forget static relevance labels: RRPO uses LLM feedback to train RAG rerankers, boosting generation quality without expensive human annotations.