Search papers, labs, and topics across Lattice.
Euclidean distance isn't the best way to measure gradient staleness in asynchronous federated learning: alternative distance metrics can significantly improve convergence and stability.
Forget inspecting final outputs: LLMs telegraph their reward-hacking intentions internally, early in the generation process, via distinctive activation patterns.
Gradient norm thresholding can significantly boost the robustness and performance of carbon-efficient Federated Learning by filtering out noisy client data that loss-based selection methods often miss.
Off-the-shelf root cause analysis tools fall flat when applied to LLM inference stacks, demanding a new generation of observability techniques.
Train LLMs on otherwise-wasted renewable energy and slash operational emissions by up to 95% without sacrificing model quality.