Euclidean distance isn't the best way to measure gradient staleness in asynchronous federated learning: alternative distance metrics can significantly improve convergence and stability.
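A minimal sketch of why the choice of metric matters: Euclidean distance conflates gradient magnitude with direction, while an angular (cosine) distance scores only the directional disagreement. The function names and the toy gradients below are illustrative assumptions, not the paper's actual metrics.

```python
import math

def euclidean_staleness(stale_grad, fresh_grad):
    # L2 distance: sensitive to magnitude, so a large but
    # well-aligned stale gradient can look "very stale".
    return math.sqrt(sum((s - f) ** 2 for s, f in zip(stale_grad, fresh_grad)))

def cosine_staleness(stale_grad, fresh_grad):
    # Angular distance: 0 when directions agree, 2 when opposed,
    # independent of gradient magnitude.
    dot = sum(s * f for s, f in zip(stale_grad, fresh_grad))
    norms = math.sqrt(sum(s * s for s in stale_grad)) * \
            math.sqrt(sum(f * f for f in fresh_grad))
    return 1.0 - dot / norms

# A stale gradient pointing the same way as the fresh one, just scaled 2x:
stale, fresh = [2.0, 4.0], [1.0, 2.0]
print(euclidean_staleness(stale, fresh))  # nonzero despite perfect alignment
print(cosine_staleness(stale, fresh))     # 0.0: directions agree
```

Under this sketch, an aggregator weighting stale updates by cosine staleness would not penalize the scaled-but-aligned gradient at all, whereas Euclidean distance would.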
Forget inspecting final outputs: LLMs telegraph their reward-hacking intentions internally, early in the generation process, via distinctive activation patterns.
Gradient norm thresholding can significantly boost the robustness and performance of carbon-efficient Federated Learning by filtering out noisy client data that loss-based selection methods often miss.
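A minimal sketch of the filtering idea: drop client updates whose gradient L2 norm exceeds a threshold before aggregation. The helper name, the fixed threshold, and the toy client data are hypothetical illustrations, not the paper's implementation.

```python
import math

def filter_by_grad_norm(client_updates, threshold):
    # Keep only client updates whose gradient L2 norm is at or below
    # the threshold; abnormally large norms can signal noisy or
    # corrupted local data that loss-based selection may miss.
    kept = []
    for client_id, grad in client_updates:
        norm = math.sqrt(sum(g * g for g in grad))
        if norm <= threshold:
            kept.append((client_id, grad))
    return kept

updates = [
    ("client_a", [0.1, -0.2]),  # norm ~0.22
    ("client_b", [5.0, 12.0]),  # norm 13.0: likely noisy
    ("client_c", [0.3, 0.1]),   # norm ~0.32
]
print(filter_by_grad_norm(updates, threshold=1.0))  # client_b is dropped
```

In practice the threshold would be tuned (or set adaptively, e.g. from a quantile of recent norms) rather than hard-coded as here.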