A differentiable zero-one loss approximation closes the generalization gap in large-batch training by imposing geometric consistency on output logits.
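The abstract does not spell out the construction, but the core idea of a differentiable zero-one loss approximation can be sketched with a common surrogate: a temperature-scaled sigmoid applied to the classification margin, which smoothly approaches the hard 0/1 step as the temperature shrinks. The function names and the sigmoid choice here are illustrative assumptions, not the paper's actual method.

```python
import math

def zero_one_loss(margin: float) -> float:
    # Hard zero-one loss on the margin y * f(x):
    # 1.0 if the example is misclassified (margin <= 0), else 0.0.
    return 1.0 if margin <= 0 else 0.0

def smooth_zero_one_loss(margin: float, tau: float = 0.1) -> float:
    # Differentiable surrogate (assumed form, not necessarily the paper's):
    # sigmoid(-margin / tau). As tau -> 0 this tends to the hard step,
    # while for tau > 0 it gives usable gradients everywhere.
    return 1.0 / (1.0 + math.exp(margin / tau))

# A confidently correct prediction has near-zero surrogate loss;
# a confidently wrong one has loss near 1, tracking the hard loss.
print(smooth_zero_one_loss(5.0))   # close to 0
print(smooth_zero_one_loss(-5.0))  # close to 1
```

Smaller `tau` tightens the approximation to the true zero-one loss at the cost of sharper (less stable) gradients, a standard trade-off for this family of surrogates.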