Neural networks with smooth activation functions achieve smoothness adaptivity at constant depth, attaining minimax-optimal statistical rates without the growing depth that ReLU networks require.