Search papers, labs, and topics across Lattice.
Quantizing your medical image classifier can disproportionately hurt underrepresented groups, but FairQuant offers a way to compress models while boosting worst-group accuracy.
Data frugality isn't just ethical; it's effective. Coreset selection slashes training energy while boosting accuracy and fairness.
Trained neural networks aren't just smaller after compression; they're fundamentally *simpler*, and this paper proves it by building models from a "mosaic" of repeating weight patterns.