This paper investigates the calibration of conditional risk in prediction models, establishing its equivalence to a standard regression task in both classification and regression settings. It relates conditional risk calibration to individual probability calibration and analyzes the associated performance metric. Empirical validation within the learning to defer (L2D) framework demonstrates the practical significance of these findings for uncertainty-aware decision-making.
Conditional risk calibration offers a distinct perspective on uncertainty quantification, with direct implications for decision-making in machine learning.
We introduce and study the problem of calibrating conditional risk, which involves estimating the expected loss of a prediction model conditional on input features. We analyze this problem in both classification and regression settings and show that it is fundamentally equivalent to a standard regression task. For classification settings, we further establish a connection between conditional risk calibration and individual/conditional probability calibration, and develop theoretical insights into the corresponding performance metric. This reveals that while conditional risk calibration is related to existing uncertainty quantification problems, it remains a distinct and standalone machine learning problem. Empirically, we validate our theoretical findings and demonstrate the practical implications of conditional risk calibration in the learning to defer (L2D) framework. Our systematic experiments provide both qualitative and quantitative assessments, offering guidance for future research in uncertainty-aware decision-making.
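One concrete way to read the claimed equivalence, as a minimal sketch (the symbols f, ell, and r_f are illustrative notation introduced here, not taken from the paper): the conditional risk of a fixed predictor f under a loss ell is the regression function of the per-example losses, so it can in principle be estimated by regressing observed losses on the inputs.

```latex
% Conditional risk of a fixed predictor f under loss \ell (illustrative notation):
r_f(x) \;=\; \mathbb{E}\bigl[\ell(f(X), Y) \mid X = x\bigr]

% Because r_f is the regression function of the per-example loss \ell(f(X), Y),
% it can be estimated by least-squares regression of observed losses on inputs:
\hat{r}_f \;\in\; \arg\min_{g \in \mathcal{G}} \; \frac{1}{n} \sum_{i=1}^{n}
    \bigl( g(x_i) - \ell(f(x_i), y_i) \bigr)^2
```

Under squared error, the population minimizer of this objective is exactly r_f, which is one sense in which estimating conditional risk behaves like a standard regression task; the paper's precise formulation and metric may differ.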