This paper introduces a taxonomy of uncertainty in sequential decision-making, categorizing it into model, feedback, and prediction uncertainty, to address fairness concerns in online ML applications. The authors formalize model and feedback uncertainty using counterfactual logic and reinforcement learning, demonstrating how ignoring the unobserved space can harm both decision-makers and decision subjects. Through algorithmic examples and experiments on simulated biased data, they show that uncertainty-aware exploration can reduce outcome variance for disadvantaged groups while maintaining institutional objectives.
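As a hedged illustration of why the unobserved space matters (the notation here is assumed, not taken from the paper): under selective feedback, the decision maker observes the outcome Y only when the decision is D = 1, so the quantity it can estimate, E[Y | X, D = 1], may diverge from the counterfactual quantity E[Y | X], and the gap is largest for groups whose past approvals have been rare or skewed.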
Ignoring uncertainty in sequential decision-making disproportionately harms disadvantaged groups, but accounting for it can improve fairness without sacrificing institutional goals.
Fair machine learning (ML) methods help identify and mitigate the risk that algorithms encode or automate social injustices. Algorithmic approaches alone cannot resolve structural inequalities, but they can support socio-technical decision systems by surfacing discriminatory biases, clarifying trade-offs, and enabling governance. Although fairness is well studied in supervised learning, many real-world ML applications are online and sequential, with prior decisions informing future ones. Each decision is taken under uncertainty arising from unobserved counterfactuals and finite samples, with especially dire consequences for under-represented groups, which are systematically under-observed due to historical exclusion and selective feedback. A bank cannot know whether a denied loan would have been repaid, and may have less data on marginalized populations. This paper introduces a taxonomy of uncertainty in sequential decision-making -- model, feedback, and prediction uncertainty -- providing a shared vocabulary for assessing systems where uncertainty is unevenly distributed across groups. We formalize model and feedback uncertainty via counterfactual logic and reinforcement learning, and illustrate the harms that policies ignoring the unobserved space impose on decision makers (unrealized gains and losses) and on subjects (compounding exclusion, reduced access). Algorithmic examples show that it is possible to reduce outcome variance for disadvantaged groups while preserving institutional objectives (e.g., expected utility). Experiments on data simulated with varying degrees of bias show how unequal uncertainty and selective feedback produce disparities, and how uncertainty-aware exploration alters fairness metrics. The framework equips practitioners to diagnose, audit, and govern fairness risks. Where unfairness is driven by uncertainty rather than incidental noise, accounting for that uncertainty is essential to fair and effective decision-making.
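As a toy illustration of the dynamic the abstract describes (a sketch under assumed settings, not the paper's algorithm or experimental setup), the simulation below compares a greedy approval rule against an uncertainty-aware, UCB-style rule under selective feedback; the group names, repayment rates, historical counts, and approval threshold are all hypothetical.

```python
# Minimal sketch (not the paper's method): lending under selective feedback,
# comparing greedy approval with an uncertainty-aware (UCB-style) rule.
# All numbers below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

TRUE_REPAY = {"advantaged": 0.70, "disadvantaged": 0.65}          # assumed ground truth
PRIOR_OBS  = {"advantaged": (140, 60), "disadvantaged": (4, 6)}   # (repaid, defaulted) history
THRESHOLD  = 0.5    # approve only if the score used exceeds this
ROUNDS     = 5000

def simulate(use_ucb: bool) -> dict:
    # Per-group outcome counts; the disadvantaged group starts under-observed
    # and with a pessimistic historical record (simulated bias).
    counts = {g: list(PRIOR_OBS[g]) for g in TRUE_REPAY}
    approvals = {g: 0 for g in TRUE_REPAY}
    for _ in range(ROUNDS):
        g = rng.choice(list(TRUE_REPAY))            # applicant's group
        repaid, defaulted = counts[g]
        n = repaid + defaulted
        mean = repaid / n
        bonus = np.sqrt(2 * np.log(ROUNDS) / n)     # exploration bonus shrinks with data
        score = mean + bonus if use_ucb else mean
        if score > THRESHOLD:                       # approve the loan
            approvals[g] += 1
            outcome = rng.random() < TRUE_REPAY[g]
            counts[g][0 if outcome else 1] += 1     # feedback observed only on approval
        # rejected applicants yield no feedback: the counterfactual stays unobserved
    return approvals

print("greedy:", simulate(use_ucb=False))
print("ucb:   ", simulate(use_ucb=True))
```

With these illustrative numbers the greedy rule never approves the under-observed group (its historical mean sits below the threshold, so no new feedback ever arrives), while the exploration bonus lets the UCB-style rule gather enough data to discover that the group in fact clears the bar.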