ai · 8 min read · Apr 24, 2026

Fairness in sequential ML requires accounting for unequal uncertainty

Lee et al. show how model, feedback, and prediction uncertainty compound disadvantage in online decision systems, and propose uncertainty-aware methods to reduce disparities.

Source: arxiv/cs.AI · Michelle Seng Ah Lee, Kirtan Padh, David Watson, Niki Kilbertus, Jatinder Singh

Uncertainty in sequential decision-making systems is distributed unevenly across groups, amplifying historical exclusion; accounting for it is necessary for fair outcomes.

  • Three uncertainty types—model, feedback, prediction—each harm disadvantaged groups differently in online ML.
  • Unobserved counterfactuals (e.g., denied loan repayment) and sparse data on marginalized populations compound exclusion.
  • Selective feedback loops mean systems learn less about underrepresented groups, worsening future decisions.
  • Ignoring uncertainty creates compounding harms: reduced access and unrealized gains for subjects, and forgone returns for institutions.
  • Uncertainty-aware exploration can reduce outcome variance for disadvantaged groups without sacrificing institutional objectives.
  • Fairness audits must diagnose whether uncertainty or incidental noise drives disparities.
  • Framework enables practitioners to govern fairness risks in real-world sequential decision systems.
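The exploration idea above can be illustrated with a generic upper-confidence-bound acceptance rule. This is a sketch, not the authors' method: the function name `ucb_accept`, the exploration constant `c`, and all numbers are illustrative assumptions. The point it demonstrates is that a rule acting on the upper bound, rather than the point estimate, keeps granting access (and thus gathering feedback) for groups the model has seen little of.

```python
import math

def ucb_accept(point_estimate, n_observations, threshold, c=1.0):
    """Accept if the upper confidence bound on the estimate clears the threshold.

    With few observations the bound is wide, so the system keeps exploring
    instead of locking in a pessimistic point estimate built on sparse data.
    """
    if n_observations == 0:
        return True  # no data at all: pure exploration
    # Simple shrinking confidence width: c / sqrt(n) (illustrative, not calibrated)
    width = c / math.sqrt(n_observations)
    return point_estimate + width >= threshold

# A well-observed group and a sparsely observed group, both with point
# estimates below a 0.65 acceptance threshold:
well_observed = ucb_accept(0.62, n_observations=5000, threshold=0.65)  # bound ~0.634
sparse_group = ucb_accept(0.58, n_observations=40, threshold=0.65)     # bound ~0.738
```

Here the sparse group is accepted while the well-observed one is not, even though its point estimate is lower: the decision reflects how little the system knows, which is exactly the feedback gap the paper argues selective loops otherwise entrench.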

Astrobobo tool mapping

  • Knowledge Capture Record the three uncertainty types (model, feedback, prediction) as a checklist for each decision system. Capture concrete examples of unobserved counterfactuals and feedback gaps per group.
  • Focus Brief Summarize the fairness-uncertainty trade-off for your system: what institutional objective conflicts with reducing uncertainty for disadvantaged groups? Use this to frame conversations with stakeholders.
  • Reading Queue Queue follow-up papers on counterfactual estimation and active learning in fairness. This paper assumes you can estimate counterfactuals; the next step is learning how.

Frequently asked

  • How is uncertainty different from bias? Bias is a systematic preference for one outcome over another; uncertainty is a lack of information. A system can be unbiased in intent yet unfair in practice if it simply has less data on a group. Lee et al. argue that fairness requires addressing both: uncertainty-aware fairness means actively closing information gaps, not just removing statistical correlations.
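One way an audit can separate an information gap from a stable rate difference is to report, per group, both the outcome rate and the width of a confidence interval around it. The sketch below is an illustrative helper (the function name and the normal-approximation interval are assumptions, not the paper's audit procedure): two groups with identical rates can still differ sharply in how much the system actually knows about them.

```python
import math

def group_uncertainty_report(outcomes_by_group, z=1.96):
    """Per-group outcome-rate estimate with a normal-approximation CI half-width.

    A wide interval for one group signals an information gap (uncertainty-driven
    disparity); a tight interval with a rate gap points to something else.
    """
    report = {}
    for group, outcomes in outcomes_by_group.items():
        n = len(outcomes)
        p = sum(outcomes) / n
        half_width = z * math.sqrt(p * (1 - p) / n)
        report[group] = {"rate": p, "n": n, "ci_half_width": half_width}
    return report

# Same 60% positive rate, very different sample sizes:
report = group_uncertainty_report({
    "majority": [1] * 60 + [0] * 40,  # n = 100
    "minority": [1] * 6 + [0] * 4,    # n = 10
})
```

In this toy example both groups show a 0.6 rate, but the minority group's interval is roughly three times wider: the disparity an audit should flag is in what the system knows, not yet in what it predicts.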
cite
APA
Michelle Seng Ah Lee, Kirtan Padh, David Watson, Niki Kilbertus, Jatinder Singh. (2026, April 24). Fairness in sequential ML requires accounting for unequal uncertainty. Astrobobo Content Engine (rewrite of arxiv/cs.AI). https://astrobobo-content-engine.vercel.app/article/fairness-in-sequential-ml-requires-accounting-for-unequal-uncertainty-23b48a
MLA
Michelle Seng Ah Lee, Kirtan Padh, David Watson, Niki Kilbertus, Jatinder Singh. "Fairness in sequential ML requires accounting for unequal uncertainty." Astrobobo Content Engine, 24 Apr 2026, https://astrobobo-content-engine.vercel.app/article/fairness-in-sequential-ml-requires-accounting-for-unequal-uncertainty-23b48a. Based on "arxiv/cs.AI", https://arxiv.org/abs/2604.21711.
BibTeX
@misc{astrobobo_fairness-in-sequential-ml-requires-accounting-for-unequal-uncertainty-23b48a_2026,
  author       = {Michelle Seng Ah Lee and Kirtan Padh and David Watson and Niki Kilbertus and Jatinder Singh},
  title        = {Fairness in sequential ML requires accounting for unequal uncertainty},
  year         = {2026},
  url          = {https://astrobobo-content-engine.vercel.app/article/fairness-in-sequential-ml-requires-accounting-for-unequal-uncertainty-23b48a},
  note         = {Astrobobo rewrite of arxiv/cs.AI, https://arxiv.org/abs/2604.21711},
}
