Five Configurations of Human-AI Decision-Making Leadership
Jadad's spectrum model helps leaders recognize where actual decision authority lies in human-AI teams, from pure human to pure AI control.
Leaders must learn to recognize which of five distinct human-AI decision configurations they are operating in, or they risk misjudging where authority actually resides.
- Five positions span Pure Human, Centaur (human-led with AI input), Co-equal, Minotaur (AI-led with human input), and Pure AI.
- Decision authority shifts based on who frames problems, redirects work, and bears accountability.
- Misrecognition occurs when leaders maintain human-centered narratives after authority has moved elsewhere.
- Oversight can become ceremonial while leaders believe it remains meaningful and protective.
- Co-adaptability measures how well human and AI participants adjust together within a configuration.
- Heterogeneous teams vary by count, substrate, model type, capability, speed, memory, and participation form.
- Configuration recognition must happen early to preserve governability and organizational trust.
- The framework helps leaders judge whether a configuration fits the decision's stakes and constraints.
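The spectrum above can be sketched as a small data model. This is an illustrative Python sketch, not the paper's formalism: the stage names (`framed_by`, `redirected_by`, `answered_by`) follow the frame/redirect/answer stages named in the tool mapping, and the classification heuristic is an assumption about how those stages map to positions.

```python
from dataclasses import dataclass
from enum import Enum


class Configuration(Enum):
    """The five positions on the human-AI decision spectrum."""
    PURE_HUMAN = "Pure Human"
    CENTAUR = "Centaur"      # human-led with AI input
    CO_EQUAL = "Co-equal"
    MINOTAUR = "Minotaur"    # AI-led with human input
    PURE_AI = "Pure AI"


@dataclass
class Decision:
    """Who actually held authority at each stage of one decision.

    Values are "human", "ai", or "shared" (hypothetical labels).
    """
    framed_by: str      # who set the problem frame
    redirected_by: str  # who could change course mid-stream
    answered_by: str    # who produced the final output


def classify(d: Decision) -> Configuration:
    """Map observed stage authority to a spectrum position (heuristic)."""
    roles = (d.framed_by, d.redirected_by, d.answered_by)
    if all(r == "human" for r in roles):
        return Configuration.PURE_HUMAN
    if all(r == "ai" for r in roles):
        return Configuration.PURE_AI
    if d.framed_by == "shared":
        return Configuration.CO_EQUAL
    # Whoever frames the problem leads the configuration.
    return Configuration.CENTAUR if d.framed_by == "human" else Configuration.MINOTAUR
```

The key design choice, consistent with the article's emphasis, is that framing authority, not who types the final answer, decides whether a configuration is Centaur or Minotaur.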
Astrobobo tool mapping
- Decision Log: Record each significant decision with its configuration position (Pure Human, Centaur, Co-equal, Minotaur, Pure AI) and note any shifts observed during execution.
- Accountability Matrix: List decision stages (frame, redirect, answer) and mark who holds authority at each stage. Expose misalignment between perceived and actual responsibility.
- Configuration Audit: Quarterly review of 5–10 decisions to detect whether configurations drift toward AI dominance while oversight narratives remain unchanged.
- Co-Adaptability Check: After each decision, ask: could humans have redirected the outcome at any point? If no, the configuration may be Pure AI masquerading as Minotaur.
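The Decision Log and Configuration Audit above can be combined in a minimal sketch. The log schema, field names, and the drift rule (execution sitting further toward AI control than the logged position) are assumptions for illustration; the five-position ordering comes from the framework itself.

```python
# A minimal decision log: each entry records the configuration position
# claimed when the decision was logged and the position actually observed
# during execution, so drift toward AI dominance becomes visible.
log = [
    {"decision": "vendor selection", "logged": "Centaur",  "executed": "Centaur"},
    {"decision": "pricing update",   "logged": "Centaur",  "executed": "Minotaur"},
    {"decision": "incident triage",  "logged": "Minotaur", "executed": "Pure AI"},
]

# The spectrum, ordered from full human control to full AI control.
ORDER = ["Pure Human", "Centaur", "Co-equal", "Minotaur", "Pure AI"]


def drifted_toward_ai(entry: dict) -> bool:
    """True when execution sat further toward AI control than logged."""
    return ORDER.index(entry["executed"]) > ORDER.index(entry["logged"])


# The quarterly audit question: how many decisions drifted while the
# oversight narrative (the logged position) stayed unchanged?
drifts = [e["decision"] for e in log if drifted_toward_ai(e)]
print(f"{len(drifts)}/{len(log)} decisions drifted toward AI: {drifts}")
```

Running this on the sample log flags the two entries whose executed position moved rightward on the spectrum, which is exactly the mismatch the Configuration Audit is meant to surface.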
Frequently asked
- What distinguishes a Centaur configuration from a Minotaur? Centaur places humans in the lead role with AI providing input; the human frames the problem and can redirect work. Minotaur places AI in the lead with humans in a review or override role; the AI frames the problem and humans intervene only if needed. The distinction matters because it determines who bears primary accountability and whether human judgment shapes the decision or merely constrains it.
Cite
Alejandro R. Jadad. (2026, May 2). Five Configurations of Human-AI Decision-Making Leadership. Astrobobo Content Engine (rewrite of arxiv/cs.AI). https://astrobobo-content-engine.vercel.app/article/five-configurations-of-human-ai-decision-making-leadership-ed5fbc
Alejandro R. Jadad. "Five Configurations of Human-AI Decision-Making Leadership." Astrobobo Content Engine, 2 May 2026, https://astrobobo-content-engine.vercel.app/article/five-configurations-of-human-ai-decision-making-leadership-ed5fbc. Based on "arxiv/cs.AI", https://arxiv.org/abs/2604.27392.
@misc{astrobobo_five-configurations-of-human-ai-decision-making-leadership-ed5fbc_2026,
author = {Alejandro R. Jadad},
title = {Five Configurations of Human-AI Decision-Making Leadership},
year = {2026},
url = {https://astrobobo-content-engine.vercel.app/article/five-configurations-of-human-ai-decision-making-leadership-ed5fbc},
note = {Astrobobo rewrite of arxiv/cs.AI, https://arxiv.org/abs/2604.27392},
}