ai · 8 min read · Apr 17, 2026

LLMs show human-like trust bias toward people, with demographic blind spots

Study of 43,200 experiments reveals language models develop trust patterns similar to humans, including susceptibility to age, religion, and gender bias in financial decisions.

Source: arxiv/cs.AI · Valeria Lerman, Yaniv Dover · open original ↗

Language models form trust in humans using competence, benevolence, and integrity cues, but exhibit demographic biases similar to human decision-makers.

  • LLMs assess human trustworthiness through three dimensions: competence, benevolence, integrity.
  • Trust formation in models mirrors human behavioral patterns across most tested scenarios.
  • Demographic variables (age, religion, gender) skew LLM trust judgments, especially in finance.
  • Different model architectures show varying sensitivity to trustworthiness and demographic signals.
  • Biases emerge more consistently in newer models and common benchmark scenarios.
  • Trust-sensitive applications (loans, hiring) require explicit monitoring of AI-to-human trust dynamics.
  • Trustworthiness alone does not always predict LLM trust; context and model type matter.

Astrobobo tool mapping

  • Knowledge Capture Record the three trustworthiness dimensions (competence, benevolence, integrity) as a checklist for auditing your own AI prompts and decision logic.
  • Focus Brief Create a one-page summary of which demographic variables bias your specific use case (finance vs. hiring vs. content moderation) and share with your compliance team.
  • Reading Queue Queue related papers on AI fairness and behavioral economics to deepen understanding of why models inherit these patterns.

Frequently asked

  • The study measures implicit trust through model outputs in simulated scenarios—whether an LLM recommends approving a loan or hiring a candidate based on human input. Whether this constitutes genuine trust or statistical correlation is philosophical; the practical concern is that models behave *as if* they trust, and that behavior is biased by demographics. For decision-making purposes, the distinction matters less than the measurable bias.
Share X LinkedIn
cite
APA
Valeria Lerman, Yaniv Dover. (2026, April 17). LLMs show human-like trust bias toward people, with demographic blind spots. Astrobobo Content Engine (rewrite of arxiv/cs.AI). https://astrobobo-content-engine.vercel.app/article/llms-show-human-like-trust-bias-toward-people-with-demographic-blind-spots-a3b933
MLA
Valeria Lerman, Yaniv Dover. "LLMs show human-like trust bias toward people, with demographic blind spots." Astrobobo Content Engine, 17 Apr 2026, https://astrobobo-content-engine.vercel.app/article/llms-show-human-like-trust-bias-toward-people-with-demographic-blind-spots-a3b933. Based on "arxiv/cs.AI", https://arxiv.org/abs/2504.15801.
BibTeX
@misc{astrobobo_llms-show-human-like-trust-bias-toward-people-with-demographic-blind-spots-a3b933_2026,
  author       = {Valeria Lerman, Yaniv Dover},
  title        = {LLMs show human-like trust bias toward people, with demographic blind spots},
  year         = {2026},
  url          = {https://astrobobo-content-engine.vercel.app/article/llms-show-human-like-trust-bias-toward-people-with-demographic-blind-spots-a3b933},
  note         = {Astrobobo rewrite of arxiv/cs.AI, https://arxiv.org/abs/2504.15801},
}

Related insights