ai · 8 min read · Apr 27, 2026

Poisoning attacks on recommender systems gain potency through worst-case modeling

Researchers propose SharpAP, a method that optimizes fake user injection attacks by targeting worst-case model structures, improving cross-system transferability.

Source: arxiv/cs.LG · Junsong Xie, Yonghui Yang, Pengyang Shao, Le Wu

SharpAP improves fake-profile attacks on recommender systems by optimizing against worst-case victim models rather than fixed surrogates.

  • Existing poisoning attacks assume fake data crafted for one model transfers to others; this assumption breaks under structural differences.
  • SharpAP uses sharpness-aware minimization to identify approximate worst-case victim models during the attack process.
  • The method casts the attack as a tri-level (min-max-min) optimization: an inner level trains the model on the poisoned data, a middle level perturbs it toward an approximate worst-case (loss-maximizing) model, and an outer level optimizes the fake profiles against that worst case.
  • Poisoned data optimized for worst-case models shows reduced sensitivity to model architecture shifts.
  • Experiments on three real-world datasets show SharpAP significantly increases attack success across diverse recommender architectures.
  • Attackers typically lack knowledge of deployed victim systems, forcing reliance on surrogate models as proxies.
  • Overfitting to a single surrogate model degrades attack performance when the actual victim uses different structures.
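To make the sharpness-aware idea above concrete, here is a minimal NumPy sketch of the worst-case perturbation step, assuming a toy linear model with MSE loss; the loss function and radius `rho` are illustrative stand-ins, not values from the paper:

```python
import numpy as np

def loss(w, X, y):
    """Mean squared error of a linear model -- stand-in for the victim's training loss."""
    return 0.5 * np.mean((X @ w - y) ** 2)

def grad(w, X, y):
    """Gradient of the MSE loss with respect to the weights."""
    return X.T @ (X @ w - y) / len(y)

def worst_case_weights(w, X, y, rho=0.05):
    """Sharpness-aware inner step: ascend the loss within an L2 ball of radius rho,
    yielding an approximate worst-case model near the current weights."""
    g = grad(w, X, y)
    norm = np.linalg.norm(g) + 1e-12  # avoid division by zero
    return w + rho * g / norm

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=100)

w = np.zeros(3)
w_adv = worst_case_weights(w, X, y)
# The perturbed (worst-case) model incurs at least as high a loss as the original.
assert loss(w_adv, X, y) >= loss(w, X, y)
```

Poisoned data that remains effective against `w_adv`, not just `w`, is by construction less tied to one specific set of trained weights, which is the intuition behind the improved transferability.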

Astrobobo tool mapping

  • Knowledge Capture: Record the tri-level optimization framework (min-max-min) and sharpness-aware minimization principle as a reusable attack pattern for future threat modeling.
  • Focus Brief: Summarize the key vulnerability: surrogate-based attacks fail when victim models differ structurally. Use this as a checklist item for security reviews.
  • Reading Queue: Queue related papers on adversarial transferability and federated learning robustness to build context for defense strategies.

Frequently asked

  • How does SharpAP differ from standard surrogate-based poisoning attacks? Sharpness-aware poisoning (SharpAP) optimizes fake user profiles not just for a single surrogate model, but for a worst-case victim model identified through sharpness-aware minimization. Standard attacks assume poisoned data crafted for one model transfers directly to others; SharpAP improves transferability by making the poisoning robust to structural differences between models, reducing overfitting to the surrogate.
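The min-max-min structure can also be sketched end to end: the attacker ascends an outer attack objective in which a model is trained on the poisoned data (inner minimization) and then evaluated at sharpness-perturbed weights (middle maximization). Every modeling choice below (linear victim, finite-difference outer gradient, step sizes, a single poison point) is a hypothetical stand-in for illustration, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
y = X @ np.array([1.0, -1.0])  # clean training data for a toy linear victim

def fit(Xp, yp):
    """Inner min: least-squares fit of the victim on the poisoned training set."""
    return np.linalg.lstsq(Xp, yp, rcond=None)[0]

def clean_loss(w):
    return 0.5 * np.mean((X @ w - y) ** 2)

def worst_case_loss(w, rho=0.05):
    """Middle max: evaluate the loss at sharpness-perturbed (worst-case) weights."""
    g = X.T @ (X @ w - y) / len(y)
    return clean_loss(w + rho * g / (np.linalg.norm(g) + 1e-12))

def attack_objective(p, py):
    """Train on clean data plus the poison point, score the worst-case model."""
    w = fit(np.vstack([X, p]), np.append(y, py))
    return worst_case_loss(w)

# Outer level: finite-difference ascent on the poison point's features.
p0 = rng.normal(size=(1, 2))
p, py = p0.copy(), 5.0
for _ in range(100):
    grad_p = np.zeros(2)
    for j in range(2):
        e = np.zeros((1, 2))
        e[0, j] = 1e-4
        grad_p[j] = (attack_objective(p + e, py) - attack_objective(p - e, py)) / 2e-4
    p = p + 0.5 * grad_p  # ascend: degrade the trained model on clean data

assert attack_objective(p, py) > attack_objective(p0, py)
```

Because the outer objective scores the model at its worst-case weights rather than at the exact fitted weights, the crafted poison is penalized for exploiting weight-specific quirks, which mirrors the reduced sensitivity to architecture shifts reported in the summary.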
cite
APA
Junsong Xie, Yonghui Yang, Pengyang Shao, Le Wu. (2026, April 27). Poisoning attacks on recommender systems gain potency through worst-case modeling. Astrobobo Content Engine (rewrite of arxiv/cs.LG). https://astrobobo-content-engine.vercel.app/article/poisoning-attacks-on-recommender-systems-gain-potency-through-worst-case-modelin-3075b0
MLA
Junsong Xie, Yonghui Yang, Pengyang Shao, Le Wu. "Poisoning attacks on recommender systems gain potency through worst-case modeling." Astrobobo Content Engine, 27 Apr 2026, https://astrobobo-content-engine.vercel.app/article/poisoning-attacks-on-recommender-systems-gain-potency-through-worst-case-modelin-3075b0. Based on "arxiv/cs.LG", https://arxiv.org/abs/2604.22170.
BibTeX
@misc{astrobobo_poisoning-attacks-on-recommender-systems-gain-potency-through-worst-case-modelin-3075b0_2026,
  author       = {Junsong Xie and Yonghui Yang and Pengyang Shao and Le Wu},
  title        = {Poisoning attacks on recommender systems gain potency through worst-case modeling},
  year         = {2026},
  url          = {https://astrobobo-content-engine.vercel.app/article/poisoning-attacks-on-recommender-systems-gain-potency-through-worst-case-modelin-3075b0},
  note         = {Astrobobo rewrite of arxiv/cs.LG, https://arxiv.org/abs/2604.22170},
}

Related insights