engineering · 4 min read · Apr 20, 2026

Dual Transformers Improve Bug Assignment Accuracy by 10%+

TriagerX uses two transformer models and developer interaction history to recommend the right engineer for bug fixes, outperforming single-model approaches.

Source: arXiv cs.AI (https://arxiv.org/abs/2508.16860) · Md Afif Al Mamun, Gias Uddin, Lan Xia, Longyu Zhang

TriagerX pairs two transformers with interaction-based ranking to assign bugs to developers more accurately than single-model baselines.

  • Dual-transformer design combines recommendations from the last three layers of each model (six layers total), reducing noise from irrelevant tokens.
  • Interaction-based ranking layer refines assignments by examining developer history with similar resolved bugs.
  • Achieves 10%+ improvement in Top-1 and Top-3 accuracy across five public datasets.
  • Deployed in production at a large software company for both developer and component assignment.
  • Outperforms nine existing transformer-based methods including published state-of-the-art systems.
  • In the production setting, component recommendations improved by 10% and developer recommendations by 54%.
  • Addresses limitation of PLMs attending to irrelevant tokens in unstructured bug reports.
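The ensemble step described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: it assumes each transformer emits a developer-probability distribution from each of its last three layers, and that the six distributions are simply averaged to produce the final ranking.

```python
import numpy as np

def ensemble_rank(model_a_layers, model_b_layers, developers):
    """Average six per-layer probability distributions (3 per model)
    and return developers sorted from most to least recommended.
    Names and the averaging rule are illustrative assumptions."""
    layers = np.vstack(model_a_layers + model_b_layers)  # shape (6, n_devs)
    combined = layers.mean(axis=0)                       # fuse the six views
    order = np.argsort(combined)[::-1]                   # descending score
    return [developers[i] for i in order]

devs = ["alice", "bob", "carol"]
# Toy softmax outputs from each model's last three layers.
a = [np.array([0.6, 0.3, 0.1]), np.array([0.5, 0.4, 0.1]), np.array([0.7, 0.2, 0.1])]
b = [np.array([0.2, 0.6, 0.2]), np.array([0.3, 0.5, 0.2]), np.array([0.4, 0.4, 0.2])]
print(ensemble_rank(a, b, devs))  # → ['alice', 'bob', 'carol']
```

The point of fusing six distributions rather than one is that a single layer's attention pattern can latch onto irrelevant tokens; averaging across layers and models dampens any one view's noise.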

Astrobobo tool mapping

  • Knowledge Capture: Document your team's current bug assignment process (who decides, what signals matter). Note pain points: slow triage, wrong assignments, developer overload. This becomes your baseline for measuring TriagerX impact.
  • Focus Brief: Create a one-page summary of TriagerX's dual-transformer + interaction ranking logic. Share with your engineering lead to assess fit for your bug workflow and data availability.
  • Reading Queue: Queue the full TriagerX paper and the referenced SOTA baselines (e.g., GraphCodeBERT, CodeBERT) to understand what single-transformer approaches miss and why ensemble helps.

Frequently asked

  • Why two transformers instead of one? TriagerX uses two transformers, each contributing recommendations from its last three layers (six total). This ensemble approach reduces the impact of attending to irrelevant tokens in bug reports, and the dual design captures complementary semantic views, improving robustness. Single-transformer baselines rely on one model's attention pattern, which can miss nuance in unstructured bug text.
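The interaction-based ranking layer can also be sketched. Assume, for illustration, that each candidate is scored by how similar their previously resolved bugs are to the new report; Jaccard token overlap stands in for whatever similarity measure TriagerX actually uses, and all names here are hypothetical.

```python
def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two bug summaries (a stand-in
    for the paper's actual similarity measure)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def interaction_rerank(new_bug, candidates, resolved_history):
    """Re-rank candidate developers by the total similarity of their
    resolved bugs to the new report. Purely illustrative."""
    scores = {
        dev: sum(jaccard(new_bug, old) for old in resolved_history.get(dev, []))
        for dev in candidates
    }
    return sorted(candidates, key=lambda d: scores[d], reverse=True)

history = {
    "alice": ["ui button crash on click"],
    "bob": ["null pointer exception in parser", "parser crash on empty input"],
}
print(interaction_rerank("parser crash on malformed input", ["alice", "bob"], history))
# → ['bob', 'alice']  (bob has resolved more similar bugs)
```

Layering a history-based re-ranker on top of the transformer ensemble is what lets the system favor developers who have actually fixed similar bugs, rather than relying on text semantics alone.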
cite
APA
Md Afif Al Mamun, Gias Uddin, Lan Xia, Longyu Zhang. (2026, April 20). Dual Transformers Improve Bug Assignment Accuracy by 10%+. Astrobobo Content Engine (rewrite of arxiv/cs.AI). https://astrobobo-content-engine.vercel.app/article/dual-transformers-improve-bug-assignment-accuracy-by-10-47ff38
