ai · arxiv/cs.LG · 4 min
Selective-Update RNNs Match Transformers While Using Less Memory
A new RNN architecture learns when to update its internal state, preserving information across long sequences and avoiding wasted computation on redundant inputs.
May 3, 2026
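The teaser does not describe the paper's actual mechanism, but the general idea of a learned "update or keep" decision can be sketched with a scalar gate that interpolates between the previous hidden state and a new candidate, in the style of a simplified GRU cell. All names and parameter shapes below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SelectiveUpdateCell:
    """Toy RNN cell with a learned scalar update gate (illustrative only).

    At each step the gate decides how much of the new candidate state
    to write. A gate near 0 leaves the previous state essentially
    intact, so redundant inputs barely perturb long-range memory.
    """

    def __init__(self, input_size, hidden_size):
        s = 1.0 / np.sqrt(hidden_size)
        self.W_h = rng.uniform(-s, s, (hidden_size, input_size))
        self.U_h = rng.uniform(-s, s, (hidden_size, hidden_size))
        self.w_g = rng.uniform(-s, s, input_size)
        self.u_g = rng.uniform(-s, s, hidden_size)
        self.b_g = -1.0  # negative bias nudges the gate toward "keep"

    def step(self, x, h):
        # Scalar gate in (0, 1): how much to overwrite the state.
        g = sigmoid(self.w_g @ x + self.u_g @ h + self.b_g)
        # Candidate new state, as in a vanilla tanh RNN.
        h_tilde = np.tanh(self.W_h @ x + self.U_h @ h)
        # Convex combination: g=0 keeps h, g=1 replaces it entirely.
        return (1.0 - g) * h + g * h_tilde, g

cell = SelectiveUpdateCell(input_size=4, hidden_size=8)
h = np.zeros(8)
gates = []
for t in range(6):
    x = rng.normal(size=4)
    h, g = cell.step(x, h)
    gates.append(float(g))
```

If a hard (0/1) gate were used instead of this soft one, steps with a closed gate could skip the candidate computation altogether, which is one plausible reading of the "reducing computational waste" claim.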