Tag: #unlearning
2 insights

ai · arxiv/cs.LG · 8 min
Simpler Optimizers Make LLM Unlearning More Robust
Research shows that using lower-order optimization methods during LLM unlearning produces forgetting that resists post-training attacks better than more sophisticated gradient-based approaches do.
Apr 21, 2026

ai · arxiv/cs.AI · 5 min
Verifiable model unlearning on edge devices without retraining
ZK-APEX combines sparse masking and zero-knowledge proofs to let providers verify that personalized models forget targeted data while preserving local utility.
Apr 17, 2026