ai · 5 min read · Apr 17, 2026

Verifiable model unlearning on edge devices without retraining

ZK-APEX combines sparse masking and zero-knowledge proofs to let providers verify that personalized models forget targeted data while preserving local utility.

Source: arXiv/cs.AI · Mohammad M Maheri, Sunil Cotterill, Alex Davidson, Hamed Haddadi

ZK-APEX enables providers to verify personalized model unlearning on edge devices using zero-knowledge proofs without accessing private data.

  • Personalized models on edge devices resist deletion requests; providers cannot verify compliance without seeing parameters or data.
  • ZK-APEX applies sparse masking on the provider side and Group OBS compensation on the client side, using a blockwise Fisher matrix.
  • Halo2 zero-knowledge proofs allow verification that unlearning occurred without revealing private data or personalized weights.
  • Vision Transformer tasks recover nearly all personalization accuracy; OPT125M code model recovers ~70% of original accuracy.
  • Proof generation completes in ~2 hours using under 1 GB of memory, roughly 10 million times faster than retraining-based verification.
  • Framework addresses real deployment scenario where clients may ignore or falsely claim compliance with deletion requests.
  • Verification remains lightweight on edge devices, critical for practical adoption in distributed ML systems.
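The client-side compensation step above can be sketched numerically. This is a minimal NumPy illustration of Group-OBS-style compensation, assuming the standard Optimal Brain Surgeon update with a (damped) Fisher matrix as the Hessian proxy; the function name and the damping term are ours, not the paper's, and a real deployment would work blockwise over layer groups rather than on one small dense matrix.

```python
import numpy as np

def group_obs_compensation(w, fisher, prune_idx, damping=1e-4):
    """Zero the weights in prune_idx and adjust the remaining weights
    to minimize the quadratic loss increase, using the damped Fisher
    matrix as a Hessian proxy (classic group-OBS update)."""
    H = fisher + damping * np.eye(len(w))
    H_inv = np.linalg.inv(H)
    # Restrict the inverse Hessian to the pruned coordinates.
    Hq = H_inv[np.ix_(prune_idx, prune_idx)]
    # delta = -H^{-1}[:, Q] (H^{-1}[Q, Q])^{-1} w_Q
    delta = -H_inv[:, prune_idx] @ np.linalg.solve(Hq, w[prune_idx])
    w_new = w + delta
    w_new[prune_idx] = 0.0  # enforce exact zeros on the masked group
    return w_new
```

The update is the unique minimizer of the quadratic loss increase 0.5 δᵀHδ subject to δ_Q = -w_Q, so it never does worse than naively zeroing the masked weights.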

Astrobobo tool mapping

  • Knowledge Capture: Record the core mechanism: sparse masking + Fisher-based compensation + Halo2 proofs. Sketch the three-party flow (provider, client, verifier) to internalize how zero-knowledge preserves privacy during verification.
  • Focus Brief: Summarize the accuracy-verification tradeoff for your domain. Vision Transformer: ~100% recovery. OPT125M: ~70%. Use this to decide if the method fits your tolerance for utility loss.
  • Reading Queue: Queue the full arXiv paper and related work on federated unlearning (e.g., machine unlearning surveys) to understand how ZK-APEX compares to prior verification approaches.
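The three-party flow is worth sketching concretely. Below is a deliberately simplified Python illustration: a plain hash commitment and a plaintext check stand in for the Halo2 proof (in the real protocol, the zero-knowledge proof is what lets the verifier check the masked positions without ever seeing the weights). All function names are ours, for illustration only.

```python
import hashlib

def commit(values):
    # Toy hash commitment; ZK-APEX uses a Halo2 proof instead,
    # which also hides the committed weights from the verifier.
    return hashlib.sha256(repr(values).encode()).hexdigest()

def provider_mask(forget_idx, n_params):
    # Provider: sparse mask zeroing parameters tied to the forget set.
    return [0 if i in forget_idx else 1 for i in range(n_params)]

def client_unlearn(weights, mask):
    # Client: apply the mask locally (OBS compensation omitted here)
    # and commit to the resulting private weights.
    new_w = [w * m for w, m in zip(weights, mask)]
    return new_w, commit(new_w)

def verifier_check(commitment, claimed_weights, mask):
    # Verifier: accept only if the commitment opens correctly AND
    # every masked position is exactly zero. (In ZK-APEX this check
    # happens inside the proof, without revealing the weights.)
    return (commit(claimed_weights) == commitment
            and all(w == 0 for w, m in zip(claimed_weights, mask) if m == 0))
```

The sketch captures the flow, not the cryptography: the privacy property comes entirely from replacing the plaintext opening with a zero-knowledge proof.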

Frequently asked

  • What is machine unlearning, and why is verifying it hard? Machine unlearning removes the influence of specific data points from a trained model to satisfy privacy or copyright requests. Verification is hard because providers cannot access edge-device parameters or private data, yet must confirm that the targeted information was actually forgotten. Traditional retraining-based checks are slow and expensive, making lightweight cryptographic verification essential.
Original paper: https://arxiv.org/abs/2512.09953
