ai · 5 min read · Apr 27, 2026

Fast Entropic Approximations cut entropy computation by 37x

Horenko et al. propose non-singular rational approximations of Shannon entropy and KL divergence that preserve mathematical properties while reducing computation cost and improving ML model training.

Source: arxiv/cs.AI · Illia Horenko, Davide Bassetti, Lukáš Pospíšil

Rational approximations of entropy measures reduce computation cost 2–37× while preserving mathematical properties and eliminating gradient singularities.

  • Fast Entropic Approximations (FEA) replace Shannon entropy and KL divergence with non-singular rational functions (see the sketch after this list).
  • FEA requires 5–7 elementary operations versus tens for standard logarithm-based schemes.
  • Mean absolute error around 10⁻³, 10–20× better than existing approximation methods.
  • Non-singular gradients improve robustness and convergence speed in optimization.
  • On feature selection benchmarks, FEA trains models 1000× faster than LASSO while achieving better model quality.
  • Mathematical properties of original measures (symmetry, convexity) are preserved in approximations.
  • Applicable to physics, information theory, machine learning, and quantum computing workflows.
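
To make the rational-surrogate idea concrete, here is a minimal Python sketch. It uses the [2/2] Padé approximant of ln x at x = 1 as a stand-in rational function; this is an assumption for illustration, not the paper's actual FEA coefficients, though it shares the advertised flavor: a handful of elementary operations, no logarithm, and a gradient that stays finite at x = 0, where the exact entropy term's derivative diverges.

```python
import numpy as np

def entropy_term_exact(x):
    """Exact entropy term -x ln x; its derivative -ln x - 1 diverges as x -> 0."""
    return -x * np.log(x)

def entropy_term_rational(x):
    """Hypothetical rational surrogate for -x ln x (NOT the paper's formula).

    Substitutes the [2/2] Pade approximant ln x ~ 3(x^2 - 1)/(x^2 + 4x + 1)
    into -x ln x, giving 3x(1 - x^2)/(x^2 + 4x + 1): about six elementary
    operations, no logarithm, finite value and gradient on all of [0, 1].
    """
    x2 = x * x
    return 3.0 * x * (1.0 - x2) / (x2 + 4.0 * x + 1.0)

def rational_entropy(p):
    """Shannon-entropy surrogate for a probability vector p."""
    return entropy_term_rational(p).sum()

if __name__ == "__main__":
    # Accuracy away from the boundary: ~4e-4 absolute error at x = 0.5.
    x = 0.5
    print(entropy_term_exact(x), entropy_term_rational(x))

    p = np.array([0.7, 0.2, 0.1])
    print("exact H:", entropy_term_exact(p).sum(), " surrogate H:", rational_entropy(p))

    # Non-singular gradient: the finite-difference slope near x = 0 stays
    # close to 3 for the surrogate, while the exact term's slope keeps growing.
    eps = 1e-8
    for f in (entropy_term_exact, entropy_term_rational):
        print(f.__name__, (f(2 * eps) - f(eps)) / eps)
```

With this stand-in, the absolute error at x = 0.5 is about 4 × 10⁻⁴, in the same ballpark as the roughly 10⁻³ accuracy the paper reports, though this particular approximant degrades closer to x = 0.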

Astrobobo tool mapping

  • Knowledge Capture: Document the FEA formula (rational approximation coefficients) and the three key properties it preserves (symmetry, convexity, non-singularity) in your ML reference library for quick lookup.
  • Focus Brief: Create a one-page summary of when to use FEA (high-dimensional feature selection, entropy regularization) vs. standard entropy (low-dimensional, offline analysis where speed is not critical).
  • Reading Queue: Queue the full arxiv paper for deeper study of the mathematical derivation and benchmark details; flag the LASSO comparison for your feature selection workflow.

Frequently asked

  • What is Fast Entropic Approximation, and why does it matter for ML? Fast Entropic Approximation (FEA) replaces Shannon entropy and Kullback-Leibler divergence with rational functions that compute in 5–7 operations instead of tens, while preserving mathematical properties like symmetry and convexity. It eliminates gradient singularities near zero, which cause numerical instability in optimization. This is critical for machine learning workflows that compute entropy millions of times during training.
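
The same Padé substitution extends to a per-term KL surrogate, sketched below under the same caveat: this is an illustrative stand-in, not the FEA formula from the paper. Writing t = p/q, substituting ln t ≈ 3(t² − 1)/(t² + 4t + 1) into p ln(p/q) and clearing denominators gives a rational expression with no logarithm and no lone division by q.

```python
import numpy as np

def kl_exact(p, q):
    """Exact KL divergence sum_i p_i ln(p_i / q_i); blows up as any q_i -> 0."""
    return np.sum(p * np.log(p / q))

def kl_rational(p, q):
    """Hypothetical rational KL surrogate (NOT the paper's FEA formula).

    Per term: 3 p (p^2 - q^2) / (p^2 + 4 p q + q^2), obtained by feeding
    t = p/q into the [2/2] Pade approximant of ln t and clearing q from
    the denominator. Finite whenever p_i + q_i > 0; most accurate when
    p_i / q_i stays near 1, and it saturates for extreme ratios.
    """
    num = 3.0 * p * (p * p - q * q)
    den = p * p + 4.0 * p * q + q * q
    return np.sum(num / den)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p = rng.dirichlet(np.ones(100))
    q = 0.5 * p + 0.5 * rng.dirichlet(np.ones(100))  # keep ratios moderate
    print("exact KL:    ", kl_exact(p, q))
    print("surrogate KL:", kl_rational(p, q))
```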
Cite
APA
Horenko, I., Bassetti, D., & Pospíšil, L. (2026, April 27). Fast Entropic Approximations cut entropy computation by 37x. Astrobobo Content Engine (rewrite of arxiv/cs.AI). https://astrobobo-content-engine.vercel.app/article/fast-entropic-approximations-cut-entropy-computation-by-37x-5c7dc8
MLA
Horenko, Illia, et al. "Fast Entropic Approximations cut entropy computation by 37x." Astrobobo Content Engine, 27 Apr. 2026, https://astrobobo-content-engine.vercel.app/article/fast-entropic-approximations-cut-entropy-computation-by-37x-5c7dc8. Based on "arxiv/cs.AI", https://arxiv.org/abs/2505.14234.
BibTeX
@misc{astrobobo_fast-entropic-approximations-cut-entropy-computation-by-37x-5c7dc8_2026,
  author       = {Horenko, Illia and Bassetti, Davide and Posp\'i\v{s}il, Luk\'a\v{s}},
  title        = {Fast Entropic Approximations cut entropy computation by 37x},
  year         = {2026},
  url          = {https://astrobobo-content-engine.vercel.app/article/fast-entropic-approximations-cut-entropy-computation-by-37x-5c7dc8},
  note         = {Astrobobo rewrite of arxiv/cs.AI, https://arxiv.org/abs/2505.14234},
}
