madgrad: 'MADGRAD' Method for Stochastic Optimization

MADGRAD is a Momentumized, Adaptive, Dual Averaged Gradient method for stochastic optimization. It is a 'best-of-both-worlds' optimizer, combining the generalization performance of stochastic gradient descent with convergence at least as fast as that of Adam, and often faster. A drop-in optim_madgrad() implementation is provided, based on Defazio et al. (2021) <doi:10.48550/arXiv.2101.11075>.
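As a drop-in optimizer, optim_madgrad() is used like the optimizers shipped with the torch package: construct it over a model's parameters, then call zero_grad(), backward(), and step() in the training loop. The sketch below assumes a trivial linear-regression setup (the model, data, and learning rate are illustrative, not from the package documentation):

```r
library(torch)
library(madgrad)

# Illustrative data: y = 2x + noise
x <- torch_randn(100, 1)
y <- 2 * x + 0.1 * torch_randn(100, 1)

model <- nn_linear(1, 1)

# optim_madgrad() is a drop-in replacement for e.g. optim_adam()
opt <- optim_madgrad(model$parameters, lr = 0.01)

for (epoch in 1:50) {
  opt$zero_grad()                       # clear accumulated gradients
  loss <- nnf_mse_loss(model(x), y)     # forward pass and loss
  loss$backward()                       # backpropagate
  opt$step()                            # MADGRAD parameter update
}
```

This mirrors the standard torch-for-R training loop; only the optimizer constructor changes.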

Version: 0.1.0
Imports: torch (≥ 0.3.0), rlang
Suggests: testthat (≥ 3.0.0)
Published: 2021-05-10
DOI: 10.32614/CRAN.package.madgrad
Author: Daniel Falbel [aut, cre, cph], RStudio [cph], MADGRAD original implementation authors [cph]
Maintainer: Daniel Falbel
License: MIT + file LICENSE
NeedsCompilation: no
Materials: README
CRAN checks: madgrad results

Please use the canonical form https://CRAN.R-project.org/package=madgrad to link to this page.