A stochastic gradient method with an exponential convergence rate for strongly-convex optimization with finite training sets
Related papers
Semi-stochastic coordinate descent
Optimization Methods and Software, 2017
Adaptive First- and Zeroth-order Methods for Weakly Convex Stochastic Optimization Problems
arXiv (Cornell University), 2020
Enhancing the efficiency of the stochastic method by using non-smooth and non-convex optimization
2020
Stochastic first order methods in smooth convex optimization
2011
Large-scale Nonconvex Stochastic Optimization by Doubly Stochastic Successive Convex Approximation
2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017
A Universally Optimal Multistage Accelerated Stochastic Gradient Method
2019
Stochastic Coordinate Descent for Nonsmooth Convex Optimization
arXiv (Cornell University), 2017
Robust Accelerated Gradient Methods for Smooth Strongly Convex Functions
SIAM Journal on Optimization, 2020
Accelerated Doubly Stochastic Gradient Algorithm for Large-scale Empirical Risk Minimization
2017
Expectigrad: Fast Stochastic Optimization with Robust Convergence Properties
arXiv (Cornell University), 2020
Revisiting SGD with Increasingly Weighted Averaging: Optimization and Generalization Perspectives
arXiv (Cornell University), 2020
Randomized Smoothing Variance Reduction Method for Large-Scale Non-smooth Convex Optimization
Operations Research Forum, 2021
SDNA: Stochastic Dual Newton Ascent for Empirical Risk Minimization
Second-Order Stochastic Optimization for Machine Learning in Linear Time
arXiv (Cornell University), 2016
Accelerated randomized stochastic optimization
The Annals of Statistics, 2003
Stochastic quasi-Newton methods for non-strongly convex problems: Convergence and rate analysis
2016 IEEE 55th Conference on Decision and Control (CDC), 2016
Robust accelerated gradient methods for machine learning
Massachusetts Institute of Technology, 2019
Stochastic gradient methods for principled estimation with massive data sets
Towards Noise-adaptive, Problem-adaptive (Accelerated) Stochastic Gradient Descent
arXiv (Cornell University), 2021
An Optimal Multistage Stochastic Gradient Method for Minimax Problems
2020 59th IEEE Conference on Decision and Control (CDC)
Randomized Smoothing SVRG for Large-scale Nonsmooth Convex Optimization
arXiv (Cornell University), 2018
Global convergence of the Heavy-ball method for convex optimization
2015 European Control Conference (ECC), 2015
On Adaptive Stochastic Gradient and Subgradient Methods
netfiles.uiuc.edu
Stochastic Coordinate Minimization with Progressive Precision for Stochastic Convex Optimization
arXiv (Cornell University), 2020
pbSGD: Powered Stochastic Gradient Descent Methods for Accelerated Non-Convex Optimization
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, 2020
Random Gradient-Free Minimization of Convex Functions
Foundations of Computational Mathematics, 2015
Fine-Grained Analysis of Stability and Generalization for Stochastic Gradient Descent
2020
Fast optimization of non-convex Machine Learning objectives
MSc Thesis, University of Edinburgh, 2012
Convergence of stochastic proximal gradient algorithm
Approximate and Stochastic Greedy Optimization
arXiv (Cornell University), 2017
Fast Convergence of Stochastic Gradient Descent under a Strong Growth Condition
2012
DTN: A Learning Rate Scheme with Convergence Rate of O(1/t) for SGD
2019