A stochastic gradient method with an exponential convergence rate for strongly-convex optimization with finite training sets

Semi-stochastic coordinate descent

Qu Zheng

Optimization Methods and Software, 2017

Adaptive First- and Zeroth-order Methods for Weakly Convex Stochastic Optimization Problems

Parvin Nazari

arXiv, 2020

Enhancing the efficiency of the stochastic method by using non-smooth and non-convex optimization

Anjani Singha

2020

Stochastic first order methods in smooth convex optimization

Olivier Devolder

2011

Large-scale nonconvex stochastic optimization by Doubly Stochastic Successive Convex approximation

Alec Koppel

2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017

A Universally Optimal Multistage Accelerated Stochastic Gradient Method

Alireza Fallah

2019

Stochastic Coordinate Descent for Nonsmooth Convex Optimization

Anand Rangarajan

On stochastic and deterministic quasi-Newton methods for non-Strongly convex optimization: Asymptotic convergence and rate analysis

Uday V Shanbhag

arXiv, 2017

A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization

Uday V Shanbhag

2018 IEEE Conference on Decision and Control (CDC)

Robust Accelerated Gradient Methods for Smooth Strongly Convex Functions

Alireza Fallah

SIAM Journal on Optimization, 2020

Accelerated Doubly Stochastic Gradient Algorithm for Large-scale Empirical Risk Minimization

Tongzhou Mu

2017

Expectigrad: Fast Stochastic Optimization with Robust Convergence Properties

Christopher Amato

arXiv, 2020

Revisiting SGD with Increasingly Weighted Averaging: Optimization and Generalization Perspectives

Zixuan Wu

arXiv, 2020

Randomized Smoothing Variance Reduction Method for Large-Scale Non-smooth Convex Optimization

Wenjie Huang

Operations Research Forum, 2021

SDNA: Stochastic Dual Newton Ascent for Empirical Risk Minimization

Qu Zheng

Second-Order Stochastic Optimization for Machine Learning in Linear Time

Brian Bullins

arXiv, 2016

Accelerated randomized stochastic optimization

Jürgen Dippon

The Annals of Statistics, 2003

CSG: A stochastic gradient method for a wide class of optimization problems appearing in a machine learning or data-driven context

Michael Stingl

arXiv, 2021

Stochastic quasi-Newton methods for non-strongly convex problems: Convergence and rate analysis

Uday V Shanbhag

2016 IEEE 55th Conference on Decision and Control (CDC), 2016

Robust accelerated gradient methods for machine learning

Alireza Fallah

Massachusetts Institute of Technology, 2019

Stochastic gradient methods for principled estimation with massive data sets

Edoardo Airoldi

Towards Noise-adaptive, Problem-adaptive (Accelerated) Stochastic Gradient Descent

Reza Babanezhad

arXiv, 2021

An Optimal Multistage Stochastic Gradient Method for Minimax Problems

Alireza Fallah

2020 59th IEEE Conference on Decision and Control (CDC)

Randomized Smoothing SVRG for Large-scale Nonsmooth Convex Optimization

Wenjie Huang

arXiv, 2018

Global convergence of the Heavy-ball method for convex optimization

Euhanna Ghadimi

2015 European Control Conference (ECC), 2015

On Adaptive Stochastic Gradient and Subgradient Methods

Angelia Nedich

netfiles.uiuc.edu

Stochastic Coordinate Minimization with Progressive Precision for Stochastic Convex Optimization

Sattar Vakili

arXiv, 2020

pbSGD: Powered Stochastic Gradient Descent Methods for Accelerated Non-Convex Optimization

Beitong Zhou

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, 2020

Random Gradient-Free Minimization of Convex Functions

Yurii Nesterov

Foundations of Computational Mathematics, 2015

Fine-Grained Analysis of Stability and Generalization for Stochastic Gradient Descent

Yiming Ying

2020

Fast optimization of non-convex Machine Learning objectives

Nikolaos Nikolaou

MSc Thesis, University of Edinburgh, 2012

Convergence of stochastic proximal gradient algorithm

Silvia Caterine Moreno Villa

Approximate and Stochastic Greedy Optimization

Peter Bartlett

arXiv, 2017

Fast Convergence of Stochastic Gradient Descent under a Strong Growth Condition

Nicolas Le Roux

2012

DTN: A Learning Rate Scheme with Convergence Rate of O(1/t) for SGD

Jayant Kalagnanam

2019
