ε-Approximation of Adaptive Learning Rate Optimization Algorithms for Constrained Nonconvex Stochastic Optimization

Research output: Contribution to journal › Article › peer-review

Abstract

This brief considers constrained nonconvex stochastic finite-sum and online optimization in deep neural networks. Adaptive-learning-rate optimization algorithms (ALROAs), such as Adam, AMSGrad, and their variants, have been widely used for these problems because they are powerful and useful in both theory and practice. Here, it is shown that ALROAs are ε-approximations for these optimization problems. We provide the learning rates, mini-batch sizes, numbers of iterations, and stochastic gradient complexities needed to achieve an ε-approximation with these algorithms.
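For context, below is a minimal sketch of the kind of adaptive-learning-rate update the abstract refers to: an AMSGrad-style iteration driven by mini-batch stochastic gradients, combined with a Euclidean projection onto a simple box constraint set to handle the constrained setting. The function names (project_box, amsgrad_projected), the box constraint, the toy least-squares objective, and all hyperparameter values are illustrative assumptions rather than the paper's method, and the sketch omits the learning-rate, mini-batch-size, and iteration-count conditions under which the paper establishes ε-approximation.

import numpy as np

def project_box(x, lower=-1.0, upper=1.0):
    # Euclidean projection onto an assumed box constraint set C = [lower, upper]^d
    return np.clip(x, lower, upper)

def amsgrad_projected(grad_fn, x0, alpha=1e-3, beta1=0.9, beta2=0.999,
                      eps=1e-8, n_iters=1000, batch_size=32, rng=None):
    # Projected AMSGrad-style iteration with mini-batch stochastic gradients (illustrative sketch).
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()
    m = np.zeros_like(x)      # first-moment (momentum) estimate
    v = np.zeros_like(x)      # second-moment estimate
    v_hat = np.zeros_like(x)  # running maximum of v (the AMSGrad correction)
    for _ in range(n_iters):
        g = grad_fn(x, batch_size, rng)              # mini-batch stochastic gradient
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        v_hat = np.maximum(v_hat, v)
        # Adaptive step followed by projection back onto the constraint set
        x = project_box(x - alpha * m / (np.sqrt(v_hat) + eps))
    return x

# Toy finite-sum example (least squares on random data), used only to exercise the sketch.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((256, 10)), rng.standard_normal(256)

def grad_fn(x, batch_size, rng):
    idx = rng.integers(0, A.shape[0], size=batch_size)  # sample a mini-batch of indices
    Ab, bb = A[idx], b[idx]
    return 2.0 * Ab.T @ (Ab @ x - bb) / batch_size      # mini-batch gradient of the squared loss

x_approx = amsgrad_projected(grad_fn, x0=np.zeros(10))

In this setting, an ε-approximation informally means the iterate produced after the prescribed number of iterations satisfies the paper's stationarity-type criterion to within ε; the exact criterion and the required parameter choices are given in the paper itself.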

Original language: English
Journal: IEEE Transactions on Neural Networks and Learning Systems
DOIs
Publication status: Accepted/In press - 2022

Keywords

  • ε-approximation
  • adaptive-learning-rate optimization algorithm (ALROA)
  • Convergence
  • Deep learning
  • deep neural network
  • Learning systems
  • Linear programming
  • Neural networks
  • nonconvex stochastic optimization
  • Optimization
  • stochastic gradient complexity
  • Stochastic processes
