Stability and optimality in stochastic gradient descent

This repository contains the accompanying code implementing the methods and algorithms of a paper in progress.
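As a rough illustration of the idea suggested by the repository name (ai-sgd) and the references below, the following is a minimal Python sketch of averaged implicit SGD for the least-squares case: an implicit stochastic gradient update, which has a closed form for the squared loss, combined with Polyak-Ruppert averaging of the iterates (Ruppert, 1988; Xu, 2011). The function name, learning-rate schedule, and synthetic data are illustrative assumptions, not code from this repository.

```python
import numpy as np

def ai_sgd_least_squares(X, y, gamma0=1.0, alpha=0.5):
    """Illustrative sketch of averaged implicit SGD for least squares.

    Implicit update (closed form for the squared loss):
        theta_n = theta_{n-1}
                  + g_n / (1 + g_n * ||x_n||^2) * (y_n - x_n' theta_{n-1}) * x_n
    with learning rate g_n = gamma0 * n^(-alpha), followed by
    Polyak-Ruppert averaging of the iterates.
    """
    n, p = X.shape
    theta = np.zeros(p)      # current iterate
    theta_bar = np.zeros(p)  # running average of iterates
    for i in range(n):
        x_i, y_i = X[i], y[i]
        g = gamma0 * (i + 1) ** (-alpha)
        # Implicit update: the gradient is evaluated at the *new* iterate,
        # which for the squared loss reduces to shrinking the step size.
        resid = y_i - x_i @ theta
        theta = theta + (g / (1.0 + g * (x_i @ x_i))) * resid * x_i
        # Online update of the Polyak-Ruppert average.
        theta_bar += (theta - theta_bar) / (i + 1)
    return theta_bar

# Example usage on synthetic data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))
true_theta = np.arange(1.0, 6.0)
y = X @ true_theta + rng.normal(scale=0.1, size=10_000)
print(ai_sgd_least_squares(X, y))  # approaches true_theta
```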

Maintainer

References

  • Francis Bach and Eric Moulines. Non-strongly-convex smooth stochastic approximation with convergence rate O(1/n). Advances in Neural Information Processing Systems, 2013.
  • Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1):1-22, 2010.
  • Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. Advances in Neural Information Processing Systems, 2013.
  • David Ruppert. Efficient estimations from a slowly convergent Robbins-Monro process. Technical report, Cornell University Operations Research and Industrial Engineering, 1988.
  • Wei Xu. Towards optimal one pass large scale learning with averaged stochastic gradient descent. arXiv preprint arXiv:1107.2490, 2011.
