
Gradient Descent, Stochastic Optimization, and Other Tales

by Jun Lu
About Gradient Descent, Stochastic Optimization, and Other Tales

The goal of this book is to debunk and dispel the magic behind black-box optimizers and stochastic optimizers, and to build a solid foundation for how and why these techniques work. The manuscript crystallizes this knowledge by deriving the mathematics behind the strategies from simple intuitions. It addresses both the formal and informal aspects of gradient descent and stochastic optimization methods, aiming to give readers a deeper understanding of these techniques as well as the when, the how, and the why of applying them.

Gradient descent is one of the most popular algorithms for performing optimization and by far the most common way to optimize machine learning models. Its stochastic version has received considerable attention in recent years, particularly for training deep neural networks, where the gradient of a single sample or a mini-batch of samples is used to save computational resources and to escape saddle points. In 1951, Robbins and Monro published A Stochastic Approximation Method, one of the first modern treatments of stochastic optimization, which estimates local gradients from fresh batches of samples. Stochastic optimization has since become a core technology in machine learning, largely due to the development of the backpropagation algorithm for fitting neural networks. The sole aim of this text is to give a self-contained introduction to the concepts and mathematical tools used in gradient descent and stochastic optimization.
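The description mentions that stochastic methods step along the gradient of a single sample or a mini-batch rather than the full dataset. As a rough illustration of that idea (not taken from the book; the function name, step size, and synthetic data are illustrative assumptions), here is a minimal mini-batch SGD sketch for a least-squares problem:

```python
import numpy as np

def sgd_least_squares(X, y, lr=0.01, batch_size=8, epochs=50, seed=0):
    """Minimal mini-batch SGD sketch for (1/2m) * ||Xw - y||^2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        idx = rng.permutation(n)  # shuffle sample indices each epoch
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            Xb, yb = X[batch], y[batch]
            # Gradient of the batch least-squares loss w.r.t. w
            grad = Xb.T @ (Xb @ w - yb) / len(batch)
            w -= lr * grad  # descend along the stochastic gradient
    return w

# Example usage on synthetic data
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=200)
print(sgd_least_squares(X, y))  # should approach w_true
```

Each update touches only `batch_size` rows of the data, which is what makes the stochastic variant cheap enough for large datasets and deep networks.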

  • Language: English
  • ISBN: 9789994981557
  • Binding: Paperback
  • Pages: 96
  • Published: July 22, 2022
  • Dimensions: 152 x 229 x 5 mm
  • Weight: 141 g
  Free shipping
Delivery time: 2-4 weeks
Expected delivery: December 27, 2024
Extended return period until January 31, 2025
