Stochastic gradient Langevin dynamics


Stochastic gradient Langevin dynamics (SGLD) is an optimization and sampling technique that combines characteristics of stochastic gradient descent, a Robbins–Monro optimization algorithm, with Langevin dynamics, a mathematical extension of molecular dynamics models. Like stochastic gradient descent, SGLD is an iterative optimization algorithm that uses minibatching to create a stochastic gradient estimator, as used in SGD to optimize a differentiable objective function. Unlike traditional SGD, SGLD can be used for Bayesian learning as a sampling method. SGLD may be viewed as Langevin dynamics applied to posterior distributions, with the key difference that the likelihood gradient terms are minibatched, as in SGD. Like Langevin dynamics, SGLD produces samples from a posterior distribution of parameters based on available data.
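The abstract describes the update only in words; for reference, the SGLD parameter update introduced by Welling and Teh (2011) can be written as

\Delta\theta_t = \frac{\varepsilon_t}{2}\left( \nabla \log p(\theta_t) + \frac{N}{n} \sum_{i=1}^{n} \nabla \log p(x_{t_i} \mid \theta_t) \right) + \eta_t, \qquad \eta_t \sim \mathcal{N}(0, \varepsilon_t),

where N is the full dataset size, n the minibatch size, and \varepsilon_t the step size. Below is a minimal Python sketch of this update, assuming a toy one-dimensional Gaussian-mean model with a Gaussian prior; the model, data, and hyperparameters are illustrative assumptions and are not taken from this page.

# Minimal SGLD sketch (the synthetic model and hyperparameters below are
# illustrative assumptions, not taken from the source page).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: x_i ~ N(theta_true, 1), with a N(0, prior_var) prior on theta.
theta_true = 2.0
N = 1000                      # full dataset size
x = rng.normal(theta_true, 1.0, size=N)
prior_var = 10.0

def grad_log_prior(theta):
    # d/dtheta of log N(theta | 0, prior_var)
    return -theta / prior_var

def grad_log_lik(theta, batch):
    # d/dtheta of sum_i log N(x_i | theta, 1) over the minibatch
    return np.sum(batch - theta)

n = 50                        # minibatch size
eps = 1e-3                    # step size (held fixed here for simplicity)
theta = 0.0
samples = []

for t in range(5000):
    batch = rng.choice(x, size=n, replace=False)
    # Stochastic gradient of the log-posterior: prior term plus the minibatch
    # likelihood term rescaled by N/n to correct for subsampling.
    grad = grad_log_prior(theta) + (N / n) * grad_log_lik(theta, batch)
    # Langevin update: half-step gradient move plus Gaussian noise of variance eps.
    theta = theta + 0.5 * eps * grad + rng.normal(0.0, np.sqrt(eps))
    samples.append(theta)

# After burn-in, the iterates approximate samples from the posterior over theta.
print("posterior mean estimate:", np.mean(samples[1000:]))

The N/n factor rescales the minibatch likelihood gradient into an unbiased estimate of the full-data gradient, and the injected Gaussian noise with variance equal to the step size is what turns an SGD-style iteration into a posterior sampler.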


Property Value
dbo:abstract Stochastic gradient Langevin dynamics (SGLD) is an optimization and sampling technique that combines characteristics of stochastic gradient descent, a Robbins–Monro optimization algorithm, with Langevin dynamics, a mathematical extension of molecular dynamics models. Like stochastic gradient descent, SGLD is an iterative optimization algorithm that uses minibatching to create a stochastic gradient estimator, as used in SGD to optimize a differentiable objective function. Unlike traditional SGD, SGLD can be used for Bayesian learning as a sampling method. SGLD may be viewed as Langevin dynamics applied to posterior distributions, with the key difference that the likelihood gradient terms are minibatched, as in SGD. Like Langevin dynamics, SGLD produces samples from a posterior distribution of parameters based on available data. First described by Welling and Teh in 2011, the method has applications in many contexts that require optimization, and is most notably applied in machine learning problems. (en)
dbo:thumbnail wiki-commons:Special:FilePath/Non-Convex_Objective_Function.gif?width=300
dbo:wikiPageID 58878004 (xsd:integer)
dbo:wikiPageLength 9230 (xsd:nonNegativeInteger)
dbo:wikiPageRevisionID 1107781335 (xsd:integer)
dbo:wikiPageWikiLink dbr:Bayesian_inference dbr:Metropolis–Hastings_algorithm dbc:Computational_statistics dbr:Deep_learning dbr:Convex_set dbr:Critical_point_(mathematics) dbr:Mathematical_optimization dbr:Gradient_descent dbr:Condition_number dbr:Objective_function dbr:Hamiltonian_Monte_Carlo dbc:Optimization_algorithms_and_methods dbr:Langevin_dynamics dbr:Lattice_field_theory dbr:Molecular_dynamics dbr:Stochastic_gradient_descent dbc:Stochastic_optimization dbr:Artificial_neural_network dbc:Gradient_methods dbr:Differentiable_function dbr:Total_variation dbr:Robbins–Monro_algorithm dbr:File:Non-Convex_Objective_Function.gif
dbp:wikiPageUsesTemplate dbt:Citation_needed dbt:Orphan
dct:subject dbc:Computational_statistics dbc:Optimization_algorithms_and_methods dbc:Stochastic_optimization dbc:Gradient_methods
rdfs:comment Stochastic gradient Langevin dynamics (SGLD) is an optimization and sampling technique that combines characteristics of stochastic gradient descent, a Robbins–Monro optimization algorithm, with Langevin dynamics, a mathematical extension of molecular dynamics models. Like stochastic gradient descent, SGLD is an iterative optimization algorithm that uses minibatching to create a stochastic gradient estimator, as used in SGD to optimize a differentiable objective function. Unlike traditional SGD, SGLD can be used for Bayesian learning as a sampling method. SGLD may be viewed as Langevin dynamics applied to posterior distributions, with the key difference that the likelihood gradient terms are minibatched, as in SGD. Like Langevin dynamics, SGLD produces samples from a posterior distribution of parameters based on available data. (en)
rdfs:label Stochastic gradient Langevin dynamics (en)
owl:sameAs wikidata:Stochastic gradient Langevin dynamics https://global.dbpedia.org/id/9NGyb
prov:wasDerivedFrom wikipedia-en:Stochastic_gradient_Langevin_dynamics?oldid=1107781335&ns=0
foaf:depiction wiki-commons:Special:FilePath/Non-Convex_Objective_Function.gif
foaf:isPrimaryTopicOf wikipedia-en:Stochastic_gradient_Langevin_dynamics
is dbo:wikiPageRedirects of dbr:Stochastic_Gradient_Langevin_Dynamics
is dbo:wikiPageWikiLink of dbr:Stochastic_Gradient_Langevin_Dynamics
is foaf:primaryTopic of wikipedia-en:Stochastic_gradient_Langevin_dynamics