Distribution learning theory


The distributional learning theory, or learning of probability distributions, is a framework in computational learning theory. It was proposed by Michael Kearns, Yishay Mansour, Dana Ron, Ronitt Rubinfeld, Robert Schapire and Linda Sellie in 1994, and it was inspired by the PAC framework introduced by Leslie Valiant. This article explains the basic definitions, tools and results in this framework from the theory-of-computation point of view.
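As a minimal illustration (not part of the DBpedia record), the simplest instance of this framework is learning an unknown discrete distribution from i.i.d. samples via the empirical (relative-frequency) estimator, with error measured in total variation distance, one of the distances used in this setting. The distribution and sample size below are hypothetical choices for the sketch.

```python
import random

def learn_discrete(samples, support):
    """Empirical estimator: relative frequency of each outcome in the samples."""
    n = len(samples)
    return {x: samples.count(x) / n for x in support}

def tv_distance(p, q):
    """Total variation distance between two distributions on the same finite support."""
    return 0.5 * sum(abs(p[x] - q[x]) for x in p)

random.seed(0)
true_p = {"a": 0.5, "b": 0.3, "c": 0.2}   # hypothetical target distribution
support = list(true_p)

# Input to the learner: samples drawn from the unknown distribution.
samples = random.choices(support, weights=[true_p[x] for x in support], k=10_000)

# Output: an estimate that is close in total variation with high probability.
est = learn_discrete(samples, support)
```

For a distribution over k outcomes, roughly O(k/ε²) samples suffice for the empirical estimator to be ε-close in total variation with high probability, which is why larger sample sizes give sharper estimates in the sketch above.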

Property Value
dbo:abstract The distributional learning theory, or learning of probability distributions, is a framework in computational learning theory. It was proposed by Michael Kearns, Yishay Mansour, Dana Ron, Ronitt Rubinfeld, Robert Schapire and Linda Sellie in 1994, and it was inspired by the PAC framework introduced by Leslie Valiant. In this framework the input is a number of samples drawn from a distribution that belongs to a specific class of distributions. The goal is to find an efficient algorithm that, based on these samples, determines with high probability the distribution from which the samples have been drawn. Because of its generality, this framework has been used in a large variety of different fields like machine learning, approximation algorithms, applied probability and statistics. This article explains the basic definitions, tools and results in this framework from the theory-of-computation point of view. (en)
dbo:wikiPageID 44655565 (xsd:integer)
dbo:wikiPageLength 22325 (xsd:nonNegativeInteger)
dbo:wikiPageRevisionID 1083045386 (xsd:integer)
dbo:wikiPageWikiLink dbr:Probability_distribution dbr:Robert_Schapire dbr:Michael_Kearns_(computer_scientist) dbc:Computational_learning_theory dbr:Applied_probability dbr:Constantinos_Daskalakis dbr:Conditional_probability_distribution dbr:Dana_Ron dbr:Approximation_algorithms dbr:Leslie_Valiant dbr:Machine_learning dbr:Statistics dbr:Cluster_analysis dbr:Computational_learning_theory dbr:Kolmogorov–Smirnov_test dbr:Kullback-Leibler_divergence dbr:Ronitt_Rubinfeld dbr:Total_variation dbr:Statistical_learning_theory dbr:Total_variation_distance_of_probability_measures dbr:PAC-learning dbr:Gautam_Kamath dbr:Linda_Sellie dbr:S._Dasgupta dbr:Yishay_Mansour
dct:subject dbc:Computational_learning_theory
gold:hypernym dbr:Framework
rdf:type dbo:Software
rdfs:comment The distributional learning theory, or learning of probability distributions, is a framework in computational learning theory. It was proposed by Michael Kearns, Yishay Mansour, Dana Ron, Ronitt Rubinfeld, Robert Schapire and Linda Sellie in 1994, and it was inspired by the PAC framework introduced by Leslie Valiant. This article explains the basic definitions, tools and results in this framework from the theory-of-computation point of view. (en)
rdfs:label Distribution learning theory (en)
owl:sameAs freebase:Distribution learning theory wikidata:Distribution learning theory https://global.dbpedia.org/id/2MPa9
prov:wasDerivedFrom wikipedia-en:Distribution_learning_theory?oldid=1083045386&ns=0
foaf:isPrimaryTopicOf wikipedia-en:Distribution_learning_theory
is dbo:wikiPageWikiLink of dbr:Computational_learning_theory dbr:Adam_Tauman_Kalai dbr:Outline_of_machine_learning
is foaf:primaryTopic of wikipedia-en:Distribution_learning_theory