Bayesian interpretation of kernel regularization

Property Value
dbo:abstract Within Bayesian statistics for machine learning, kernel methods arise from the assumption of an inner product space or similarity structure on inputs. For some such methods, such as support vector machines (SVMs), the original formulation and its regularization were not Bayesian in nature. It is helpful to understand them from a Bayesian perspective. Because the kernels are not necessarily positive semidefinite, the underlying structure may not be an inner product space, but instead a more general reproducing kernel Hilbert space. In Bayesian probability, kernel methods are a key component of Gaussian processes, where the kernel function is known as the covariance function. Kernel methods have traditionally been used in supervised learning problems where the input space is usually a space of vectors while the output space is a space of scalars. More recently these methods have been extended to problems that deal with multiple outputs, such as in multi-task learning. A mathematical equivalence between the regularization and the Bayesian point of view is easily proved in cases where the reproducing kernel Hilbert space is finite-dimensional. The infinite-dimensional case raises subtle mathematical issues; we will consider here the finite-dimensional case. We start with a brief review of the main ideas underlying kernel methods for scalar learning, and briefly introduce the concepts of regularization and Gaussian processes. We then show how both points of view arrive at essentially equivalent estimators, and show the connection that ties them together. (en)
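The equivalence the abstract describes can be sketched numerically. The kernel ridge regression (regularization) estimator is f* = K*(K + λI)⁻¹y, and the Gaussian process posterior mean is f* = K*(K + σ²I)⁻¹y, so the two coincide when the noise variance σ² is identified with the regularization parameter λ. A minimal sketch, assuming a toy 1-D dataset and an RBF kernel (both illustrative choices, not from the article):

```python
import numpy as np

# Toy 1-D regression data (illustrative assumption)
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(20, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(20)

def rbf(A, B, ell=1.0):
    """RBF (squared-exponential) kernel, a common covariance function."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * ell ** 2))

K = rbf(X, X)                       # Gram matrix on training inputs
Xs = np.linspace(-3, 3, 50)[:, None]
Ks = rbf(Xs, X)                     # cross-covariances to test inputs

# Regularization view: kernel ridge regression with parameter lam
lam = 0.1
f_krr = Ks @ np.linalg.solve(K + lam * np.eye(len(X)), y)

# Bayesian view: GP posterior mean with noise variance sigma2 = lam
sigma2 = lam
f_gp = Ks @ np.linalg.solve(K + sigma2 * np.eye(len(X)), y)

print(np.allclose(f_krr, f_gp))    # the two estimators coincide
```

The identification λ = σ² holds for the unnormalized empirical risk Σᵢ(yᵢ − f(xᵢ))² + λ‖f‖²; with a 1/n-averaged loss the correspondence becomes nλ = σ², but the estimators remain of the same form.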
dbo:wikiPageID 35867897 (xsd:integer)
dbo:wikiPageLength 17543 (xsd:nonNegativeInteger)
dbo:wikiPageRevisionID 1109518986 (xsd:integer)
dbo:wikiPageWikiLink dbr:Bayesian_linear_regression dbr:Bayesian_probability dbr:Bayesian_statistics dbr:Kernel_methods dbr:Regularized_least_squares dbr:Reproducing_kernel_Hilbert_space dbr:Estimator dbr:Gaussian_process dbr:Gramian_matrix dbr:Multi-task_learning dbr:Multivariate_normal_distribution dbr:Likelihood_function dbr:Machine_learning dbr:Kernel_methods_for_vector_output dbc:Machine_learning dbr:Positive-definite_function dbr:Posterior_probability dbr:Prior_probability dbr:Regularization_(mathematics) dbr:Hilbert_space dbc:Bayesian_statistics dbr:Support_vector_machine dbr:Symmetry_in_mathematics dbr:Tikhonov_regularization dbr:Supervised_learning dbr:Gaussian_processes
dbp:wikiPageUsesTemplate dbt:Further dbt:NumBlk dbt:Reflist dbt:Technical dbt:EquationRef dbt:EquationNote
dcterms:subject dbc:Machine_learning dbc:Bayesian_statistics
rdfs:comment Within Bayesian statistics for machine learning, kernel methods arise from the assumption of an inner product space or similarity structure on inputs. For some such methods, such as support vector machines (SVMs), the original formulation and its regularization were not Bayesian in nature. It is helpful to understand them from a Bayesian perspective. Because the kernels are not necessarily positive semidefinite, the underlying structure may not be an inner product space, but instead a more general reproducing kernel Hilbert space. In Bayesian probability, kernel methods are a key component of Gaussian processes, where the kernel function is known as the covariance function. Kernel methods have traditionally been used in supervised learning problems where the input space is usually a space of vectors (en)
rdfs:label Bayesian interpretation of kernel regularization (en)
owl:sameAs freebase:Bayesian interpretation of kernel regularization wikidata:Bayesian interpretation of kernel regularization https://global.dbpedia.org/id/4XAJr
prov:wasDerivedFrom wikipedia-en:Bayesian_interpretation_of_kernel_regularization?oldid=1109518986&ns=0
foaf:isPrimaryTopicOf wikipedia-en:Bayesian_interpretation_of_kernel_regularization
is dbo:wikiPageRedirects of dbr:Bayesian_interpretation_of_regularization
is dbo:wikiPageWikiLink of dbr:Bayesian_linear_regression dbr:Bayesian_interpretation_of_regularization dbr:List_of_things_named_after_Thomas_Bayes dbr:Outline_of_machine_learning
is foaf:primaryTopic of wikipedia-en:Bayesian_interpretation_of_kernel_regularization