Second-Order Belief Hidden Markov Models
Related papers
Hidden Markov Models for Pattern Recognition
Markov Model - Theory and Applications
Hidden Markov Models (HMMs) are among the most popular algorithms for pattern recognition. A Hidden Markov Model is a mathematical representation of a stochastic process that produces a series of observations based on previously stored data. The statistical approach of HMMs has many benefits, including a robust mathematical foundation, powerful learning and decoding techniques, effective handling of sequences, and a flexible topology for syntax and statistical phonology. The drawbacks stem from poor model discrimination and the unrealistic assumptions required to build HMM theory, specifically the independence of successive feature frames (i.e., input vectors) and the first-order Markov assumption. The algorithms developed within the HMM-based statistical framework are robust and effective in real-time scenarios. Furthermore, Hidden Markov Models are frequently used in real-world applications to implement gesture recognition and comprehension systems. Every state of the m...
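As a minimal illustration of the generative process just described, the following sketch computes the likelihood of an observation sequence with the standard forward recursion. The two-state model and all numbers are illustrative toy values, not taken from any paper listed here.

```python
import numpy as np

# Toy first-order HMM: 2 hidden states, 3 observation symbols.
# All parameter values below are illustrative.
A = np.array([[0.7, 0.3],        # state-transition probabilities
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],   # emission probabilities per state
              [0.1, 0.3, 0.6]])
pi = np.array([0.6, 0.4])        # initial state distribution

def forward_likelihood(obs):
    """P(observations | model) via the forward recursion."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

p = forward_likelihood([0, 1, 2])   # ≈ 0.03628 for these toy numbers
```

The recursion sums over all hidden-state paths in O(T·N²) time rather than enumerating them, which is what makes HMM decoding practical for long sequences.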
Partially-hidden Markov models
Belief Functions: Theory and Applications, 2012
This paper addresses the problem of Hidden Markov Model (HMM) training and inference when the training data are composed of feature vectors plus uncertain and imprecise labels. The "soft" labels represent partial knowledge about the possible states at each time step, and the "softness" is encoded by belief functions. For the obtained model, called a Partially-Hidden Markov Model (PHMM), the training algorithm is based on the Evidential Expectation-Maximisation (E2M) algorithm. The usual HMM is recovered when the belief functions are vacuous, and the obtained model includes supervised, unsupervised, and semi-supervised learning as special cases.
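The reduction described above can be sketched numerically: in an E2M-style E-step, the state posteriors are reweighted by the plausibility each soft label assigns to each state, so a vacuous label leaves the usual HMM posterior untouched, while a certain label recovers the supervised case. A minimal sketch with illustrative numbers (not the paper's notation):

```python
import numpy as np

def evidential_posterior(gamma, plausibility):
    """Combine an HMM state posterior with a soft label (plausibilities)."""
    w = gamma * plausibility
    return w / w.sum()

gamma = np.array([0.7, 0.3])     # state posterior from forward-backward

vacuous = np.array([1.0, 1.0])   # no label information at all
certain = np.array([1.0, 0.0])   # label says "state 0 for sure"

# A vacuous belief function leaves the posterior unchanged (usual HMM)...
print(evidential_posterior(gamma, vacuous))   # → [0.7 0.3]
# ...while a certain label pins the state (supervised case).
print(evidential_posterior(gamma, certain))   # → [1. 0.]
```

Intermediate labels (plausibilities strictly between 0 and 1) interpolate between these extremes, which is how the model covers semi-supervised learning.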
Classification with hidden Markov model
Applied Mathematical Sciences, 2014
Classification and statistical learning with hidden Markov models have achieved remarkable progress in the past decade. They have been applied in many areas, such as speech recognition and handwriting recognition. However, learning with the Hidden Markov Model (HMM) is still restricted to supervised problems. In this paper, we propose a new learning method
Generalized hidden Markov models. I. Theoretical frameworks
IEEE Transactions on Fuzzy Systems, 2000
This is the first paper in a series of two papers describing a novel generalization of classical hidden Markov models using fuzzy measures and fuzzy integrals. In this paper, we present the theoretical framework for the generalization and, in the second paper, we describe an application of the generalized hidden Markov models to handwritten word recognition. The main characteristic of the generalization is the relaxation of the usual additivity constraint of probability measures. Fuzzy integrals are defined with respect to fuzzy measures, whose key property is monotonicity with respect to set inclusion. This property is far weaker than the usual additivity property of probability measures. As a result of the new formulation, the statistical independence assumption of the classical hidden Markov models is relaxed. An attractive property of this generalization is that the generalized hidden Markov model reduces to the classical hidden Markov model if we use the Choquet fuzzy integral and probability measures. Another interesting property of the generalization is the establishment of a relation between the generalized hidden Markov model and the classical nonstationary hidden Markov model in which the transitional parameters vary with time.
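The reduction to the classical model can be illustrated with a small computation: for an additive measure (i.e., an ordinary probability measure), the discrete Choquet integral coincides with the ordinary expectation. The sketch below uses an illustrative set function `mu` and toy values; it is not the paper's construction, only the standard definition of the discrete Choquet integral.

```python
from itertools import combinations

def choquet(values, mu):
    """Choquet integral of `values` with respect to the set function `mu`."""
    order = sorted(range(len(values)), key=lambda i: values[i])  # ascending
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        upper = frozenset(order[k:])    # elements whose value is >= values[i]
        total += (values[i] - prev) * mu[upper]
        prev = values[i]
    return total

# An additive measure built from probabilities p:
p = [0.5, 0.3, 0.2]
mu = {frozenset(s): sum(p[i] for i in s)
      for r in range(4) for s in combinations(range(3), r)}

vals = [1.0, 4.0, 2.0]
expectation = sum(pi * v for pi, v in zip(p, vals))
# For an additive mu, the Choquet integral equals the expectation (2.1 here):
assert abs(choquet(vals, mu) - expectation) < 1e-12
```

For a non-additive but monotone `mu`, the same formula still applies, which is exactly the extra freedom the generalization exploits.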
An introduction to hidden Markov models
IEEE Signal Processing Magazine, 1986
The basic theory of Markov chains has been known to mathematicians and engineers for close to 80 years, but it is only in the past decade that it has been applied explicitly to problems in speech processing. One of the major reasons why speech models, based on Markov chains, have not been developed until recently was the lack of a method for optimizing the parameters of the Markov model to match observed signal patterns. Such a method was proposed in the late 1960's and was immediately applied to speech processing in several research institutions. Continued refinements in the theory and implementation of Markov modelling techniques have greatly enhanced the method, leading to a wide range of applications of these models. It is the purpose of this tutorial paper to give an introduction to the theory of Markov models, and to illustrate how they have been applied to problems in speech recognition.
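The parameter-optimization method the tutorial alludes to is the Baum-Welch (EM) re-estimation procedure. The following is a minimal single-iteration sketch for a discrete toy HMM; model sizes, parameter values, and the observation sequence are all illustrative.

```python
import numpy as np

def forward_backward(obs, A, B, pi):
    """Forward and backward variables plus the sequence likelihood."""
    T, N = len(obs), len(pi)
    alpha, beta = np.zeros((T, N)), np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return alpha, beta, alpha[-1].sum()

def baum_welch_step(obs, A, B, pi):
    """One EM re-estimation of (A, B, pi) from a single sequence."""
    alpha, beta, Z = forward_backward(obs, A, B, pi)
    T, N = len(obs), len(pi)
    gamma = alpha * beta / Z                 # P(state_t = i | obs)
    xi = np.zeros((N, N))                    # expected transition counts
    for t in range(T - 1):
        xi += (alpha[t][:, None] * A * B[:, obs[t + 1]] * beta[t + 1]) / Z
    A_new = xi / xi.sum(axis=1, keepdims=True)
    B_new = np.zeros_like(B)
    for k in range(B.shape[1]):
        B_new[:, k] = gamma[np.array(obs) == k].sum(axis=0)
    B_new /= B_new.sum(axis=1, keepdims=True)
    return A_new, B_new, gamma[0]

A = np.array([[0.5, 0.5], [0.5, 0.5]])
B = np.array([[0.4, 0.6], [0.6, 0.4]])
pi = np.array([0.5, 0.5])
obs = [0, 0, 1, 0, 0]

A2, B2, pi2 = baum_welch_step(obs, A, B, pi)
_, _, before = forward_backward(obs, A, B, pi)
_, _, after = forward_backward(obs, A2, B2, pi2)
assert after >= before - 1e-12   # EM never decreases the likelihood
```

Iterating this step to convergence is the optimization method whose absence, as the abstract notes, long delayed the use of Markov chains in speech processing.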
On the structure of hidden Markov models
Pattern Recognition Letters, 2004
This paper attempts to overcome the local-convergence problem of Expectation Maximization (EM) based training of the Hidden Markov Model (HMM) in speech recognition. We propose a hybrid algorithm, the Simulated Annealing Stochastic version of EM (SASEM), which combines Simulated Annealing (SA) with EM by reformulating the HMM estimation process with a stochastic step between the EM iterations. The stochastic processes of SASEM inside EM can prevent EM from converging to a local maximum and find improved estimates for the HMM using the global convergence properties of SA. Experiments on the TIMIT speech corpus show that SASEM obtains higher recognition accuracies than EM.
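The idea of inserting a stochastic annealing step between deterministic improvement steps can be sketched on a toy one-dimensional problem. This is only an illustration of the SA-plus-local-step pattern on a made-up bimodal objective, not the authors' SASEM algorithm; the objective, step sizes, and cooling schedule are all assumptions for the sketch.

```python
import math
import random

random.seed(0)

def loglik(x):
    # Bimodal objective: a local peak near x = -2, the global peak near x = 3.
    return math.log(0.3 * math.exp(-(x + 2) ** 2)
                    + 0.7 * math.exp(-(x - 3) ** 2) + 1e-300)

def local_step(x, eps=0.1):
    # Greedy hill-climb standing in for the deterministic EM step.
    return max((x - eps, x, x + eps), key=loglik)

x, best, temp = -2.0, -2.0, 2.0   # start in the basin of the local maximum
for _ in range(200):
    x = local_step(x)                        # EM-like deterministic step
    cand = x + random.gauss(0.0, 1.5)        # SA perturbation
    delta = loglik(cand) - loglik(x)
    if delta > 0 or random.random() < math.exp(delta / temp):
        x = cand                             # Metropolis acceptance
    if loglik(x) > loglik(best):
        best = x                             # remember the best estimate seen
    temp *= 0.97                             # cooling schedule
```

At high temperature the Metropolis rule accepts downhill moves, letting the search cross the valley between the two peaks; as the temperature decays, the procedure settles into pure local improvement, which is the mechanism the abstract attributes to SA.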
Applications of Hidden Markov Model: state-of-the-art
This paper performs a state-of-the-art literature review to classify and interpret the ongoing and emerging issues associated with the Hidden Markov Model (HMM) in the last decade. HMM is a commonly used method in many scientific areas. It is a temporal probabilistic model in which the state of the process is described by a single discrete random variable. The theory of HMMs was developed in the late 1960s. Now, it is especially known for its application in temporal pattern recognition, i.e. speech, handwriting, and bioinformatics. After a brief description of the study methodology, this paper comprehensively compares the most important HMM publications by field of interest, most cited authors, authors' nationalities, and scientific journals. The comparison is based on papers indexed in the Institute for Scientific Information (ISI) Web of Knowledge and ScienceDirect databases.
A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition
Although initially introduced and studied in the late 1960s and early 1970s, statistical methods of Markov source or hidden Markov modeling have become increasingly popular in the last several years. There are two strong reasons why this has occurred. First, the models are very rich in mathematical structure and hence can form the theoretical basis for use in a wide range of applications. Second, the models, when applied properly, work very well in practice for several important applications. In this paper we attempt to carefully and methodically review the theoretical aspects of this type of statistical modeling and show how they have been applied to selected problems in machine recognition of speech.