Genetic Algorithms Principles Towards Hidden Markov Model

An Improved Hybrid Algorithm for Optimizing the Parameters of Hidden Markov Models

Asian Journal of Research in Computer Science

Hidden Markov Models (HMMs) have become increasingly popular in recent years because the models are rich in mathematical structure and can therefore form the theoretical basis for a wide range of applications. Various algorithms have been proposed in the literature for optimizing the parameters of these models to make them applicable to real-life problems. However, the performance of these algorithms remains computationally challenging, largely because of slow or premature convergence and sensitivity to initial estimates. In this paper, a hybrid algorithm combining Particle Swarm Optimization (PSO), Baum-Welch (BW), and Genetic Algorithms (GA) is proposed and implemented for optimizing the parameters of HMMs. The algorithm not only overcomes the slow convergence of PSO but also helps BW escape from local optima while improving the performance of GA despite the increase in the search space. Detailed experimen...
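The common ingredient of such evolutionary approaches is a likelihood-based fitness evaluated over candidate HMM parameter sets. The following is a minimal illustrative sketch, not the paper's hybrid algorithm: a plain GA evolving the parameters of a hypothetical 2-state, 3-symbol HMM against forward-algorithm log-likelihood, with all sizes and rates chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_loglik(A, B, pi, obs):
    """Log-likelihood of obs under (A, B, pi) via the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    loglik = 0.0
    for t in range(1, len(obs)):
        s = alpha.sum()
        loglik += np.log(s)
        alpha = (alpha / s) @ A * B[:, obs[t]]
    return loglik + np.log(alpha.sum())

def random_hmm(n_states=2, n_symbols=3):
    """Random row-stochastic transition, emission and initial distributions."""
    return (rng.dirichlet(np.ones(n_states), n_states),
            rng.dirichlet(np.ones(n_symbols), n_states),
            rng.dirichlet(np.ones(n_states)))

def mutate(hmm, rate=0.1):
    """Perturb parameters multiplicatively, then renormalise each row."""
    out = []
    for m in hmm:
        m = m * np.exp(rate * rng.standard_normal(m.shape))
        out.append(m / m.sum(axis=-1, keepdims=True))
    return tuple(out)

obs = rng.integers(0, 3, size=60)            # toy observation sequence
pop = [random_hmm() for _ in range(20)]
best_start = max(forward_loglik(*h, obs) for h in pop)

for _ in range(30):
    pop.sort(key=lambda h: -forward_loglik(*h, obs))
    elite = pop[:10]                         # elitist selection
    children = []
    for _ in range(10):
        i, j = rng.choice(len(elite), size=2, replace=False)
        w = rng.random()                     # arithmetic crossover: a convex
        child = tuple(w * x + (1 - w) * y    # combination stays row-stochastic
                      for x, y in zip(elite[i], elite[j]))
        children.append(mutate(child))
    pop = elite + children

best_end = max(forward_loglik(*h, obs) for h in pop)
```

Because the elite individuals are carried over unmutated, the best log-likelihood in the population can never decrease; a hybrid like the paper's would interleave BW re-estimation and PSO moves into this loop.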

Learning genetic algorithm parameters using hidden Markov models

European Journal of Operational Research, 2006

Genetic algorithms (GAs) are routinely used to search problem spaces of interest. A lesser known but growing group of applications of GAs is the modeling of so-called "evolutionary processes", for example, organizational learning and group decision-making. Given such an application, we show it is possible to compute the likely GA parameter settings given observed populations of such an evolutionary process. We examine the parameter estimation process using estimation procedures for learning hidden Markov models, with mathematical models that exactly capture expected GA behavior. We then explore the sampling distributions relevant to this estimation problem using an experimental approach.

Evolutionary training of hybrid systems of recurrent neural networks and hidden Markov models

International Journal of Applied Mathematics and …

We present a hybrid architecture of recurrent neural networks (RNNs) inspired by hidden Markov models (HMMs). We train the hybrid architecture using genetic algorithms to learn and represent dynamical systems. We train it on a set of deterministic finite-state automata strings and observe its generalization performance when presented with a new set of strings that were not present in the training data. In this way, we show that the hybrid system of HMM and RNN can learn and represent deterministic finite-state automata. We ran experiments with different population sizes in the genetic algorithm, and further experiments to find out which weight initializations were best for training the hybrid architecture. The results show that the hybrid architecture of recurrent neural networks inspired by hidden Markov models can learn and represent dynamical systems. The best training and generalization performance is achieved when the hybrid architecture is initialized with random real weight values in the range −15 to 15.

Automatic Estimation of Differential Evolution Parameters using Hidden Markov Models

2018

Differential Evolution (DE) has been successful in solving practical optimization problems. However, as with other optimization algorithms, the search performance of DE depends on the efficacy of the adopted search operators. The ability to adapt these operators within an evolutionary run enhances their ability to find better-quality solutions. This adaptation process requires learning algorithms capable of compressing the information embedded within a population into meaningful estimates for adapting the search operators. Hidden Markov Models (HMMs) are learning algorithms designed to estimate parameters by compressing information collected over a state space. In this paper, we use HMMs to compress the information within a population and use the model to adapt the DE parameters. The resultant DE-HMM algorithm dynamically adjusts the two basic parameters of DE. After a thorough testing of this method and conducting an extensive comparison of its performance on the CEC2005 and CEC201...
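For context, the two basic parameters the abstract refers to are DE's scale factor F and crossover rate CR. A minimal DE/rand/1/bin sketch with these parameters held fixed is shown below; DE-HMM would instead adapt them during the run, and the sphere objective is only a stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    """Toy objective: sum of squares, minimum 0 at the origin."""
    return float(np.sum(x * x))

def de(obj, dim=5, np_pop=20, F=0.7, CR=0.9, gens=200, bounds=(-5.0, 5.0)):
    """Classic DE/rand/1/bin. F (scale factor) and CR (crossover rate)
    are the two parameters DE-HMM adapts online; here they are fixed."""
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (np_pop, dim))
    fit = np.array([obj(v) for v in pop])
    for _ in range(gens):
        for i in range(np_pop):
            a, b, c = rng.choice([k for k in range(np_pop) if k != i],
                                 size=3, replace=False)
            mutant = pop[a] + F * (pop[b] - pop[c])   # differential mutation
            cross = rng.random(dim) < CR              # binomial crossover mask
            cross[rng.integers(dim)] = True           # at least one gene crosses
            trial = np.where(cross, mutant, pop[i]).clip(lo, hi)
            f = obj(trial)
            if f <= fit[i]:                           # greedy one-to-one replacement
                pop[i], fit[i] = trial, f
    return pop[fit.argmin()], fit.min()

best_x, best_f = de(sphere)
```

The greedy replacement step makes DE elitist per slot, so population fitness is monotone non-increasing.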

Hidden Markov Models Training Using Population-based Metaheuristics

2008

In this chapter, we consider the issue of Hidden Markov Model (HMM) training. First, HMMs are introduced, and then we focus on the HMM training problem in particular. We emphasize the difficulty of this problem and present various criteria that can be considered. Many different adaptations of metaheuristics have already been used but, until now, few extensive comparisons have been performed on this problem. We therefore propose to compare three population-based metaheuristics (genetic algorithm, ant algorithm and particle swarm optimization) with and without the help of a local optimizer. These algorithms operate on solutions drawn from three different kinds of search space (a constrained space, a discrete space and a vector space). We study these algorithms from both a theoretical and an experimental perspective: parameter settings are fully studied on a reduced set of data and the performance of the algorithms is compared on different sets of real data.

Neuroevolution Mechanism for Hidden Markov Model

A Hidden Markov Model (HMM) is a statistical model based on probabilities. HMMs are becoming one of the major models involved in many applications such as natural language processing, handwriting recognition, image processing, prediction systems and more. In this research we are concerned with finding the best HMM for a given application domain. We propose a neuroevolution process based first on converting the HMM to a neural network, then generating many neural networks at random, each representing an HMM. We proceed by applying genetic operators to obtain a new set of neural networks, each representing an HMM, and updating the population. Finally, we select the best neural network based on a fitness function.

An Evolutionary survey on Markov Models

Web mining is concerned with the extraction of knowledge or information from the World Wide Web. One of the problems in web mining is predicting a user's next web page request. Several techniques have been employed for this prediction, including rough set clustering, support vector machines, fuzzy logic and neural networks [1][2][3][4]. Besides these techniques, Markov models are used in web page request prediction. In this paper we discuss some Markov models and their prediction techniques. Keywords: World Wide Web, Markov model. 1 Background 1.1 Markov model A model is basically a representation of the real world, or a view of reality. A Markov process is a process that can be in more than one state and makes transitions among these states. Markov models describe the Markov process. Markov models are widely used for predicting a user's web page access requests on the basis of previous web history; different-order Markov models are used for this purpose. Firstly, f...
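A first-order Markov predictor of the kind surveyed can be sketched as follows: the model is simply a table of page-to-page transition counts, and the prediction is the most frequent successor of the current page. The page names and sessions are hypothetical.

```python
from collections import Counter, defaultdict

def fit_first_order(sessions):
    """Count page-to-page transitions across browsing sessions."""
    counts = defaultdict(Counter)
    for session in sessions:
        for cur, nxt in zip(session, session[1:]):
            counts[cur][nxt] += 1
    return counts

def predict_next(counts, page):
    """Most frequent observed successor of `page`, or None if unseen."""
    if page not in counts:
        return None
    return counts[page].most_common(1)[0][0]

sessions = [["home", "news", "sports"],
            ["home", "news", "weather"],
            ["home", "news", "sports", "home"]]
model = fit_first_order(sessions)
prediction = predict_next(model, "news")  # "sports" (seen twice vs "weather" once)
```

A second-order model would key the table on the last two pages instead of one, trading coverage for precision, which is the trade-off the different-order models in the survey explore.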

Training HMM structure with genetic algorithm for biological sequence analysis

Bioinformatics, 2004

Summary: Hidden Markov models (HMMs) are widely used for biological sequence analysis because of their ability to incorporate biological information in their structure. An automatic means of optimizing the structure of HMMs would be highly desirable. However, this raises two important issues: first, the new HMMs should be biologically interpretable, and second, we need to control the complexity of the HMM so that it has good generalization performance on unseen sequences. In this paper, we explore the possibility of using a genetic algorithm (GA) for optimizing the HMM structure. GAs are sufficiently flexible to allow incorporation of other techniques, such as Baum–Welch training, within their evolutionary cycle. Furthermore, operators that alter the structure of HMMs can be designed to favour interpretable and simple structures. In this paper, a training strategy using GAs is proposed, and it is tested on finding HMM structures for the promoter and coding region of the bacterium Camp...

Training hidden Markov models using population-based learning

1999

Hidden Markov Models are commonly trained using algorithms derived from gradient-based methods such as the Baum-Welch procedure. We describe a new representation of discrete-observation HMMs that permits them to be trained using Population-Based Incremental Learning (PBIL), a variant of genetic learning that combines evolutionary optimization and hill-climbing [Baluja and Caruana, 1995]. In this paper we examine the recognition performance of PBIL-trained HMMs on two tasks: hand-drawn shape recognition and spoken digit recognition. We demonstrate that HMMs can be maximized via PBIL using either a maximum likelihood or a maximum mutual information fitness function, achieving results comparable to those obtained with the Baum-Welch procedure. We argue that the PBIL algorithm has the advantage of easy extension to applications, such as speech intelligibility assessment, that lack a differentiable or analytical optimization function.
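The core PBIL loop the abstract builds on can be sketched as follows. Instead of an explicit population, PBIL maintains a vector of bit probabilities, samples a population from it each generation, and nudges the vector toward the best sample. A toy one-max fitness stands in for the HMM likelihood or MMI score, and all names and settings are illustrative.

```python
import random

random.seed(0)

def pbil(fitness, n_bits, pop_size=30, lr=0.1, gens=60):
    """Population-Based Incremental Learning over bitstrings.
    The probability vector p replaces the explicit population."""
    p = [0.5] * n_bits                      # start with unbiased bits
    for _ in range(gens):
        # Sample a population of bitstrings from the probability vector.
        pop = [[int(random.random() < q) for q in p] for _ in range(pop_size)]
        best = max(pop, key=fitness)
        # Shift each bit probability toward the best sample's bit.
        p = [(1 - lr) * q + lr * b for q, b in zip(p, best)]
    return p

# One-max: count of 1-bits; a PBIL-trained HMM would instead decode the
# bitstring into HMM parameters and score likelihood on training data.
p = pbil(fitness=sum, n_bits=16)
```

Because the update only requires evaluating fitness on sampled individuals, nothing in the loop needs a differentiable objective, which is the extension argument the abstract makes.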