Estimation of Distribution Algorithms Research Papers
2025
This paper describes Mateda-2.0, a MATLAB package for estimation of distribution algorithms (EDAs). This package can be used to solve single and multi-objective discrete and continuous optimization problems using EDAs based on undirected and directed probabilistic graphical models. The implementation contains several methods commonly employed by EDAs. It is also conceived as an open package to allow users to incorporate different combinations of selection, learning, sampling, and local search procedures. Additionally, it includes methods to extract, process and visualize the structures learned by the probabilistic models. This way, it can unveil previously unknown information about the optimization problem domain. Mateda-2.0 also incorporates a module for creating and validating function models based on the probabilistic models learned by EDAs.
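Mateda-2.0 itself is a MATLAB package, but the basic loop it implements (select, learn a probabilistic model, sample) can be illustrated independently of the toolbox. Below is a minimal Python sketch of a univariate discrete EDA (UMDA-style) on a toy OneMax objective; the objective, population size, and truncation rate are illustrative assumptions, not part of Mateda-2.0.

```python
import numpy as np

def onemax(x):
    # Toy objective: number of ones in a binary string.
    return x.sum()

def umda(n_vars=30, pop_size=100, n_sel=50, n_gen=50, seed=None):
    """Minimal univariate marginal EDA (UMDA-style) sketch."""
    rng = np.random.default_rng(seed)
    p = np.full(n_vars, 0.5)                                   # univariate marginal model
    for _ in range(n_gen):
        pop = (rng.random((pop_size, n_vars)) < p).astype(int) # sampling step
        fitness = np.array([onemax(ind) for ind in pop])
        sel = pop[np.argsort(fitness)[-n_sel:]]                # truncation selection
        p = sel.mean(axis=0)                                   # learning step (marginal frequencies)
        p = np.clip(p, 0.02, 0.98)                             # avoid premature fixation
    return pop[np.argmax(fitness)]

best = umda(seed=0)
print(best, onemax(best))
```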
2025
Traditionally in Genetic Algorithms, the mutation probability parameter maintains a constant value during the search. However, an important difficulty is to determine a priori which probability value is best suited for a given problem. In this paper we compare three different adaptive algorithms that include strategies to modify the mutation probability without external control. One adaptive strategy uses the genetic diversity present in the population to update the mutation probability. Another strategy is based on ideas from reinforcement learning, and the last one varies the mutation probability depending on the fitness values of the solutions. All these strategies eliminate a very expensive computational phase related to the pre-tuning of the algorithmic parameters. The empirical comparisons show that the genetic algorithm that uses genetic diversity as the strategy for adapting the mutation probability outperforms the other two strategies.
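The abstract does not give the update rules, so the Python sketch below only illustrates the general idea behind the first strategy: raise the mutation probability when population diversity drops and lower it when diversity is high. The diversity measure (mean per-gene entropy) and the update constants are assumptions for illustration, not the paper's rules.

```python
import numpy as np

def gene_entropy_diversity(pop):
    """Mean per-gene Shannon entropy of a binary population (0 = converged, 1 = maximally diverse)."""
    p = pop.mean(axis=0).clip(1e-12, 1 - 1e-12)
    h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    return h.mean()

def adapt_mutation_prob(pm, pop, low=0.25, high=0.75, factor=1.5,
                        pm_min=0.001, pm_max=0.25):
    """Increase pm when diversity is low, decrease it when diversity is high."""
    d = gene_entropy_diversity(pop)
    if d < low:
        pm *= factor
    elif d > high:
        pm /= factor
    return float(np.clip(pm, pm_min, pm_max))

# Example: a fully converged population pushes pm upward.
pop = np.ones((40, 30), dtype=int)
print(adapt_mutation_prob(0.01, pop))   # 0.015
```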
2025, HAL (Le Centre pour la Communication Scientifique Directe)
2025
Catalog record: Biblioteca Profº Mário Werneck, Escola de Engenharia da UFMG. "The most important subject in the world can be simplified to the point where everyone can appreciate and understand it. That is - or should be - the highest form of art."
2025
First of all I thank God, who has guided my steps since my childhood, when I did not even think of graduating in electrical engineering, much less of defending a master's degree at a university as highly regarded as UFMG. I am sincerely grateful for the countless doors that have been opened, and also for the opportunities that will certainly come. I thank my father João Batista, who has always brought me much joy, besides showing me the great virtues of patience and gentleness, and my mother Maria Clarisberte, who has long taught me to fight for the dreams I hold dear. These two people are the main influences on my character, and I know that even in the face of some disagreements, they always rooted for me. I am also grateful to my relatives for their support and affection, especially my aunt Airam and my sisters Paula and Arielly, who have always helped and encouraged me. I especially thank my uncle and aunt, Jaci and Solange, who never abandoned me and always proved to be great friends. I thank my advisor Jaime A. Ramírez, who guided my first steps in the field of evolutionary optimization. Besides being a friend, he is one of the people most responsible for the completion of this work. I am also grateful to the great professors who contributed to my education, especially Oriane Magela,
2025
Structural equation models with latent variables (SEM) are used to represent causal relationships in data, in such a way that the correlation structure of the observed variables is summarized in the correlation structure of latent variables constructed for this purpose. This paper proposes a Bayesian analysis of SEM based on the analysis of the covariance matrix of the latent variables, using parameter expansion to overcome identifiability problems. The approach is applied to the estimation of a structural model describing a process for measuring water pollutants.
2025
The management of the Dutch national airline company KLM intends to bring a sufficient amount of water on board of all flights to fulfill customers' demand. On the other hand, the surplus of water after a flight should be kept to a minimum to reduce fuel costs. The service to passengers is measured with a service level. The objective of this research is to develop models which can be used to minimize the amount of water on board of flights such that a predefined service level is met. The difficulty that has to be overcome is the fact that most of the available data on water consumption on flights are rounded off to the nearest eighth of the water tank. For wide-body aircraft this rounding may correspond to about two hundred litres of water. Part of the problem was also to define a good service level. The use of a service level as a model parameter would give KLM better control of the water surplus. The available data have been analyzed to examine which aspects we had to take into consideration. Next, a general framework has been developed in which the service level has been defined as a Quality of Service for each flight: the probability that a sufficient amount of water is available on a given flight leg. Three approaches are proposed to find a probability distribution function for the total water consumption on a flight. The first approach tries to fit a distribution for the water consumption based on the available data, without any assumptions on the underlying shape of the distribution. The second approach assumes normality for the total water consumption on a flight and the third approach uses a binomial distribution. All methods are validated and numerically illustrated. We recommend KLM to
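As a rough illustration of the second approach described above (normality of total water consumption), the Python sketch below computes the smallest water load whose probability of covering demand on a flight leg meets a target quality of service. The mean, standard deviation, and target level are made-up numbers, not KLM data, and scipy is assumed to be available.

```python
from scipy.stats import norm

def required_water_load(mean_use, std_use, qos=0.95):
    """Smallest load L (litres) with P(consumption <= L) >= qos,
    assuming total consumption on a leg is Normal(mean_use, std_use)."""
    return norm.ppf(qos, loc=mean_use, scale=std_use)

# Hypothetical wide-body leg: mean 600 l, std 120 l, 95% quality of service.
print(round(required_water_load(600, 120, 0.95)))   # ~797 litres
```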
2025, Proceedings of the Genetic and Evolutionary Computation Conference Companion
2025, Studies in Computational Intelligence
Estimation of Distribution Algorithms (EDAs) have been proposed as an extension of genetic algorithms. In this paper the major design issues of EDAs are discussed using an interdisciplinary framework, the minimum relative entropy (MinRel) approximation. We assume that the function to be optimized is additively decomposed (ADF). The interaction graph G_ADF of the ADF is used to create exact or approximate factorizations of the Boltzmann distribution. The relation between the Factorized Distribution Algorithm (FDA) and the MinRel approximation is shown. We present a new algorithm, derived from the Bethe-Kikuchi approach developed in statistical physics. It minimizes the relative entropy KLD(q | p_β) to the Boltzmann distribution p_β by solving a difficult constrained optimization problem. We present in detail the concave-convex minimization algorithm (CCCP) used to solve the optimization problem. The two algorithms are compared using popular benchmark problems (2-D grid problems, 2-D Ising spin glasses, Kauffman's NK function). We use instances of up to 900 variables.
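To make the objects in the abstract concrete, the short Python sketch below builds the Boltzmann distribution p_β(x) ∝ exp(β f(x)) for a tiny additively decomposed function and measures the relative entropy KLD(q | p_β) of a fully factorized (product-of-marginals) approximation q. The specific function and β value are arbitrary choices for illustration; this is not the Bethe-Kikuchi construction used in the paper.

```python
import itertools
import numpy as np

def adf(x):
    # Tiny additively decomposed function on 4 binary variables:
    # chained pairwise "agreement" terms.
    return sum(1.0 if x[i] == x[i + 1] else 0.0 for i in range(3))

beta = 2.0
states = np.array(list(itertools.product([0, 1], repeat=4)))
f = np.array([adf(x) for x in states])
p = np.exp(beta * f)
p /= p.sum()                                   # Boltzmann distribution p_beta

# Fully factorized approximation q(x) = prod_i q_i(x_i) using the marginals of p.
marg = np.array([p[states[:, i] == 1].sum() for i in range(4)])
q = np.prod(np.where(states == 1, marg, 1 - marg), axis=1)

kld = np.sum(q * np.log(q / p))                # KLD(q | p_beta)
print(round(kld, 4))
```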
2025, Connection science
This paper introduces an evolutionary computation method that applies Bayesian classifiers to optimization problems. This approach is based on Estimation of Distribution Algorithms (EDAs), in which Bayesian or Gaussian networks are applied to the evolution of a population of individuals (i.e. potential solutions to the optimization problem) in order to improve the quality of the individuals of the next generation. Our new approach, called the Evolutionary Bayesian Classifier-based Optimization Algorithm (EBCOA), employs Bayesian classifiers instead of Bayesian or Gaussian networks in order to evolve individuals toward a fitter population. In brief, EBCOAs are characterized by applying Bayesian classification techniques, usually applied to supervised classification problems, to optimization in continuous domains. We propose and review in this paper different Bayesian classifiers for implementing our EBCOA method, focusing particularly on EBCOAs applying naïve Bayes, semi-naïve Bayes, and tree augmented naïve Bayes classifiers. This work presents an in-depth study of the behavior of these algorithms on classical optimization problems in continuous domains. The different parameters used for tuning the performance of the algorithms are discussed, and a comprehensive overview of their influence is provided. We also present experimental results to compare this new method with other state-of-the-art approaches of the evolutionary computation field for continuous domains, such as Evolutionary Strategies (ES) and Estimation of Distribution Algorithms (EDAs).
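The following is not the authors' EBCOA implementation, only a minimal numpy-only sketch of the core step it describes: label selected individuals by fitness class, fit per-class Gaussian naive Bayes parameters, and sample the next generation from the fittest class. The two-class split, the sphere objective, and the parameter values are illustrative assumptions; the actual EBCOA variants are more elaborate.

```python
import numpy as np

def sphere(x):
    return -np.sum(x**2, axis=1)              # maximize (optimum at the origin)

rng = np.random.default_rng(1)
pop = rng.uniform(-5, 5, size=(200, 10))

for _ in range(30):
    fit = sphere(pop)
    labels = fit >= np.median(fit)            # two fitness classes: "good" vs "bad"
    good = pop[labels]
    # Naive Bayes assumption: independent Gaussians per variable within a class.
    mu, sigma = good.mean(axis=0), good.std(axis=0) + 1e-6
    pop = rng.normal(mu, sigma, size=pop.shape)   # simulate individuals of the fittest class

print(np.round(pop[np.argmax(sphere(pop))], 3))
```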
2025, 2007 IEEE Congress on Evolutionary Computation
This paper introduces exact learning of Bayesian networks in estimation of distribution algorithms. The estimation of Bayesian network algorithm (EBNA) is used to analyze the impact of learning the optimal (exact) structure in the search. By applying recently introduced methods that allow learning optimal Bayesian networks, we investigate two important issues in EDAs. First, we analyze the question of whether learning more accurate (exact) models of the dependencies implies a better performance of EDAs. Second, we are able to study the way in which the problem structure is translated into the probabilistic model when exact learning is accomplished.
2025
Evolutionary computation is a discipline that has been emerging for at least 40 or 50 years. All methods within this discipline are characterized by maintaining a set of possible solutions (individuals) and making them successively evolve to fitter solutions generation after generation. Examples of evolutionary computation paradigms are the broadly known Genetic Algorithms (GAs) and Estimation of Distribution Algorithms (EDAs). This paper contributes to the further development of this discipline by introducing a new evolutionary computation method based on learning and later simulating a Bayesian classifier in every generation. In the method we propose, at each iteration the selected group of individuals of the population is divided into different classes depending on their respective fitness values. Afterwards, a Bayesian classifier (naive Bayes, semi-naive Bayes, tree augmented naive Bayes, or a similar one) is learned to model the corresponding supervised classification problem. The simulation of this Bayesian classifier provides the individuals that form the next generation. Experimental results are presented to compare the performance of this new method with different types of EDAs and GAs. The problems chosen for this purpose are combinatorial optimization problems commonly used in the literature.
2025, IFIP International Federation for Information Processing
Rough Sets Theory has opened new trends for the development of the Incomplete Information Theory. Within this theory, the notion of a reduct is very significant, but obtaining a reduct of a decision system is a computationally expensive process, although very important in data analysis and knowledge discovery. Because of this, different variants for calculating reducts have had to be developed. The present work looks into the utility offered by the Rough Sets Model and Information Theory for feature selection, and a new method is presented with the purpose of calculating a good reduct. This new method consists of a greedy algorithm that uses heuristics to work out a good reduct in acceptable time. In this paper we also propose another method to find good reducts; this method combines elements of Genetic Algorithms with Estimation of Distribution Algorithms. The new methods are compared with others implemented within Pattern Recognition and Ant Colony Optimization algorithms, and the results of the statistical tests are shown.
2024, IFAC-PapersOnLine
This paper presents a model based on the rigid water column (RWC) theory to describe the flow and the decay of chlorine in water distribution networks (WDNs), which can be used for developing tools to diagnose leaks and estimate chlorine concentrations. The model includes the continuity equation for each node of the network such that i) the relation between the flow rates entering and leaving the nodes is explicit, and ii) the computation of pressures and flow rates can be done simultaneously. The chlorine decay in each node and in each pipeline section of the WDN is predicted from the computed flow rates by using the third-order accurate Warming-Kutler-Lomax (WKL) method. At the end of this paper, it is shown that the chlorine decay rate is well predicted by the WKL method according to a comparison with simulation results obtained with the EPANET-MSX software. Furthermore, it is shown that several single-leak diagnosis scenarios can be successfully solved by using an improved sensitivity matrix method together with the proposed model.
2024, HAL (Le Centre pour la Communication Scientifique Directe)
Bipartite graphs can be used to model a wide variety of dyadic information such as user-rating, document-term, and gene-experimental condition pairs. Biclustering is an extension of clustering to the underlying bipartite graph induced by this type of data. In this article, we leverage optimal transport (OT), which has become popular in the machine learning community, to propose a new and efficient biclustering model that generalizes several classical biclustering approaches. We carry out extensive experiments to show the interest of our approach compared with other OT-based biclustering algorithms.
2024, Journal of Hydrology
Anastomosing rivers form a subset of the anabranching family of river types and provide considerable challenges to modelling of their streamflow because of complex flow patterns across greatly varying floodplain widths. Estimates of distributed flow data are required for catchment management purposes and ecological studies of these rivers but are hindered by a paucity of measured discharge data. A grid-based, semi-distributed, conceptual model structure is applied to a 330 km reach of the arid zone Diamantina River of central Australia. Model complexity is constrained by data availability, with only a single gauging station located at the downstream end of the reach to provide discharge data for model calibration. The model uses a simple conceptual bucket structure and accounts for exceptionally high transmission losses as well as flow patterns and wave speeds that vary with discharge within the reach. The intricate flow patterns across floodplain widths of up to 50 km are simulated using a grid-based structure that required the following features: (i) cell connections that are explicitly defined using a code that allows for multi-directional flow from a cell; and (ii) each cell having a binary flow pattern, with the second connection pattern being triggered when the surface storage of the cell exceeds a calibrated level for a given land-type. Satellite images were used to define the flow paths, and hence cell connection patterns, utilised by various sized floods. The model was able to provide acceptable simulation of large floods but with decreasing model performance in the simulation of small to medium sized floods. Simulation suggested that incorrectly defined flow paths for the smaller floods were a major factor in this decreased performance. The capability of the model would be improved by further detailed mapping, using satellite imagery, of spatial patterns of inundation as discharge varies.
2024, 2012 IEEE Congress on Evolutionary Computation
Gene Regulatory Networks (GRNs) describe the interactions between different genes. One of the most important tasks in biology is to find the right regulations in a GRN given observed data. The problem is that the data are often noisy and scarce, and we have to use models that are robust to noise and scalable to hundreds of genes. Recently, Recursive Neural Networks (RNNs) have been presented as a viable model for GRNs, which is robust to noise and can be scaled to larger networks. In this paper, to optimize the parameters of the RNN, we implement a classic Population Based Incremental Learning (PBIL) algorithm, which in certain scenarios has outperformed the classic GA and other evolutionary techniques such as Particle Swarm Optimization (PSO). We test this implementation on a small and a large artificial network. We further study the optimal tuning parameters and discuss the advantages of the method.
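The paper applies PBIL to continuous RNN parameters; the sketch below shows only the classic binary PBIL update rule the abstract refers to, on a toy objective. The learning rate, population size, and objective are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def pbil(objective, n_bits, pop_size=50, lr=0.1, n_gen=100, seed=None):
    """Classic binary PBIL: keep a probability vector, sample a population,
    and shift the vector toward the best sample each generation."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)
    for _ in range(n_gen):
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)
        best = pop[np.argmax([objective(ind) for ind in pop])]
        p = (1 - lr) * p + lr * best          # incremental learning step
    return p

# Toy run: maximize the number of ones.
p = pbil(lambda x: x.sum(), n_bits=20, seed=0)
print(np.round(p, 2))
```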
2024, Biometrical Journal
We consider lifetime data involving pairs of study individuals with more than one possible cause of failure for each individual. Non-parametric estimation of cause-specific distribution functions is considered under independent censoring. Properties of the estimators are discussed and an illustration of their application is given.
2024
List of figures (excerpt): SS-CCEA results on the LeadingOnes problem with constant mutation; SS-CCEA results on the s·LeadingOnes − OneMax problem; linkage bias results of SS-CCEA on s·LeadingOnes − OneMax; linkage bias results for the SS-CCEA on CLOB 2,2; collaboration credit assignment results for SS-CCEA on CLOB 2,2.
2024
This work presents Multiobjective Optimization Hybrid Artificial Immune Systems (MOHAIS), a framework for the development of immune-inspired algorithms that, in addition to traditional immune operators, offers two hybrid operators for hypermutation and recombination. Two algorithms have been implemented with MOHAIS: a traditional one, which does not use the hybrid operators, and another that uses both proposed hybrid operators. The proposed hybrid operators and the two implementations were evaluated on 17 benchmark problems with a set of three experiments and over 900 tests. These experiments demonstrated the performance of the algorithm even when compared with algorithms that are not immune-inspired. The algorithm based on the hybrid operators was also applied in a case study on the allocation of automatic teller machines. The case study is particularly relevant not only for multiobjective optimization research, due to the need to reconcile cost, idleness, and availability at the point of sale, but also because it is the first time this problem is addressed as a multiobjective problem. Again, the results of the experiments showed the potential of the proposed multiobjective approach for achieving the optimized objectives, and also confirmed the good performance of the proposed hybrid operators, not only through the results themselves but also through the algorithm's ability to adapt to a problem with many goals. Finally, this work proposes six extension points for MOHAIS and the construction of interdisciplinary research with management and finance researchers for the further development of the case study.
2024, HAL (Le Centre pour la Communication Scientifique Directe)
The Expectation-Maximization (EM) algorithm is an iterative algorithm used, in particular, to compute maximum-likelihood estimates for data drawn from a Gaussian mixture model (GMM). When the sample size of the data is close to their dimension, the successive estimates of the covariance matrix can be singular or ill-conditioned, leading to a drop in performance. In this paper we present a new regularized version of the EM algorithm suited to the case where the ratio between sample size and dimension is small. This method maximizes a penalized version of the EM-GMM likelihood which ensures that the covariances are always positive definite. Tests on real data finally illustrate the interest of this approach for a clustering problem.
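The exact penalty used in the paper is not given in this abstract; the Python sketch below shows one common way to keep EM-GMM covariance estimates positive definite when the sample size is close to the dimension, by shrinking each M-step covariance toward a scaled identity. The shrinkage weight is an assumed illustration, not the authors' penalization.

```python
import numpy as np

def regularized_covariance(X, resp_k, mu_k, shrink=0.1):
    """M-step covariance for one mixture component, shrunk toward a scaled
    identity so the estimate stays positive definite in small-sample regimes.

    X: (n, d) data, resp_k: (n,) responsibilities, mu_k: (d,) component mean."""
    n, d = X.shape
    diff = X - mu_k
    cov = (resp_k[:, None] * diff).T @ diff / resp_k.sum()   # weighted ML estimate
    target = np.trace(cov) / d * np.eye(d)                   # scaled identity target
    return (1 - shrink) * cov + shrink * target

# Tiny check: n barely above d, yet the shrunk estimate is well conditioned.
rng = np.random.default_rng(0)
X = rng.normal(size=(12, 10))
cov = regularized_covariance(X, np.ones(12), X.mean(axis=0))
print(np.linalg.cond(cov) < 1e6)
```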
2024, IFIP International Federation for Information Processing
Rough Sets Theory has opened new trends for the development of the Incomplete Information Theory. Within this theory, the notion of a reduct is very significant, but obtaining a reduct of a decision system is a computationally expensive process, although very important in data analysis and knowledge discovery. Because of this, different variants for calculating reducts have had to be developed. The present work looks into the utility offered by the Rough Sets Model and Information Theory for feature selection, and a new method is presented with the purpose of calculating a good reduct. This new method consists of a greedy algorithm that uses heuristics to work out a good reduct in acceptable time. In this paper we also propose another method to find good reducts; this method combines elements of Genetic Algorithms with Estimation of Distribution Algorithms. The new methods are compared with others implemented within Pattern Recognition and Ant Colony Optimization algorithms, and the results of the statistical tests are shown.
2024, HAL (Le Centre pour la Communication Scientifique Directe)
We propose block co-clustering via an ensemble approach that fuses several elementary co-clusterings into a structured consensus affinity matrix. The base co-clusterings are obtained from the same textual data and generated by the same co-clustering method. This fusion process reinforces the individual quality of the block co-clusterings within a single consensus matrix. Our approach enables fully unsupervised co-clustering, where the number of co-clusters is automatically deduced from a generalized, non-trivial modularity criterion. The associated objective function allows joint learning of the aggregation of the elementary co-clusterings and the consensus co-clustering. Experimental results on several real datasets demonstrate the interest of our approach compared with competitive co-clustering methods (Affeldt et al., 2020).
2024, 2011 Sixth International Conference on Bio-Inspired Computing: Theories and Applications
Estimation of Distribution Algorithms (EDAs) are a class of probabilistic model-building evolutionary algorithms, which are characterized by learning and sampling the probability distribution of the selected individuals. This paper proposes a modified EDA (mEDA) for digital filter design. mEDA uses a novel sampling method, called centro-individual sampling, and a fuzzy C-means clustering technique to improve its performance. Extensive experiments conducted on a set of benchmark functions show that mEDA outperforms seven algorithms reported in the literature, in terms of the quality of solutions. Four types of digital infinite impulse response (IIR) filters are designed by using mEDA and the results show that mEDA can obtain better filter performance than four state-of-the-art methods.
2024
The problem of finding the median of permutations is a fundamental and complex task in combinatorial optimization with wide applications in various fields, including computer science, statistics, and decision-making processes. In this research project, we explore the subtleties of this problem, aiming to develop innovative algorithms and methodologies to efficiently calculate the permutation median while considering its many practical applications. The median of permutations, also known as the Kemeny-Young median, is crucial for preference aggregation, ranking different sets of elements, and group decision-making.
However, it involves significant computational challenges, particularly with large volumes of data or in the case of complex constraints. This project aims to overcome these computational obstacles by creating a Python library dedicated to the implementation of various algorithms and heuristic strategies for efficient median estimation. By enriching our understanding of the permutation median problem and providing applicable approaches, this research effort intends to contribute to the optimization of decision-making processes, the improvement of recommendation systems, and the evolution of data analysis techniques across multiple sectors. It also aims to facilitate the emergence of advanced computational solutions by leveraging machine learning and artificial intelligence models through the developed library.
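As a concrete baseline for the problem described above, the Python sketch below computes the Kendall tau distance between permutations and finds an exact Kemeny median by brute force. This is only feasible for very small numbers of items and is meant as a reference point, not as one of the project's heuristics; the example votes are invented.

```python
from itertools import combinations, permutations

def kendall_tau(p, q):
    """Number of item pairs ordered differently by rankings p and q."""
    pos_p = {e: i for i, e in enumerate(p)}
    pos_q = {e: i for i, e in enumerate(q)}
    return sum(1 for a, b in combinations(p, 2)
               if (pos_p[a] - pos_p[b]) * (pos_q[a] - pos_q[b]) < 0)

def kemeny_median_bruteforce(rankings):
    """Exact median: the permutation minimizing the summed Kendall tau distance."""
    items = rankings[0]
    return min(permutations(items),
               key=lambda cand: sum(kendall_tau(cand, r) for r in rankings))

votes = [(1, 2, 3, 4), (2, 1, 3, 4), (1, 3, 2, 4)]
print(kemeny_median_bruteforce(votes))   # (1, 2, 3, 4)
```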
2024, Control Engineering Practice
Two methods, both based on the concept of combustion net torque, for estimation of combustion properties using measurements of crankshaft torque data are investigated in this work. The first of the proposed methods estimates entire burned mass fraction traces from corresponding combustion net torque traces. This is done by solving a convex optimization problem that is based on a derived analytical relation between the two quantities. The other proposed estimation method estimates the well established combustion phasing measure referred to as 50% burned mass fraction directly from combustion net torque using a nonlinear black-box mapping. The methods are assessed using both simulations and experimental data gathered from a 5-cylinder light-duty diesel engine equipped with a crankshaft torque sensor and cylinder pressure sensors that are used for reference measurements. The results indicate that both methods work well but the method that estimates entire burned mass fraction traces is more sensitive to torque data quality. Based on the experimental crankshaft torque data, the direct combustion phasing estimation method delivers estimates with a bias of less than 1 CAD and a cycle-to-cycle standard deviation of less than 2.7 CAD for all cylinders.
2024, Annals of Operations Research
In this paper we introduce model-based search as a unifying framework accommodating some recently proposed metaheuristics for combinatorial optimization such as ant colony optimization, stochastic gradient ascent, cross-entropy and estimation of distribution methods. We discuss similarities as well as distinctive features of each method and we propose some extensions.
2024
Microarray technology permits us to study the expression levels of thousands of genes simultaneously. The technique has a wide range of applications, including the identification of genes that change their expression in cells due to disease or drug stimuli. The dissertation addresses statistical methods for the selection of differentially expressed genes in two experimental conditions. We propose two different methods
2024, REST Publisher
Self-Management. People with good self-management skills can effectively regulate and control their feelings, thoughts, and actions in a range of situations. Strong self-management skills allow employees to set goals on their own while making every effort to achieve them. Such employees understand how important it is to control their emotions and behavior at work. Employers value people with high self-management skills, since it can be challenging to control one's emotions and thoughts; someone who cannot may end up shouting at a client or saying harsh things to coworkers. Self-management skills enable people to exercise more self-control, which leads to wiser professional judgements. Important jobs and activities, as well as fulfilling research goals, call for little diversion and a good capacity for concentration (4). Researchers identify, prioritize, and concentrate on the most crucial tasks while trying to avoid distractions. In order to handle their time efficiently, simplify daily tasks, and keep crucial information and items close at hand, researchers create organizational systems. Finally, they get ready to start the job at hand: they arrive early for classes, meetings, and presentations, and after leaving for the day they make plans for the next day. A novel and effective MCDM method is designed based on evaluation of the distances from the average solution (EDAS). In this manner, alternatives are chosen depending on how much they deviate from the average solution; in this analysis, the EDAS approach (average evaluation adjusted for distance from the solution) is the best choice. The solution with the greatest separation from the ideal is short-range and negative, although the comparison between these distances alone is insignificant. Alternatives: overall sample n = 605, with 309 6th graders and 296 8th graders. Evaluation criteria: self-management strategies, perceived barriers, and outcome expectancy. Results: outcome expectancy obtained the first rank, whereas perceived barriers obtained the lowest rank.
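The abstract's description of the method is compressed, so the sketch below follows the standard EDAS (Evaluation based on Distance from Average Solution) steps for benefit criteria: positive and negative distances from the average solution, weighted sums, normalization, and an appraisal score. The decision matrix and weights are invented for illustration, not the study's survey data.

```python
import numpy as np

def edas(X, w):
    """EDAS for benefit criteria. X: (alternatives x criteria) ratings, w: criterion weights."""
    av = X.mean(axis=0)                              # average solution per criterion
    pda = np.maximum(0, X - av) / av                 # positive distance from average
    nda = np.maximum(0, av - X) / av                 # negative distance from average
    sp, sn = pda @ w, nda @ w                        # weighted sums
    nsp = sp / sp.max()
    nsn = 1 - sn / sn.max()
    return (nsp + nsn) / 2                           # appraisal scores (higher = better)

X = np.array([[7.0, 6.0, 8.0],
              [6.0, 7.0, 5.0],
              [8.0, 5.0, 7.0]])
w = np.array([0.4, 0.3, 0.3])
scores = edas(X, w)
print(np.argsort(-scores) + 1)                       # ranking of the alternatives
```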
2024, HAL (Le Centre pour la Communication Scientifique Directe)
We identify simultaneously the storage and hydraulic transmissivity coefficients in groundwater flow governed by a linear parabolic equation. Both parameters are assumed to be piecewise constant functions in space. The unknowns are the coefficient values as well as the geometry of the zones where these coefficients are constant. This problem is formulated as minimizing a least-squares function that measures the difference between the measurements and the corresponding quantities computed with the current parameter values. The main point of this paper is to construct an adaptive parameterization technique guided by refinement indicators.
2024, Journal of Inverse and Ill-posed Problems
The multidimensional refinement indicators algorithm for optimal parameterization. Abstract: The estimation of distributed parameters in partial differential equations (PDEs) from measurements of the PDE solution may lead to under-determination problems. Choosing a parameterization is a usual way of adding a priori information by reducing the number of unknowns in accordance with the physics of the problem. The refinement indicators algorithm provides a fruitful adaptive parameterization technique that iteratively opens the degrees of freedom in a parsimonious way. We present a new general form of the refinement indicators algorithm that applies to the estimation of multidimensional parameters in any PDE. In the linear case, we establish the link between the refinement indicator and the decrease of the least-squares objective function quantifying the misfit to the data. We give numerical results for the simple case of the identity model, and this application allows the refinement indicators algorithm to be viewed as an image segmentation technique.
2024, 2010 Second World Congress on Nature and Biologically Inspired Computing (NaBIC)
This paper proposes a hybrid method of estimation of distribution algorithms (EDAs) and genetic algorithms (GAs) based on master/slave cooperation. The master process estimates the probability distribution of the search space based on a non-dependency model at each iteration and sends probability vectors to the slaves. The slaves use the vectors to generate new initial populations. Our approach employs the simplest probability model but compensates for its accuracy problems by applying GAs to the solutions sampled from this simple model. Moreover, our method can be combined with other search strategies and is easily parallelized. Computer experiments show some effectiveness of our method.
2024
In modern society, marked by globalization and consumption, the rational and sustainable use of natural resources and the constant search for higher quality and lower costs in production processes are receiving more and more attention. In both respects the use of optimization tools is beneficial. In this scenario the technique of genetic algorithms stands out: search algorithms based on Darwin's theory of evolution and Mendelian genetic mutation. Genetic algorithms work with a set of possible solutions (individuals) to the problem, which evolve according to genetic criteria in order to, ideally, converge to the best solution. Despite providing satisfactory solutions even for complex problems, and perhaps for that reason being widely used, the technique can suffer from convergence problems. In many situations the individuals of the set may concentrate in certain regions of the search space that do not contain the best solution...
2024, Proceedings of the 18th IFAC World Congress
In this paper the classical problem of target tracking is studied via a new approach that treats the classical nonlinear measurement process in a form amenable to polynomial filtering, without the need to linearize the measurement map as required by other standard sub-optimal algorithms. The main idea is to transfer the nonlinearity of the measurement map into a modification of the noise sequence distribution, which becomes a non-Gaussian white sequence. This is indeed the property required for Kalman filtering, which, although no longer optimal, remains the optimal linear filtering algorithm. The conditions for polynomial filtering are also satisfied, making it possible to handle the non-Gaussian nature of the modified noise sequence. Simulations show the high performance of the proposed algorithm.
2024, International journal of statistics and applications
The problem of nonparametric estimation of the finite population distribution function using a multiplicative bias correction technique is considered in this paper. A robust estimator of the finite population distribution function based on multiplicative bias correction is derived with the aid of a superpopulation model. The properties of the estimator are developed, and a comparative study with the existing model-based and design-based estimators is carried out to assess the performance of the developed estimator using simulated data sets. It is observed that the estimator is asymptotically unbiased and statistically consistent when certain conditions are satisfied. It has been shown that when the model-based estimators are used in estimating the finite population total, there exists a bias-variance trade-off along the boundary. The multiplicative bias corrected estimator has recorded better results in estimating the finite population distribution function by correcting the boundary problem...
2024
Traditionally in Genetic Algorithms, the mutation probability parameter maintains a constant value during the search. However, an important difficulty is to determine a priori which probability value is best suited for a given problem. In this paper we compare three different adaptive algorithms that include strategies to modify the mutation probability without external control. One adaptive strategy uses the genetic diversity present in the population to update the mutation probability. Another strategy is based on ideas from reinforcement learning, and the last one varies the mutation probability depending on the fitness values of the solutions. All these strategies eliminate a very expensive computational phase related to the pre-tuning of the algorithmic parameters. The empirical comparisons show that the genetic algorithm that uses genetic diversity as the strategy for adapting the mutation probability outperforms the other two strategies.
2024, HAL (Le Centre pour la Communication Scientifique Directe)
This article extends the Cox model to the case of multiway data, that is, data where each individual is described by several modalities of the same covariate. Imposing on the regression coefficients a tensor structure identical to that of the data makes it possible, on the one hand, to restrict the number of coefficients to be estimated, and hence the computational complexity, and on the other hand to avoid overfitting. This new approach is evaluated and validated on simulated data. Keywords: survival model, Cox model, multiway data.
2024, Procedings do XV Simpósio Brasileiro de Automação Inteligente
Transmission expansion planning (TEP) is an important study among the various electrical power system activities. However, solving the optimization problem resulting from TEP studies for real, large electrical systems is a complex task, which involves the analysis of a large space of solutions. In this sense, the present work proposes the investigation of diversity strategies combined with the meta-heuristic Genetic Algorithm (GA) to solve the TEP problem. The Deterministic Crowding and K-means techniques are used to create different versions of the GA with population diversity. A real and present-day electrical system, which corresponds to the geoelectric region of southern Brazil, is used to carry out performance studies and analyze the use of different diversity strategies.
2024, Biopolymers
Using an iterative approach, we have placed monovalent ("solvated") and divalent (both solvated and "unsolvated") ions around a 20 base pair sequence, (dC-dG)10, in standard B and ZI conformations. The molecule with its attendant ions in the various conformations is subjected to energy minimization using the program AMBER. In the presence of solvated cations (both monovalent and divalent) the B form is more stable than the Z form. However, direct binding with the unsolvated divalent cations makes the Z form more stable. Groove binding provides some insight into the facility with which the B to Z transition occurs with more highly charged cations. In the presence of unsolvated divalent cations, the Z form binds more charges at the groove through more ligands, compared to the B form. The orientation around the CpG phosphates in the minor groove of the Z form is found to be ideal for ion binding. Detailed molecular models for the ion binding have been developed. In general, phosphate groups dominate the ion binding. Large perturbations are seen mostly in the angles that control the phosphate orientation.
2024, IEEE Transactions on Medical Imaging
Recently, advances have been made in continuous, normal-distribution-based Estimation-of-Distribution Algorithms (EDAs) by scaling the variance up from the maximum-likelihood estimate. When done properly, such scaling has been shown to prevent premature convergence on slope-like regions of the search space. In this paper we specifically focus on one way of scaling that was previously introduced as Adaptive Variance Scaling.
2024, 2012 IEEE Congress on Evolutionary Computation
Estimation of Distribution Algorithms (EDAs) focus on explicitly modelling dependencies between solution variables. A Gaussian distribution over continuous variables is commonly used, with several different covariance matrix structures, ranging from diagonal, as in the Univariate Marginal Distribution Algorithm (UMDAc), to full, as in the Estimation of Multivariate Normal density Algorithm (EMNA). A diagonal covariance model is simple but is unable to directly represent covariances between problem variables. On the other hand, a full covariance model requires the estimation of (more) parameters from the selected population. In practice, numerical issues can arise with this estimation problem. In addition, the performance of the model has been shown to be sometimes undesirable. In this paper, a modified Gaussian-based continuous EDA is proposed, called sEDA, that provides a mechanism to control the number of covariance parameters estimated within the Gaussian model. To achieve this, a simple variable screening technique from experimental design is adapted and combined with an idea inspired by the Pareto front in multi-objective optimization. Compared to EMNAglobal, the algorithm provides improved numerical stability and can use a smaller selected population. Experimental results are presented to evaluate and compare the performance of the algorithm to UMDAc and EMNAglobal.
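A minimal Python sketch contrasting the two extremes the abstract mentions: a diagonal-covariance (UMDAc-style) model versus a full-covariance (EMNA-style) model fitted to the selected population. The objective and parameter values are illustrative, and the paper's sEDA screening mechanism is not reproduced here.

```python
import numpy as np

def gaussian_eda(objective, dim=10, pop_size=100, n_sel=30, n_gen=60,
                 full_cov=True, seed=None):
    """Continuous Gaussian EDA: fit a normal model to the selected individuals,
    with either a full (EMNA-style) or diagonal (UMDAc-style) covariance."""
    rng = np.random.default_rng(seed)
    mean, cov = np.zeros(dim), np.eye(dim) * 5.0
    for _ in range(n_gen):
        pop = rng.multivariate_normal(mean, cov, size=pop_size)
        sel = pop[np.argsort(objective(pop))[:n_sel]]      # minimization
        mean = sel.mean(axis=0)
        cov = np.cov(sel, rowvar=False) + 1e-8 * np.eye(dim)
        if not full_cov:
            cov = np.diag(np.diag(cov))                    # drop the covariances
    return mean

sphere = lambda P: np.sum(P**2, axis=1)
print(np.round(gaussian_eda(sphere, full_cov=False, seed=0), 3))
```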
2024, IEEE Transactions on Information Theory
In this work a method for statistical analysis of time series is proposed, which is used to obtain solutions to some classical problems of mathematical statistics under the only assumption that the process generating the data is stationary ergodic. Namely, three problems are considered: goodness-of-fit (or identity) testing, process classification, and the change point problem. For each of the problems a test is constructed that is asymptotically accurate for the case when the data is generated by stationary ergodic processes. The tests are based on empirical estimates of distributional distance.
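A rough Python sketch of the kind of empirical distributional distance these tests are built on: compare the empirical frequencies of all words up to some length in two binary sequences, with geometrically decaying weights per word length. The maximum word length and the weights are an illustrative truncation of the infinite sum used in the theory, not the paper's exact estimator.

```python
from collections import Counter

def word_freqs(seq, k):
    """Empirical frequencies of all length-k words in a sequence."""
    words = [tuple(seq[i:i + k]) for i in range(len(seq) - k + 1)]
    n = len(words)
    return {w: c / n for w, c in Counter(words).items()}

def empirical_distributional_distance(x, y, max_k=4):
    """Weighted sum over word lengths of L1 differences in empirical frequencies."""
    d = 0.0
    for k in range(1, max_k + 1):
        fx, fy = word_freqs(x, k), word_freqs(y, k)
        keys = set(fx) | set(fy)
        d += 2.0**-k * sum(abs(fx.get(w, 0) - fy.get(w, 0)) for w in keys)
    return d

x = [0, 1] * 500                 # alternating sequence
y = [0, 0, 1, 1] * 250           # a different stationary process
print(round(empirical_distributional_distance(x, y), 3))
```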
2024
We address three problems of statistical analysis of time series: goodness-of-fit (or identity) testing, process discrimination, and the change point problem. For each of the problems we construct a test that is asymptotically accurate for the case when the data is generated by stationary ergodic processes. All problems are solved in a similar way by using empirical estimates of the distributional distance between the processes.
2024, arXiv (Cornell University)
We address three problems of statistical analysis of time series: goodness-of-fit (or identity) testing, process discrimination, and the change point problem. For each of the problems we construct a test that is asymptotically accurate for the case when the data is generated by stationary ergodic processes. All problems are solved in a similar way by using empirical estimates of the distributional distance between the processes.
2024, Hydrological Processes
Regression tree models have been shown to provide the most accurate estimates of distributed snow water equivalent (SWE) when intensive field observations are available. This work presents a comparison of regression tree models using different source digital elevation models (DEMs) and different combinations of independent variables. Different residual interpolation techniques are also compared. The analysis was performed in the 19.1 km2 Tokopah Basin, located in the southern Sierra Nevada of California. Snow depth, the dependent variable of the statistical models, was derived from three snow surveys (April, May and June 1997), with an average of 328 depth measurements per survey. Estimates of distributed SWE were derived from the product of the snow depth surfaces, the average snow density (54 measurements on average) and the fractional snow covered area (obtained from the Landsat Thematic Mapper and the Airborne Visible/Infrared Imaging Spectrometer). Independent variables derived from the standard US Geological Survey DEM yielded the lowest overall model deviance and lowest error in snow depth prediction. Simulations using the Shuttle Radar Topography Mission DEM and the National Elevation Dataset DEM were improved when northness was substituted for solar radiation in five of six cases. Co-kriging with maximum upwind slope and elevation proved to be the best method for distributing residuals for April and June, respectively. Inverse distance weighting was the best residual distribution method for May.
2024
Maxima of daily (one-day) and k-day precipitation totals might show a rising tendency in the future as a consequence of global warming. It is well known that heavy rains, or intense rains lasting several days, can cause very significant problems in the economic and social spheres of countries, and important damage can be inflicted on the environment. In the contribution
2024, Springer eBooks
The intrinsic feature of Estimation of Distribution Algorithms lies in their ability to learn and employ probabilistic models over the input spaces. Discovery of the appropriate model usually implies a computationally expensive comprehensive search, where many models are proposed and evaluated in order to find the best value of some model-discriminating scoring metric. This chapter presents how simple pairwise variable interaction data can be extended and used to efficiently guide the model search, decreasing the number of model evaluations by several orders of magnitude or even facilitating the finding of richer, qualitatively better models. As case studies, first the O(n³) model building of the Extended Compact Genetic Algorithm is successfully replaced by a correlation-guided search of linear complexity, which infers the perfect problem structures on the test suites. In a second study, a search technique is proposed for finding Bayesian network structures capable of modeling complicated multivariate interactions, like the one exemplified by the parity function.
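To illustrate the kind of pairwise interaction data the chapter uses to guide model search, the Python sketch below computes the pairwise mutual-information matrix of a selected binary population. How such a matrix is turned into eCGA linkage groups or Bayesian-network edges is the chapter's contribution and is not reproduced here; the toy population is invented.

```python
import numpy as np

def pairwise_mutual_information(pop):
    """Mutual information between every pair of binary variables in a selected population."""
    n, d = pop.shape
    mi = np.zeros((d, d))
    for i in range(d):
        for j in range(i + 1, d):
            for a in (0, 1):
                for b in (0, 1):
                    pij = np.mean((pop[:, i] == a) & (pop[:, j] == b))
                    pi, pj = np.mean(pop[:, i] == a), np.mean(pop[:, j] == b)
                    if pij > 0:
                        mi[i, j] += pij * np.log(pij / (pi * pj))
            mi[j, i] = mi[i, j]
    return mi

# Toy population where variable 0 copies variable 1 and variable 2 is independent.
rng = np.random.default_rng(0)
x1 = rng.integers(0, 2, 500)
pop = np.column_stack([x1, x1, rng.integers(0, 2, 500)])
print(np.round(pairwise_mutual_information(pop), 3))
```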