Simulated Annealing Research Papers - Academia.edu

Particle Swarm Optimization (PSO) is a popular heuristic search algorithm inspired by the social learning of bird flocks and fish schools. It is a swarm intelligence technique for optimization developed by Eberhart and Kennedy [1] in 1995. Inertia weight is an important parameter in PSO, which significantly affects convergence and the exploration-exploitation trade-off during the PSO process. Since the inception of the inertia weight in PSO, a large number of variations of the inertia weight strategy have been proposed. In order to propose one or more ...
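The inertia-weight velocity update the abstract discusses can be sketched as follows; this is a minimal illustration, with a linearly decreasing weight schedule as one common strategy. The function names, acceleration coefficients, and weight bounds are illustrative, not taken from the paper.

```python
import random

def pso_step(positions, velocities, pbest, gbest, w, c1=2.0, c2=2.0):
    """One PSO update: inertia term w*v plus cognitive and social pulls."""
    for i in range(len(positions)):
        for d in range(len(positions[i])):
            r1, r2 = random.random(), random.random()
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * r1 * (pbest[i][d] - positions[i][d])
                                + c2 * r2 * (gbest[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]

def linear_inertia(t, t_max, w_start=0.9, w_end=0.4):
    """Linearly decreasing inertia weight, a widely used schedule:
    large w early (exploration), small w late (exploitation)."""
    return w_start - (w_start - w_end) * t / t_max
```

A larger w preserves momentum from the previous velocity, favoring exploration; decreasing it over the run shifts the swarm toward exploitation around the best-known positions.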

A genetic algorithm and a simulated annealing approach are presented for guiding a cellular automaton toward optimal configurations. The algorithm is applied to a problem of groundwater allocation in a rectangular area consisting of adjacent land blocks and modeled as a cellular automaton. The new algorithm is compared to a more conventional genetic algorithm, and its efficiency is clearly demonstrated. A comparison is also made to a simulated annealing scheme. Finally, the proposed genetic algorithm is combined with simulated annealing to yield a new hybrid. The presented cell-based algorithm differs from related algorithms in the literature, as it relies on local interactions among land blocks. Moreover, it offers a framework for application to more general and detailed problems.

This paper explores the use of simulated annealing (SA) for solving arbitrary combinatorial optimisation problems. It reviews an existing code called GPSIMAN for solving 0-1 problems, and evaluates it against a commercial branch-and-bound code, OSL ...

School timetabling is a classical combinatorial optimization problem, which consists in assigning lessons to time slots while satisfying a set of constraints of various kinds. Due mostly to these constraints, the problem falls in the category of NP-complete problems. In this paper we describe an implementation of a decision support system that solves real timetabling problems from various schools in Portugal. This implementation is based on the Simulated Annealing meta-heuristic. The constraints we use were obtained from inquiries made to several schools in Portugal. We show results for three schools from different levels of teaching.
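The Simulated Annealing core underlying solvers like this one can be sketched generically: worse neighbors are accepted with probability exp(-delta/T), and the temperature T is gradually cooled. The cost, neighbor, and cooling-schedule choices below are placeholders, not the paper's actual constraint model.

```python
import math
import random

def simulated_annealing(init, cost, neighbor, t0=100.0, alpha=0.95, iters=2000):
    """Generic SA loop: always accept improving moves; accept worsening
    moves with probability exp(-delta/T), where T cools geometrically."""
    current, best = init, init
    c_cur = c_best = cost(init)
    t = t0
    for _ in range(iters):
        cand = neighbor(current)
        delta = cost(cand) - c_cur
        if delta <= 0 or random.random() < math.exp(-delta / max(t, 1e-300)):
            current, c_cur = cand, c_cur + delta
            if c_cur < c_best:
                best, c_best = current, c_cur
        t *= alpha  # geometric cooling schedule
    return best, c_best
```

For timetabling, `cost` would typically be a weighted penalty over violated constraints and `neighbor` a small perturbation such as moving one lesson to a different time slot.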

Finding the optimal locations for facilities is a central problem in logistics management. The optimal places for distribution centres (DCs) can be selected based on attributes that are crucial for identifying the best possible locations, thereby increasing the speed of the facility service, reducing overall transport cost and time, and providing the best service. Identifying the required number of DCs and locating them optimally are important goals in the design of any logistics network. The number of DCs clearly depends on many factors, such as population, capacity of the facility, and type of facility, but locating the DCs optimally reduces the overall cost. For such a wide problem space, soft computing based approaches are powerful tools that are well suited to finding a meaningful solution in finite time. In this work, we find the optimum locations of DCs for logistics using various soft computing methods.

Owing to the wide utilization of heat exchangers in industrial processes, their cost minimization is an important target for both designers and users. Traditional design approaches are based on iterative procedures which gradually change the design and geometric parameters to satisfy a given heat duty and constraints. Although well proven, this kind of approach is time consuming and may not lead to a cost-effective design, as no cost criteria are explicitly accounted for. The present study explores the use of a nontraditional optimization technique, called simulated annealing (SA), for design optimization of shell-and-tube heat exchangers from an economic point of view. The optimization procedure involves the selection of the major geometric parameters such as tube diameter, tube length, baffle spacing, number of tube passes, tube layout, type of head, baffle cut, etc., and minimization of total annual cost is considered as the design target. The presented simulated annealing technique is simple ...

In this paper, we present an improved hybrid optimization algorithm, which was applied to a hard combinatorial optimization problem, the quadratic assignment problem (QAP). This is an extended version of the earlier hybrid heuristic approach proposed by the author. The new algorithm is distinguished by further exploiting the idea of hybridizing two well-known efficient heuristic algorithms, namely simulated annealing (SA) and tabu search (TS). An important feature of our algorithm is the so-called “cold restart mechanism”, which is used to avoid possible “stagnation” of the search. This strategy resulted in very good solutions obtained during simulations with a number of QAP instances (test data). These solutions show that the proposed algorithm outperforms both the “pure” SA/TS algorithms and the author's earlier combined SA and TS algorithm. Key words: hybrid optimization, simulated annealing, tabu search, quadratic assignment problem, simulat...
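A cold-restart mechanism of the kind the abstract describes might be sketched as follows: when no improvement is seen for a fixed number of steps, the search restarts from the best solution found so far at the initial temperature. The stagnation threshold and restart policy here are assumptions for illustration; the paper's exact mechanism may differ.

```python
import math
import random

def sa_with_cold_restart(init, cost, neighbor, t0=10.0, alpha=0.98,
                         iters=5000, stagnation_limit=300):
    """SA with a cold restart: if `stagnation_limit` steps pass without
    improving the best solution, jump back to it and reheat to t0."""
    current, best = init, init
    c_cur = c_best = cost(init)
    t, since_improve = t0, 0
    for _ in range(iters):
        cand = neighbor(current)
        delta = cost(cand) - c_cur
        if delta <= 0 or random.random() < math.exp(-delta / max(t, 1e-300)):
            current, c_cur = cand, c_cur + delta
        if c_cur < c_best:
            best, c_best, since_improve = current, c_cur, 0
        else:
            since_improve += 1
        if since_improve >= stagnation_limit:
            # stagnation detected: restart from the incumbent, reheated
            current, c_cur, t, since_improve = best, c_best, t0, 0
        t *= alpha
    return best, c_best
```

Reheating restores the search's ability to escape the basin around the incumbent, which is what makes a restart different from simply continuing a cooled, effectively greedy search.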

The irregular shape packing problem is a combinatorial optimization problem that consists of arranging items in a container in such a way that no items overlap. In this paper we adopt a solution that places the items sequentially, touching the already placed items or the container. To place a new item without overlaps, the collision-free region for the new item is robustly computed using non-manifold Boolean operations. A simulated annealing algorithm controls the sequence of placement and each item's position and orientation. In this work, placement occurs at the collision-free region's vertices. Several results with benchmark datasets obtained from the literature are reported; some of them are the best yet reported in the literature. To improve the computational performance of the algorithm, a parallelization method for determining the collision-free region is proposed. We demonstrate two possible algorithms to compute the collision-free region, only one of which can be parallelized. The results show that the parallelized version is better than the sequential approach only for datasets with a very large number of items. The computational cost of the non-manifold Boolean operation algorithm depends strongly on the number of vertices of the original polygons.

In this paper we study a model to estimate the probability that a target group of an advertising campaign is reached by a commercial message a given number of times. This contact frequency distribution is known to be computationally difficult to calculate because of dependence between the viewing probabilities of advertisements. Our model calculates good estimates of contact frequencies in a very short time based on data that is often available. A media planning model that optimizes effective reach as a function of contact frequencies demonstrates the usefulness of the model. Several local search procedures such as tabu search, simulated annealing and genetic algorithms are applied to find a good media schedule. The results show that local search methods are flexible, fast and accurate in finding media schedules for media planning models based on contact frequencies. The contact frequency model is a potentially useful new tool for media planners.

Abstract—Estimation accuracy has been the focus of various formal estimation models in recent research initiatives. Formal estimation models were developed to measure lines of code and function points in software projects, but most of them failed to improve estimation accuracy. This paper focuses on the concept of reusability in software development when estimating effort using an artificial neural network. Incorporating reusability metrics in COCOMO II may yield better results. In COCOMO II it is very difficult to find the values of the size parameters. A new model called COREAN is proposed in this paper for better effort estimation accuracy and reliability. The proposed model focuses on two components of COCOMO II. First, instead of using the RUSE cost driver, three new reuse cost drivers are introduced. Second, in order to reduce the project cost, three cost drivers, PEXE, AEXE, and LTEX, are combined into a single cost driver, Personnel Experience (PLEX). Final...

This paper aims to model a consumer-goods cross-docking problem, which is solved using metaheuristics to minimize makespan and determine the capacity in terms of inbound and outbound docks. The consumer-goods cross-docking problem is represented through inbound and outbound docks, customer orders (products to be delivered to customers), and metaheuristics as a solution method. Simulated annealing (SA) and particle swarm optimization (PSO) are implemented to solve the cross-docking problem. Based on the results of statistical analysis, it was identified that the two-way interaction effects between inbound and outbound docks, between outbound docks and items, and between items and metaheuristics have the most statistically significant effects on the response variable. The best solution provides the minimum makespan of 973.42 minutes considering nine inbound docks and twelve outbound docks. However, this study detected that the combination of six inbound docks and nine outbound docks represents the most efficie...

In this paper, an enhanced ant colony optimization (EACO) is proposed for the capacitated vehicle routing problem. The capacitated vehicle routing problem is to service customers with known demands using a homogeneous fleet of fixed-capacity vehicles starting from a depot. It plays a major role in the field of logistics and belongs to the class of NP-hard problems. It is therefore difficult to solve the capacitated vehicle routing problem directly, as the number of solutions increases exponentially with the number of serviced customers. The framework of this paper is to develop an enhanced ant colony optimization for the capacitated vehicle routing problem that takes advantage of both simulated annealing and ant colony optimization. In the proposed algorithm, simulated annealing provides a good initial solution for ant colony optimization. Furthermore, an information-gain-based ant colony optimization is used to improve the search performance. Computational results show that the proposed algorithm is superior to the original ant colony optimization and simulated annealing algorithms, reported separately, on fourteen small-scale instances and twenty large-scale instances.
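One simple way to let an SA solution seed ant colony optimization, in the spirit of the coupling described above, is to deposit extra pheromone on the edges of the SA tour when initializing the pheromone matrix. The function below is a hypothetical sketch; the base level and boost factor are assumptions, not values from the paper.

```python
def seed_pheromones(n_nodes, sa_tour, base=1.0, boost=5.0):
    """Build an ACO pheromone matrix, biased toward the edges of a
    tour found by simulated annealing (tour given as a node order)."""
    tau = [[base] * n_nodes for _ in range(n_nodes)]
    for i in range(len(sa_tour)):
        a = sa_tour[i]
        b = sa_tour[(i + 1) % len(sa_tour)]  # wrap around to close the tour
        tau[a][b] = tau[b][a] = base * boost
    return tau
```

Ants constructing routes then sample edges with probability proportional to pheromone (times a heuristic desirability term), so the SA tour's edges start out more attractive without being mandatory.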

The Capacitated Lot Sizing Problem (CLSP) consists of planning the lot sizes of multiple items over a planning horizon with the objective of minimizing setup and inventory holding costs. In each period in which an item is produced, a setup cost is incurred. Capacity is limited and ...

Specification tests using stochastic bottleneck models of airport congestion investigate whether dominant airlines internalize or ignore self-imposed delays at twenty-seven major US airports. Data on flight times determine the airport's landing and takeoff delays for every minute of operation during peak travel days. Dynamic congestion functions based on stochastic-queuing theory separately identify delays that aircraft experience directly, impose internally on their airline's other aircraft, or impose externally on other airlines. Specification tests largely reject internalization and fail to reject non-internalization by dominant airlines. Optimal pricing should value all time using non-dominant aircraft time values and treat all delays as external.