Optimization using neural networks

Neural Networks and Optimization Problems

Citeseer

An optimization problem can be defined as a pair consisting of an objective function and a set of constraints on the function's variables. The goal is to find values of the variables that yield an optimal value of the function (either a minimum or a maximum) while satisfying all the constraints. Over the last decades, an alternative model of computation has been explored, namely the neural network model. It has turned out that several Hopfield-type networks can be employed successfully to provide approximate (near-optimal or even optimal) solutions to hard optimization problems. This is due to their property of reducing an "energy function" as they evolve, leading to a local or global minimum. In this report, the general methodology of the approach is described, as well as the different network models usually employed as optimizers. Then, a case study involving the Minimum Cost Spare Allocation Problem (or, equivalently, Vertex Cover in bipartite graphs) is presented. Finally, experimental results (using a simulation implemented in C) clearly demonstrate the advantages and the limitations of the approach in terms of solution quality and computation time.
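The report's C simulation is not reproduced in the abstract, but the energy-descent behaviour it relies on can be sketched in a few lines. The Python sketch below is a minimal illustration under assumed inputs (the random weight matrix W, bias vector b, and instance size are placeholders, not the report's spare-allocation encoding): a discrete Hopfield network with symmetric, zero-diagonal weights whose asynchronous updates never increase the energy, so it settles into a local minimum.

    import numpy as np

    # Minimal discrete Hopfield energy-descent sketch (illustrative only).
    # Units are bipolar (+1/-1); W is symmetric with zero diagonal, b is a bias vector.
    def energy(s, W, b):
        return -0.5 * s @ W @ s - b @ s

    def hopfield_descend(W, b, s, max_sweeps=100, rng=None):
        rng = rng if rng is not None else np.random.default_rng(0)
        n = len(s)
        for _ in range(max_sweeps):
            changed = False
            for i in rng.permutation(n):           # asynchronous unit updates
                new_si = 1 if W[i] @ s + b[i] >= 0 else -1
                if new_si != s[i]:
                    s[i] = new_si                  # each flip cannot increase the energy
                    changed = True
            if not changed:                        # fixed point = local minimum of E
                break
        return s

    # Tiny random instance just to exercise the routine.
    rng = np.random.default_rng(1)
    n = 8
    A = rng.normal(size=(n, n))
    W = (A + A.T) / 2
    np.fill_diagonal(W, 0.0)
    b = rng.normal(size=n)
    s0 = rng.choice([-1, 1], size=n)
    s_final = hopfield_descend(W, b, s0.copy())
    print("E(start) =", energy(s0, W, b), " E(end) =", energy(s_final, W, b))

For a constrained problem such as vertex cover, the constraints are typically folded into W and b as penalty terms, so that low-energy states correspond to feasible, near-optimal solutions.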

Neural Networks, Artificial Intelligence, and Optimization

Artificial intelligence research is a rapidly growing field. Here we provide a brief survey of artificial neural networks, an essential component of AI research. We briefly introduce the way neural networks work, along with several types of neural networks currently used. Then, we explore an evolutionary method of creating neural networks. Several of these are implemented in silico, and results are analyzed.

System optimization with artificial neural networks: parallel implementation using transputers

[Proceedings 1992] IJCNN International Joint Conference on Neural Networks, 1992

In a recent paper, a neural network with a three-layer feedback topology for solving continuous optimization problems was proposed. In this paper, a parallel implementation of that neural network is presented. The implementation described here uses a transputer system, which makes it possible to solve problems with several variables. Results from this implementation, and comparisons with the results of the sequential implementation, are also presented.

Solving combinatorial optimisation problems using neural networks

University of Melbourne, Melbourne, …, 1996

Task planning in the transport domain is a difficult problem that requires analytical techniques and modelling methods drawn from operations research, distributed artificial intelligence (multi-agent systems), decision analysis, and many other disciplines. Our contribution to this problem consists, on the one hand, of proposing a model of the transport system as a multi-agent system (MAS) based on a classification model of agents, managing our different agent subsystems (supervision subsystem, planning subsystem, ergonomic subsystem) simultaneously while keeping a global structure; and, on the other hand, of deploying a Fuzzy Hopfield Neural Network model to solve the routing and scheduling problems within our planning subsystem. The computational experiments were carried out on an extended set of 300 routing problems with 21 customers. The results demonstrate that the connectionist approach is highly competitive in terms of computing time, providing the best solutions to 56% of all test instances within reasonable computing times. The power of our algorithm is confirmed by the results obtained on 21-customer problems from the literature.

Missile defense and interceptor allocation by neuro-dynamic programming

IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, 2000

The purpose of this paper is to propose a solution methodology for a missile defense problem involving the sequential allocation of defensive resources over a series of engagements. The problem is cast as a dynamic programming/Markovian decision problem, which is computationally intractable by exact methods because of its large number of states and its complex modeling issues. We have employed a neuro-dynamic programming (NDP) framework, whereby the cost-to-go function is approximated using neural network architectures that are trained on simulated data. We report on the performance obtained using several different training methods, and we compare this performance with the optimal.
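The abstract does not spell out the network architectures or training methods used, so the sketch below only illustrates the core NDP idea of fitting a cost-to-go approximator to simulated trajectories. The toy engagement dynamics, cost terms, feature map, and least-squares fit are assumptions made for illustration, not the paper's model.

    import numpy as np

    # Generic neuro-dynamic programming sketch: approximate the cost-to-go J(x)
    # from simulated trajectories of a toy interceptor-inventory process.
    # The dynamics, costs, and features below are illustrative assumptions.
    rng = np.random.default_rng(0)

    def simulate_episode(x0, policy, horizon=10):
        """Roll out one trajectory; return visited states and their cost-to-go."""
        x, states, costs = x0, [], []
        for _ in range(horizon):
            u = policy(x)                           # interceptors committed this stage
            hit = rng.random() < 0.7                # chance the engagement succeeds
            cost = (0.0 if hit else 5.0) + 0.5 * u  # leak-through penalty + resource cost
            states.append(x)
            costs.append(cost)
            x = max(x - u, 0)                       # remaining inventory
            if x == 0:
                break
        ctg = np.cumsum(costs[::-1])[::-1]          # suffix sums = sampled cost-to-go
        return states, ctg

    def features(x):
        return np.array([1.0, x, x * x])            # simple polynomial features

    # Collect samples and fit a linear-in-features cost-to-go approximator.
    X, y = [], []
    for _ in range(500):
        states, ctg = simulate_episode(x0=10, policy=lambda x: min(x, 2))
        for s, c in zip(states, ctg):
            X.append(features(s))
            y.append(c)
    theta, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    print("approximate cost-to-go J(10) =", features(10) @ theta)

In the paper the approximator is a neural network rather than a linear model; in NDP the fitted cost-to-go is then typically used inside a lookahead policy to choose the allocation at each stage.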

Engineering Optimization using Artificial Neural Network

Neural networks are one of the important components of artificial intelligence (AI). A neural network is a computational model for storing and retrieving acquired knowledge. It has been studied for many years in the hope of achieving human-like performance in many fields, such as speech and image recognition as well as information retrieval. ANNs consist of densely interconnected computing units that are simple models of the complex neurons in biological systems. Knowledge is acquired during a learning process and is stored in the synaptic weights of the inter-nodal connections. The main advantage of neural networks is their ability to represent complex input/output relationships. The performance of an ANN is measured by criteria such as the error between target and actual output, the complexity of the ANN, and the training time. The topological structure of an ANN can be characterized by the arrangement of the layers and neurons, the nodal connectivity, and the nodal transfer function. This paper presents the development of a simple graphical user interface in MATLAB that uses a neural network algorithm to predict the output for a set of inputs according to the training examples given. The developed tool is used to predict material removal rate and tool wear, taking speed, feed, and depth of cut from a CNC milling machine as input parameters. The tool is also used to predict the volume and stress of an inboard fixed trailing-edge rib for different combinations of thickness parameters. The predicted results are then compared with results obtained from ANSYS to calculate the error percentage. A higher learning rate is used when predicting from large numbers of data.
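As an illustration of the regression setup described above (the paper's own tool is a MATLAB GUI trained on measured CNC milling data), the sketch below trains a one-hidden-layer network by plain backpropagation on synthetic speed/feed/depth-of-cut data; the network size, learning rate, and data-generating proxy for material removal rate are assumptions.

    import numpy as np

    # One-hidden-layer regression network: (speed, feed, depth of cut) -> MRR.
    # The training data here is synthetic; the paper uses measured milling data.
    rng = np.random.default_rng(0)
    X = rng.uniform([500, 0.05, 0.5], [2000, 0.4, 3.0], size=(200, 3))  # speed, feed, depth
    y = (X[:, 0] * X[:, 1] * X[:, 2] / 1000.0)[:, None]                 # assumed proxy for MRR

    Xn = (X - X.mean(0)) / X.std(0)          # normalize inputs

    H = 8                                    # hidden units
    W1 = rng.normal(0, 0.5, (3, H)); b1 = np.zeros(H)
    W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
    lr = 0.05                                # learning rate

    for epoch in range(2000):
        h = np.tanh(Xn @ W1 + b1)            # forward pass
        pred = h @ W2 + b2
        err = pred - y
        # backpropagate the mean-squared error
        gW2 = h.T @ err / len(y); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)
        gW1 = Xn.T @ dh / len(y); gb1 = dh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

    print("final training MSE:", float((err ** 2).mean()))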

A multi-objective approach for dynamic missile allocation using artificial neural networks for time sensitive decisions

2021

In this study, we develop a new solution approach for the dynamic missile allocation problem of a naval task group (TG). The approach considers the rescheduling of the surface-to-air missiles (SAMs), where a set of them have already been scheduled to a set of attacking anti-ship missiles (ASMs). The initial schedule is mostly inexecutable due to disruptions such as neutralization of a target ASM, detecting a new ASM, and breakdown of a SAM system. To handle the dynamic disruptions while keeping efficiency high, we use a bi-objective model that considers the efficiency of SAM systems and the stability of the schedule simultaneously. The rescheduling decision is time-sensitive, and the amount of information to be processed is enormous. Thus, we propose a novel approach that supplements the decision-maker (DM) in choosing a Pareto optimal solution considering two conflicting objectives. The proposed approach uses an artificial neural network (ANN) that includes an adaptive learning alg...

Artificial Neural Networks Based Optimization Techniques: A Review

Electronics

In the last few years, intensive research has been done to enhance artificial intelligence (AI) using optimization techniques. In this paper, we present an extensive review of artificial neural network (ANN) based optimization algorithm techniques, covering some of the famous optimization techniques, e.g., the genetic algorithm (GA), particle swarm optimization (PSO), artificial bee colony (ABC), and the backtracking search algorithm (BSA), as well as some modern techniques, e.g., the lightning search algorithm (LSA) and the whale optimization algorithm (WOA), and many more. All of these techniques are classified as population-based algorithms, in which the initial population is created randomly. Input parameters are initialized within a specified range, and the algorithms can provide optimal solutions. This paper emphasizes enhancing the neural network via optimization algorithms by manipulating its tuned parameters or training parameters to obtain the best structure network pattern to dissol...
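To make the population-based idea concrete, the sketch below applies a bare-bones particle swarm to tune the weights of a tiny network on a toy curve-fitting task; the network size, PSO coefficients, and objective are illustrative assumptions rather than anything taken from the review.

    import numpy as np

    # Bare-bones PSO tuning the 10 weights of a 1-3-1 network on a toy fit.
    rng = np.random.default_rng(0)
    x = np.linspace(-1, 1, 50)[:, None]
    y = np.sin(3 * x)                                 # assumed target function

    def unpack(p):
        W1 = p[0:3].reshape(1, 3); b1 = p[3:6]
        W2 = p[6:9].reshape(3, 1); b2 = p[9:10]
        return W1, b1, W2, b2

    def loss(p):                                      # fitness = training MSE
        W1, b1, W2, b2 = unpack(p)
        pred = np.tanh(x @ W1 + b1) @ W2 + b2
        return float(((pred - y) ** 2).mean())

    dim, n_particles = 10, 30
    pos = rng.normal(0, 1, (n_particles, dim))        # random initial population
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([loss(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    w, c1, c2 = 0.7, 1.5, 1.5                         # inertia and attraction coefficients
    for it in range(300):
        r1 = rng.random((n_particles, dim)); r2 = rng.random((n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([loss(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]; pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()

    print("best training MSE found by PSO:", pbest_val.min())

The same loop structure carries over to GA, ABC, BSA, LSA, or WOA: only the rule that moves the population from one generation to the next changes.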

Neural techniques for combinatorial optimization with applications

IEEE Transactions on Neural Networks, 1998

After more than a decade of research, there now exist several neural-network techniques for solving NP-hard combinatorial optimization problems. Hopfield networks and self-organizing maps are the two main categories into which most of the approaches can be divided. Criticism of these approaches includes the tendency of the Hopfield network to produce infeasible solutions, and the lack of generalizability of the self-organizing approaches (being only applicable to Euclidean problems). This paper proposes two new techniques which have overcome these pitfalls: a Hopfield network which enables feasibility of the solutions to be ensured and improved solution quality through escape from local minima, and a self-organizing neural network which generalizes to solve a broad class of combinatorial optimization problems. Two sample practical optimization problems from Australian industry are then used to test the performance of the neural techniques against more traditional heuristic solutions.
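For readers unfamiliar with the self-organizing side of this taxonomy, the textbook-style sketch below (not the generalized network proposed in the paper) shows the Euclidean machinery the criticism refers to: a ring of neurons is repeatedly pulled toward randomly presented 2-D city coordinates, which is why classical self-organizing approaches apply only to Euclidean problems.

    import numpy as np

    # Textbook 1-D Kohonen-style SOM on 2-D points (a common TSP heuristic).
    rng = np.random.default_rng(0)
    cities = rng.random((20, 2))                  # assumed city coordinates
    m = 60                                        # ring of neurons
    ring = rng.random((m, 2))

    for t in range(5000):
        lr = 0.8 * (1 - t / 5000)                 # decaying learning rate
        radius = max(1.0, m / 4 * (1 - t / 5000)) # shrinking neighbourhood
        c = cities[rng.integers(len(cities))]     # present one city
        winner = np.argmin(((ring - c) ** 2).sum(axis=1))
        d = np.abs(np.arange(m) - winner)
        d = np.minimum(d, m - d)                  # distance measured along the ring
        h = np.exp(-(d ** 2) / (2 * radius ** 2))[:, None]
        ring += lr * h * (c - ring)               # pull the neighbourhood toward the city

    # The final ring order induces a tour: map each city to its nearest neuron.
    order = np.argsort([np.argmin(((ring - c) ** 2).sum(axis=1)) for c in cities])
    print("tour (city indices):", order)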