Digital very-large-scale integration (VLSI) Hopfield neural network implementation on field programmable gate arrays (FPGA) for solving constraint satisfaction problems
Related papers
A New Method to Solve the Constraint Satisfaction Problem Using the Hopfield Neural Network
The constraint satisfaction problem consists of several condition formulas, which makes it difficult to solve. In this paper, using the Hopfield neural network, a new method is proposed to solve the constraint satisfaction problem by simplifying its condition formula. In this method, all restriction conditions of a constraint satisfaction problem are divided into two restrictions: restriction I and restriction II. In the processing step, restriction II is satisfied by setting its value to 0, and the value of restriction I is always driven in the decreasing direction. The optimum solution is obtained when the values of the energy, restriction I, and restriction II become 0 at the same time. To verify the validity of the proposed method, we apply it to two typical constraint satisfaction problems: the N-queens problem and the four-coloring problem. The simulation results show that the optimum solution can be obtained with high speed and a high convergence rate. Moreover, compared ...
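As a rough illustration of this style of energy-minimization search, the sketch below encodes the N-queens problem as a penalty energy over a binary board and lowers it with single-neuron (asynchronous) flips; a valid placement is reached exactly when the energy drops to 0. The split into row/column terms and diagonal terms is only a stand-in for the paper's restriction I / restriction II decomposition, all names are hypothetical, and plain greedy descent of this kind can stall in local minima that the paper's method is designed to avoid.

```python
import numpy as np

def nqueens_energy(x, A=1.0, B=1.0):
    """Penalty energy for an N x N binary board x.

    Row/column terms penalize anything other than exactly one queen per
    row and column; diagonal terms penalize two queens on a shared
    diagonal. The grouping is an illustrative stand-in for the paper's
    restriction I / restriction II.
    """
    n = x.shape[0]
    rows = np.sum((x.sum(axis=1) - 1) ** 2)
    cols = np.sum((x.sum(axis=0) - 1) ** 2)
    diag = 0.0
    for d in range(-n + 1, n):
        s1 = np.trace(x, offset=d)             # one "\" diagonal
        s2 = np.trace(np.fliplr(x), offset=d)  # one "/" diagonal
        diag += s1 * (s1 - 1) + s2 * (s2 - 1)
    return A * (rows + cols) + B * diag

def greedy_descent(n=8, iters=50000, seed=0):
    """Asynchronous single-cell flips accepted only if the energy does
    not increase; energy 0 means a valid N-queens placement."""
    rng = np.random.default_rng(seed)
    x = (rng.random((n, n)) < 0.1).astype(float)
    e = nqueens_energy(x)
    for _ in range(iters):
        i, j = rng.integers(n), rng.integers(n)
        x[i, j] = 1 - x[i, j]                  # tentative flip
        e_new = nqueens_energy(x)
        if e_new <= e:
            e = e_new                          # keep the move
        else:
            x[i, j] = 1 - x[i, j]              # undo a worsening move
        if e == 0:
            break
    return x, e
```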
A wide variety of combinatorial problems can be viewed as Weighted Constraint Satisfaction Problems (WCSPs). All resolution methods have exponential time complexity on large instances. Moreover, they combine several techniques and use a wide variety of concepts and notations that are difficult to understand and implement. In this paper, we model this problem as an original 0-1 quadratic program subject to linear constraints. This model is validated by a theorem that we state and prove. To evaluate its performance, we use the Hopfield neural network (HNN) to solve the obtained model based on an original energy function. To validate our model, we solve several instances of benchmark WCSPs. Our approach has the same memory complexity as the HNN and the same time complexity as the Euler-Cauchy method. In this regard, our approach finds the optimal solution of these instances.
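A minimal sketch of the general idea, assuming a 0-1 quadratic objective E(x) = -1/2 x^T W x - b^T x and continuous Hopfield dynamics integrated with the explicit Euler (Euler-Cauchy) method; the paper's actual WCSP-to-quadratic encoding and energy function are not reproduced here, and all names and parameters are illustrative.

```python
import numpy as np

def hopfield_euler(W, b, steps=5000, dt=0.01, tau=1.0, gain=5.0, seed=0):
    """Continuous Hopfield dynamics  du/dt = -u/tau + W x + b,
    x = sigmoid(gain * u), integrated with the explicit Euler method.

    W and b are assumed to encode a 0-1 quadratic objective
    E(x) = -1/2 x^T W x - b^T x; the paper's WCSP encoding is not
    reproduced here.
    """
    rng = np.random.default_rng(seed)
    u = rng.normal(scale=0.1, size=W.shape[0])
    for _ in range(steps):
        x = 1.0 / (1.0 + np.exp(-gain * u))
        u = u + dt * (-u / tau + W @ x + b)    # one Euler-Cauchy step
    x = 1.0 / (1.0 + np.exp(-gain * u))
    return (x > 0.5).astype(int)               # round to a 0-1 assignment
```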
Architecture Analysis of an FPGA-Based Hopfield Neural Network
Interconnections between electronic circuits and neural computation have been a strongly researched topic in the machine learning field in order to approach several practical requirements, including decreasing training and operation times in high performance applications and reducing cost, size, and energy consumption for autonomous or embedded developments. Field programmable gate array (FPGA) hardware shows some inherent features typically associated with neural networks, such as, parallel processing, modular executions, and dynamic adaptation, and works on different types of FPGA-based neural networks were presented in recent years. This paper aims to address different aspects of architectural characteristics analysis on a Hopfield Neural Network implemented in FPGA, such as maximum operating frequency and chip-area occupancy according to the network capacity. Also, the FPGA implementation methodology, which does not employ multipliers in the architecture developed for the Hopfield neural model, is presented, in detail.
An Optimal Implementation on FPGA of a Hopfield Neural Network
Advances in Artificial Neural Systems, 2011
The associative Hopfield memory is a form of recurrent Artificial Neural Network (ANN) that can be used in applications such as pattern recognition, noise removal, information retrieval, and combinatorial optimization problems. This paper presents the implementation of a Hopfield Neural Network (HNN) parallel architecture on an SRAM-based FPGA. The main advantage of the proposed implementation is its high performance and cost effectiveness: it requires O(1) multiplications and O(log N) additions, whereas most other implementations require O(N) multiplications and O(N) additions.
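The O(log N) addition count suggests a balanced (pairwise) reduction tree in which all additions of one level run in parallel. The sketch below shows only that dataflow in software form; how the design avoids multipliers, for example by sign-selecting +w or -w for bipolar neuron states before the tree, is our assumption rather than a detail taken from the paper.

```python
def adder_tree_sum(terms):
    """Balanced pairwise reduction: with one adder per pair and all
    pairs of a level evaluated in parallel in hardware, the sum of N
    terms takes O(log N) sequential addition stages. This shows the
    dataflow only, not the FPGA design itself.
    """
    level = list(terms)
    while len(level) > 1:
        nxt = [level[i] + level[i + 1] for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            nxt.append(level[-1])   # odd element passes to the next level
        level = nxt
    return level[0] if level else 0
```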
PROCEEDINGS OF INTERNATIONAL CONFERENCE ON ADVANCES IN MATERIALS RESEARCH (ICAMR - 2019), 2020
The Hopfield Neural Network (HNN) is a type of neural network that depends strongly on energy minimization of the solution. Although the HNN has managed to solve various optimization problems, its output suffers from a lack of interpretability and variation, which has severely limited its practical usability for logic programming. Inspired by random neuron perturbation, the Estimation of Distribution Algorithm (EDA) has been proposed to explore various optimal neuron states. EDAs employ a probabilistic model to sample neuron states in order to move toward various optimal locations of the global minimum energy. In this paper, a new Mutation Hopfield Neural Network (MHNN) is proposed to perform k-Satisfiability programming. Based on the experimental results, the proposed MHNN outperforms the conventional HNN on various performance metrics.
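For intuition, a univariate EDA (UMDA-style) loop over binary neuron states might look like the sketch below, where `fitness` could be the number of satisfied k-SAT clauses or the negative network energy. This is an illustrative stand-in, not the paper's MHNN mutation operator, and every name is hypothetical.

```python
import numpy as np

def umda_neuron_states(fitness, n_neurons, pop=50, elite=10, iters=100, seed=0):
    """Univariate EDA sketch: keep per-neuron probabilities p, sample
    candidate binary neuron states, and re-estimate p from the best
    candidates. `fitness` maps a 0/1 vector to a score to maximize;
    this stands in for the MHNN perturbation mechanism.
    """
    rng = np.random.default_rng(seed)
    p = np.full(n_neurons, 0.5)
    best, best_f = None, -np.inf
    for _ in range(iters):
        cand = (rng.random((pop, n_neurons)) < p).astype(int)
        f = np.array([fitness(c) for c in cand])
        top = cand[np.argsort(f)[-elite:]]
        p = 0.5 * p + 0.5 * top.mean(axis=0)   # smoothed model update
        if f.max() > best_f:
            best_f, best = f.max(), cand[f.argmax()].copy()
    return best, best_f
```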
Solving the N-Queens problem with a binary Hopfield-type network
Biological Cybernetics, 1995
The application of a discrete Hopfield-type neural network to solving an NP-hard optimization problem, the N-Queens Problem (NQP), is presented. The applied network is binary, and at every moment each neuron potential is equal to either 0 or 1. The network can be implemented in asynchronous mode as well as in synchronous mode with n processors running in parallel. In both cases the convergence rate is up to 100%, and the experimental estimate of the average computational complexity is polynomial. Based on the computer simulation results and the theoretical analysis, the proper network parameters are established. The behaviour of the network is explained.
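A minimal sketch of asynchronous binary (0/1) Hopfield dynamics of this kind is given below. The weight matrix W and thresholds theta would have to encode the NQP constraints with the parameter values the paper establishes, which are not reproduced here; all names are illustrative.

```python
import numpy as np

def run_binary_hopfield(W, theta, x0, max_sweeps=1000, seed=0):
    """Asynchronous binary (0/1) Hopfield dynamics: neurons are visited
    in random order and set to 1 when their local field exceeds the
    threshold, until a sweep produces no change (a stable state).
    W and theta are assumed to encode the problem constraints.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=int)
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(x)):
            new = 1 if W[i] @ x - theta[i] > 0 else 0
            if new != x[i]:
                x[i], changed = new, True
        if not changed:
            break                              # fixed point reached
    return x
```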
Convergence and stability study of Hopfield's neural network for linear programming
1992
Parameters that affect the stability and convergence of the Hopfield model were identified by simulation. The Hopfield model used to solve optimization problems was defined by an analog electrical circuit. The authors show that, by introducing one additional amplifier, convergence with good stability can be obtained. It is shown that convergence and stability can be obtained without oscillations. This novel model was used to solve a linear programming problem. Some results are presented.
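For orientation, a software analogue of Hopfield-style dynamics for linear programming can be written as gradient descent on a penalty energy, as in the hedged sketch below. This is a generic Tank/Hopfield-style relaxation, not the specific extra-amplifier circuit analyzed in the paper, and all names are illustrative.

```python
import numpy as np

def hopfield_lp(c, A, b, steps=20000, dt=1e-3, penalty=50.0, seed=0):
    """Gradient-style relaxation for  min c^T x  s.t.  A x <= b, x >= 0,
    descending the penalty energy
        E(x) = c^T x + (penalty / 2) * || max(0, A x - b) ||^2
    with explicit Euler steps.
    """
    rng = np.random.default_rng(seed)
    x = rng.random(A.shape[1])
    for _ in range(steps):
        viol = np.maximum(0.0, A @ x - b)      # constraint violations
        grad = c + penalty * (A.T @ viol)      # dE/dx
        x = np.maximum(0.0, x - dt * grad)     # Euler step, clipped to x >= 0
    return x
```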
A HOPFIELD-TYPE NEURAL NETWORK BASED MODEL FOR TEMPORAL CONSTRAINTS
International Journal on Artificial Intelligence Tools, 2004
In this paper we present an approximation method based on a discrete Hopfield neural network (DHNN) for solving temporal constraint satisfaction problems. This method is of interest for problems involving numeric and symbolic temporal constraints, where a solution satisfying the constraints of the problem needs to be found within a given deadline. More precisely, the method has the ability to provide a solution whose quality is proportional to the allocated processing time. The quality of the solution corresponds here to the number of satisfied constraints. This property is very important for real-world applications, including reactive scheduling and planning, and also for over-constrained problems where a complete solution cannot be found. An experimental study of the proposed DHNN-based method, in terms of time cost and quality of the solution provided, yields promising results compared to exact methods based on branch and bound and to approximation methods based on stochastic local search.
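An anytime behaviour of this kind can be sketched as a deadline-bounded local search that always returns the best assignment found so far together with its count of satisfied constraints. The code below is only a skeleton under that reading; the paper's temporal-constraint encoding and DHNN update rule are not reproduced, and all names are hypothetical.

```python
import random
import time

def anytime_search(num_vars, constraints, deadline_s, seed=0):
    """Deadline-bounded local search skeleton: flip one variable at a
    time, keep non-worsening assignments, and return the best
    assignment found with its count of satisfied constraints.
    `constraints` is a list of predicates over the assignment.
    """
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(num_vars)]

    def score(a):
        return sum(1 for c in constraints if c(a))

    best, best_score = list(x), score(x)
    start = time.monotonic()
    while time.monotonic() - start < deadline_s:
        i = rng.randrange(num_vars)
        x[i] ^= 1                              # flip one neuron/variable
        s = score(x)
        if s >= best_score:
            best, best_score = list(x), s      # accept and record
        else:
            x[i] ^= 1                          # undo a worsening flip
    return best, best_score
```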