Algorithm and Architecture of Fully-Parallel Associative Memories Based on Sparse Clustered Networks

Architecture and implementation of an associative memory using sparse clustered networks

2012 IEEE International Symposium on Circuits and Systems, 2012

Associative memories are alternatives to indexed memories that, when implemented in hardware, can benefit many applications such as data mining. The classical neural-network-based methodology is impractical to implement because, as the size of the memory grows, the number of information bits stored per memory bit (the efficiency) approaches zero. In addition, the length of a stored or retrieved message must equal the number of nodes in the network, which limits the total number of messages the network can store (its diversity). Recently, a novel algorithm based on sparse clustered neural networks has been proposed that achieves nearly optimal efficiency and large diversity. In this paper, a proof-of-concept hardware implementation of these networks is presented, and its limitations and possible future research areas are discussed.
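For readers unfamiliar with the sparse clustered construction, the following is a minimal software sketch of the storage and retrieval principle in the style of Gripon and Berrou's networks: a message is split across clusters of neurons, stored as a binary clique among the selected neurons, and retrieved by per-cluster winner-take-all. The parameters c and l and the iteration count are illustrative assumptions, not values from the paper.

```python
import numpy as np

c, l = 8, 16                       # clusters and neurons per cluster (assumed)
n = c * l
W = np.zeros((n, n), dtype=bool)   # binary interconnection weights

def neuron(cluster, symbol):
    return cluster * l + symbol

def store(message):
    """message: sequence of c symbols, each in range(l)."""
    active = [neuron(i, s) for i, s in enumerate(message)]
    for a in active:               # connect all pairs: one clique per message
        for b in active:
            if a != b:
                W[a, b] = True

def retrieve(partial, iterations=4):
    """partial: list of c symbols; None marks an erased position."""
    act = np.zeros(n, dtype=bool)
    for i, s in enumerate(partial):
        if s is None:
            act[i * l:(i + 1) * l] = True   # every candidate in erased cluster
        else:
            act[neuron(i, s)] = True
    for _ in range(iterations):
        score = np.zeros(n, dtype=int)
        for i in range(c):          # count how many clusters support each neuron
            seg = act[i * l:(i + 1) * l]
            support = W[i * l:(i + 1) * l][seg].any(axis=0)
            score += support
        new_act = np.zeros(n, dtype=bool)
        for i in range(c):          # winner-take-all inside each cluster
            s = score[i * l:(i + 1) * l]
            new_act[i * l:(i + 1) * l] = s == s.max()
        act = new_act
    return [int(np.argmax(act[i * l:(i + 1) * l])) for i in range(c)]

store([3, 7, 1, 0, 9, 2, 5, 4])
print(retrieve([3, 7, None, 0, 9, None, 5, 4]))   # recovers the stored message
```

Because the weights are single bits and the clusters decode in parallel, this structure maps naturally onto the fully-parallel hardware the paper describes.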

Associative Memories Based on Multiple-Valued Sparse Clustered Networks

2014 IEEE 44th International Symposium on Multiple-Valued Logic, 2014

Associative memories are structures that store data patterns and retrieve them given partial inputs. Sparse Clustered Networks (SCNs) are recently introduced binary-weighted associative memories that significantly improve the storage and retrieval capabilities over the prior state of the art. However, deleting or updating the data patterns results in a significant increase in the data retrieval error probability. In this paper, we propose an algorithm to address this problem by incorporating multiple-valued weights for the interconnections used in the network. The proposed algorithm lowers the error rate by an order of magnitude for our sample network with 60% deleted contents. We then investigate the advantages of the proposed algorithm for hardware implementations.
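One plausible reading of the multiple-valued extension, sketched below, replaces each binary interconnection with a small saturating counter, so that deleting a message decrements shared edges instead of erasing them outright. The counter width and parameter names are assumptions for illustration, not details from the paper.

```python
import numpy as np

c, l = 8, 16
n = c * l
W = np.zeros((n, n), dtype=np.uint8)   # multiple-valued weights (assumed 2-bit)
MAX_W = 3                              # saturation limit of the counter

def edges(message):
    active = [i * l + s for i, s in enumerate(message)]
    return [(a, b) for a in active for b in active if a != b]

def store(message):
    for a, b in edges(message):
        W[a, b] = min(W[a, b] + 1, MAX_W)

def delete(message):
    for a, b in edges(message):
        if W[a, b] > 0:
            W[a, b] -= 1   # an edge shared with another stored message
                           # survives, unlike in the binary network

# Retrieval can treat any nonzero weight as a connection (W > 0),
# reusing the binary SCN decoding procedure unchanged.
```

In a binary-weighted SCN, deleting a message must either leave stale edges behind or destroy edges that other messages rely on; a counter per edge avoids both failure modes at the cost of a few extra bits of storage.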

Reduced-complexity binary-weight-coded associative memories

2013

Associative memories retrieve stored information given partial or erroneous input patterns. Recently, a new family of associative memories based on Clustered Neural Networks (CNNs) was introduced that can store many more messages than classical Hopfield Neural Networks (HNNs). In this paper, we propose hardware architectures of such memories for partial or erroneous inputs. The proposed architectures eliminate winner-take-all modules, reducing hardware complexity by consuming 65% fewer FPGA lookup tables and increasing the operating frequency by approximately 1.9 times compared to previous work.
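A hedged sketch of one way a winner-take-all module can be eliminated: when the number of known input clusters is fixed, each neuron's score can be compared against that fixed maximum rather than searched for a per-cluster maximum. The function below is illustrative, not the paper's exact architecture.

```python
import numpy as np

def select_without_wta(scores, known_clusters):
    """scores: per-neuron scores for one cluster (length l)."""
    # Keep a neuron only if it is supported by every known cluster.
    # This fixed-threshold test replaces max-finding comparator trees,
    # which dominate the lookup-table cost of a WTA circuit.
    return scores == known_clusters

print(select_without_wta(np.array([2, 7, 7, 0]), known_clusters=7))
# -> [False  True  True False]
```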

Selective decoding in associative memories based on Sparse-Clustered Networks

2013 IEEE Global Conference on Signal and Information Processing, 2013

Associative memories are structures that can retrieve previously stored information given a partial input pattern instead of an explicit address, as in indexed memories. A few hardware approaches have recently been introduced for a new family of associative memories based on Sparse-Clustered Networks (SCNs) that show attractive features. These architectures are suitable for implementations with low retrieval latency, but are limited to small networks that store a few hundred data entries. In this paper, a new hardware architecture of SCNs is proposed that features a new data-storage technique as well as a method we refer to as Selective Decoding (SD-SCN). The SD-SCN has been implemented on an FPGA similar to that used in the previous efforts and achieves two orders of magnitude higher capacity, with no error-performance penalty but at the cost of a few extra clock cycles per data access.

MCA: A Developed Associative Memory Using Multi-Connect Architecture

International Journal of Intelligent Information Processing, 2011

Although the Hopfield neural network is one of the most commonly used neural network models for auto-association and optimization tasks, it has several limitations: limited pattern capacity, local-minimum problems, a limited tolerable noise ratio, retrieval of the inverse of stored patterns, and shifting and scaling problems. This research proposes a multi-connect architecture (MCA) associative memory that improves on the Hopfield neural network by modifying the network architecture and the learning and convergence processes. This modification increases the performance of the associative memory by avoiding most of the Hopfield network's limitations. In general, MCA is a single-layer neural network for auto-association tasks that works in two phases, learning and convergence. MCA was developed based on two principles. First, the smallest possible net size is used, rather than a size dictated by the pattern size. Second, learning is performed only on limited parts of the pattern, to avoid learning similar parts several times. The experiments performed show promising results: MCA provides a highly efficient associative memory that avoids most of the Hopfield net's limitations. The results show that the MCA net can learn and recognize an unlimited number of patterns of varying sizes with an acceptable noise rate, in comparison to the traditional Hopfield neural network.
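For context, here is a minimal classical Hopfield associative memory of the kind MCA aims to improve on. It illustrates the inverse-pattern limitation the abstract mentions: whenever a pattern x is stable under the dynamics, its negation -x is stable too. The patterns and sizes below are illustrative.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product learning; patterns are rows of +/-1 values."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)      # no self-connections
    return W / n

def recall(W, x, steps=20):
    for _ in range(steps):      # synchronous sign updates
        x = np.sign(W @ x)
        x[x == 0] = 1
    return x

pats = np.array([[1, -1, 1, -1, 1, 1, -1, -1],
                 [1, 1, -1, -1, 1, -1, 1, -1]])
W = train_hopfield(pats)
noisy = pats[0].copy(); noisy[0] = -noisy[0]
print(recall(W, noisy))         # recovers pats[0]
print(recall(W, -pats[0]))      # the inverse pattern is equally stable
```

The limited capacity the abstract refers to is also visible in this construction: reliable recall of random patterns is only possible up to roughly 0.14n patterns for n nodes, which is what motivates alternatives such as MCA.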

Implementation of an Associative Memory using a Restricted Hopfield Network

2021

An analog restricted Hopfield network is presented in this paper. It consists of two layers of nodes, visible and hidden, connected by directional weighted paths forming a bipartite graph with no intralayer connections. An energy (Lyapunov) function is derived to show that the proposed network converges to stable states. By introducing hidden nodes, the proposed network can be trained to store patterns and has increased memory capacity. When trained as an associative memory, simulation results show that it outperforms a classical Hopfield network, providing more reliable recall when the input is noisy.
Keywords—Associative memory, Hopfield network, Lyapunov function, Restricted Hopfield network.
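Below is a minimal discrete sketch of the bipartite recall dynamics described here, assuming alternating sign() updates between the visible and hidden layers; the paper's network is a continuous-time analog circuit, so this is only an illustration of the structure, and the weights used are illustrative.

```python
import numpy as np

def recall_bipartite(W, v, steps=10):
    """W: hidden x visible weight matrix; v: +/-1 visible state."""
    for _ in range(steps):
        h = np.sign(W @ v)        # visible -> hidden pass
        h[h == 0] = 1
        v = np.sign(W.T @ h)      # hidden -> visible pass
        v[v == 0] = 1
    return v

rng = np.random.default_rng(1)
W = rng.choice([-1.0, 1.0], size=(6, 8))   # illustrative random weights
v0 = rng.choice([-1.0, 1.0], size=8)
print(recall_bipartite(W, v0))             # settles to a stable visible state
```

With no intralayer connections, each alternating pass can only lower a BAM-style energy E = -h^T W v, which is the discrete analogue of the Lyapunov argument the paper makes for its analog dynamics.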

ASSOCIATIVE MEMORY IMPLEMENTATION WITH ARTIFICIAL NEURAL NETWORKS

The first described ANN integrated circuit implemented a continuous-time analog circuit for AM. The design used a 22 x 22 matrix with 20,000 transistors, averaging 40 transistors per node, to implement a Hopfield AM network. The design faced a scalability challenge at higher levels of integration. The paper advocates handling larger problems with a collection of smaller networks or hierarchical solutions, while predicting that "Significantly different connection technologies" would be essential for success in larger systems.

A Neural Net Associative Memory for Real-Time Applications

Neural Computation - NECO, 1990

A parallel hardware implementation of the associative memory neural network introduced by Hopfield is described. The design utilizes the Geometric Arithmetic Parallel Processor (GAPP), a commercially available single-chip VLSI general-purpose array processor consisting of 72 processing elements. The ability to cascade these chips allows large arrays of processors to be easily constructed and used to implement the Hopfield network. The memory requirements and processing times of such arrays are analyzed based on the number of nodes in the network and the number of exemplar patterns. Compared with other digital implementations, this design yields significant improvements in runtime performance and offers the capability of using large neural network associative memories in real-time applications.
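As a rough illustration of the kind of analysis the paper performs, the back-of-the-envelope cost model below counts weight storage and multiply-accumulate operations as functions of the number of nodes n, the number of exemplar patterns M, and the number of processing elements P. These are standard Hopfield-network costs, not figures taken from the paper.

```python
def hopfield_costs(n, M, P):
    """Standard Hopfield cost estimates for an n-node network on P PEs."""
    weights = n * n            # weight-matrix entries to store
    train_macs = M * n * n     # Hebbian outer-product accumulation
    update_macs = n * n        # one synchronous update of all n nodes
    return {
        "weight_storage": weights,
        "training_macs": train_macs,
        "macs_per_update_per_PE": update_macs / P,
    }

# e.g. one 72-node network mapped onto a single 72-PE GAPP chip
print(hopfield_costs(n=72, M=10, P=72))
```

The quadratic growth of both storage and per-update work is what makes the cascadable PE array attractive: adding chips scales P with n, keeping per-PE work linear.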

VLSI Implementation of a High-Capacity Neural Network Associative Memory

1990

In this paper we describe the VLSI design and testing of a high-capacity associative memory which we call the exponential correlation associative memory (ECAM). The prototype 3-µm CMOS programmable chip is capable of storing 32 memory patterns of 24 bits each. The high capacity of the ECAM is partly due to the use of special exponentiation neurons, which are implemented via sub-threshold MOS transistors in this design. The prototype chip is capable of performing one associative recall in 3 µs.
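A hedged software sketch of the ECAM recall rule follows: each stored pattern votes with a weight exponential in its correlation with the input, mirroring the role of the exponentiation neurons. The base a = 2, the iteration count, and the noise level in the demo are illustrative assumptions.

```python
import numpy as np

def ecam_recall(patterns, x, a=2.0, steps=5):
    """patterns: M x n matrix of +/-1 rows; x: +/-1 input vector."""
    for _ in range(steps):
        corr = patterns @ x              # correlation with each stored pattern
        weights = np.power(a, corr)      # the exponentiation "neurons"
        x = np.sign(weights @ patterns)  # weighted vote over patterns
        x[x == 0] = 1
    return x

rng = np.random.default_rng(0)
pats = rng.choice([-1, 1], size=(32, 24))   # 32 patterns of 24 bits, as on chip
noisy = pats[0] * rng.choice([1, -1], size=24, p=[0.95, 0.05])
print((ecam_recall(pats, noisy) == pats[0]).all())   # typically True for mild noise
```

The exponential weighting sharply suppresses the votes of weakly correlated patterns, which is what lets the ECAM store far more patterns than a linear-correlation (Hopfield-style) memory of the same size.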