A low-power Content-Addressable Memory based on clustered-sparse networks

Algorithm and Architecture for a Low-Power Content-Addressable Memory Based on Sparse Clustered Networks

IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2000

We propose a low-power content-addressable memory (CAM) employing a new algorithm for associativity between the input tag and the corresponding address of the output data. The proposed architecture is based on a recently developed sparse clustered network using binary connections that, on average, eliminates most of the parallel comparisons performed during a search. Therefore, the dynamic energy consumption of the proposed design is significantly lower than that of a conventional low-power CAM design. Given an input tag, the proposed architecture computes a few possibilities for the location of the matched tag and performs the comparisons on them to locate a single valid match. TSMC 65 nm CMOS technology was used for simulation purposes. Following a selection of design parameters such as the number of CAM entries, the energy consumption and the search delay of the proposed design are 8% and 26%, respectively, of those of the conventional NAND architecture, with a 10% area overhead. A design methodology based on silicon-area and power budgets and on performance requirements is also discussed.
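
The candidate-reduction idea described in this abstract can be pictured with a small behavioural model. The sketch below is only a hedged software analogy, not the authors' hardware: the tag is split into sub-fields, each sub-field activates one "neuron" per cluster, binary connections map activated neurons to the CAM rows that could hold the tag, and only the rows surviving the intersection are compared in full. The class name SCNCamModel and parameters such as n_clusters are illustrative assumptions.

```python
# Minimal Python sketch (not the authors' RTL) of the candidate-reduction idea.
class SCNCamModel:
    def __init__(self, tag_bits=16, n_clusters=4):
        assert tag_bits % n_clusters == 0
        self.tag_bits = tag_bits
        self.n_clusters = n_clusters
        self.field_bits = tag_bits // n_clusters
        self.rows = []                                  # stored tags, index = row address
        # connections[c][neuron] = set of row indices connected to that neuron
        self.connections = [dict() for _ in range(n_clusters)]

    def _fields(self, tag):
        mask = (1 << self.field_bits) - 1
        return [(tag >> (c * self.field_bits)) & mask for c in range(self.n_clusters)]

    def write(self, tag):
        row = len(self.rows)
        self.rows.append(tag)
        for c, neuron in enumerate(self._fields(tag)):
            self.connections[c].setdefault(neuron, set()).add(row)
        return row

    def search(self, tag):
        # Intersect the rows connected to every activated neuron -> few candidates.
        candidates = None
        for c, neuron in enumerate(self._fields(tag)):
            linked = self.connections[c].get(neuron, set())
            candidates = linked if candidates is None else candidates & linked
            if not candidates:
                return None                              # early mismatch, no full compare
        # Full comparison only on the surviving candidates (instead of every row).
        for row in candidates:
            if self.rows[row] == tag:
                return row
        return None

cam = SCNCamModel()
addr = cam.write(0xBEEF)
assert cam.search(0xBEEF) == addr
assert cam.search(0x1234) is None
```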

An Analysis of Algorithm and Architecture for Low-Power Content Addressable Memory

Extended versions are presented that elaborate the effect of the design's degrees of freedom and the effect of non-uniformity of the input patterns on energy consumption and performance. The proposed architecture is based on a recently refined sparse clustered network using binary connections that, on average, eliminates most of the parallel comparisons performed during a search. Given an input tag, the proposed architecture computes a few possibilities for the location of the matched tag and performs the comparisons on them to locate a single valid match; in addition, by using a reordered overlapped search mechanism, most mismatches can be detected by searching only a few bits of the search word. Following a selection of design parameters, such as the number of CAM entries, the energy consumption and the search delay of the proposed design are 8% and 26%, respectively, of those of the conventional NAND architecture, with a 10% area overhead.
Keywords: associative memory, content-addressable memory (CAM), low-power computing, recurrent neural networks, binary connections.

Algorithm and Architecture for a Low-Power Content Addressable Memory Based on Sparse Compression Technique

Extended versions are presented that elaborate the effect of the design's degrees of freedom and the effect of non-uniformity of the input patterns on energy consumption and performance. The proposed architecture is based on a recently refined sparse clustered network using binary connections that, on average, eliminates most of the parallel comparisons performed during a search. Given an input tag, the proposed architecture computes a few possibilities for the location of the matched tag and performs the comparisons on them to locate a single valid match. In addition, by using a reordered overlapped search mechanism, most mismatches can be found by searching only a few bits of the search word. Following a selection of design parameters, such as the number of CAM entries, the energy consumption and the search delay of the proposed design are 8% and 26%, respectively, of those of the conventional NAND architecture, with a 10% area overhead.
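
The reordered overlapped search mentioned above can be pictured as an early-terminating comparison. The sketch below is a rough software analogy under assumed parameters, not the paper's circuit: it compares a short leading segment of each stored word first and finishes the comparison only for the few rows that survive; the segment width and bit ordering used here are illustrative.

```python
# Hedged sketch of the early-termination idea behind a reordered/overlapped search.
def overlapped_search(stored_words, search_word, width, segment=4):
    seg_mask = ((1 << segment) - 1) << (width - segment)   # leading `segment` bits
    matches = []
    for addr, word in enumerate(stored_words):
        if (word ^ search_word) & seg_mask:                # most mismatches caught here
            continue
        if word == search_word:                            # full compare only when needed
            matches.append(addr)
    return matches

table = [0b1010_1100, 0b0110_0011, 0b1010_0001]
print(overlapped_search(table, 0b1010_0001, width=8))      # -> [2]
```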

A Survey on Different Techniques and Approaches for Low Power Content-Addressable Memory Architectures

2018

This paper presents a survey of current trends in low-power content-addressable memory (CAM) architectures. CAMs are tailored to the requirements of high-speed, low-power table-lookup functions and are especially popular in network routers. A CAM is a special type of memory with comparison circuitry: it stores lookup-table data and searches it within a single clock cycle. A large amount of power is consumed during the comparison process because of the parallel circuitry, so low-power CAM architectures are designed to reduce power by reducing the number of comparisons. In this paper, we survey architectures for reducing dynamic power in CAM design, reviewing seven different methods at the architectural level.

Design of Content Addressable Memory Based on Sparse Cluster Network using Load Store Queue Technique

In all types of electronic circuits, optimization of power and delay is a major topic of interest. In current VLSI technology, a CAM is the hardware embodiment of what in software terms would be called an associative array. A content-addressable memory architecture based on a sparse clustered network (SCN), using the load-store queue (LSQ) technique, is proposed; on average, it eliminates most of the parallel comparisons performed during a search. The SCN-based CAM with the LSQ technique is designed to match the given input data against the stored values. Each stored datum is identified by a particular tag. When input data are presented together with a tag, a search is performed among the stored data: when a match occurs between the input and a stored value, the output presents the given input data, and when a mismatch occurs, no output is produced. The CAM operation is faster than that of other memories. Thus, power consumption and delay are reduced by using the SCN-CAM with the LSQ technique.
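
A minimal behavioural sketch of the match/mismatch behaviour described above: each stored entry is identified by a tag, and a search returns the stored data on a tag match and nothing on a mismatch. The dictionary-based model and the example entries are purely illustrative assumptions, not the proposed design.

```python
# Behavioural model only: one dictionary entry stands in for one CAM row.
def cam_lookup(entries, tag):
    """entries: dict mapping tag -> stored data; None models 'no output' on a mismatch."""
    return entries.get(tag)

lsq_entries = {0x1A: "load @0x1000", 0x2B: "store @0x2000"}
print(cam_lookup(lsq_entries, 0x1A))   # match -> "load @0x1000"
print(cam_lookup(lsq_entries, 0x3C))   # mismatch -> None
```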

Content Addressable Memory with Efficient Power Consumption and Throughput

Abstract: Content-addressable memory (CAM) is a hardware table that can store and search data. A CAM has considerable power consumption because of its parallel-comparison feature, in which a large number of transistors are active on each lookup. Thus, robust, high-speed, low-power sense amplifiers are highly sought after in CAM designs. In this paper, we introduce a modified parity-bit matching scheme that reduces delay at the cost of some power overhead. The modified design minimizes the search time by matching the stored bits starting from the most significant bit instead of matching all the data present in the row. Furthermore, we propose an effective gated-power technique to decrease the peak and average power consumption and enhance the robustness of the design against process variation. Index terms: CAM, parity CAM, ATM controller, VPI/VCI.
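
The two mechanisms named here, the parity-bit pre-check and the most-significant-bit-first comparison, can be sketched behaviourally. The code below is an assumed software analogy rather than the proposed circuit; "power gating" is modelled simply as stopping the comparison at the first mismatching bit.

```python
# Sketch (under an assumed bit ordering) of a parity prefilter plus MSB-first compare.
def parity(word, width):
    return bin(word & ((1 << width) - 1)).count("1") & 1

def msb_first_match(stored, key, width):
    if parity(stored, width) != parity(key, width):        # parity prefilter
        return False
    for i in range(width - 1, -1, -1):                     # compare from the MSB down
        if ((stored >> i) ^ (key >> i)) & 1:
            return False                                   # "power-gate" the remaining bits
    return True

print(msb_first_match(0b1011_0010, 0b1011_0010, 8))   # True
print(msb_first_match(0b0011_0011, 0b1011_0010, 8))   # False: parity matches, MSB differs
```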

Design of Low Power NAND-NOR Content Addressable Memory (CAM) Using SRAM

Content-addressable memory (CAM) is a type of computer memory used in high-speed search applications. A CAM compares input data against the data stored in memory and returns the address of the matching data. A CAM usually consists of SRAM cells with comparison circuitry that enables search operations to complete in a single clock cycle. Advanced applications require large CAMs, which leads to higher power consumption. In order to reduce the power consumed by the CAM cell, 6T SRAM and single-bit-line SRAM memory circuits are used as the core storage element; the single-bit-line SRAM consumes 46.3 percent less power than the 6T SRAM. This paper discusses the implementation of two different CAM cell architectures, namely the NAND and NOR types. The power consumption of both architectures is analysed, and an efficient NAND-NOR-type CAM cell is implemented to overcome their limitations.

IJERT-Design of High Speed Low Power Content Addressable Memory

International Journal of Engineering Research and Technology (IJERT), 2013

https://www.ijert.org/design-of-high-speed-low-power-content-addressable-memory
https://www.ijert.org/research/design-of-high-speed-low-power-content-addressable-memory-IJERTV2IS110028.pdf

Content-addressable memory (CAM) is frequently used in applications such as lookup tables, databases, associative computing, and networking that require high-speed searches, owing to its ability to improve application performance by using parallel comparison to reduce search time. Although the use of parallel comparison reduces search time, it also significantly increases power consumption. In this paper, we propose a gate-block algorithm to improve the efficiency of low-power precomputation-based CAM (PBCAM) that leads to a 40% sensing-delay reduction at a cost of less than 1% area and power overhead. Furthermore, we propose an effective gated-power technique to reduce the peak and average power consumption and enhance the robustness of the design against process variations. A feedback loop is employed to automatically turn off the power supply to the comparison elements and hence reduce the average power consumption by 64%. The proposed design can work at a supply voltage down to 0.5 V.
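
The precomputation-based screening that a PBCAM performs can be illustrated with a simple model. The sketch below is an assumption: it uses the ones-count of each word as the precomputed parameter and performs the full comparison only for rows whose parameter matches; the gate-block and gated-power circuitry described in the abstract are circuit-level and are not modelled here.

```python
# Hedged sketch of precomputation-based screening (parameter = ones-count).
def ones_count(word):
    return bin(word).count("1")

class PBCamModel:
    def __init__(self):
        self.entries = []                      # (parameter, word) per row

    def write(self, word):
        self.entries.append((ones_count(word), word))
        return len(self.entries) - 1

    def search(self, key):
        key_param = ones_count(key)
        for addr, (param, word) in enumerate(self.entries):
            if param != key_param:             # cheap screen, skips the full compare
                continue
            if word == key:
                return addr
        return None

cam = PBCamModel()
a = cam.write(0b1100_0011)
print(cam.search(0b1100_0011))   # -> stored address
print(cam.search(0b1111_0000))   # parameter matches (4 ones) but full compare fails -> None
```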

Design and Analysis of Content Addressable Memory

Content-addressable memories (CAMs) are high-speed memories used in high-speed networks, lookup tables, and so on. The data to be searched are compared with the data stored in the CAM cells, and the address of the matching cell is returned. The parallel search operation in the memory is the key feature that improves the speed of the search operation in CAM cells. However, this parallel search operation affects the power dissipation, delay, and various other parameters. This paper discusses various low-power CAM cells and analyses their important parameters.

Precharge-Free, Low-Power Content-Addressable Memory

Content-addressable memory (CAM) is the hardware for parallel lookup/search. The parallel search scheme promises a high-speed search operation, but at the cost of high power consumption. NOR- and NAND-type matchline (ML) CAMs are suitable for high-search-speed and low-power-consumption applications, respectively. The NOR-type ML CAM requires high power, and therefore the reduction of its power consumption is the subject of many reported designs. Here, we report and explore the short-circuit (SC) current during the precharge phase of the NOR-type ML. Also proposed here is a novel precharge-free CAM, which is free of the drawbacks of charge sharing in the NAND-type CAM and of the SC current in the NOR-type CAM. Post-layout simulations performed with a 45-nm technology node revealed a significant reduction in the energy metric: 93% and 77% less than the NOR- and NAND-type CAMs, respectively. A 500-run Monte Carlo simulation was performed to ensure the robustness of the proposed precharge-free CAM. Index terms: content-addressable memory (CAM), high-speed search, low power, NAND-type matchline (ML), NOR-type ML, precharge free, short-circuit (SC) current.
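
The NOR- versus NAND-type matchline trade-off referred to above can be summarised behaviourally. The sketch below is not a circuit model and makes no claim about the proposed precharge-free design; it only counts how many cells participate in a comparison, as a rough stand-in for dynamic energy: the NOR matchline evaluates every cell in parallel, while the NAND matchline propagates serially and stops at the first mismatching cell.

```python
# Behavioural sketch only: "cells evaluated" is a rough proxy for dynamic energy.
def nor_ml_match(stored_bits, key_bits):
    # Every cell compares in parallel; the line indicates a match only if none mismatch.
    cells_evaluated = len(stored_bits)
    return all(s == k for s, k in zip(stored_bits, key_bits)), cells_evaluated

def nand_ml_match(stored_bits, key_bits):
    # Serial chain: stop at the first mismatching cell.
    cells_evaluated = 0
    for s, k in zip(stored_bits, key_bits):
        cells_evaluated += 1
        if s != k:
            return False, cells_evaluated
    return True, cells_evaluated

row, key = [1, 0, 1, 1, 0, 0, 1, 0], [1, 1, 1, 1, 0, 0, 1, 0]
print(nor_ml_match(row, key))    # (False, 8)  all cells evaluated
print(nand_ml_match(row, key))   # (False, 2)  chain stops at the mismatch
```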