A Comprehensive Review of Energy Efficient Content Addressable Memory Circuits for Network Applications

Review on Performance Analysis of Content Addressable Memory Search Mechanisms

We survey recent techniques used in the design of high-throughput, low-energy content-addressable memory (CAM). A CAM is a memory that performs the lookup-table operation in a single clock cycle using dedicated comparison circuitry. CAMs are especially popular in network routers for packet forwarding and packet classification, but they are also useful in a variety of other applications that require high-speed table lookup. The rapidly growing size of routing tables sharpens the main CAM design challenge: reducing the power consumption associated with the large amount of parallel active circuitry without sacrificing speed or memory density. In this paper, circuit-level CAM searchline design techniques for reducing power consumption are reviewed and presented.

A Survey on Different Techniques and Approaches for Low Power Content-Addressable Memory Architectures

2018

This paper presents a survey of current trends in low-power content-addressable memory (CAM) architectures. CAMs are tailored to the requirements of high-speed, low-power table lookup and are especially popular in network routers. A CAM is a special type of memory with comparison circuitry; it stores and searches lookup-table data within a single clock cycle. A large amount of power is consumed during the comparison process because of the parallel circuitry, so low-power CAM architectures are designed to reduce power by eliminating unnecessary comparisons. In this paper, we survey different architectures for reducing dynamic power in CAM design, reviewing seven architectural-level methods for low power.

Content-addressable memory (CAM) circuits and architectures: A tutorial and survey

Solid-State Circuits, IEEE …, 2006

We survey recent developments in the design of large-capacity content-addressable memory (CAM). A CAM is a memory that implements the lookup-table function in a single clock cycle using dedicated comparison circuitry. CAMs are especially popular in network routers for packet forwarding and packet classification, but they are also beneficial in a variety of other applications that require high-speed table lookup. The main CAM-design challenge is to reduce power consumption associated with the large amount of parallel active circuitry, without sacrificing speed or memory density. In this paper, we review CAM-design techniques at the circuit level and at the architectural level. At the circuit level, we review low-power matchline sensing techniques and searchline driving approaches. At the architectural level we review three methods for reducing power consumption.
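The single-cycle lookup-table function described above can be modeled in software. The following minimal Python sketch (an illustrative behavioral model, not a circuit) shows the defining CAM operation: every stored word is compared against the search key "in parallel", and the CAM returns the addresses whose contents match.

```python
# Behavioral sketch of a binary CAM lookup (illustrative model, not hardware).

class BinaryCAM:
    def __init__(self, words):
        self.words = list(words)  # stored entries, one per address

    def search(self, key):
        # In hardware, every row's comparison circuitry evaluates at once;
        # here that parallel compare is modeled as a pass over all rows.
        return [addr for addr, word in enumerate(self.words) if word == key]

cam = BinaryCAM(["0101", "1100", "0101", "1111"])
print(cam.search("0101"))  # -> [0, 2]  (all matching addresses)
print(cam.search("0000"))  # -> []      (no match)
```

In hardware the per-row comparison is what makes every lookup activate a large amount of circuitry simultaneously, which is exactly the power cost the surveyed techniques attack.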

Content Addressable Memory with Efficient Power Consumption and Throughput

Abstract: Content-addressable memory (CAM) is a hardware table that can search and store data. Because of its parallel comparison feature, in which a large number of transistors are active on each lookup, CAM has considerable power consumption. Thus, robust, high-speed, low-power sense amplifiers are highly sought after in CAM designs. In this paper, we introduce a modified parity-bit matching scheme that reduces delay with little power overhead. The modified design minimizes search time by matching the stored bits from the most significant bit onward instead of matching all of the data in a row. Furthermore, we propose an effective gated-power technique to decrease the peak and average power consumption and to enhance the robustness of the design against process variation. Index terms: CAM, parity CAM, ATM controller, VPI/VCI.
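The parity-bit idea above can be sketched in a few lines of Python (a software model under assumed details, not the paper's circuit): a stored parity bit lets rows whose parity differs from the key's be rejected without any full compare, and surviving rows are compared bit by bit from the MSB, stopping at the first mismatch.

```python
# Sketch of parity-filtered, MSB-first matching (illustrative software model;
# the exact row format and control logic are assumptions, not the paper's).

def parity(word):
    return word.count("1") % 2  # even/odd count of 1 bits

def msb_first_match(stored, key):
    if parity(stored) != parity(key):   # cheap one-bit prefilter:
        return False                    # a parity mismatch implies a data mismatch
    for s, k in zip(stored, key):       # MSB-first comparison
        if s != k:
            return False                # stop at the first mismatching bit
    return True

rows = ["10110", "10010", "01110"]
matches = [i for i, row in enumerate(rows) if msb_first_match(row, "10010")]
print(matches)  # -> [1]
```

Note the asymmetry: a parity mismatch guarantees a data mismatch, but a parity match does not guarantee a data match, which is why the surviving rows still need the full comparison.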

Design and Analysis of Content Addressable Memory

Content-addressable memories (CAMs) are high-speed memories used in high-speed networks, lookup tables, and similar applications. The data to be searched are compared with the data stored in the CAM cells, and the address of the cell is returned for the matched data. The parallel search operation is the key feature that improves the speed of search in CAM cells. However, this parallel search has an impact on power dissipation, delay, and various other parameters. This paper discusses various low-power CAM cells and analyzes their important parameters.

IJERT-Design of High Speed Low Power Content Addressable Memory

International Journal of Engineering Research and Technology (IJERT), 2013

https://www.ijert.org/design-of-high-speed-low-power-content-addressable-memory
https://www.ijert.org/research/design-of-high-speed-low-power-content-addressable-memory-IJERTV2IS110028.pdf

Content-addressable memory (CAM) is frequently used in applications that require high-speed searches, such as lookup tables, databases, associative computing, and networking, because its parallel comparison reduces search time and thereby improves application performance. Although parallel comparison reduces search time, it also significantly increases power consumption. In this paper, we propose a gate-block algorithm to improve the efficiency of low-power precomputation-based CAM (PB-CAM) that leads to a 40% sensing-delay reduction at a cost of less than 1% area and power overhead. Furthermore, we propose an effective gated-power technique to reduce the peak and average power consumption and to enhance the robustness of the design against process variations. A feedback loop is employed to auto-turn off the power supply to the comparison elements, reducing the average power consumption by 64%. The proposed design can operate at a supply voltage down to 0.5 V.
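The precomputation idea behind PB-CAM can be illustrated with a short Python model (a sketch under assumed details, not the paper's gate-block circuit): a cheap parameter, here the count of 1 bits, is stored alongside each word, so most rows are excluded by a small parameter comparison before the full parallel comparison is enabled.

```python
# Software sketch of precomputation-based CAM (PB-CAM). The parameter
# function (population count) is a common textbook choice and an assumption
# here, not necessarily the one used in the paper above.

from collections import defaultdict

class PBCAM:
    def __init__(self, words):
        self.words = list(words)
        self.buckets = defaultdict(list)        # parameter value -> addresses
        for addr, w in enumerate(self.words):
            self.buckets[w.count("1")].append(addr)

    def search(self, key):
        candidates = self.buckets[key.count("1")]  # cheap parameter compare
        # Only candidate rows perform the expensive full comparison.
        return [a for a in candidates if self.words[a] == key]

cam = PBCAM(["1010", "0110", "1111", "0001"])
print(cam.search("0110"))  # -> [1]
```

The power saving comes from the fact that rows in other parameter buckets never activate their comparison circuitry at all.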

AN ANALYSIS OF ALGORITHM AND ARCHITECTURE FOR LOW-POWER CONTENT ADDRESSABLE MEMORY

We present extended versions that elaborate the effect of the design's degrees of freedom, and of the non-uniformity of input patterns, on energy consumption and performance. The proposed architecture is based on recently refined sparse clustered networks with binary connections that, on average, eliminate most of the parallel comparisons performed during a search. Given an input tag, the proposed architecture computes a few possible locations for the matched tag and performs comparisons only on those to locate a single valid match; in addition, by using a reordered overlapped search mechanism, most mismatches can be found by searching only a few bits of the search word. For a given selection of design parameters, such as the number of CAM entries, the energy consumption and search delay of the proposed design are 8% and 26% of those of the conventional NAND architecture, respectively, with a 10% area overhead. Keywords: associative memory, content-addressable memory (CAM), low-power computing, recurrent neural networks, binary connections.
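The "most mismatches can be found by searching a few bits" claim can be sketched as follows (an illustrative Python model; the segment width and bit ordering are assumptions, not the paper's exact parameters): a small leading segment of each word is searched first, and only rows that match it go on to the full comparison.

```python
# Sketch of a reordered overlapped search: a few-bit prefilter followed by a
# full compare on the surviving rows. SEG is an assumed illustrative value.

SEG = 3  # width of the first search segment

def overlapped_search(rows, key):
    survivors = [i for i, r in enumerate(rows) if r[:SEG] == key[:SEG]]
    return [i for i in survivors if rows[i] == key]  # full compare on survivors

rows = ["110010", "110011", "001100", "111000"]
print(overlapped_search(rows, "110011"))  # -> [1]
```

With well-chosen (reordered) bits in the first segment, most rows mismatch within those few bits, so the bulk of the comparison circuitry stays inactive on a typical search.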

Content Addressable Memory

Content-addressable memory (CAM) is a memory unit that performs content matching in a single clock cycle instead of addressing. CAMs are widely used in network routers and cache controllers, where the basic lookup-table function is performed over all of the stored memory contents, with high power dissipation. There is a trade-off among power consumption, area, and speed, and a robust, low-power, high-speed sense amplifier is a requisite of the memory design. In this paper, a parity bit is used to reduce the peak and average power consumption and to enhance the robustness of the design against process variation. The proposed method further uses a reordered overlapped mechanism to reduce power consumption: the word circuit is split into two sections that are searched sequentially. The main CAM challenge is to reduce the power consumption associated with the large amount of parallel processing without sacrificing speed or memory density.
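The two-section split described above can be modeled in Python (a behavioral sketch; the 50/50 split point is an assumption): the first section of each word is searched, and only rows that match it activate the second section. Counting bit comparisons makes the saving visible.

```python
# Sketch of a word split into two sequentially searched sections.

def split_search(rows, key):
    half = len(key) // 2
    comparisons = 0          # bit comparisons actually performed
    matches = []
    for i, row in enumerate(rows):
        comparisons += half
        if row[:half] != key[:half]:
            continue                       # second section never activated
        comparisons += len(key) - half
        if row[half:] == key[half:]:
            matches.append(i)
    return matches, comparisons

rows = ["11001010", "00110101", "11000001", "10101010"]
m, c = split_search(rows, "11000001")
print(m, c)  # -> [2] 24
```

Here only 24 of the 32 bit comparisons a flat parallel search would perform are executed, since mismatching rows stop after their first section.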

Match-Line Division and Control to Reduce Power Dissipation in Content Addressable Memory

IEEE Transactions on Consumer Electronics, 2018

Hardware search engines are widely used in network routers for high-speed lookup and parallel data processing. Content-addressable memory (CAM) is such an engine, performing high-speed search at the expense of large energy dissipation. Match-line (ML) power dissipation is one of the critical concerns in designing low-power CAM architectures, and NOR MLs make this issue more severe due to the higher number of short-circuit discharge paths during search. In this paper, a ML control scheme is presented that enables dynamic evaluation of a match-line by effectively activating or deactivating ML sections to improve energy efficiency. 128×32-bit memory arrays have been designed in 45-nm CMOS technology and verified across process-voltage-temperature (PVT) and frequency variations to confirm the performance improvements. At a search frequency of 100 MHz under a 1 V supply at 27 °C, the proposed CAM yields 48.25%, 52.55%, and 54.80% reductions in energy per search (EpS) compared to a conventional CAM, an early-predict-and-terminate ML precharge CAM (EPTP-CAM), and a ML selective charging scheme CAM (MSCS-CAM), respectively. ML partitioning also minimizes precharge activity between subsequent searches, reducing total precharge power in the proposed scheme; an approximate 2.5-times reduction in precharge dissipation is observed relative to the conventional and EPTP schemes. Besides low search power, the proposed design improves the energy-delay product by 42% to 88% over the compared designs.
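Match-line division generalizes the two-section idea to several segments. The following Python sketch (an illustrative model; the segment count is an assumption, not the paper's configuration) evaluates a row's ML section by section and activates a section only if every earlier section of that row matched, which also reduces precharge activity on mismatching rows.

```python
# Sketch of segmented match-line evaluation with dynamic section activation.

def segmented_ml_search(rows, key, sections=4):
    seg = len(key) // sections
    bounds = [(i * seg, (i + 1) * seg if i < sections - 1 else len(key))
              for i in range(sections)]
    evaluated = 0            # number of ML sections actually activated
    matches = []
    for addr, row in enumerate(rows):
        for lo, hi in bounds:
            evaluated += 1
            if row[lo:hi] != key[lo:hi]:
                break                      # later sections stay deactivated
        else:
            matches.append(addr)           # all sections matched
    return matches, evaluated

rows = ["10110100", "10111100", "01010101"]
m, n = segmented_ml_search(rows, "10111100")
print(m, n)  # -> [1] 8
```

An unsegmented design would evaluate all 12 sections (3 rows × 4 sections); here only 8 are activated, since each mismatching row halts at its first failing section.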

Low Power Implementation Of Ternary Content Addressable Memory (TCAM)

International Journal of Engineering and Advanced Technology

In network routers, Ternary Content Addressable Memory (TCAM) [1] based search engines play an important role. TCAM is an improved version of Content Addressable Memory (CAM), used for high-speed and broader search operations. Unlike an ordinary CAM, a TCAM has three logic states: 0, 1, and 'X' (don't care). A TCAM performs a search operation within a single clock cycle, which is why it is called a special type of memory, and quick search is one of its most popular features. To compare the search data with the stored data, the TCAM array operates in parallel at every location. However, high power dissipation is the main disadvantage of TCAM. To overcome this power dissipation, in this paper we propose a low-power TCAM implementation using reversible logic [2]. Reversible logic dissipates less heat than irreversible gates and has ultra-low-power characteristics. It has recently been proved that reversible gates can implement...
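The three-state matching that distinguishes TCAM from ordinary CAM can be sketched in Python (a behavioral model of ternary matching only, not of the reversible-logic circuit this paper proposes): a stored 'X' bit matches either a 0 or a 1, which is what lets a single TCAM entry cover an entire prefix, e.g. a routing-table prefix.

```python
# Behavioral sketch of ternary (0/1/X) matching in a TCAM.

def tcam_match(stored, key):
    # A stored 'X' matches any key bit; 0 and 1 must match exactly.
    return all(s in ("X", k) for s, k in zip(stored, key))

def tcam_search(rows, key):
    # All rows compare in parallel in hardware; a priority encoder would
    # normally pick the lowest matching address.
    return [addr for addr, row in enumerate(rows) if tcam_match(row, key)]

rows = ["1010XXXX",   # covers the whole 1010* prefix with one entry
        "10100001",   # exact entry
        "0XXXXXXX"]
print(tcam_search(rows, "10100001"))  # -> [0, 1]
print(tcam_search(rows, "01111111"))  # -> [2]
```

Every 'X' cell still participates in the parallel compare on each lookup, which is why TCAM's per-search power is even higher than binary CAM's and why low-power TCAM implementations are actively pursued.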