Address Mapping In Content Addressable Memory Interface with A Low Power Approach

A Survey on Different Techniques and Approaches for Low Power Content-Addressable Memory Architectures

2018

This paper presents a survey of current trends adopted in low-power content addressable memory (CAM) architectures. CAMs are tailored to the requirement of high-speed, low-power table look-up and are especially popular in network routers. A CAM is a special type of memory with comparison circuitry: it stores look-up table data and searches it within a single clock cycle. A large amount of power is consumed during the comparison process because of the parallel circuitry, so low-power CAM architectures are designed to reduce power by reducing the number of comparisons. In this paper we survey different architectures for reducing dynamic power in CAM design, reviewing seven different methods at the architectural level.
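
As a rough illustration of the parallel-comparison behaviour that drives CAM power (not taken from any of the surveyed designs), the Python sketch below models a lookup in which every stored word is compared against the search key, and counts the bit comparisons as a crude proxy for dynamic search power. The `cam_search` helper and the four-entry table are hypothetical.

```python
# Minimal behavioural model of a CAM lookup (illustrative only): every stored
# word is compared against the search key on each lookup, and the number of
# bit comparisons is used as a crude proxy for dynamic search power.

def cam_search(table, key):
    """Return the addresses of all matching entries and the number of bit
    comparisons performed (every row is compared on every lookup)."""
    matches = []
    bit_comparisons = 0
    for addr, word in enumerate(table):     # all rows are active on each lookup
        bit_comparisons += len(key)
        if word == key:
            matches.append(addr)
    return matches, bit_comparisons


table = ["1010", "1100", "0111", "1100"]
print(cam_search(table, "1100"))            # ([1, 3], 16): 4 words x 4 bits compared
```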

Content Addressable Memory with Efficient Power Consumption and Throughput

Abstract: Content-addressable memory (CAM) is a hardware table that can search and store data. A CAM has considerable power consumption because of its parallel-comparison feature, in which a large number of transistors are active on each lookup. Thus, robust high-speed and low-power sense amplifiers are highly sought-after in CAM designs. In this paper, we introduce a modified parity-bit matching scheme that leads to a delay reduction at the cost of a power overhead. The modified design minimizes searching time by matching the stored bits from the most significant bit onward instead of comparing all the data present in the row. Furthermore, we propose an effective gated-power technique to decrease the peak and average power consumption and enhance the robustness of the design against process variation. Index terms: CAM, Parity CAM, ATM Controller, VPI/VCI
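
The abstract above gives no circuit details, so the following Python sketch only illustrates the general idea of a one-bit parity pre-check combined with most-significant-bit-first matching that stops at the first mismatch. The `parity` and `search_msb_first` helpers, the table contents, and the bit-comparison count are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of parity-assisted, MSB-first matching: a one-bit parity
# pre-check gates each row, and surviving rows are compared from the most
# significant bit, stopping at the first mismatching bit.

def parity(word):
    return word.count("1") & 1

def search_msb_first(table, key):
    key_parity = parity(key)
    matches, bits_compared = [], 0
    for addr, (word, stored_parity) in enumerate(table):
        if stored_parity != key_parity:      # cheap 1-bit pre-check gates the row
            bits_compared += 1
            continue
        # Compare from the most significant bit and stop at the first mismatch,
        # instead of evaluating every bit of the row.
        for w_bit, k_bit in zip(word, key):
            bits_compared += 1
            if w_bit != k_bit:
                break
        else:
            matches.append(addr)
    return matches, bits_compared


stored = ["1010", "1100", "0111", "1100"]
table = [(w, parity(w)) for w in stored]
print(search_msb_first(table, "1100"))       # ([1, 3], 11) vs 16 bits for a full compare
```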

Optimize Parity Encoding for Power Reduction in Content Addressable Memory

Most memory devices store and retrieve data by addressing specific memory locations. As a result, this access path often becomes the limiting factor for systems that rely on fast memory accesses. The time required to find an item stored in memory can be reduced considerably if the item can be identified for access by its content rather than by its address. A memory that is accessed in this way is called content-addressable memory (CAM). However, due to its parallel-search characteristic, power consumption is always an important concern when designing CAM circuitry: content addressable memories simultaneously compare an input word against all the contents of memory and return the addresses of the matching locations. The main challenge in CAM design is to reduce power while maintaining high speed and low area. Content addressable memory (CAM), or associative memory, is a storage device that can be addressed by its own contents. This paper presents CAM low-power techniques at the architectural level.
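
To make the address-versus-content distinction concrete, here is a minimal sketch (an illustration, not from the paper): a RAM read maps an address to data, while a CAM-style search maps a data value back to the addresses that hold it. The `ram_read` and `content_search` names and the small example memory are hypothetical.

```python
# Tiny illustration: address-based access returns the data at one location,
# whereas content-based access examines every location and returns the
# addresses that hold the requested value.

ram = ["cat", "dog", "dog", "owl"]                    # address -> data

def ram_read(memory, address):
    return memory[address]                            # one location accessed

def content_search(memory, value):
    # every location is examined; the matching addresses are returned
    return [addr for addr, data in enumerate(memory) if data == value]

print(ram_read(ram, 2))             # 'dog'
print(content_search(ram, "dog"))   # [1, 2]
```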

Content Addressable Memory

Content addressable memory (CAM) is a memory unit that performs content matching in a single clock cycle instead of address-based access. CAMs are widely used in network routers and cache controllers, where the basic look-up table function is performed over all the stored memory content, resulting in high power dissipation. There is a trade-off between power consumption, area and speed, and a robust, low-power, high-speed sense amplifier is required in such memory designs. In this paper, a parity bit is used to reduce the peak and average power consumption and enhance the robustness of the design against process variation. In addition, the proposed method uses a reordered overlapped mechanism to reduce power consumption, in which the word circuit is split into two sections that are searched sequentially. The main CAM challenge is to reduce the power consumption associated with the large amount of parallel comparison circuitry without sacrificing speed or memory density.
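
The following Python sketch is a behavioural approximation of the two-section sequential search described above: only rows whose first section matches go on to have their second section compared. The section width, the `split_search` name, and the example table are illustrative assumptions; circuit-level details such as the parity bit and sense amplifiers are not modelled.

```python
# Hedged sketch of a word split into two sections searched sequentially:
# stage 1 compares a short first section of every row, and stage 2 compares
# the remaining bits only for the rows that survived stage 1.

def split_search(table, key, first_width=2):
    k1, k2 = key[:first_width], key[first_width:]
    bits = 0
    survivors = []
    for addr, word in enumerate(table):        # stage 1: short segment only
        bits += first_width
        if word[:first_width] == k1:
            survivors.append(addr)
    matches = []
    for addr in survivors:                     # stage 2: only surviving rows
        bits += len(k2)
        if table[addr][first_width:] == k2:
            matches.append(addr)
    return matches, bits


table = ["1010", "1100", "0111", "1100"]
print(split_search(table, "1100"))   # ([1, 3], 12): mismatching rows stop after 2 bits
```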

IJERT-Design of High Speed Low Power Content Addressable Memory

International Journal of Engineering Research and Technology (IJERT), 2013

https://www.ijert.org/design-of-high-speed-low-power-content-addressable-memory
https://www.ijert.org/research/design-of-high-speed-low-power-content-addressable-memory-IJERTV2IS110028.pdf

Content-addressable memory (CAM) is frequently used in applications such as lookup tables, databases, associative computing, and networking that require high-speed searches, due to its ability to improve application performance by using parallel comparison to reduce search time. Although the use of parallel comparison reduces search time, it also significantly increases power consumption. In this paper, we propose a gate-block algorithm approach to improve the efficiency of low-power precomputation-based CAM (PBCAM), which leads to a 40% sensing delay reduction at a cost of less than 1% area and power overhead. Furthermore, we propose an effective gated-power technique to reduce the peak and average power consumption and enhance the robustness of the design against process variations. A feedback loop is employed to automatically turn off the power supply to the comparison elements and hence reduce the average power consumption by 64%. The proposed design can work at a supply voltage down to 0.5 V.
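
The sketch below illustrates the general precomputation-based CAM (PBCAM) idea referred to above: a small precomputed parameter is compared first, and the power-hungry full comparison is enabled only for rows whose parameter matches. The ones count used here is a common illustrative parameter, not the paper's gate-block function, and the `pbcam_search` helper and table contents are hypothetical.

```python
# Hedged sketch of precomputation-based CAM (PBCAM): each stored word carries a
# small precomputed parameter (here, its ones count); on a search, the parameter
# is compared first and the full word comparison is gated on a parameter match.

def ones_count(word):
    return word.count("1")

def pbcam_search(table, key):
    key_param = ones_count(key)
    matches, full_compares = [], 0
    for addr, (word, param) in enumerate(table):
        if param != key_param:          # cheap first-stage comparison
            continue
        full_compares += 1              # power-hungry full compare gated here
        if word == key:
            matches.append(addr)
    return matches, full_compares


stored = ["1010", "1100", "0111", "1110"]
table = [(w, ones_count(w)) for w in stored]
print(pbcam_search(table, "1100"))      # ([1], 2): only two rows reach the full compare
```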

Design and Analysis of Content Addressable Memory

Content addressable memory (CAM) is a high-speed memory used in high-speed networks, lookup tables and similar applications. The data to be searched is compared with the data stored in the CAM cells, and the address of the cell is returned for the matched data. The parallel search operation in the memory is the key feature that improves the speed of the search operation in CAM cells. However, this parallel search operation has an impact on power dissipation, delay and various other parameters. This paper discusses various low-power CAM cells and analyzes their important parameters.

AN ANALYSIS OF ALGORITHM AND ARCHITECTURE FOR LOW-POWER CONTENT ADDRESSABLE MEMORY

Extended versions are presented that elaborate the effect of the design's degrees of freedom and the effect of non-uniformity of the input patterns on energy consumption and performance. The proposed architecture is based on recently refined sparse clustered networks using binary connections that, on average, eliminate most of the parallel comparisons performed during a search. Given an input tag, the proposed architecture computes a few possible locations of the matching tag and performs comparisons only on them to locate a single valid match; in addition, by using a reordered overlapped search mechanism, most mismatches can be found by searching only a few bits of a search word. Following a selection of design parameters, such as the number of CAM entries, the energy consumption and the search delay of the proposed design are 8% and 26% of those of the conventional NAND architecture, respectively, with a 10% area overhead. Keywords: associative memory, content-addressable memory (CAM), low-power computing, recurrent neural networks, binary connections.
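
As a loose approximation of the candidate-narrowing idea only (not the paper's sparse-clustered-network construction), the sketch below uses a plain dictionary keyed on a short tag prefix to produce a few candidate rows, and only those rows are fully compared. The class name, the two-bit index width, and the example tags are all illustrative assumptions.

```python
# Hedged stand-in for candidate narrowing: a small index yields a few candidate
# locations for an input tag, and only those entries are fully compared. The
# plain dictionary bucket below merely approximates the role the paper assigns
# to sparse clustered networks with binary connections.

from collections import defaultdict

class CandidateFilteredCAM:
    def __init__(self, index_bits=2):
        self.index_bits = index_bits
        self.entries = []                     # tag storage
        self.buckets = defaultdict(list)      # coarse index -> candidate rows

    def write(self, tag):
        addr = len(self.entries)
        self.entries.append(tag)
        self.buckets[tag[:self.index_bits]].append(addr)
        return addr

    def search(self, tag):
        candidates = self.buckets.get(tag[:self.index_bits], [])
        compares = len(candidates)            # only candidate rows are compared
        matches = [a for a in candidates if self.entries[a] == tag]
        return matches, compares


cam = CandidateFilteredCAM()
for t in ["101010", "110011", "011100", "110101"]:
    cam.write(t)
print(cam.search("110011"))   # ([1], 2): only the two "11..." rows are compared
```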

A Comprehensive Review of Energy Efficient Content Addressable Memory Circuits for Network Applications

Journal of Circuits, Systems and Computers, 2016

Content addressable memory (CAM) can perform high-speed table look-up with bit-level masking capability. This feature makes CAMs extremely attractive for high-speed packet forwarding and classification in network routers. High-speed look-up implies that all the CAM word entries are accessed and compared with a search word to find a suitable match in a single clock cycle. This parallel search activity incurs large energy consumption, which needs to be reduced. In this paper, a review of the energy reduction techniques for CAM is presented. A comparative study of some popular techniques has been made with the help of simulations carried out in this work and published results.

Match-Line Division and Control to Reduce Power Dissipation in Content Addressable Memory

IEEE Transactions on Consumer Electronics, 2018

Hardware search engines are widely used in network routers for high-speed look-up and parallel data processing. Content addressable memory (CAM) is such an engine, performing high-speed search at the expense of large energy dissipation. Match-line (ML) power dissipation is one of the critical concerns in designing low-power CAM architectures. NOR MLs make this issue more severe due to the higher number of short-circuit discharge paths during search. In this paper, an ML control scheme is presented that enables dynamic evaluation of a match-line by selectively activating or deactivating ML sections to improve energy efficiency. 128×32-bit memory arrays have been designed in 45-nm CMOS technology and verified under different process-voltage-temperature (PVT) corners and operating frequencies to assess the performance improvements. At a search frequency of 100 MHz under a 1 V supply at 27 °C, the proposed CAM achieves 48.25%, 52.55% and 54.80% reductions in energy per search (EpS) compared to a conventional CAM, an early predict and terminate ML precharge CAM (EPTP-CAM) and an ML selective charging scheme CAM (MSCS-CAM), respectively. ML partitioning also minimizes precharge activity between consecutive searches, reducing the total precharge power of the proposed scheme; an approximate 2.5-times reduction in precharge dissipation is observed relative to the conventional and EPTP schemes. Besides low search power, the proposed design improves the energy-delay product by 42% to 88% over the compared designs.
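
Below is a behavioural sketch of match-line segmentation, assuming two equal sections per word and ignoring all circuit-level precharge and control details: each row evaluates its ML sections in order and stops as soon as one section mismatches, so mismatching rows activate only their first section. The `segmented_ml_search` helper and the example array are hypothetical.

```python
# Hedged behavioural sketch of match-line (ML) segmentation: each row's ML is
# split into sections that are evaluated in order, and the remaining sections
# of a row stay deactivated once one section mismatches. Section count and
# widths are illustrative choices.

def segmented_ml_search(table, key, sections=2):
    width = len(key)
    seg = width // sections
    bounds = [(i * seg, (i + 1) * seg if i < sections - 1 else width)
              for i in range(sections)]
    matches, active_segment_evals = [], 0
    for addr, word in enumerate(table):
        matched = True
        for lo, hi in bounds:
            active_segment_evals += 1          # this ML section is evaluated
            if word[lo:hi] != key[lo:hi]:
                matched = False
                break                          # later sections stay deactivated
        if matched:
            matches.append(addr)
    return matches, active_segment_evals


table = ["10100110", "11001010", "01111100", "11001010"]
print(segmented_ml_search(table, "11001010"))
# ([1, 3], 6): mismatching rows evaluate only their first ML section
```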

A Survey on Various Types of Content Address Memory

Journal of Emerging Technologies and Innovative Research, 2018

We survey recent developments in the design of large-capacity content-addressable memory (CAM). A CAM is a memory that implements the lookup-table function in a single clock cycle using dedicated comparison circuitry. CAMs are especially popular in network routers for packet forwarding and packet classification, but they are also useful in a variety of other applications that require high-speed table lookup. The main CAM design challenge is to reduce the power consumption associated with the large amount of parallel active circuitry, without sacrificing speed or memory density. In this paper, we review CAM design techniques at the circuit level and at the architectural level. At the circuit level, we review low-power match-line sensing schemes and search-line driving approaches. At the architectural level, we review three techniques for reducing power consumption. Keywords: Content Addressable Memory (CAM), Match-line Pipelining, Match-line Sensing, NAND...