Parameterized SDRAM-based content-addressable memory on field programmable gate array

Implementation and Design of High Speed FPGA-based Content Addressable Memory

IJSRD, 2013

CAM stands for content addressable memory. It is a special type of computer memory used in very high-speed searching applications. A CAM is a memory that implements the high-speed lookup-table function in a single clock cycle using dedicated comparison circuitry. It is also known as associative memory or associative array, although the latter term is more often used for a programming data structure. Unlike standard computer memory (RAM), in which the user supplies a memory address and the RAM returns the data word stored at that address, a CAM is designed so that the user supplies a data word and the CAM searches its entire memory to see whether that data word is stored anywhere in it. If the data word is found, the CAM returns a list of one or more storage addresses where the word was found. The design coding, simulation, logic synthesis, and implementation are carried out using various EDA tools.
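As an illustration of that RAM-versus-CAM contrast, the following Python sketch (purely behavioral, not part of the paper's HDL design) models a RAM read by address alongside a CAM search by content that returns every matching address.

```python
# Behavioral contrast between RAM and CAM lookups (illustrative only).

def ram_read(memory, address):
    """RAM: the user supplies an address and gets back the stored word."""
    return memory[address]

def cam_search(memory, data_word):
    """CAM: the user supplies a data word and gets back every address where
    that word is stored (empty list if there is no match). In hardware all
    comparisons happen in parallel in one clock cycle; the loop only models
    the functional result."""
    return [addr for addr, word in enumerate(memory) if word == data_word]

memory = [0xDEAD, 0xBEEF, 0xCAFE, 0xBEEF]
print(hex(ram_read(memory, 2)))    # -> 0xcafe
print(cam_search(memory, 0xBEEF))  # -> [1, 3]
```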

D-TCAM: A High-Performance Distributed RAM Based TCAM Architecture on FPGAs

IEEE Access, 2019

Ternary content-addressable memory (TCAM) is a high-speed searching device that searches the entire memory in parallel in deterministic time, unlike random-access memory (RAM), which searches sequentially. A network router classifies and forwards a data packet with the aid of a TCAM that stores the routing data in a table. Field-programmable gate arrays (FPGAs), due to their hardware-like performance and software-like reconfigurability, are widely used in networking systems where TCAM is an essential component. TCAM is not included in modern FPGAs, which leads to the emulation of TCAM using the available resources on the FPGA. Several emulated TCAM designs have been presented, but they lack efficient utilization of the FPGA's hardware resources. In this paper, we present a novel TCAM architecture, the distributed RAM based TCAM (D-TCAM), using D-CAM as a building block. One D-CAM block implements a 48-byte TCAM using 64 lookup tables (LUTs) and is cascaded horizontally and vertically to increase the width and depth of the TCAM, respectively. A sample size of 512 × 144 is implemented on a Xilinx Virtex-6 FPGA, which reduced the hardware utilization by 60% compared to the state-of-the-art FPGA-based TCAMs. Similarly, by exploiting the LUT-flip-flop (LUT-FF) pair nature of Xilinx FPGAs, the proposed TCAM architecture improves throughput by 58.8% without any additional hardware cost.

INDEX TERMS: Content-addressable memory (CAM), field-programmable gate array (FPGA), lookup table (LUT), memory, random-access memory (RAM).

I. INTRODUCTION. Content-addressable memory (CAM) is a special type of associative array that performs parallel searching of the stored rules. It returns the address of the matched rule in one clock cycle [1]. CAMs are classified into two major types: binary CAM (BiCAM), which supports only two memory states ('0' and '1'), and ternary CAM (TCAM), which supports three memory states ('0', '1', and 'X'). The 'X' state is known as the don't-care bit or wildcard. TCAM is widely used in many applications where searching speed is imperative [2], [3], e.g., digital signal processing, artificial intelligence [4], translation lookaside buffers (TLBs) in soft processors [5], cache memory, and gene pattern recognition [6] in bioinformatics. Packet classification and packet forwarding maintain the high throughput of the system using TCAM in networking applications [7].
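For reference, the ternary match semantics ('0', '1', 'X') described above can be modeled with a value/mask pair per rule, as in this small Python sketch (an illustration of the matching rule only, not the D-TCAM hardware).

```python
# Ternary match: a rule matches a key when all *cared-for* bits agree.
# Each rule is (value, mask); mask bit 1 = compare this bit, 0 = 'X' (don't care).

def tcam_lookup(rules, key):
    """Return the index of the first matching rule, or None.
    Hardware searches all rules in parallel; this loop only models priority."""
    for index, (value, mask) in enumerate(rules):
        if (key & mask) == (value & mask):
            return index
    return None

rules = [
    (0b1010_0000, 0b1111_0000),  # matches 1010xxxx
    (0b0000_0001, 0b0000_1111),  # matches xxxx0001
]
print(tcam_lookup(rules, 0b1010_0110))  # -> 0
print(tcam_lookup(rules, 0b0111_0001))  # -> 1
print(tcam_lookup(rules, 0b0111_0010))  # -> None
```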

Algorithmic TCAM on FPGA with data collision approach

Indonesian Journal of Electrical Engineering and Computer Science, 2021

Content addressable memory (CAM) and ternary content addressable memory (TCAM) are specialized high-speed memories for data searching. CAM and TCAM have many applications in network routing, packet forwarding, and Internet data centers. These types of memories have drawbacks in power dissipation and area. As field-programmable gate arrays (FPGAs) are increasingly used for network acceleration applications, the demand to integrate TCAM and CAM on FPGAs is growing. Because most FPGAs do not support native TCAM and CAM hardware, methods of implementing algorithmic TCAM using FPGA resources have been proposed in recent years. Algorithmic TCAM on FPGA has the advantages of the FPGA's low power consumption and high integration scalability. This paper proposes a scalable algorithmic TCAM design on FPGA. The design uses memory blocks to mitigate the power dissipation issue and exploits data collision to save area. The paper also presents a design of a 256 x 104-bit algorithmic TCAM on an Intel Cyclone V FPGA and evaluates the performance and applicability of the design at large scale and in future developments.
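The general idea behind such RAM-based algorithmic TCAMs can be sketched as follows: the search key is split into chunks, each chunk addresses a memory block that outputs a bit-vector of candidate rules, and the vectors are ANDed. The Python below shows only this common emulation scheme; the paper's specific data-collision technique and parameters are not reproduced here.

```python
# Sketch of the common RAM-based (algorithmic) TCAM emulation scheme.
CHUNK_BITS = 4

def build_tables(rules, key_bits):
    """rules: list of (value, mask) ternary rules. Returns one table per chunk,
    where table[chunk_value] is a bitmask of rules accepting that chunk."""
    n_chunks = key_bits // CHUNK_BITS
    tables = [[0] * (1 << CHUNK_BITS) for _ in range(n_chunks)]
    for rule_id, (value, mask) in enumerate(rules):
        for c in range(n_chunks):
            shift = c * CHUNK_BITS
            v = (value >> shift) & ((1 << CHUNK_BITS) - 1)
            m = (mask >> shift) & ((1 << CHUNK_BITS) - 1)
            for chunk in range(1 << CHUNK_BITS):
                if (chunk & m) == (v & m):          # chunk satisfies this rule
                    tables[c][chunk] |= 1 << rule_id
    return tables

def lookup(tables, key):
    """AND the per-chunk match vectors; a set bit means the rule matches."""
    match = (1 << 64) - 1  # assume fewer than 64 rules in this sketch
    for c, table in enumerate(tables):
        match &= table[(key >> (c * CHUNK_BITS)) & ((1 << CHUNK_BITS) - 1)]
    return match

rules = [(0x00AA, 0x00FF), (0x1200, 0xFF00)]        # 16-bit example rules
tables = build_tables(rules, key_bits=16)
print(bin(lookup(tables, 0x37AA)))  # rule 0 matches -> 0b1
print(bin(lookup(tables, 0x12EE)))  # rule 1 matches -> 0b10
```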

Design and Analysis of Content Addressable Memory

Content addressable memory (CAM) is a high-speed memory used in high-speed networks, lookup tables, and so on. The data to be searched is compared with the data stored in the CAM cells, and the address of the cell is returned for the matched data. The parallel search operation in the memory is the key feature that improves the speed of the search operation in CAM cells. However, this parallel search operation has an impact on power dissipation, delay, and various other parameters. This paper discusses various low-power CAM cells and analyzes their important parameters.

Content-addressable memory (CAM) circuits and architectures: A tutorial and survey

Solid-State Circuits, IEEE …, 2006

We survey recent developments in the design of large-capacity content-addressable memory (CAM). A CAM is a memory that implements the lookup-table function in a single clock cycle using dedicated comparison circuitry. CAMs are especially popular in network routers for packet forwarding and packet classification, but they are also beneficial in a variety of other applications that require high-speed table lookup. The main CAM-design challenge is to reduce power consumption associated with the large amount of parallel active circuitry, without sacrificing speed or memory density. In this paper, we review CAM-design techniques at the circuit level and at the architectural level. At the circuit level, we review low-power matchline sensing techniques and searchline driving approaches. At the architectural level we review three methods for reducing power consumption.

Programmable memory blocks supporting content-addressable memory

2000

The Embedded System Block (ESB) of the APEX E programmable logic device family from Altera Corporation includes the capability of implementing content addressable memory (CAM) as well as product term macrocells, ROM, and dual port RAM. In CAM mode each ESB can implement a 32 word CAM with 32 bits per word. In product term mode, each ESB has 16 macrocells built out of 32 product terms with 32 literal inputs. The ability to reconfigure memory blocks in this way represents a new and innovative use of resources in a programmable logic device, requiring creative solutions in both the hardware and software domains. The architecture and features of this Embedded System Block are described.

FPGA Based Architecture for High Performance SRAM Based TCAM for Search Operations

2015

Ternary content addressable memory (TCAM) is a type of memory that can be searched by content rather than by address. It performs high-speed lookup operations within a single clock cycle. However, compared to RAM technology, conventional TCAM circuitry has certain limitations such as limited access speed, low storage capacity, circuit complexity, and high cost. We can therefore exploit the benefits of SRAM by configuring it to behave like a TCAM. The project focuses on a memory architecture based on the hybrid partitioning concept, which emulates TCAM (Ternary Content Addressable Memory) functionality with SRAM. The hybrid partitioned SRAM-based TCAM logically dissects the conventional TCAM table in a hybrid way (row-wise and column-wise) into TCAM sub-tables, which are then processed and mapped onto their corresponding memory units, and a match address is produced. A 64*32 hybrid partitioned SRAM-based TCAM can be implemented on a Xilinx Spartan 3E. The SRAM-based TCAMs offer better search performance...
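As a rough, assumption-based illustration of how hybrid (row-wise plus column-wise) partitioning translates into memory requirements, the following Python helper computes the total SRAM bits for a given partitioning. The specific mapping assumed here (each column slice addressing an SRAM whose words hold one match bit per rule in the row partition) is for the sketch only and may differ from the paper's design.

```python
# Rough sizing sketch for hybrid (row-wise + column-wise) partitioning of a
# TCAM table emulated with SRAM (illustrative assumptions, see lead-in).

def sram_bits_required(depth, width, row_parts, col_slice_bits):
    """Total SRAM bits to emulate a (depth x width) TCAM split into
    `row_parts` row partitions and column slices of `col_slice_bits` bits."""
    rules_per_part = depth // row_parts
    col_slices = width // col_slice_bits
    bits_per_subtable = (2 ** col_slice_bits) * rules_per_part
    return row_parts * col_slices * bits_per_subtable

# The 64x32 example from the abstract, with assumed partitioning parameters:
print(sram_bits_required(depth=64, width=32, row_parts=2, col_slice_bits=8))
# -> 2 * 4 * 256 * 32 = 65536 bits
```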

Review on Performance Analysis of Content Addressable Memory Search Mechanisms

We survey recent techniques utilized in the construction of high-throughput, low-energy content addressable memory (CAM). A CAM is a memory that performs the lookup-table operation in a single clock cycle using dedicated comparison circuitry. CAMs are particularly popular in network routers for packet forwarding and packet classification, but they are also useful in a variety of other applications that require high-speed table lookup. The rapidly growing size of routing tables brings a design challenge: the main CAM design challenge is to reduce the power consumption associated with the large amount of parallel active circuitry without sacrificing speed or memory density. In this paper, CAM searchline design techniques at the circuit level for reducing power consumption are reviewed and presented.

Content Addressable Memory

Content addressable memory (CAM) is a memory unit that performs content matching in a single clock cycle instead of address-based access. CAMs are widely used in network routers and cache controllers, where the basic lookup-table function is performed over all the stored memory contents, with high power dissipation. There is a trade-off between power consumption, area, and speed. A robust, low-power, high-speed sense amplifier is required in the memory design. In this paper, a parity bit is used to reduce the peak and average power consumption and enhance the robustness of the design against process variation. The proposed method is a reordered overlapped mechanism used to reduce power consumption. In this mechanism, the word circuit is split into two sections that are searched sequentially. The main CAM challenge is to reduce the power consumption associated with the large amount of parallel processing without sacrificing speed or memory density.
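The two-stage idea, where a small portion of the word is searched first and the wider comparison only fires for surviving candidates, can be modeled roughly as below (Python; the segment width and the way the parity bit is used are assumptions made for illustration, not taken from the paper).

```python
# Illustrative model of a segmented, two-stage CAM search: a cheap first
# comparison (a parity bit plus a short leading segment) filters out most
# words, so the wider second-stage comparison is only activated for the few
# candidates that survive, lowering switching power.

def parity(x):
    return bin(x).count("1") & 1

def segmented_cam_search(words, key, seg_bits=4):
    seg_mask = (1 << seg_bits) - 1
    matches, second_stage_activations = [], 0
    for addr, word in enumerate(words):
        # Stage 1: parity bit and low segment (small, low-power comparators).
        if parity(word) != parity(key) or (word & seg_mask) != (key & seg_mask):
            continue
        # Stage 2: remaining bits, only evaluated for surviving candidates.
        second_stage_activations += 1
        if word >> seg_bits == key >> seg_bits:
            matches.append(addr)
    return matches, second_stage_activations

words = [0x1234, 0xABCD, 0x1235, 0x1234]
print(segmented_cam_search(words, 0x1234))  # -> ([0, 3], 2)
```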

FPGA Implementation of Content Addressable Memory

To reduce the power dissipation in circuits, a reversible logic design is implemented. Reversible logic design is one of the main low-power techniques. In the proposed design, the address decoder is designed using the basic reversible logic gates, the Fredkin gate and the Peres gate. The encoder is designed using the Fredkin and Feynman gates. The use of the Peres gate in the proposed design reduces the quantum cost and power dissipation of the decoder. The content addressable memory architecture will be realized on an FPGA.
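For readers unfamiliar with these gates, the following Python sketch gives truth-functional models of the Feynman (CNOT), Fredkin (controlled swap), and Peres gates; it only illustrates their logic functions and reversibility, not the paper's decoder and encoder circuits.

```python
from itertools import product

# Truth-functional models of the reversible gates named above. Each gate is a
# bijection on its input bits, which is what makes the logic "reversible".
# Illustrative models only, not the paper's decoder/encoder circuits.

def feynman(a, b):            # CNOT: passes a through, XORs it onto b
    return a, a ^ b

def fredkin(c, a, b):         # controlled swap: exchanges a and b when c = 1
    return (c, b, a) if c else (c, a, b)

def peres(a, b, c):           # outputs (a, a XOR b, (a AND b) XOR c)
    return a, a ^ b, (a & b) ^ c

# Reversibility check: all 8 distinct inputs map to 8 distinct outputs.
assert len({peres(*bits) for bits in product((0, 1), repeat=3)}) == 8
print(fredkin(1, 0, 1))  # -> (1, 1, 0)
print(feynman(1, 1))     # -> (1, 0)
```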