Sheikh Wasmir Hussain | Indian Institute of Information Technology Guwahati
Papers by Sheikh Wasmir Hussain
IET Circuits, Devices & Systems
Content addressable memory (CAM) is a hardware search engine utilised for accelerating translation and table look-up in network routers and data processing systems. This article proposes a NAND-NOR match-line (ML) based CAM architecture with the main goals of elevating search performance and energy efficiency. A competent ML control unit (MLCU) is introduced to provide a short discharge path for the output match-line after processing the ML sections. In this architecture, tag mismatch based on memory traces is utilised (in NAND-MLs) to deactivate redundant NOR-MLs in an attempt to reduce the overall ML switching activity. Based on the decision of the NAND-ML partition, the MLCU restores the charge to reduce ML glitches during the evaluation phase. The match-line delay of the proposed 64×32-bit hybrid CAM is 366.90 ps in a standard 45-nm technology at 1 V, which represents 56.51% and 72.55% reductions compared to a conventional CAM and a segmented CAM, respectively. Reduction in precharge power and search power of the presented CAM leads to a 6× enhancement in power-delay product over existing hybrid CAMs. The proposed CAM can operate down to low supply voltages, dissipating only 0.10 fJ/bit/search at 0.5 V. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
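The hybrid NAND-NOR match-line idea in the abstract above can be sketched behaviourally: a short NAND segment is evaluated first, and the power-hungry NOR segment is activated only when the NAND segment matches. This is a minimal illustrative model; the segment width, function names and example words are assumptions, not parameters from the paper.

```python
# Behavioural sketch of a hybrid NAND-NOR match-line search.
# A mismatch in the short NAND segment keeps its ML low, so the
# downstream NOR segment is never precharged or evaluated.

def hybrid_ml_search(stored, key, nand_width=8):
    nor_activations = 0   # how many NOR-MLs actually switched
    matches = []
    for address, word in enumerate(stored):
        # NAND segment: any single bit mismatch gates off the row.
        if word[:nand_width] != key[:nand_width]:
            continue
        nor_activations += 1  # NOR-ML precharged and evaluated
        if word[nand_width:] == key[nand_width:]:
            matches.append(address)
    return matches, nor_activations


table = ["101010101111",  # full match
         "101010100000",  # NAND segment matches, NOR segment mismatches
         "000000001111"]  # NAND segment mismatches: NOR-ML never switches
matches, nor_evals = hybrid_ml_search(table, "101010101111")
print(matches, nor_evals)  # -> [0] 2
```

Only two of the three NOR-MLs switch, which is the source of the switching-activity savings the architecture targets.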
2019 IEEE 6th International Conference on Engineering Technologies and Applied Sciences (ICETAS)
Content addressable memory (CAM) is used as a hardware search engine in lookup-intensive applications. Despite its high-speed operation, power consumption is the major limitation in CAM design. In this work, a compact CAM cell employing high-speed comparison and evaluation is presented for a segmented NOR match-line (ML) with reduced ML capacitance. The non-pipelined CAM architecture minimizes the switching activity of precharge and search at the cost of the negligible overhead of an ML precharge and decision block. A 64×32-bit macro is realized using 45-nm CMOS technology. Post-layout simulations at 1.1 V show that the proposed architecture achieves a 203.60 ps search time while dissipating only 1.07 fJ/bit/search. Consequently, it leads to 71.24% and 84.85% energy-delay-product reductions over a conventional CAM and a local-NOR global-NAND CAM, respectively. The proposed CAM, based on a two-segment configuration, delivers ML power reductions of 12.26%-62.27% over the two CAMs. The proposed design is capable of operating efficiently at relatively low supply voltage and at the worst process corner.
2020 IEEE Region 10 Symposium (TENSYMP)
Content addressable memory (CAM) is used in applications requiring high-speed lookup operations. Search operations in a CAM are prone to frequent switching of the large capacitive match-line (ML), which causes significant energy dissipation. In this paper, we present a pseudo-static master-slave ML (PSMSML) architecture to increase the energy efficiency of MLs. In the proposed approach, the core cell employs a transmission-gate comparison and a single-transistor evaluation, while the pseudo-static ML is partitioned into master and slave nodes to minimize charge variation. The proposed 64×32-bit macro achieves sustainable search speed and dissipates only 0.96 fJ/bit/search under a 1.1-V supply. In addition to increasing energy efficiency by 30%-44%, the proposed CAM also reduces energy-delay product by around 9% and 29% when compared to a gated-power ML sensing design and an existing high-performance MSML design, respectively.
2019 International Conference on Intelligent Computing and Control Systems (ICCS)
Content addressable memory (CAM) is widely used in applications that require fast memory access. CAM is fast because the arrangement of its core cells allows parallel access: it searches the entire memory array in a single clock cycle. This parallel access gives CAM a crucial role in network applications. In this paper, we explore and describe CAM architectures based on N-type, P-type and TG-type comparison circuitry. For faster memory search, we use a latch-type match-line (ML) sensing scheme. Search operations and results of the CAM designs have been verified using a 45-nm CMOS process at 27 °C with a 1 V supply in the SPECTRE simulator. Based on the post-layout results of 32×32-bit arrays, the P-CAM provides the best overall performance.
2019 IEEE 2nd International Conference on Electronics and Communication Engineering (ICECE), 2019
Content addressable memory (CAM) is a hardware search engine used to speed up look-up operations in various applications. A CAM implementation requires several input and output driver circuits for writing, searching and matching. This paper presents the required I/O processing peripherals with all necessary circuits, their dedicated functionality and discussions for completing a CAM architecture. The row decoder (address input decoder) activates a specific row in the array so that the bitline drivers can store predefined data at the specified CAM cells. The search driver fetches user-defined data and passes it on to the parallel search lines of each location of the CAM array. A column of sense amplifiers associated with the match-lines (MLs) of each location processes the voltage change on the MLs during a CAM search. A priority encoder (address output encoder) is used at the last stage to encode the address of the matching location; priority is usually applied in case of multiple matches to decide the best match location. These encoders and decoders should work in synchrony with the related drivers for CAM read-in (write) and read-out (search/matching) operations in their respective cycles. This overview of driver circuits for CAM should serve as a convenient reference for memory designers when integrating the proper choice of peripheral circuits in a memory chip.
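The write and search paths described above (row decoder, bitline drivers, search drivers, sense amplifiers, priority encoder) can be modelled behaviourally. The sketch below, with illustrative names only, captures the data flow: writes go to a decoded row, a search compares every row in parallel (modelled here as a loop), and the priority encoder reports the lowest matching address.

```python
# Behavioural sketch of a binary CAM write/search path.

class BinaryCAM:
    def __init__(self, depth, width):
        self.width = width
        self.array = [None] * depth  # None = unwritten row

    def write(self, address, word):
        # Row decoder selects `address`; bitline drivers store `word`.
        assert len(word) == self.width
        self.array[address] = word

    def search(self, key):
        # Search drivers broadcast `key`; every row compares in parallel,
        # and its match-line (ML) asserts on a full match.
        match_lines = [row == key for row in self.array]
        # Priority encoder: report the lowest matching address, if any.
        for address, ml in enumerate(match_lines):
            if ml:
                return address
        return None  # no match


cam = BinaryCAM(depth=8, width=4)
cam.write(2, "1010")
cam.write(5, "1010")  # duplicate entry: priority encoder picks row 2
print(cam.search("1010"))  # -> 2
print(cam.search("0000"))  # -> None
```

In hardware all rows evaluate simultaneously; the loop over match-lines stands in for the parallel array plus the priority-encoder stage.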
2019 7th International Conference on Smart Computing & Communications (ICSCC), 2019
A novel pre-charge mechanism is presented to design an energy-efficient content addressable memory (CAM) with improved search performance. This scheme reduces the match-line (ML) power dissipation by lowering the loading capacitance and the ML voltage swing. A 256×64-bit NOR-based CAM has been designed incorporating this scheme in a predictive 45-nm CMOS technology, and post-layout simulations are carried out in SPECTRE at a 1 V supply. This scheme reduces the energy dissipation by 38% compared to traditional precharging, with a 4.7% improvement in search speed.
2019 IEEE 2nd International Conference on Electronics and Communication Engineering (ICECE), 2019
Low energy dissipation and high-speed matching are two goals in the design of content addressable memories (CAMs). An efficient matchline (ML) controller is proposed in this work to achieve a low-power hybrid CAM while maintaining acceptable search performance. This is applied in the implementation of a 128×32-bit CAM using 45-nm CMOS technology. The short and quickly discharged ML path of the proposed controller enables the CAM to provide 58.56% and 13.11% improvements in search speed over a conventional NAND-based CAM and a hybrid-type CAM, respectively. The shorter NAND-ML controlling the longer NOR-ML is able to suppress not only mismatched but also partially matched NOR-MLs, so that power is saved during the search operation. While the average power consumption is only 3.93 µW, 30.69% less than the low-power NAND-CAM, the 32-bit word produces the ML state in a 3.58 ns search time, leading to energy-delay improvements of 1.21, 2.16 and 4.95 times over the existing hybrid-type, NOR-ML and NAND-ML CAMs, respectively. The energy metric of the proposed design, obtained as 0.17 fJ/bit/search under a 1 V supply, is the lowest among the compared designs.
2019 IEEE International Symposium on Smart Electronic Systems (iSES) (Formerly iNiS), 2019
Content addressable memory (CAM) is a hardware search engine used for high-speed searching and accessing. The utilization of CAM is still constrained in some applications by its large power dissipation; in particular, the search operation contributes the largest share of the overall power dissipation. This paper proposes a low-power CAM cell using a split-controlled single-ended storage (SCSES) technique to achieve an energy-efficient hardware search engine. A 64×32-bit CAM array is designed using GPDK 45-nm CMOS technology, and post-layout simulations are performed in the SPECTRE simulator to verify the power efficiency. The proposed design achieves a search result in 0.558 ns with an energy efficiency of 0.83 fJ/bit/search at 1.0 V.
2016 International Conference on Information Technology (ICIT), 2016
This work aims to develop a pulsed-OFDM transmission system, which is used in a wide range of wireless communications. It is noteworthy that the system is a baseband transmission, excluding the RF (radio frequency) part. Pulsed-OFDM enhances OFDM, delivering improved performance parameters with lower implementation complexity. This is achieved by pulsating the OFDM symbols and spreading the frequency spectrum of the modulated signals. In this paper, the implementation of pulsed-OFDM transmission, as well as comparison results with OFDM, is presented. The focus is on the architectural characteristics of the system blocks used for transmission and reception. Up-sampling is applied to the OFDM signal to produce the pulsed-OFDM signal. Finally, the performance of the system is analyzed.
IET Computers & Digital Techniques, 2020
The performance of a memory depends on its storage stability, yield and sensing speed. Differential input and the latching time of sense amplifiers are considered primary performance factors in static random access memory. In a content addressable memory (CAM), sensing is carried out through the matchline (ML), and the evaluation time is the key factor deciding the search speed. The density of CAM is on the rise to accommodate a greater amount of information, which increases the associated power dissipation. Issues such as logical threshold variation and the low noise margin between match and mismatch are critical in the operation of a CAM. A good ML sensing technique can reduce the ML power with enhanced evaluation speed. This work provides a comparative study of various ML sensing techniques based on their pre-charging, evaluation and performance-improvement strategies. Power dissipation and evaluation time are estimated, and an in-depth analysis of the power-speed-overhead trade-offs is carried out on 64-bit CAM macros.
IEEE Transactions on Circuits and Systems I: Regular Papers, 2020
Hardware search engines (HSEs) have been drawing significant attention as replacements for software search algorithms, in order to speed up location access and data association in modern systems. Content addressable memory (CAM) is one of the most promising HSEs due to its parallel search accessibility. However, it is subject to considerable dissipation, which becomes severe when accessing many components, including cells and associated matchlines (MLs), during every search. Ternary CAM (TCAM) based routing tables, especially those employed in network systems for packet classification, have posed a challenge to design energy-efficient architectures with high-performance and reliable look-up operation. Precharge-free CAM schemes are preferred over precharge types to accomplish both the high-speed and low-power goals of associative memory design. In order to overcome the drawbacks of precharge-based designs and to improve search performance, we introduce a precharge-free ternary content addressable memory (PF-TCAM). The proposed searching approach enhances the search rate by halving the ML evaluation time: it eliminates the precharge phase prior to every search, performing each search in half a clock cycle. A 32×16-bit macro is designed using 45-nm CMOS technology, and post-layout simulations at a 1 V supply show 56% and 63% energy-efficiency improvements compared to a conventional TCAM and a compact TCAM, respectively, over 25 different search keys, while also increasing evaluation speed by 50%, with an area overhead of one transistor per cell over the compact TCAM.
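The ternary matching used by TCAM routing tables, mentioned above, can be illustrated in a few lines: a stored don't-care bit ('X') matches either search bit, which is what enables prefix-based packet classification. The sketch below is a behavioural model only; the table contents and function names are illustrative assumptions.

```python
# Behavioural sketch of ternary (wildcard) matching in a TCAM.

def ternary_match(stored_word, key):
    # Each stored trit matches the key bit if it equals it or is 'X'.
    return all(s == k or s == 'X' for s, k in zip(stored_word, key))

def tcam_lookup(table, key):
    # Return the first (highest-priority) matching entry, as a
    # priority encoder would.
    for index, word in enumerate(table):
        if ternary_match(word, key):
            return index
    return None


table = ["1010XXXX",   # e.g. a short routing prefix
         "10101111"]   # a fully specified entry
print(tcam_lookup(table, "10101111"))  # -> 0 (first match wins)
print(tcam_lookup(table, "01010000"))  # -> None
```

Note that "10101111" matches both entries; the encoder's priority order decides which one wins, just as in a hardware routing table.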
Integration, 2019
Electronics Letters, 2018
IEEE Transactions on Consumer Electronics, 2018
Hardware search engines are widely used in network routers for high-speed look-up and parallel data processing. Content addressable memory (CAM) is such an engine, performing high-speed search at the expense of large energy dissipation. Match-line (ML) power dissipation is one of the critical concerns in designing low-power CAM architectures. NOR-MLs make this issue more severe due to the higher number of short-circuit discharge paths during search. In this paper, an ML control scheme is presented that enables dynamic evaluation of a match-line by effectively activating or deactivating ML sections to improve energy efficiency. 128×32-bit memory arrays have been designed using 45-nm CMOS technology and verified under different process-voltage-temperature (PVT) and frequency variations to test the performance improvements. A search frequency of 100 MHz under a 1 V supply at 27 °C applied to the proposed CAM results in 48.25%, 52.55% and 54.80% reductions in energy per search (EpS) compared to a conventional CAM, an early-predict-and-terminate ML precharge CAM (EPTP-CAM) and an ML selective charging scheme CAM (MSCS-CAM), respectively. The ML partition also minimizes precharge activities between subsequent searches to reduce the total precharge power in the proposed scheme. An approximate reduction of 2.5 times from the conventional and EPTP schemes is observed in the precharge dissipation. Besides low search power, the proposed design improves energy-delay by 42% to 88% over the compared designs.