Michel Renovell - Academia.edu
Papers by Michel Renovell
ACM Journal on Emerging Technologies in Computing Systems
This article proposes an electrical analysis of a new defect mechanism, referred to as the b-open defect, which may occur in nanometer technologies due to the use of the Self-Aligned Double Patterning (SADP) technique. In metal lines fabricated with the SADP technique, a single dust particle may cause the simultaneous occurrence of a bridge defect and an open defect. When the two defects impact the same gates, the electrical effects of the bridge and the open combine and exhibit a new specific electrical behavior; we call this new defect behavior a b-open. As a consequence, existing test generation methodologies may miss defect detection. The electrical behavior of the b-open defect is first analyzed graphically and then validated through extensive SPICE simulations. The test pattern conditions to detect the b-open defect are finally determined, and it is shown that the b-open defect requires specific test generation.
This paper presents recent developments for testing SRAM-based FPGAs using a structural approach. The specific architecture of these new chips is first presented, identifying the specific FPGA test problems as well as the FPGA test properties. The FPGA architecture is then conceptually divided into different architectural elements. For each architectural element, test configurations and test vectors are derived targeting the assumed fault models.
2009 IEEE International Workshop on Memory Technology, Design, and Testing, 2009
With today's manufacturing technology, it is not possible to eliminate all defects so that every manufactured unit is perfect. Instead, each manufactured unit must be tested so that defective parts are not shipped to a customer. In this situation, the test process consists of identifying defective circuits by applying test vectors in such a way that the presence of a defect can be observed on some circuit outputs. Traditionally, test generation targets fault models to produce tests that are expected to identify defects such as unintended shorts and opens. Test generation does not directly target defects for two main reasons. Firstly, many defects are not easy to analyze, and no model exists that completely describes their behavior, which makes test generation for these defects unreliable. Secondly, there can be a very large number of possible defects in a circuit. Since test generation and test application are limited by available resources such as memory and time, generating tests for all defects is unfeasible. Consequently, a relatively small set of abstract defects, namely faults, is constructed, and these faults are targeted to generate the tests. With this approach, test quality relies on the fortuitous detection of non-targeted defects. As quality demands increase, the effectiveness of test generation without any defect consideration becomes questionable. High-quality test generation requires a better knowledge of defect behavior. As a matter of fact, the analysis of defect behavior is a quite difficult task. One of the main difficulties comes from the presence of random-valued parameters in the defects, preventing any prediction of the defect behavior. The mechanisms of defect appearance are obviously not controlled, resulting in electrical situations with unknown parameters.
As a simple example, consider how to predict the voltage created by a short circuit when the value of the short resistance is not known a priori. Classical assumptions such as a zero-resistance short can no longer be used, and a realistic analysis of defect behavior is required. A challenging but realistic model of defect behavior must now incorporate the random parameters. In the following, different fault models for resistive bridging are revisited.
1. Classical Fault Models
Historically, many fault models have been used to detect bridging defects, each new model generation trying to describe and represent the real defects more precisely. In the early 80s, the most widely used fault models were the wired-AND, wired-OR, and dominant models. In fact, these first purely 'logic' fault models did not consider any electrical parameter of the real bridging defect.
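The contrast between these purely logic models and a resistance-aware view can be sketched as follows. The function names, drive resistances, short resistance, and supply voltage below are illustrative assumptions for the sketch, not values or code from the text:

```python
# Sketch of the classical logic bridging-fault models (wired-AND,
# wired-OR, dominant) next to a simple resistive-short calculation
# showing why the unknown short resistance matters.

def wired_and(a, b):
    # Wired-AND model: the bridge forces both lines to the AND value.
    return a & b, a & b

def wired_or(a, b):
    # Wired-OR model: the bridge forces both lines to the OR value.
    return a | b, a | b

def dominant(a, b):
    # Dominant model: line a is assumed to win; line b takes a's value.
    return a, a

def bridge_voltage(vdd, r_pull_up, r_short, r_pull_down):
    # Resistive short between a line driven high (through r_pull_up
    # from VDD) and a line driven low (through r_pull_down to ground):
    # the voltage on the high-driven line follows a simple divider,
    # so the interpreted logic value depends on the unknown r_short.
    return vdd * (r_short + r_pull_down) / (r_pull_up + r_short + r_pull_down)

print(wired_and(1, 0))   # (0, 0)
print(wired_or(1, 0))    # (1, 1)
# A low short resistance pulls the high-driven node toward mid-rail,
# into the region where the downstream gate's interpretation is uncertain:
print(round(bridge_voltage(1.2, 1e3, 500.0, 1e3), 3))  # 0.72
```

The logic models each predict a single fixed outcome, whereas the divider shows a continuum of intermediate voltages parameterized by the a priori unknown short resistance, which is exactly the random parameter the text says a realistic model must incorporate.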
Lecture Notes in Computer Science, 2002
2014 9th IEEE International Conference on Design & Technology of Integrated Systems in Nanoscale Era (DTIS), 2014
Proceedings International Test Conference 2001 (Cat. No.01CH37260), 2000
An original and optimal implementation of the proposed architecture is given with minimum area overhead and absolutely no delay impact.
IEEE Design & Test of Computers, Mar 1, 2003
Journal of Computer Science and Technology, 2005
IEEE Design & Test of Computers, Nov 1, 2002
Analyzing defect behavior is becoming increasingly difficult with the rising significance of defects that depend on random parameters. Such unpredictable parameters can affect various types of test escapes. The concept of detection domains can help sort out the behavior of these test escapes.
Journal of Electronic Testing Theory and Applications, 2005
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2008
Test application at reduced power supply voltage (low-voltage testing) or reduced temperature (low-temperature testing) can improve the defect coverage of a test set, particularly of resistive short defects. Using a probabilistic model of two-line nonfeedback short ...
[Figure 2. SINAD vs. number of samples]
Proceedings of the Roadmap to Reconfigurable Computing, 10th International Workshop on Field Programmable Logic and Applications, 2000
2009 4th International Conference on Design & Technology of Integrated Systems in Nanoscale Era, 2009
The Eighth IEEE European Test Workshop, 2003, Proceedings, 2000
European Test Workshop 1999 (Cat. No.PR00390), 2000
This paper describes an approach to minimize the number of test configurations for testing the logic cells of a RAM-based FPGA. The proposed approach concerns the XILINX4000 family. On this example of FPGA, a classical test technique consists of first generating test configurations for the elementary modules, then test configurations for a single logic cell, and finally test configurations for the m×m array of logic cells. In this classical technique, it is shown that the key point is the minimization of the number of test configurations for a logic cell. An approach for the logic cell of the XILINX4000 family is then described to define a minimum number of test configurations. This approach gives only 5 test configurations for the XILINX4000 family, while previously published works concerning Boolean testing of this FPGA family give 8 or 21 test configurations.
Proceedings of the XIIth Conference on Integrated Circuits and Systems Design, Sep 29, 1999