Hana Chockler | King's College London
Papers by Hana Chockler
Information Processing Letters, 2004
We show an Ω(m) lower bound on the number of queries required to test whether a Boolean function depends on at most m out of its n variables. This improves a previously known lower bound for testing this property. Our proof is simple and uses only elementary techniques.
In formal verification, we verify that a system is correct with respect to a specification. Even when the system is proven to be correct, there is still a question of how complete the specification is, and whether it really covers all the behaviors of the system. In this paper we study coverage metrics for model checking. Coverage metrics are based on modifications we apply to the system in order to check which parts of it were actually relevant for the verification process to succeed. We introduce two principles that we believe should be part of any coverage metric for model checking: a distinction between state-based and logic-based coverage, and a distinction between the system and its environment. We suggest several coverage metrics that apply these principles, and we describe two algorithms for finding the uncovered parts of the system under these definitions. The first algorithm is a symbolic implementation of a naive algorithm that model checks many variants of the original system. The second algorithm improves the naive algorithm by exploiting overlaps in the variants. We also suggest a few helpful outputs to the user, once the uncovered parts are found.
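The naive algorithm mentioned above (model check many single-mutation variants of the system) can be sketched over a toy Kripke structure. This is a minimal illustration, not the paper's symbolic implementation: the data layout, the `covered_states` name, and the reachability-based `check` are all assumptions made for the sketch.

```python
def covered_states(states, label, check):
    """Naive coverage computation: a state is covered by the
    specification if flipping its label makes a previously
    passing check fail. `check` is any model-checking procedure
    over labellings (here, a simple reachability-based invariant)."""
    assert check(label), "the specification must hold on the original system"
    covered = set()
    for s in states:
        mutated = dict(label)
        mutated[s] = not label[s]        # mutate: flip proposition p in state s
        if not check(mutated):
            covered.add(s)
    return covered

# Toy Kripke structure: four states, all labelled with proposition p,
# but state 3 is unreachable from the initial state 0.
states = [0, 1, 2, 3]
trans = {0: [1], 1: [2], 2: [2], 3: [3]}
init = 0
label = {s: True for s in states}

def reachable(start):
    seen, stack = set(), [start]
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.add(s)
            stack.extend(trans[s])
    return seen

# Specification 'AG p': p holds in every reachable state.
check = lambda lab: all(lab[s] for s in reachable(init))
```

Here `covered_states(states, label, check)` returns `{0, 1, 2}`: the unreachable state 3 is uncovered, since the specification says nothing about it.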
ACM Transactions on Computational Logic, 2010
Cook and Krajíček have recently obtained the following Karp-Lipton collapse result in bounded arithmetic: if the theory PV proves NP ⊆ P/poly, then the polynomial hierarchy collapses to the Boolean hierarchy, and this ...
Symbolic Trajectory Evaluation (STE) is a powerful technique for hardware model checking. It is based on 3-valued symbolic simulation, using 0, 1, and X ("unknown"). X is used to abstract away values of circuit nodes, thus reducing memory and runtime of STE runs. The abstraction is derived from a given user specification. An STE run results in "pass" (1) if the circuit satisfies the specification, "fail" (0) if the circuit falsifies it, and "unknown" (X) if the abstraction is too coarse to determine either of the two. In the latter case, refinement is needed: the X values of some of the abstracted inputs should be replaced. The main difficulty is to choose an appropriate subset of these inputs that will help to eliminate the "unknown" STE result, while avoiding an unnecessary increase in memory and runtime. The common approach to this problem is to choose these inputs manually. This work suggests a novel approach to automatic refinement for STE, which is based on the notion of responsibility. For each input with X value we compute its Degree of Responsibility (DoR) for the "unknown" STE result. We then refine those inputs whose DoR is maximal. We implemented an efficient algorithm, linear in the size of the circuit, for computing the approximate DoR of inputs. We used it for refinement in STE on several circuits and specifications. Our experimental results show that DoR is a very useful device for choosing inputs for refinement. In comparison with previous works on automatic refinement, our computation of the refinement set is faster, STE needs fewer refinement iterations, and uses less overall memory and time.
Formal verification is a reliable and fully automatic technique for proving correctness of hardware designs. Its main drawback is the high complexity of verification, and this problem is especially acute in regression verification, where a new version of the design, differing from the previous version very slightly, is verified with respect to the same or a very similar property. In this paper, we present an efficient algorithm for incremental verification, based on the IC3 algorithm, that uses stored information from previous verification runs in order to improve the complexity of re-verifying similar designs on similar properties. Our algorithm applies both to positive and to negative results of verification (that is, both when there is a proof of correctness and when there is a counterexample). The algorithm is implemented, and experimental results show an improvement of up to two orders of magnitude in running time compared to full verification.
We continue the study of combinatorial property testing. For a property ψ, an ɛ-test for ψ, for 0 < ɛ < 1, is a randomized algorithm that, given an input x, returns "yes" if x satisfies ψ, and returns "no" with high probability if x is ɛ-far from satisfying ψ, where ɛ-far essentially means that an ɛ-fraction of x needs to be changed in order for it to satisfy ψ. In [AKNS99], Alon et al. show that regular languages are ɛ-testable with a constant number of queries (depending on ψ and ɛ and independent of x). We extend the result in [AKNS99] to ω-regular languages: given a nondeterministic Büchi automaton A on infinite words and a small ɛ > 0, we describe an algorithm that gets as input an infinite lasso-shaped word w of the form x · y^ω, for finite words x and y, samples only a constant number of letters in x and y, returns "yes" if w ∈ L(A), and returns "no" with probability at least 2/3 if w is ɛ-far from L(A). We also discuss the applicability of property testing to formal verification, where ω-regular languages are used for the specification of the behavior of nonterminating reactive systems, and computations correspond to lasso-shaped words.
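The flavor of such an ɛ-test, with a query count depending only on ɛ and not on the input length, can be illustrated on the simplest regular property, "every letter is 0". This toy tester is only a stand-in for the ω-regular construction in the paper; the function name and sample-count formula are assumptions made for the sketch.

```python
import math
import random

def eps_test_all_zeros(word, eps):
    """One-sided ε-tester for the property 'every letter is 0':
    it always accepts words with the property, and rejects words
    that are ε-far from it (at least an ε-fraction of letters would
    have to change) with probability at least 2/3. The number of
    queries depends only on ε, not on len(word)."""
    q = math.ceil(math.log(3) / eps)   # (1 - ε)^q ≤ 1/3 for ε-far inputs
    for _ in range(q):
        if word[random.randrange(len(word))] != "0":
            return False               # found a witness letter: reject for sure
    return True
```

On a word that satisfies the property the tester never errs; on a word where every letter differs (maximally far), any single sample already rejects it.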
Journal of Artificial Intelligence Research, 2004
Causality is typically treated as an all-or-nothing concept; either A is a cause of B or it is not. We extend the definition of causality introduced by Halpern and Pearl (2004a) to take into account the degree of responsibility of A for B. For example, if someone wins an election 11-0, then each person who votes for him is less responsible for the victory than if he had won 6-5. We then define a notion of degree of blame, which takes into account an agent's epistemic state. Roughly speaking, the degree of blame of A for B is the expected degree of responsibility of A for B, taken over the epistemic state of an agent.
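For the election example, the degree of responsibility works out to 1/(k+1), where k is the minimal number of other votes that must change before the voter's own vote becomes critical. A minimal sketch, assuming a simple majority vote where the "for" side wins (the function name and the margin arithmetic are illustrative):

```python
def degree_of_responsibility(votes_for, votes_against):
    """Degree of responsibility of a single 'for' voter for the
    victory in a simple majority vote, following the 1/(k+1)
    scheme: k is the minimal number of other votes that must be
    flipped before this voter's vote becomes critical (i.e.,
    changing it alone would change the outcome)."""
    margin = votes_for - votes_against
    if margin <= 0:
        return 0.0      # the 'for' side did not win; no responsibility for victory
    # Flipping one other 'for' vote to 'against' reduces the margin by 2;
    # the voter is critical once the margin is exactly 1.
    k = (margin - 1) // 2
    return 1.0 / (k + 1)
```

This reproduces the abstract's example: in a 6-5 victory each supporting voter is critical (responsibility 1), while in an 11-0 victory five other votes must first flip, giving responsibility 1/6.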
Computing Research Repository, 2003
Even when a system is proven to be correct with respect to a specification, there is still a question of how complete the specification is, and whether it really covers all the behaviors of the system. Coverage metrics attempt to check which parts of a system are actually relevant for the verification process to succeed. Recent work on coverage in model checking suggests several coverage metrics and algorithms for finding parts of the system that are not covered by the specification. The work has already proven to be effective in practice, detecting design errors that escape early verification efforts in industrial settings. In this paper, we relate a formal definition of causality to coverage. We show that it gives significant insight into unresolved issues regarding the definition of coverage and leads to potentially useful extensions of coverage. In particular, we introduce the notion of responsibility, which assigns to components of a system a quantitative measure of their relevance to the satisfaction of the specification. For example, the specification AG(req → AF grant) (every request is eventually followed by a grant on every path) is satisfied vacuously in a system where requests are never sent. A specification that is satisfied vacuously is likely to point to some problems in the modelling of the system or its desired behavior.
ACM Transactions on Computational Logic, 2008
Even when a system is proven to be correct with respect to a specification, there is still a question of how complete the specification is, and whether it really covers all the behaviors of the system. Coverage metrics attempt to check which parts of a system are actually relevant for the verification process to succeed. Recent work on coverage in model checking suggests several coverage metrics and algorithms for finding parts of the system that are not covered by the specification. The work has already proven to be effective in practice, detecting design errors that escape early verification efforts in industrial settings. In this paper, we relate a formal definition of causality to coverage. We show that it gives significant insight into unresolved issues regarding the definition of coverage and leads to potentially useful extensions of coverage. In particular, we introduce the notion of responsibility, which assigns to components of a system a quantitative measure of their relevance to the satisfaction of the specification. For example, the specification AG(req → AF grant) (every request is eventually followed by a grant on every path) is satisfied vacuously in a system where requests are never sent. A specification that is satisfied vacuously is likely to point to some problems in the modelling of the system or its desired behavior.
In formal verification, we verify that a system is correct with respect to a specification. Even when the system is proven to be correct, there is still a question of how complete the specification is, and whether it really covers all the behaviors of the system. The challenge of making the verification process as exhaustive as possible is even more crucial in simulation-based verification, where the infeasible task of checking all input sequences is replaced by checking a test suite consisting of a finite subset of them. It is very important to measure the exhaustiveness of the test suite, and indeed there has been extensive research in the simulation-based verification community on coverage metrics, which provide such a measure. It turns out that no single measure can be absolute, leading to the development of numerous coverage metrics whose usage is determined by industrial verification methodologies. On the other hand, prior research of coverage in formal verification has focused solely on state-based coverage. In this paper we adapt the work done on coverage in simulation-based verification to the formal-verification setting in order to obtain new coverage metrics. Thus, for each of the metrics used in simulation-based verification, we present a corresponding metric that is suitable for the setting of formal verification, and describe an algorithmic way to check it.
In formal verification, we verify that a system is correct with respect to a specification. When verification succeeds and the system is proven to be correct, there is still a question of how complete the specification is, and whether it really covers all the behaviors of the system. In this paper we study coverage metrics for model checking from a practical point of view. Coverage metrics are based on modifications we apply to the system in order to check which parts of it were actually relevant for the verification process to succeed. We suggest several definitions of coverage, suitable for specifications given in linear temporal logic or by automata on infinite words. We describe two algorithms for computing the parts of the system that are not covered by the specification. The first algorithm is built on top of automata-based model-checking algorithms. The second algorithm reduces the coverage problem to the model-checking problem. Both algorithms can be implemented on top of existing model checking tools.
In simulation-based verification, we check the correctness of a given program by executing it on some input vectors. Even for medium-size programs, exhaustive testing is impossible. Thus, many errors are left undetected. The problem of increasing the exhaustiveness of testing and decreasing the number of undetected errors is the main problem of software testing. In this paper, we present a novel approach to software testing, which allows us to dramatically raise the probability of catching rare errors in large programs. Our approach is based on the cross-entropy method. We define a performance function, which is higher in the neighborhood of an error or a pattern we are looking for. Then, the program is executed many times, choosing input vectors from some random distribution. The starting distribution is usually uniform, and it is changed at each iteration based on the vectors with the highest value of the performance function in the previous iteration. The cross-entropy method was shown to be very efficient in estimating the probabilities of rare events and in searching for solutions to hard optimization problems. Our experiments show that the cross-entropy method is also very efficient in locating rare bugs and patterns in large programs. We show the experimental results of our cross-entropy based testing tool and compare them to the performance of ConTest and of the Java scheduler.
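The iteration the abstract outlines (sample inputs from a distribution, score them with the performance function, re-fit the distribution to the highest-scoring samples) can be sketched for a toy search over bit vectors. All names, parameters, and the toy performance function are assumptions made for this sketch, not the paper's testing tool.

```python
import random

def cross_entropy_search(performance, n_bits, iters=30,
                         samples=200, elite_frac=0.1, smooth=0.7):
    """Generic cross-entropy iteration: sample candidate inputs from
    independent per-bit probabilities, rank them by the performance
    function, and re-fit the distribution to the elite (top-scoring)
    samples, with smoothing to avoid premature convergence."""
    p = [0.5] * n_bits                       # start from the uniform distribution
    n_elite = max(1, int(samples * elite_frac))
    best = None
    for _ in range(iters):
        batch = [[1 if random.random() < p[i] else 0
                  for i in range(n_bits)]
                 for _ in range(samples)]
        batch.sort(key=performance, reverse=True)
        elite = batch[:n_elite]
        if best is None or performance(elite[0]) > performance(best):
            best = elite[0]
        for i in range(n_bits):              # smoothed distribution update
            freq = sum(x[i] for x in elite) / n_elite
            p[i] = smooth * freq + (1 - smooth) * p[i]
    return best

# Toy 'rare pattern': the performance function peaks at one
# specific 20-bit vector, standing in for an input that triggers a bug.
target = [1, 0] * 10
score = lambda x: sum(int(a == b) for a, b in zip(x, target))
```

With these settings the per-bit probabilities concentrate on the target within a few dozen iterations, even though a uniform sampler would hit it with probability only 2^-20 per trial.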
International Journal on Software Tools for Technology Transfer, 2006
In formal verification, we verify that a system is correct with respect to a specification. Even when the system is proven to be correct, there is still a question of how complete the specification is and whether it really covers all the behaviors of the system. The challenge of making the verification process as exhaustive as possible is even more crucial in simulation-based verification, where the infeasible task of checking all input sequences is replaced by checking a test suite consisting of a finite subset of them. It is very important to measure the exhaustiveness of the test suite, and indeed there has been extensive research in the simulation-based verification community on coverage metrics, which provide such a measure. It turns out that no single measure can be absolute, leading to the development of numerous coverage metrics whose usage is determined by industrial verification methodologies. On the other hand, prior research of coverage in formal verification has focused solely on state-based coverage. In this paper we adapt the work done on coverage in simulation-based verification to the formal-verification setting in order to obtain new coverage metrics. Thus, for each of the metrics used in simulation-based verification, we present a corresponding metric that is suitable for the setting of formal verification and describe an algorithmic way to check it.
In simulation-based verification, we check the correctness of a given program by executing it on some input vectors. Even for medium-size programs, exhaustive testing is impossible. Thus, many errors are left undetected. The problem of increasing the exhaustiveness of testing and decreasing the number of undetected errors is the main problem of software testing. In this paper, we present a novel approach to software testing, which allows us to dramatically raise the probability of catching rare errors in large programs. Our approach is based on the cross-entropy method. We define a performance function, which is higher in the neighborhood of an error or a pattern we are looking for. Then, the program is executed many times, choosing input vectors from some random distribution. The starting distribution is usually uniform, and it is changed at each iteration based on the vectors with the highest value of the performance function in the previous iteration. The cross-entropy method was shown to be very efficient in estimating the probabilities of rare events and in searching for solutions to hard optimization problems. Our experiments show that the cross-entropy method is also very efficient in locating rare bugs and patterns in large programs. We show the experimental results of our cross-entropy based testing tool and compare them to the performance of ConTest and of the Java scheduler.
Software model checking has recently come into the focus of research, with sharp growth in the number of safety-critical applications and in the complexity of software. In model checking of software we meet some obstacles which do not exist in model checking of hardware: the state explosion problem is more acute, the model often consists of many processes that run concurrently,