Program Slicing Research Papers - Academia.edu

In this paper we apply a generic database reverse engineering methodology to a case study. We first sketch the methodology, then describe the DB-MAIN CASE tool and its reverse engineering functionality. We then explain program slicing in more detail: a powerful and useful technique for understanding a program at a given point. All of this is put together in a realistic, but small, case study.
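
Program slicing itself is easiest to see on a toy example. The sketch below is purely illustrative (the program, line numbers, and def/use sets are made up and are not taken from the paper or from DB-MAIN): a backward slice keeps only the statements that can affect the chosen variable at the chosen line.

```python
# Minimal illustration of a backward static slice on a straight-line program.
# Each statement is (line, defined_var, used_vars); control flow is linear.
# This is only a sketch of the general idea, not the DB-MAIN implementation.

program = [
    (1, "a", set()),          # a = read()
    (2, "b", set()),          # b = read()
    (3, "sum", {"a", "b"}),   # sum = a + b
    (4, "prod", {"a", "b"}),  # prod = a * b
    (5, "out", {"sum"}),      # out = sum   <- slicing criterion (line 5, variable out)
]

def backward_slice(stmts, criterion_line, criterion_vars):
    relevant = set(criterion_vars)   # variables whose values we still need
    in_slice = []
    for line, defined, used in reversed(stmts):
        if line > criterion_line:
            continue
        if defined in relevant or line == criterion_line:
            in_slice.append(line)
            relevant.discard(defined)
            relevant |= used
    return sorted(in_slice)

print(backward_slice(program, 5, {"out"}))   # -> [1, 2, 3, 5]; line 4 (prod) is sliced away
```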

Analysis techniques, such as control flow, data flow, and control dependence, are used for a variety of software-engineering tasks, including structural and regression testing, dynamic execution profiling, static and dynamic slicing, and program understanding. To be applicable to programs in languages such as Java and C++, these analysis techniques must account for the effects of exception occurrences and exception-handling constructs; failure to do so can cause the analysis techniques to compute incorrect results and thus limit the usefulness of the applications that use them. This paper discusses the effects of exception-handling constructs on several analysis techniques. The paper presents techniques to construct representations for programs with explicit exception occurrences (exceptions that are raised explicitly through throw statements) and exception-handling constructs. The paper presents algorithms that use these representations to perform the desired analyses. The paper also discusses several software-engineering applications that use these analyses. Finally, the paper describes empirical results pertaining to the occurrence of exception-handling constructs in Java programs, and their effects on some analysis tasks.

Low-cost gas metal arc welding (GMAW)-based 3-D printing has proven effective for additive manufacturing of steel and aluminum parts. Early success, however, was based on hand-written G-code, which is inadequate for the majority of potential users. To enable automated slicing of a 3-D model and generation of G-code for an acceptable path for GMAW 3-D printing, this paper reports on upgrading the free and open source CuraEngine. The new slicer, MOSTMetalCura, provides the following novel abilities necessary for GMAW 3-D printing: i) change the perimeter metric from width to track count, ii) avoid movement that overlaps previous weld beads, iii) have infill start immediately after the perimeter finishes and in the direction that eliminates translations, iv) add a variable pause between layers to allow for substrate cooling, v) configure GPIO pins to turn the welder on and off, and vi) set optimized wire feed speed and voltage of the welder based on printing speed, layer height, filament diameter, and tool track width. The process for making these changes is detailed, and the new slicer is used to help improve the function of the printer for ER70S-6 steel. To find the printer settings with the smallest bead width for a given volume of material, the line width, layer height, and printing speed are varied against the wire feed speed calculated by MOSTMetalCura, and the resulting settings are used to print 3-D models. The results of 3-D printing three case study objects of increasing geometric complexity using the presented process improvements show a resolution of 1 mm bead widths.

Cutting is work done to reduce the size of or divide materials, using a knife or another cutting tool, across the length or across the grain of the material. The resulting pieces are relatively long or thick. Slicing is reducing the size of material with a knife to obtain pieces that are shorter and thinner, cut across, at an angle to, or parallel to the length of the material. Although slicing and cutting are essentially the same, slicing, whether done on a cutting board or not, usually uses a knife or another tool suited to the purpose.

Debugging is still among the most common and costly of programming activities. One reason is that current debugging tools do not directly support the inquisitive nature of the activity. Interrogative Debugging is a new debugging paradigm in which programmers can ask "why did" and even "why didn't" questions directly about their program's runtime failures. The Whyline is a prototype Interrogative Debugging interface for the Alice programming environment that visualizes answers in terms of runtime events directly relevant to a programmer's question. Comparisons of identical debugging scenarios from user tests with and without the Whyline showed that the Whyline reduced debugging time by nearly a factor of eight and helped programmers complete 40% more tasks.

Dynamic program slicing methods are very attractive for debugging because many statements can be ignored in the process of localizing a bug. Although language interoperability is a key concept in modern development platforms, current slicing techniques are still restricted to a single language. In this paper a cross-language dynamic program slicing technique is introduced for the .NET

Even after thorough testing, a few bugs still remain in a program of moderate complexity. These residual bugs are randomly distributed throughout the code. We have noticed that bugs in some parts of a program cause more frequent and severe failures than those in other parts. It is therefore necessary to decide what to test more and what to test less within the testing budget. It is possible to prioritize the methods and classes of an object-oriented program according to their potential to cause failures. For this, we propose a program metric called the influence metric to find the influence of a program element on the source code. First, we represent the source code as an intermediate graph called the extended system dependence graph. Then, forward slicing is applied on a node of the graph to get the influence of that node. The influence metric for a method m in a program is the number of statements of the program which directly or indirectly use the result produced by method m. We compute the influence metric for a class c based on the influence metrics of all its methods. As the influence metric is computed statically, it does not show the expected behavior of a class at run time. It is already known that faults in highly executed parts tend to cause more failures. Therefore, we use the operational profile to find the average execution time of a class in a system. Then, classes are prioritized in the source code based on the influence metric and average execution time. The priority of an element indicates the potential of the element to cause failures. Once all program elements have been prioritized, the testing effort can be apportioned so that the elements causing frequent failures will be tested thoroughly. We have conducted experiments for two well-known case studies — Library Management System and Trading Automation System — and successfully identified critical elements in the source code of each case study. We have also conducted experiments to compare our scheme with a related scheme. The experimental studies show that our approach is more accurate than existing ones in exposing critical elements at the implementation level.
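
As an illustration of the influence idea, the hedged sketch below forward-slices a node of a small, hypothetical dependence graph and counts the statements that transitively use its result. The graph, the node names, and the way a class would aggregate its methods' scores are assumptions made for illustration, not the paper's extended system dependence graph.

```python
# Sketch of the influence idea: forward-slice a dependence-graph node and count
# how many statements (transitively) use its result. The graph is hypothetical.
from collections import deque

# edges: statement -> statements that depend on it (data or control dependence)
dependents = {
    "m.return": ["s1", "s2"],
    "s1": ["s3"],
    "s2": [],
    "s3": ["s4"],
    "s4": [],
}

def forward_slice(start):
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for succ in dependents.get(node, []):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return seen

# influence of method m = number of statements reached from its result node
influence_m = len(forward_slice("m.return")) - 1
print(influence_m)   # -> 4
```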

This article surveys previous work on program slicing-based techniques. For each technique we describe its features, its main applications and a common example of slicing using such a technique. After discussing each technique separately, all of them are compared in order to clarify and establish the relations between them. This comparison gives rise to a classification of techniques which can help to guide future research directions in this field.

Software modernization converts legacy systems into component-based systems. The process involves program understanding, business rules extraction, and software transformation. In this paper, we present a semi-automated program slicing technique for extracting business rules from legacy code and for converting the reusable code into a component that conforms to the protocols of a component interconnection model. The component interconnection model was developed to standardize the development of components so they can communicate with each other seamlessly in a heterogeneous computing environment. We adopted CORBA as the underlying communication infrastructure in the model.

Program slicing, introduced by Weiser, is known to help programmers in understanding foreign code and in debugging. We apply program slicing to the maintenance problem by extending the notion of a program slice (that originally required both a variable and line number) to a decomposition slice, one that captures all computation on a given variable; i.e., is independent of line numbers. Using the lattice of single variable decomposition slices, ordered by set inclusion, we demonstrate how to form a slice-based decomposition for programs. We are then able to delineate the effects of a proposed change by isolating those effects in a single component of the decomposition. This gives maintainers a straightforward technique for determining those statements and variables that may be modified in a component and those that may not. Using the decomposition, we provide a set of principles to prohibit changes that will interfere with unmodified components. These semantically consistent changes can then be merged back into the original program in linear time. Moreover, the maintainer can test the changes in the component with the assurance that there are no linkages into other components. Thus, decomposition slicing induces a new software maintenance process model that eliminates the need for regression testing.
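
A rough sketch of how decomposition slices can be compared, assuming each slice is represented simply as a set of line numbers (the variables and line sets below are hypothetical): the lattice order is set inclusion, and statements appearing in only one variable's slice are the ones a maintainer may change without touching the other computations.

```python
# Hypothetical decomposition slices: for each variable, the set of line numbers
# that contribute to its computation (all line numbers are made up).
slices = {
    "x": {1, 2, 5},
    "y": {1, 3, 6},
    "z": {1, 2, 3, 7},
}

# The lattice order on decomposition slices is plain set inclusion.
print(slices["x"] <= slices["z"])            # False: neither slice contains the other

# Statements appearing only in the slice of "x" can be modified without
# interfering with the computations captured by the other slices.
others = set().union(*(s for v, s in slices.items() if v != "x"))
print(sorted(slices["x"] - others))          # [5]
```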

During the last two decades the software evolution community has intensively tackled the program integration issue, whose main objective is to merge, in a consistent way, different versions of source code descriptions corresponding to different and independent variants of the same system. Well-established approaches, mainly based on dependence analysis techniques, have been used to provide suitable solutions. More recently, software evolution researchers have focused on the need to develop techniques and tools that support software architecture understanding, testing, and maintenance. The objective of this paper is to investigate software architecture integration, which is a new and interesting issue. Its purpose is to merge independent versions of software architecture descriptions instead of source code descriptions. The proposed approach, based on dependence analysis techniques, is illustrated through an appropriate case study.

Software Product Line (SPL) engineering is a paradigm shift towards modeling and developing software system families rather than individual systems. It focuses on the means of efficiently producing and maintaining multiple similar software products, exploiting what they have in common and managing what varies among them. This is analogous to what is practiced in the automotive industry, where the focus is on creating a single production line, out of which many customized but similar variations of a car model are produced. Feature models (FMs) are a fundamental formalism for specifying and reasoning about commonality and variability of SPLs. FMs are becoming increasingly complex, handled by several stakeholders or organizations, used to describe features at various levels of abstraction and related in a variety of ways. In different contexts and application domains, maintaining a single large FM is neither feasible nor desirable. Instead, multiple FMs are now used. In this thesis, we develop theoretical foundations and practical support for managing multiple FMs. We design and develop a set of composition and decomposition operators (aggregate, merge, slice) for supporting separation of concerns. The operators are formally defined, implemented with a fully automated algorithm and guarantee properties in terms of sets of configurations. We show how the composition and decomposition operators can be combined together or with other reasoning and editing operators to realize complex tasks. We propose a textual language, FAMILIAR (for FeAture Model scrIpt Language for manIpulation and Automatic Reasoning), which provides a practical solution for managing FMs on a large scale. An SPL practitioner can combine the different operators and manipulate a restricted set of concepts (FMs, features, configurations, etc.) using a concise notation and language facilities. FAMILIAR hides implementation details (e.g., solvers) and comes with a development environment. We report various applications of the operators and usages of FAMILIAR in different domains (medical imaging, video surveillance) and for different purposes (scientific workflow design, variability modeling from requirements to runtime, reverse engineering), showing the applicability of both the operators and the supporting language. Without the new capabilities brought by the operators and FAMILIAR, some analysis and reasoning operations would not be made possible in the different case studies. To conclude, we discuss different research perspectives in the medium term (regarding the operators, the language and validation elements) and in the long term (e.g., relationships between FMs and other models).

Conditioned slicing can be applied to reverse engineering problems which involve the extraction of executable fragments of code in the context of some criteria of interest. This paper introduces ConSUS, a conditioner for the Wide Spectrum Language, WSL. The symbolic executor of ConSUS prunes the symbolic execution paths, and its predicate reasoning system uses the FermaT simplify transformation in place of a more conventional theorem prover. We show that this combination of pruning and simplification-as-reasoner leads to a more scalable approach to conditioning.

Program slicing is an important operation that can be used as the basis for programming tools that help programmers understand, debug, maintain, and test their code. This paper extends previous work on program slicing by providing a new definition of “correct” slices, by introducing a representation for C-style switch statements, and by defining a new way to compute control dependences and to slice a program dependence graph so as to compute more precise slices of programs that include jumps and switches. Experimental results show that the new approach to slicing can sometimes lead to a significant improvement in slice precision.
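
Setting aside the paper's specific treatment of jumps and switches, the underlying operation is backward reachability over dependence edges. The sketch below shows only that generic step on a hypothetical program dependence graph; it does not reproduce the paper's control-dependence computation.

```python
# Backward slicing as reverse reachability on a (hypothetical) program dependence graph.
pdg = {
    # node: set of nodes it depends on (data or control dependence)
    "print(x)": {"x=a+b", "if c"},
    "x=a+b":    {"a=1", "b=2", "if c"},
    "if c":     {"c=read()"},
    "a=1": set(), "b=2": set(), "c=read()": set(),
    "y=a*2": {"a=1"},           # not reachable from the criterion
}

def slice_backward(criterion):
    work, in_slice = [criterion], set()
    while work:
        n = work.pop()
        if n in in_slice:
            continue
        in_slice.add(n)
        work.extend(pdg.get(n, ()))
    return in_slice

print(sorted(slice_backward("print(x)")))
# ['a=1', 'b=2', 'c=read()', 'if c', 'print(x)', 'x=a+b']  -- 'y=a*2' is excluded
```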

Program slicing is a decomposition technique which has many applications in various software engineering activities such as program debugging, testing, and maintenance. Aspect-oriented programming (AOP) is a new programming paradigm that enables modular implementation of cross-cutting concerns such as exception handling, security, synchronization, and logging. The unique features of AOP, such as join points, advice, aspects, and introductions, pose difficulties for slicing aspect-oriented programs. We propose a dynamic slicing algorithm for aspect-oriented programs. Our algorithm uses a dependence-based representation called the Dynamic Aspect-Oriented Dependence Graph (DADG) as the intermediate program representation. The DADG is an arc-classified digraph which represents the various dynamic dependences between the statements of the aspect-oriented program. We use a trace file to store the execution history of the program. We have developed a tool called the Dynamic Dependence Slicing Tool (DDST) to implement our algorithm. We have tested our algorithm on many programs for 40-50 runs. The resulting dynamic slice is precise because we create a node in the DADG for each occurrence of a statement in the execution trace.
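
The core of trace-based dynamic slicing can be sketched as follows, with one node per statement occurrence. The toy trace and dependence bookkeeping below are illustrative only and omit the arc classification and aspect-specific constructs of the actual DADG.

```python
# Sketch: dynamic slicing over an execution trace, one node per statement occurrence.
# trace entries: (occurrence_id, statement, defined_var, used_vars);
# occurrence ids double as trace indices here.
trace = [
    (0, "s1: i=0",       "i", set()),
    (1, "s2: x=input()", "x", set()),
    (2, "s3: i=i+1",     "i", {"i"}),
    (3, "s4: y=x*2",     "y", {"x"}),
    (4, "s5: print(y)",  None, {"y"}),   # slicing criterion: last occurrence
]

def dynamic_slice(trace, criterion_id):
    last_def = {}          # variable -> occurrence id of its most recent definition
    dep = {}               # occurrence id -> occurrence ids it dynamically depends on
    for occ, _stmt, defined, used in trace:
        dep[occ] = {last_def[v] for v in used if v in last_def}
        if defined:
            last_def[defined] = occ
    # backward closure from the criterion occurrence
    work, in_slice = [criterion_id], set()
    while work:
        n = work.pop()
        if n not in in_slice:
            in_slice.add(n)
            work.extend(dep[n])
    return {trace[o][1] for o in in_slice}

print(sorted(dynamic_slice(trace, 4)))   # ['s2: x=input()', 's4: y=x*2', 's5: print(y)']
```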

Many reverse-engineering tools have been developed to derive abstract representations from existing source code. Graphic visuals derived from reverse engineered source code have long been recognized for their impact on improving the comprehensibility of the structural and behavioral aspects of software systems and their source code. As programs become more complex and larger, the sheer volume of information to be

This paper presents a method to generate, analyse and represent test cases from a protocol specification. The Language Of Temporal Ordering Specification (LOTOS) is mapped into an extended finite state machine (EFSM). Test cases are generated from the EFSM. The generated test cases are modelled as a dependence graph. Predicate slices are used to identify infeasible test cases that must be eliminated. Redundant assignments and predicates in all the feasible test cases are removed by reducing the test case dependence graph. The reduced test case dependence graph is adapted for a local single-layer (LS) architecture. The reduced test cases for the LS architecture are enhanced to represent the tester's behaviour. The dynamic behaviour of the test cases is represented in the form of control graphs by inverting the events and assigning verdicts to the events in the enhanced dependence graph.

This paper presents Kato, a tool that implements a novel class of optimizations that are inspired by program slicing for imperative languages but are applicable to analyzable declarative languages, such as Alloy. Kato implements a novel algorithm for slicing declarative models written in Alloy and leverages its relational engine KodKod for analysis. Given an Alloy model, Kato identifies a slice representing the model's core: a satisfying instance for the core can systematically be extended into a satisfying instance for the entire model, while unsatisfiability of the core implies unsatisfiability of the entire model. The experimental results show that for a variety of subject models Kato's slicing algorithm enables an order of magnitude speed-up over Alloy's default translation to SAT.

Automatic cost analysis has interesting applications in the context of verification and certification of mobile code. For instance, the code receiver can use cost information in order to decide whether to reject mobile code which has too large cost requirements in terms of computing resources (in time and/or space) or billable events (SMSs sent, bandwidth required). Existing cost analyses for a variety of languages describe the resource consumption of programs by means of Cost Equation Systems (CESs), which are similar to, but more general than, recurrence equations. CESs express the cost of a program in terms of the size of its input data. In a further step, a closed form (i.e., non-recursive) solution or upper bound can sometimes be found by using existing Computer Algebra Systems (CASs), such as Maple and Mathematica. In this work, we focus on cost analysis of Java bytecode, a language which is widely used in the context of mobile code, and we study the problem of identifying variables which are useless in the sense that they do not affect the execution cost and therefore can be ignored by cost analysis. We identify two classes of useless variables and propose automatic analysis techniques to detect them. The first class corresponds to stack variables that can be replaced by program variables or constant values. The second class corresponds to variables whose value is cost-irrelevant, i.e., does not affect the cost of the program. We propose an algorithm, inspired by static slicing, which safely identifies cost-irrelevant variables. The benefits of eliminating useless variables are two-fold: (1) cost analysis without useless variables can be more efficient and (2) resulting CESs are more likely to be solvable by existing CASs.
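
The cost-irrelevant-variable idea can be sketched with a small fixpoint: any variable that never (transitively) flows into a cost-relevant statement, such as a loop bound, can be dropped from the cost equations. The dependence relation and the choice of cost-relevant variables below are assumptions for illustration, not the paper's analysis.

```python
# Sketch of the "useless for cost" idea: a variable is cost-irrelevant if it never
# (transitively) flows into a variable that influences the cost, e.g. a loop bound.
flows_into = {             # var -> variables computed from it (hypothetical)
    "n": {"i"},            # loop bound feeds the loop counter
    "i": set(),
    "acc": {"acc"},        # accumulated result, never feeds a bound
    "log_msg": set(),
}
cost_relevant_vars = {"i"} # variables read by loop guards / recursion depth

def cost_irrelevant(all_vars):
    relevant = set(cost_relevant_vars)
    changed = True
    while changed:         # fixpoint: anything flowing into a relevant var is relevant
        changed = False
        for v, targets in flows_into.items():
            if v not in relevant and targets & relevant:
                relevant.add(v)
                changed = True
    return set(all_vars) - relevant

print(sorted(cost_irrelevant(flows_into)))   # ['acc', 'log_msg'] can be dropped from the CES
```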

Recent research proposed efficient methods for software verification combining static and dynamic analysis, where static analysis reports possible runtime errors (some of which may be false alarms) and test generation confirms or rejects them. However, test generation may time out on real-sized programs before confirming some alarms as real bugs or rejecting some others as unreachable.

We describe DUA-FORENSICS, our open-source Java-bytecode program analysis and instrumentation system built on top of Soot. DUA-FORENSICS has been in development for more than six years and has supported multiple research projects on efficient monitoring, test-suite augmentation, fault localization, symbolic execution, and change-impact analysis. Three core features of Soot have proven essential: the Java bytecode processor, the Jimple intermediate representation, and the API to access and manipulate Jimple programs. On top of these foundations, DUA-FORENSICS offers a number of features of potential interest to the Java-analysis community, including (1) a layer that facilitates the instrumentation of Jimple code, (2) a library modeling system for efficient points-to, data-flow, and symbolic analysis, and (3) a fine-grained dependence analysis component. These features have made our own research more productive, reliable, and effective.

Program slicing is becoming increasingly popular as an initial step in the construction of finite-state models for automated verification. As part of a project aimed at building tools to automate the extraction of compact, sound finite-state models of concurrent Java programs, we have developed the theoretical foundations of slicing threaded programs that use Java monitors and wait/notify synchronization. In this paper, we describe how these foundations are incorporated into a tool that slices multi-threaded Java programs. We describe a simple static analysis that can be used to refine the underlying dependences used by the slicer and illustrate the effectiveness of this refinement by describing the slicing of a realistic Java program.

Program analysis is useful for debugging, testing, and maintaining software systems due to the availability of information about the structure and relationships of the program modules. In general, program analysis is performed based either on a control flow graph (CFG) or a dependence graph (DG). However, in the case of aspect-oriented programming (AOP), a control flow graph or dependence graph alone is not enough to model the features of aspect-oriented (AO) programs. Although AOP is good for modular representation of crosscutting concerns, a suitable graph model for program analysis is required to gather information on program structure for the purpose of minimizing maintenance effort. In this thesis, titled Slicing Aspect-Oriented Programs by Using a Dependence Flow Graph for Software Maintenance Purposes, a graph model called the Aspect-Oriented Dependence Flow Graph (AODFG) is proposed to represent the structure of aspect-oriented programs. The graph is formed by merging the CFG and the DG. As a consequence, more information can be gathered about dependencies involving the features of AOP, such as join points, advice, aspects, their related constructs, and the flow of control. Based on the AODFG, slicing criteria are defined for aspect-oriented features. The concept of slicing the AODFG model is also proposed. A prototype tool called the Aspect-Oriented Slicing Tool (AOST) was developed to implement the AODFG. The performance of the AODFG was evaluated by analysing some AspectJ programs taken from the AspectJ Development Tools. The analysis showed the consistency of the output compared with the DG and CFG. In addition, an empirical study was conducted to find out the effect of AOST in terms of effectiveness, understandability, and modifiability for maintenance purposes. The results of the empirical analysis showed positive responses.

In this paper, we introduce the Static Execute After (SEA) relationship among program components and present an efficient analysis algorithm. Our case studies show that SEA may approximate static slicing with perfect recall and high precision, while being much less expensive and more usable. When differentiating between explicit and hidden dependencies, our case studies also show that SEA may correlate with direct and indirect class coupling. We speculate that SEA may find applications in the computation of hidden dependencies and, through it, in many maintenance tasks, including change propagation and regression testing.
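
A very rough sketch of the SEA intuition, under the assumption that it can be approximated by reachability in a component-level "may execute after" graph; the graph and procedure names are hypothetical, and the paper's actual construction is more refined.

```python
# Rough sketch of the SEA idea at the procedure level: record which procedures may
# run after a given one and use cheap reachability instead of full slicing.
from collections import deque

may_execute_after = {      # procedure -> procedures that can start after it runs
    "parse":   {"check", "report"},
    "check":   {"report"},
    "report":  set(),
    "cleanup": set(),
}

def sea(start):
    seen, queue = set(), deque([start])
    while queue:
        p = queue.popleft()
        for q in may_execute_after.get(p, ()):
            if q not in seen:
                seen.add(q)
                queue.append(q)
    return seen

# Anything outside SEA("parse") cannot be impacted by a change in "parse".
print(sorted(sea("parse")))   # ['check', 'report']
```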

Many slicing techniques have been proposed based on the traditional Program Dependence Graph (PDG) representation. In traditional PDGs, the notion of dependency between statements is based on the syntactic presence of a variable in the definition of another variable or in a conditional expression. Mastroeni and Zanardini introduced a semantics-based dependency at both the concrete and abstract domains. This semantic dependency is computed at the expression level over all possible (abstract) states appearing at program points. In this paper we strictly improve this approach by (i) considering the semantic relevancy of statements (not only expressions), and (ii) adopting conditional dependency. This allows us to transform the semantics-based abstract PDG into a semantics-based abstract Dependence Condition Graph (DCG) that makes it possible to identify the conditions for dependence between program points. The resulting program slicing algorithm is strictly more accurate than that of Mastroeni and Zanardini.
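
The gap between syntactic and semantic dependency that this line of work exploits can be seen in a one-line example: in x = y * 0, a traditional PDG records a dependence of x on y, yet no change of y can ever change x, and a conditional form of the same idea arises when the dependence holds only under some guard. The snippet below is only an illustration of that distinction, not the paper's analysis.

```python
# In "x = y * 0" a syntactic PDG records a dependence of x on y,
# but semantically x never varies with y:
def x_of(y):
    return y * 0

print(len({x_of(y) for y in range(-50, 50)}) == 1)            # True: no semantic dependence on y

# A *conditional* dependence: x depends on y only when the guard c holds,
# which is the kind of labelled edge a dependence condition graph can record.
def x_cond(y, c):
    return y if c else 0

print(len({x_cond(y, False) for y in range(-50, 50)}) == 1)   # True: under c == False, y is irrelevant
print(len({x_cond(y, True) for y in range(-50, 50)}) == 1)    # False: under c == True, y matters
```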

This paper shows how a large class of interprocedural dataflow-analysis problems can be solved precisely in polynomial time. The only restrictions are that the set of dataflow facts is a finite set, and that the dataflow functions distribute over the confluence operator (either union or intersection). This class of problems includes, but is not limited to, the classical separable problems (also known as "gen/kill" or "bit-vector" problems), e.g., reaching definitions, available expressions, and live variables. In addition, the class of problems that our techniques handle includes many non-separable problems, including truly-live variables, copy-constant propagation, and possibly-uninitialized variables.
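
The separable "gen/kill" case mentioned above can be made concrete with reaching definitions, whose transfer functions OUT = GEN ∪ (IN - KILL) distribute over the union confluence operator. The tiny three-block CFG below is a made-up example, not taken from the paper.

```python
# Reaching definitions as a separable ("gen/kill") dataflow problem.
# Hypothetical CFG: B1 -> B2 -> B3 and B1 -> B3.

blocks = ["B1", "B2", "B3"]
preds = {"B1": [], "B2": ["B1"], "B3": ["B1", "B2"]}
gen   = {"B1": {"d1"}, "B2": {"d2"}, "B3": set()}
kill  = {"B1": set(), "B2": {"d1"}, "B3": set()}   # d2 redefines the variable of d1

def reaching_definitions():
    out = {b: set() for b in blocks}
    changed = True
    while changed:                      # iterate OUT = GEN | (IN - KILL) to a fixpoint
        changed = False
        for b in blocks:
            in_b = set().union(*(out[p] for p in preds[b])) if preds[b] else set()
            new_out = gen[b] | (in_b - kill[b])
            if new_out != out[b]:
                out[b], changed = new_out, True
    return out

print(reaching_definitions())   # {'B1': {'d1'}, 'B2': {'d2'}, 'B3': {'d1', 'd2'}}
```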

Programming languages designed specifically for multi-agent systems represent a new programming paradigm that has gained popularity over recent years, with some multi-agent programming languages being used in increasingly sophisticated applications, often in critical areas. To support this, we have developed a set of tools to allow the use of model-checking techniques in the verification of systems directly implemented in one particular language called AgentSpeak. The success of model checking as a verification technique for large software systems is dependent partly on its use in combination with various state-space reduction techniques, an important example of which is property-based slicing. This article introduces an algorithm for property-based slicing of AgentSpeak multi-agent systems. The algorithm uses literal dependence graphs, as developed for slicing logic programs, and generates a program slice whose state space is stuttering-equivalent to that of the original program; the slicing criterion is a property in a logic with LTL operators and (shallow) BDI modalities. In addition to showing correctness and characterizing the complexity of the slicing algorithm, we apply it to an AgentSpeak program based on autonomous planetary exploration rovers, and we discuss how slicing reduces the model-checking state space. The experimental results show a significant reduction in the state space required for model checking that agent, thus indicating that this approach can have an important impact on the future practicality of agent verification.

A highly efficient lightweight forward static slicing method is introduced. The method is implemented as a tool on top of srcML, an XML representation of source code. The approach does not compute the program dependence graph; instead, dependency information is computed as needed while computing the slice on a variable. The result is a list of line numbers, dependent variables, aliases, and function calls that are part of the slice for a given variable. The tool produces the slice in this manner for all variables in a given system. The approach is highly scalable and can generate the slices for all variables of the Linux kernel in less than 13 minutes. Benchmark results are compared with the CodeSurfer slicing tool, and the approach compares well with regard to the accuracy of slices.
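
The lightweight, on-demand flavor of the approach can be sketched as a single in-order scan that never builds a dependence graph. The statement table below is hypothetical, and the real tool additionally tracks aliases and function calls over srcML.

```python
# Sketch of lightweight forward slicing: no dependence graph is built; lines are
# scanned in order and a line joins the slice when it uses an influenced variable.

# statements: (line, defined_var, used_vars)
stmts = [
    (1, "n", set()),
    (2, "i", {"n"}),
    (3, "s", {"i"}),
    (4, "t", set()),        # unrelated computation
    (5, None, {"s", "t"}),  # print(s + t)
]

def forward_slice(var):
    influenced = {var}
    lines = []
    for line, defined, used in stmts:
        if used & influenced or defined == var:
            lines.append(line)
            if defined:
                influenced.add(defined)
    return lines

print(forward_slice("n"))   # [1, 2, 3, 5] -- line 4 does not depend on n
```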

A dynamic program slice is the part of a program that affects the computation of a variable of interest during program execution on a specific program input. Dynamic slices are usually smaller than static slices and are more useful in interactive applications such as program debugging and testing. The understanding and debugging of multithreaded and distributed programs are much harder compared to those of sequential programs. The nondeterministic nature of multithreaded programs, the lack of global states, unsynchronized interactions among processes, multiple threads of control and a dynamically varying number of processes are some of the reasons for this difficulty. Different types of dynamic program slices, together with algorithms to compute them, have been proposed in the literature. Most of the existing algorithms for finding slices of distributed programs use trace files and are not efficient in terms of time and space complexity. Some existing algorithms use a dependency graph and traverse the graph when slices are requested, resulting in high response time. This paper proposes an efficient algorithm for distributed programs. It uses a control dependence graph as an intermediate representation and generates dynamic slices with fast response times.

Testing configurable systems, which are becoming prevalent, is expensive due to the large number of configurations and test cases. Existing approaches reduce this expense by selecting or prioritizing configurations. However, these approaches redundantly run the full test suite for the selected configurations. To address this redundancy, we propose a test case selection approach by analyzing the impact of configuration changes with static program slicing. Given an existing test suite T used for testing a system S under a configuration C, our approach decides for each t in T if t has to be used for testing S under a different configuration C'. We have evaluated our approach on a large industrial system within ABB with promising results.
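
The selection decision can be sketched as a simple intersection test: re-run a test only if the statements it covers intersect the slice impacted by the changed configuration options. The coverage data and impact slice below are hypothetical, not from the ABB study.

```python
# Sketch of slicing-based test selection for a configuration change.
coverage = {               # test -> statements it exercises (hypothetical)
    "t_login":  {"s1", "s2"},
    "t_report": {"s3", "s4"},
    "t_export": {"s4", "s5"},
}
impact_slice = {"s4", "s5"}   # statements affected by the changed option (from static slicing)

# A test must be re-run under the new configuration only if it touches the impact slice.
selected = [t for t, covered in coverage.items() if covered & impact_slice]
print(selected)               # ['t_report', 't_export']
```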

Feature models (FMs) are a popular formalism for describing the commonality and variability of software product lines (SPLs) in terms of features. As SPL development increasingly involves numerous large FMs, scalable modular techniques are required to manage their complexity. In this paper, we present a novel slicing technique that produces a projection of an FM, including constraints. The slicing allows SPL practitioners to find semantically meaningful decompositions of FMs and has been integrated into the FAMILIAR language.
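
Conceptually, slicing a feature model is a projection: the valid configurations are restricted to a chosen subset of features by existentially eliminating the rest. The brute-force sketch below enumerates configurations of a made-up four-feature model; the operator described above works on the model and its constraints rather than by enumeration as here.

```python
# Sketch of FM slicing as projection onto a subset of features.
# The tiny feature model (features and constraints) is made up for illustration.
from itertools import product

features = ["Base", "GUI", "Web", "SSL"]

def valid(cfg):                       # constraints of the hypothetical model
    return (cfg["Base"]                               # root is mandatory
            and (not cfg["Web"] or cfg["SSL"])        # Web requires SSL
            and (cfg["GUI"] or cfg["Web"]))           # at least one front end

configs = [dict(zip(features, bits)) for bits in product([False, True], repeat=len(features))]
valid_configs = [c for c in configs if valid(c)]

keep = ["GUI", "Web"]                 # slicing criterion: project onto these features
projection = {tuple(c[f] for f in keep) for c in valid_configs}
print(sorted(projection))             # [(False, True), (True, False), (True, True)]
```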

Program slicing is a well-understood concept in the imperative paradigm, but so far there has been little work on program slicing in the context of functional languages. This paper describes a program slicing technique for Haskell that takes tuple-returning functions apart (called splitting); the converse of this is also described (called merging). The slicer is implemented as a transformation for the Haskell Refactorer, HaRe. Splitting functions is a useful transformation to allow the programmer to extract a particular subset of the ...

Business rules have attained a major role in the development of software systems for businesses. They influence business behavior through the decisions enforced upon a wide range of aspects. As business requirements are subject to frequent amendments, new business rules have to be evolved to replace the previously formed ones. Business analysts count on assistance from IS developers for this task. As business rules are set and owned by the business, provisions must be made for business rule management by the business analysts directly. In order to get a clear picture of the current policies and terms of the business, business users rely on documentation as a valuable tool. Unfortunately, documents are not updated in step with modifications to the source code. Moreover, as the software becomes larger, documents become increasingly large and hence difficult to understand and maintain. Thus they turn out to be a non-useful resource in such conditions. With a motive to resolve the above issues, several tools for extracting rules from business program code have been developed. In this paper, a robust architecture for an extraction engine to isolate rules from the base source code is proposed. The suggested model involves slicing of code segments, which results in easy identification of domain variables, which in turn results in extraction of business rules, validating them, and exchanging them with newly formulated business policies. This is a new research dimension which paves the way for an efficient and quick business rule management strategy.

Legacy systems are valuable assets for organisations. They continuously evolve with newly emerging technologies in a rapidly changing business environment. ICENI provided an excellent Grid middleware framework for developing Grid-based systems, creating an opportunity for legacy systems to evolve in the Grid environment. In this paper, we propose a component-based reengineering approach which applies software clustering techniques and program slicing techniques to recover components from legacy systems. It supports component encapsulation with JNI and component integration with CXML. The resulting components with core legacy code function in the Grid environment.

Preface: Aspect-oriented programming is a paradigm in software engineering and programming languages that promises better support for separation of concerns. The third Foundations of Aspect-Oriented Languages (FOAL) workshop was held at the Third International Conference on Aspect-Oriented Software Development in Lancaster, UK, on March 23, 2004. This workshop was designed to be a forum for research in formal foundations of aspect-oriented programming languages. The call for papers announced the areas of interest for FOAL as including, but not limited to: semantics of aspect-oriented languages, specification and verification for such languages, type systems, static analysis, theory of testing, theory of aspect composition, and theory of aspect translation (compilation) and rewriting. The call for papers welcomed all theoretical and foundational studies of aspect-oriented languages.

Program slicing is a well-known family of techniques used to identify code fragments which depend on, or are depended upon by, specific program entities. They are particularly useful in the areas of reverse engineering, program understanding, testing and software maintenance. Most slicing methods, usually oriented towards the imperative or object paradigms, are based on some sort of graph structure representing

Software development is a complex and multidimensional task, and it often faces serious problems in meeting the key constraints of cost and time. Big projects which are well planned and analyzed can end up in disaster because of mismanagement in cost estimation and time allocation. Program slicing has unique importance in addressing the issues of cost and time. It is a broadly applicable static program analysis technique which provides a mechanism to analyze and understand program behavior for further restructuring and refinement. In this paper, the authors investigate the relationship between program slicing and the software development phases on the basis of empirical studies conducted in the past, and also establish how program slicing can be helpful in making a software system cost- and time-effective.

Program slicing is a popular but imprecise technique for identifying which parts of a program affect or are affected by a particular value. A major reason for this imprecision is that slicing reports all program statements that may belong to a slice, regardless of how relevant to the target value they are. To address this problem, we introduce quantitative slicing (q-slicing), a novel approach that quantifies the relevance of each statement in a slice.
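
The abstract does not spell out how q-slicing computes its scores, so the snippet below is only a placeholder heuristic, not the paper's technique: each statement in a hypothetical slice is weighted by its dependence distance from the criterion, so nearer statements rank as more relevant.

```python
# Illustrative relevance scoring for statements in a slice (NOT the paper's metric):
# weight each statement by 1 / (1 + dependence distance from the criterion).
from collections import deque

deps = {                     # statement -> statements it depends on (hypothetical PDG)
    "crit": ["s1", "s2"],
    "s1": ["s3"],
    "s2": [],
    "s3": [],
}

def relevance_scores(criterion):
    dist = {criterion: 0}
    queue = deque([criterion])
    while queue:             # BFS gives the dependence distance of every sliced statement
        n = queue.popleft()
        for m in deps.get(n, ()):
            if m not in dist:
                dist[m] = dist[n] + 1
                queue.append(m)
    return {n: 1.0 / (1 + d) for n, d in dist.items()}

print(relevance_scores("crit"))   # {'crit': 1.0, 's1': 0.5, 's2': 0.5, 's3': 0.33...}
```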

We propose static program analysis techniques for identifying the impact of relational database schema changes upon object-oriented applications. We use dataflow analysis to extract all possible database interactions that an application may make. We then use this information to predict the effects of schema changes. We evaluate our approach with a case study of a commercially available content management system, where we investigated 62 versions of between 70k and 127k LoC, with a schema size of up to 101 tables and 568 stored procedures. We demonstrate that the program analysis must be more precise, in terms of context-sensitivity, than related work. However, increasing the precision of this analysis increases its computational cost. We use program slicing to reduce the size of the program that needs to be analysed. Using this approach, we are able to analyse the case study in under 2 minutes on a standard desktop machine, with no false negatives and a low level of false positives.

Atropos is a software tool for visualising concurrent program executions intended to help students debug concurrent programs and learn how concurrency works. Atropos supports a slicing debugging strategy by providing a visualisation of dynamic dependence graphs that can be explored to trace the chain of events backwards from a symptom to its cause. In this paper, we present the reasoning behind the design of Atropos and summarise how we evaluated it with students.

Developing software product-lines based on a set of shared components is a proven tactic to enhance reuse, quality, and time to market in producing a portfolio of products. Large-scale product families face rapidly increasing maintenance challenges as their evolution can happen both as a result of collective domain engineering activities, and as a result of product-specific developments. To make informed decisions about prospective modifications, developers need to estimate what other sections of the system will be affected and need attention, which is known as change impact analysis.