Environmental Science and Computer Programming Research Papers

Over the last decade many techniques and tools for software clone detection have been proposed. In this paper, we provide a qualitative comparison and evaluation of the current state-of-the-art in clone detection techniques and tools, and organize the large amount of information into a coherent conceptual framework. We begin with background concepts, a generic clone detection process and an overall taxonomy of current techniques and tools. We then classify, compare and evaluate the techniques and tools in two different dimensions. First, we classify and compare approaches based on a number of facets, each of which has a set of (possibly overlapping) attributes. Second, we qualitatively evaluate the classified techniques and tools with respect to a taxonomy of editing scenarios designed to model the creation of Type-1, Type-2, Type-3 and Type-4 clones. Finally, we provide examples of how one might use the results of this study to choose the most appropriate clone detection tool or technique in the context of a particular set of goals and constraints. The primary contributions of this paper are: (1) a schema for classifying clone detection techniques and tools and a classification of current clone detectors based on this schema, and (2) a taxonomy of editing scenarios that produce different clone types and a qualitative evaluation of current clone detectors based on this taxonomy.

In this paper we offer a topology-driven ('natural') definition of subclusters of an undirected graph or network. In addition we find rules for assigning unique roles (from a small set of possible roles) to each node in the network. Our approach is based on the use of a 'smooth' index for well-connectedness (eigenvector centrality) which is computed for each node. This index, viewed as a height function, then guides the decomposition of the graph into regions (associated with local peaks of the index), and borders (valleys) between regions. We propose and compare two rules for assigning nodes to regions. We illustrate our approach with simple test graphs, and also by applying it to snapshots of the Gnutella peer-to-peer network from late 2001. This latter analysis suggests that our method implies novel ways of interpreting the notion of well-connectedness for a graph, as these snapshots represent very well connected networks. We argue that our approach is well suited for analyzing computer networks, towards the goal of enhancing their security.
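The height-function idea above can be sketched in a few lines of Python. The graph, the tolerance, and the steepest-ascent climbing rule below are our own illustration, not the paper's exact assignment rules:

```python
# Sketch: eigenvector centrality acts as a height function; each node
# climbs to a local centrality peak, and nodes sharing a peak form one
# region. Helper names and the tie tolerance are our own assumptions.

def eigenvector_centrality(adj, iters=200):
    """Power iteration on an undirected graph given as {node: {neighbors}}."""
    x = {v: 1.0 for v in adj}
    for _ in range(iters):
        y = {v: sum(x[u] for u in adj[v]) for v in adj}
        norm = max(y.values()) or 1.0
        x = {v: y[v] / norm for v in adj}
    return x

def regions(adj):
    """Assign each node to the local peak it reaches by steepest ascent."""
    c = eigenvector_centrality(adj)
    def climb(v):
        while True:
            best = max(adj[v], key=lambda u: c[u])
            if c[best] <= c[v] + 1e-9:   # no strictly higher neighbor: a peak
                return v
            v = best
    return {v: climb(v) for v in adj}

# Two triangles joined by a bridge edge 2-3: two natural regions.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
r = regions(adj)
```

Each triangle's nodes climb to that triangle's bridge endpoint, so the two triangles come out as two distinct regions.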

This paper rigorously introduces the concept of model-based mutation testing (MBMT) and positions it in the landscape of mutation testing. Two elementary mutation operators, insertion and omission, are exemplarily applied to a hierarchy of graph-based models of increasing expressive power including directed graphs, event sequence graphs, finite-state machines and statecharts. Test cases generated based on the mutated models (mutants) are used to determine not only whether each mutant can be killed but also whether there are any faults in the corresponding system under consideration (SUC) developed based on the original model. Novelties of our approach are: (1) evaluation of the fault detection capability (in terms of revealing faults in the SUC) of test sets generated based on the mutated models, and (2) superseding of the great variety of existing mutation operators by iterations and combinations of the two proposed elementary operators. Three case studies were conducted on industrial and commercial real-life systems to demonstrate the feasibility of using the proposed MBMT approach in detecting faults in SUC, and to analyze its characteristic features. Our experimental data suggest that test sets generated based on the mutated models created by insertion operators are more effective in revealing faults in SUC than those generated by omission operators. Worth noting is that test sets following the MBMT approach were able to detect faults in the systems that were tested by manufacturers and independent testing organizations before they were released.
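As a rough illustration of the two elementary operators on the simplest model class mentioned above (directed graphs), the following sketch generates all first-order mutants; the representation and function names are ours, not the paper's:

```python
# Illustrative sketch: a model is a directed graph given as a set of
# edges; insertion and omission each create first-order mutants by
# adding or dropping a single edge.

def omission_mutants(nodes, edges):
    """Each mutant drops exactly one existing edge."""
    return [edges - {e} for e in sorted(edges)]

def insertion_mutants(nodes, edges):
    """Each mutant adds exactly one edge absent from the model."""
    absent = [(a, b) for a in nodes for b in nodes
              if a != b and (a, b) not in edges]
    return [edges | {e} for e in absent]

# A tiny three-state model: start -> work -> end.
nodes = ["start", "work", "end"]
edges = {("start", "work"), ("work", "end")}

om = omission_mutants(nodes, edges)
ins = insertion_mutants(nodes, edges)
```

Iterating and combining these two operators over the mutant set would then reproduce higher-order mutations, which is the paper's point about superseding specialized operators.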

We propose a new algorithm for partial redundancy elimination based on the new concepts of safe partial availability and safe partial anticipability. These new concepts are derived by the integration of the notion of safety into the definitions of partial availability and partial anticipability. The algorithm works on flow graphs whose nodes are basic blocks. It is both computationally and lifetime optimal and requires four unidirectional analyses. The most important feature of the algorithm is its simplicity; the algorithm evolves naturally from the new concept of safe partial availability.
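Availability is the dataflow notion the paper's concepts refine. As a minimal illustration, here is plain available-expressions analysis (a standard forward must-analysis, not the paper's "safe partial availability") iterated to a fixed point over a flow graph of basic blocks:

```python
# Standard available-expressions dataflow on a CFG of basic blocks.
# IN[b] = intersection over predecessors p of OUT[p];
# OUT[b] = gen[b] | (IN[b] - kill[b]).

def available(blocks, preds, gen, kill, entry):
    universe = set().union(*gen.values())
    out = {b: universe for b in blocks}   # optimistic init for must-analysis
    out[entry] = gen[entry]
    changed = True
    while changed:
        changed = False
        for b in blocks:
            if b == entry:
                continue
            inn = set(universe)
            for p in preds[b]:
                inn &= out[p]
            new = gen[b] | (inn - kill[b])
            if new != out[b]:
                out[b] = new
                changed = True
    return out

# Diamond CFG: "a+b" is computed on both branches, so it is available
# at the join and recomputing it there would be (fully) redundant.
blocks = ["entry", "then", "else", "join"]
preds = {"entry": [], "then": ["entry"], "else": ["entry"], "join": ["then", "else"]}
gen = {"entry": set(), "then": {"a+b"}, "else": {"a+b"}, "join": set()}
kill = {b: set() for b in blocks}
out = available(blocks, preds, gen, kill, "entry")
```

Partial availability weakens the intersection to a union over some path; the paper's contribution is restricting that weakening to program points where insertion is safe.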

Programmers working on large software systems are faced with an extremely complex, information-rich environment. To help navigate through this, modern development environments allow flexible, multi-window browsing and exploration of the source code. Our focus in this paper is on pretty-printing algorithms that can display source code in useful, appealing ways in a variety of styles. Our algorithm is flexible, stable, and peephole-efficient. It is flexible in that it is capable of screen-optimized layouts that support source code visualization techniques such as fisheye views. The algorithm is peephole-efficient, in that it performs work proportional to the size of the visible window and not the size of the entire file. Finally, the algorithm is stable, in that the rendered view is identical to that which would be produced by formatting the entire file. This work has two benefits. First, it enables rendering of source code in multiple fonts and font sizes at interactive speeds. Second, it allows the use of powerful (but algorithmically more complex) visualization techniques (such as fisheye views), again at interactive speeds.
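The stable/peephole combination can be illustrated with a toy formatter of our own (far simpler than the paper's algorithm): if indentation depends only on a small per-line summary that is cheap to precompute, the visible window can be rendered without formatting the rest of the file, yet matches what whole-file formatting would produce.

```python
# Toy illustration: the per-line summary is the brace depth at the
# start of each line; rendering a window then touches only its lines.

def depth_summaries(lines):
    """Cheap O(file) pass: brace depth at the start of each line."""
    depths, d = [], 0
    for line in lines:
        depths.append(d)
        d += line.count("{") - line.count("}")
    return depths

def render_window(lines, depths, first, last):
    """Format only lines [first, last); work is proportional to the window."""
    out = []
    for i in range(first, last):
        body = lines[i].strip()
        d = depths[i] - (1 if body.startswith("}") else 0)
        out.append("    " * max(d, 0) + body)
    return out

src = ["int f() {", "if (x) {", "y();", "}", "return y;", "}"]
depths = depth_summaries(src)
window = render_window(src, depths, 2, 5)
full = render_window(src, depths, 0, len(src))
```

Stability here means `window` equals the same slice of `full`; the paper's contribution is achieving this for realistic layout algorithms rather than this trivial indentation rule.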

The four key objective properties that a system must exhibit in order to qualify as "autonomic" are now well accepted: self-configuring, self-healing, self-protecting, and self-optimizing, together with the attribute properties, viz. self-aware, environment-aware, self-monitoring and self-adjusting. This paper describes the need for next generation system software architectures, where components are agents, rather than objects masquerading as agents, and where support is provided for self-* properties (both existing self-chop and emerging self-* properties). These are discussed as exhibited in NASA missions, and in particular with reference to a NASA concept mission, ANTS, which is illustrative of future NASA exploration missions based on the technology of intelligent swarms.

Case-based reasoning (CBR) is a paradigm for combining problem solving and learning that has become one of the most successful applied subfields of AI in recent years. Now that CBR has become a mature and established technology, two necessities have become critical: the availability of tools to build CBR systems, and the accumulated practical experience of applying CBR techniques to real-world problems. In this paper we present jCOLIBRI, an object-oriented framework in Java for building CBR systems that greatly benefits from the reuse of previously developed CBR systems.

The paper exemplifies programming in a wide spectrum language by presenting styles ranging from non-operative specifications, using abstract types and tools from predicate logic as well as set theory, over recursive functions, to procedural programs with variables. Besides a number of basic types, we develop an interpreter for parts of the language itself, an algorithm for applying transformation rules to program representations, a text editor, and a simulation of Backus' functional programming language.

Generic programming is an effective methodology for developing reusable software libraries. Many programming languages provide generics and have features for describing interfaces, but none completely support the idioms used in generic programming. To address this need we developed the language G. The central feature of G is the concept, a mechanism for organizing constraints on generics that is inspired by

Multiple dispatch–the selection of a function to be invoked based on the dynamic type of two or more arguments–is a solution to several classical problems in object-oriented programming. Open multi-methods generalize multiple dispatch towards open-class extensions, which improve separation of concerns and provisions for retroactive design. We present the rationale, design, implementation, performance, programming guidelines, and experiences of working with a language feature, called open multi-methods, for C++. Our open multi-methods support both repeated and virtual inheritance. Our call resolution rules generalize both virtual function dispatch and overload resolution semantics. After using all information from argument types, these rules can resolve further ambiguities by using covariant return types. Care was taken to integrate open multi-methods with existing C++ language features and rules. We describe a model implementation and compare its performance and space requirements to existing open multi-method extensions and work-around techniques for C++. Compared to these techniques, our approach is simpler to use, catches more user mistakes, resolves more ambiguities through link-time analysis, is comparable in memory usage, and runs significantly faster. In particular, the runtime cost of calling an open multi-method is constant and less than the cost of a double dispatch (two virtual function calls). Finally, we provide a sketch of a design for open multi-methods in the presence of dynamic loading and linking of libraries.
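The core mechanism, choosing an overload by the dynamic types of several arguments, can be sketched in Python (a dynamic emulation of our own, not the paper's C++ implementation; the crude "most derived wins" resolution rule below is an assumption, much weaker than the paper's rules):

```python
# Multiple dispatch sketch: among the overloads applicable to the
# dynamic argument types, pick the one with the most derived
# parameter types (measured here, crudely, by total MRO length).

class MultiMethod:
    def __init__(self):
        self.overloads = []          # list of ((TypeA, TypeB, ...), fn)

    def register(self, *types):
        def wrap(fn):
            self.overloads.append((types, fn))
            return fn
        return wrap

    def __call__(self, *args):
        viable = [(ts, fn) for ts, fn in self.overloads
                  if len(ts) == len(args)
                  and all(isinstance(a, t) for a, t in zip(args, ts))]
        if not viable:
            raise TypeError("no applicable overload")
        best = max(viable, key=lambda o: sum(len(t.__mro__) for t in o[0]))
        return best[1](*args)

class Shape: pass
class Circle(Shape): pass

intersect = MultiMethod()

@intersect.register(Shape, Shape)
def _(a, b):
    return "generic"

@intersect.register(Circle, Circle)
def _(a, b):
    return "circle-circle"
```

A call `intersect(Circle(), Circle())` selects the circle-circle overload, while `intersect(Circle(), Shape())` falls back to the generic one; the paper's C++ feature performs the analogous selection with constant runtime cost.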

Understanding software design practice is critical to understanding modern information systems development. New developments in empirical software engineering, information systems design science and the interdisciplinary design literature combined with recent advances in process theory and testability have created a situation ripe for innovation. Consequently, this paper utilizes these breakthroughs to formulate a process theory of software design practice: Sensemaking-Coevolution-Implementation Theory explains how complex software systems are created by collocated software development teams in organizations. It posits that an independent agent (design team) creates a software system by alternating between three activities: organizing their perceptions about the context, mutually refining their understandings of the context and design space, and manifesting their understanding of the design space in a technological artifact. This theory development paper defines and illustrates Sensemaking-Coevolution-Implementation Theory, grounds its concepts and relationships in existing literature, conceptually evaluates the theory and situates it in the broader context of information systems development.

There is a hint that this book is going to be fundamentally flawed in the first paragraph of its preface: ... As a result of work in structured programming by Dijkstra, Hoare, Parnas, Gries, Wirth, and many others, we have systematic procedures for program design. As a result of work in functional and denotational semantics by Turing, Kleene, Scott, and others we have systematic procedures for proving program correctness.

Four different programming logics are compared by example. Three are versions of Martin-Löf type theory and the fourth is a version of Aczel's logical theory of constructions. They differ in several respects. For example: what is the notion of specification? Are there partial or just total objects? Is general recursion allowed or only primitive recursion of higher type? Is the logic external or integrated? The example is the proof of correctness of a normalization function for conditional expressions.
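The running example, a normalization function for conditional expressions, is the classic one that lifts nested conditions out of the test position. Here is that function in Python (our rendering; the tuple representation is an assumption):

```python
# Conditional expressions are atoms (strings) or tuples
# ("if", cond, then, else). Normalization rewrites
#   (if (if a b c) t f)  ==>  (if a (if b t f) (if c t f))
# until every test position holds an atom.

def norm(e):
    if isinstance(e, str):                      # atomic expression
        return e
    _, c, t, f = e
    if isinstance(c, str):                      # test already atomic
        return ("if", c, norm(t), norm(f))
    _, a, b, d = c                              # nested test: lift it out
    return norm(("if", a, ("if", b, t, f), ("if", d, t, f)))

e = ("if", ("if", "a", "b", "c"), "d", "e")
n = norm(e)
```

Proving this function correct and terminating (the nested rewrite duplicates subterms, so termination is not syntactically obvious) is exactly the kind of specification each of the four logics must express in its own terms.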

Providing runtime information about generic types, that is, reifying generics, is a challenging problem studied in several research papers in recent years. This problem is not tackled in the current version of the Java programming language (Java 6), which consequently suffers from serious safety and coherence problems. The quest for finding effective and efficient solutions to this problem is still open, and is further complicated by the new mechanism of wildcards introduced in Java J2SE 5.0: its reification aspects are currently unexplored and pose serious semantics and implementation issues.

A key ingredient in system and organization modeling is modeling business processes that involve the collaborative participation of different teams within and outside the organization. Recently, the use of the Unified Modeling Language (UML) for collaborative business modeling has been increasing, thanks to its human-friendly visual representation of a rich set of structural and behavioral views, albeit with unclear semantics. In the meantime, the use of the Web Ontology Language (OWL) has also been emerging, thanks to its clearly-defined semantics, hence being amenable to automatic analysis and reasoning, although it is less human friendly than, and also perhaps not as rich as, the UML notation, especially concerning processes, or activities. In this paper, we view the UML and the OWL as being complementary to each other, and exploit their relative strengths. We provide a mapping between the two, through a set of mapping rules, which allow for the capture of UML activity diagrams in an OWL ontology. This mapping, which results in a formalization of collaborative processes, also sets a basis for subsequent construction of executable models using the Colored Petri Nets (CPN) formalism. For this purpose, we also provide appropriate mappings from OWL-based ontological elements into CPN elements. A case study of a mortgage granting system is described, along with the potential benefits and limitations of our proposal. The modeling of collaboration processes encompasses the conceptual modeling of a particular domain along with simulation of executable business process models involved in that domain. A key factor in this process is that, besides the models themselves, their inherent semantics should also be shared in order to achieve a common understanding between the collaborating participants. AMENITIES [25] is a methodological framework for the study and development of collaborative systems which extends and makes use of the Unified Modeling Language (UML).
In this framework, different models are subsequently used by stakeholders (e.g., system architects, system analysts, users, testers, programmers, etc.) in order to model and analyze the main characteristics of these kinds of systems (e.g. system structure and behavior to be supported). These models range from those which provide a structural view of a collaborative system (e.g., class and object diagrams) to those which offer a behavioral view (e.g., activity diagrams and state machines).

The rising interest in Java for High Performance Computing (HPC) is based on the appealing features of this language for programming multi-core cluster architectures, particularly the built-in networking and multithreading support, and the continuous increase in Java Virtual Machine (JVM) performance. However, its adoption in this area is being delayed by the lack of analysis of the existing programming options in Java for HPC, the lack of thorough and up-to-date evaluations of their performance, and the unawareness of current research projects in this field, whose solutions are needed in order to boost the uptake of Java in HPC.

More and more aspects of concurrency and concurrent programming are becoming part of mainstream programming and software engineering, due to several factors such as the widespread availability of multi-core/parallel architectures and Internet-based systems. This leads to the extension of mainstream object-oriented programming languages and platforms–Java is a main example–with libraries providing fine-grained mechanisms and idioms to support concurrent programming, in

The classic readers-writers problem has been extensively studied. This holds to a lesser degree for the reentrant version, where it is allowed to nest locking actions. Such nesting is useful when a library is created with various procedures each starting and ending with a lock operation. Allowing nesting makes it possible for these procedures to call each other.
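The nesting scenario described above can be made concrete with a minimal sketch of our own (not the paper's algorithm, and with writers omitted entirely): each thread tracks its own nesting depth, so a library procedure that takes the read lock may safely call another procedure that does too.

```python
# Minimal reentrant read-lock sketch: per-thread nesting depth means a
# nested acquire by the same thread never blocks on itself. Writers
# and fairness, the hard parts of the real problem, are omitted.

import threading

class ReentrantReadLock:
    def __init__(self):
        self._lock = threading.Lock()
        self._depth = {}                 # thread id -> read nesting depth

    def acquire_read(self):
        me = threading.get_ident()
        with self._lock:
            self._depth[me] = self._depth.get(me, 0) + 1

    def release_read(self):
        me = threading.get_ident()
        with self._lock:
            self._depth[me] -= 1
            if self._depth[me] == 0:
                del self._depth[me]

    def readers(self):
        """Number of distinct threads currently holding the read lock."""
        with self._lock:
            return len(self._depth)

lock = ReentrantReadLock()

def inner():                 # library procedure that locks on its own
    lock.acquire_read()
    try:
        return lock.readers()
    finally:
        lock.release_read()

def outer():                 # calls inner() while already holding the lock
    lock.acquire_read()
    try:
        return inner()       # same thread, depth 2: no self-deadlock
    finally:
        lock.release_read()
```

With a non-reentrant lock, `outer` calling `inner` could deadlock (or require every caller to know which procedures lock internally); the per-thread count removes that coupling.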

This article demonstrates a method for composing a programming language by combining action-semantics modules. Each module is defined separately, and then a programming-language module is defined by combining existing modules. This method enables the language designer to gradually develop a language by defining, selecting and combining suitable modules. The resulting modular structure is substantially different from that previously employed in action-semantic descriptions.

The notion of Abstract Data Type (ADT) has served as a foundation model for structured and object oriented programming for some thirty years. The current trend in software engineering toward component based systems requires a foundation model as well. The most basic inherent property of an ADT, i.e., that it provides a set of operations, subverts some highly desirable properties in emerging formal models for components that are based on the object oriented paradigm.

An Object Grammar is a variation on traditional BNF grammars, where the notation is extended to support declarative bidirectional mappings between text and object graphs. The two directions for interpreting Object Grammars are parsing and formatting. Parsing transforms text into an object graph by recognizing syntactic features and creating the corresponding object structure. In the reverse direction, formatting recognizes object graph features and generates an appropriate textual presentation. The key to Object Grammars is the expressive power of the mapping, which decouples the syntactic structure from the graph structure. To handle graphs, Object Grammars support declarative annotations for resolving textual names that refer to arbitrary objects in the graph structure. Predicates on the semantic structure provide additional control over the mapping. Furthermore, Object Grammars are compositional so that languages may be defined in a modular fashion. We have implemented our approach to Object Grammars as one of the foundations of the Ensō system and illustrate the utility of our approach by showing how it enables definition and composition of domain-specific languages (DSLs).

This paper brings together agent-oriented programming, organisation-oriented programming and environment-oriented programming, all of which are programming paradigms that emerged out of research in the area of multi-agent systems. In putting together a programming model and concrete platform called JaCaMo which integrates important results and technologies in all those research directions, we show in this paper that with the combined paradigm, that we prefer to call “multi-agent oriented programming” ...

Current software development often relies on non-trivial coordination logic for combining autonomous services, eventually running on different platforms. As a rule, however, such a coordination layer is tightly woven into the application at the source code level. Therefore, its precise identification becomes a major methodological (and technical) problem and a challenge to any program understanding or refactoring process.

We define a class of operations called pseudo read-modify-write (PRMW) operations, and show that nontrivial shared data objects with such operations can be implemented in a bounded, wait-free manner from atomic registers. A PRMW operation is similar to a "true" read-modify-write (RMW) operation in that it modifies the value of a shared variable based upon the original value of that variable. However, unlike an RMW operation, a PRMW operation does not return the value of the variable that it modifies. We consider a class of shared data objects that can either be read, written, or modified by an associative, commutative PRMW operation, and show that any object in this class can be implemented without waiting from atomic registers. The implementations that we present are polynomial in both space and time and thus are an improvement over previously published ones, all of which have unbounded space complexity.

This paper reports on the Simulink/Stateflow based development of the on-board equipment of the Metrô Rio Automatic Train Protection system. Particular focus is given to the strategies followed to address formal weaknesses and certification issues of the adopted tool-suite. On the development side, constraints on the Simulink/Stateflow semantics have been introduced and design practices have been adopted to gradually achieve a formal model of the system. On the verification side, a two-phase approach based on model-based testing and abstract interpretation has been followed to enforce functional correctness and runtime error freedom. Formal verification has been experimented as a side activity of the project.

In this paper, we propose a documental approach to the development of graphical adventure videogames. This approach is oriented to the production and maintenance of adventure videogames using the game's storyboard as the key development element. The videogame storyboard is marked up with a suitable domain-specific descriptive markup language, from which the different art assets needed are referred, and then the final executable videogame itself is automatically produced by processing the marked storyboard with a suitable processor for such a language. This document-oriented approach opens new authoring possibilities in videogame development and allows a rational collaboration between the different communities that participate in the development process: game writers, artists and programmers. We have implemented the approach in the context of the project, by defining a suitable markup language for the storyboards (the language) and by building a suitable processor for this language (the engine).

Highlights: • TCTL model checker for (dense-)timed Kripke structures in a pointwise semantics. • Reduce TCTL model checking from continuous semantics to pointwise semantics. • Sound and complete TCTL model checker for time-robust Real-Time Maude models.

Previous research of our own has shown that by avoiding certain bad specification practices, or WSDL anti-patterns, contract-first Web Service descriptions expressed in WSDL can be greatly improved in terms of understandability and retrievability. The former means the capability of a human discoverer to effectively reason about a Web Service's functionality just by inspecting its associated WSDL description. The latter means correctly retrieving a relevant Web Service by a syntactic service registry upon a meaningful user's query. However, code-first service construction dominates in the industry due to its simplicity. This paper proposes an approach to avoid WSDL anti-patterns in code-first Web Services. We also evaluate the approach in terms of service understandability and retrievability, discuss the experimental results in depth, and delineate some guidelines to help code-first Web Service developers deal with the trade-offs that arise between these two dimensions. Our approach allows services to be more understandable, due to anti-pattern removal, and more retrievable as measured by classical Information Retrieval metrics.

ASDL is a metalanguage for specifying integrated programming environments as specializations and extensions of a language-independent environment kernel. The language combines syntax-directed translation schemes with an object-oriented type system. The type system supports data abstraction and multiple inheritance, thereby encouraging extensibility, combination and reusability. Translation schemes are identified with generic manipulation operations associated with object types, allowing the convenient and concise definition of (a) structural mappings between object types and (b) message propagation along the structure of objects. Type and scheme definitions are compiled into executable code that is linked to a language-independent environment kernel.
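The combination described above — generic manipulation operations attached to object types, with specialized schemes overriding them — can be illustrated in modern object-oriented terms. The following is a loose Python sketch of the idea, not ASDL itself: a default `translate` operation propagates structurally over an object's children, and a subtype supplies its own scheme (the class and method names are illustrative):

```python
class Node:
    """Base object type: the generic translation scheme recurses over children."""
    def __init__(self, *children):
        self.children = list(children)

    def translate(self):
        # Default structural mapping: the type name applied to translated children.
        inner = ", ".join(c.translate() for c in self.children)
        return f"{type(self).__name__}({inner})"

class Leaf(Node):
    """A terminal object carrying a value."""
    def __init__(self, value):
        super().__init__()
        self.value = value

    def translate(self):
        return str(self.value)

class Plus(Node):
    """A specialized scheme overriding the generic structural mapping."""
    def translate(self):
        left, right = self.children
        return f"(+ {left.translate()} {right.translate()})"

tree = Plus(Leaf(1), Plus(Leaf(2), Leaf(3)))
print(tree.translate())  # (+ 1 (+ 2 3))
```

The point of the sketch is only the dispatch pattern: types that define no scheme inherit the generic structural mapping, which is how message propagation "along the structure of objects" falls out for free.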

This paper presents an overview of the IF toolset, an environment for modelling and validating heterogeneous real-time systems. The toolset is built upon a rich formalism, the IF notation, allowing structured automata-based system representations. Moreover, the IF notation is expressive enough to support real-time primitives and extensions of high-level modelling languages such as SDL and UML by means of structure-preserving mappings.

GXL (Graph eXchange Language) is an XML-based standard exchange format for sharing data between tools. Formally, GXL represents typed, attributed, directed, ordered graphs, which are extended to represent hypergraphs and hierarchical graphs. This flexible data model can be used for object-relational data and a wide variety of graphs. An advantage of GXL is that it can be used to exchange instance graphs together with their corresponding schema information in a uniform format, i.e. using a common document type specification. This paper describes GXL and shows how GXL is used to provide interoperability of graph-based tools. GXL has been ratified by the reengineering and graph transformation research communities and is being considered for adoption by other communities.
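To give a concrete feel for the data model, here is a minimal sketch that emits a GXL instance graph with attributed nodes and directed edges. It follows the basic element vocabulary of the GXL DTD (`gxl`, `graph`, `node`, `edge`, `attr`), but omits schema linkage and namespace details, and the graph content is invented for illustration:

```python
import xml.etree.ElementTree as ET

def make_gxl(nodes, edges):
    """Build a minimal GXL document: attributed nodes and directed edges.

    nodes: list of (node_id, label) pairs.
    edges: list of (source_id, target_id) pairs.
    """
    gxl = ET.Element("gxl")
    graph = ET.SubElement(gxl, "graph", {"id": "callgraph", "edgeids": "true"})
    for node_id, label in nodes:
        node = ET.SubElement(graph, "node", {"id": node_id})
        attr = ET.SubElement(node, "attr", {"name": "label"})
        ET.SubElement(attr, "string").text = label
    for i, (src, dst) in enumerate(edges):
        # "from" is a Python keyword, so attributes are passed as a dict.
        ET.SubElement(graph, "edge", {"id": f"e{i}", "from": src, "to": dst})
    return ET.tostring(gxl, encoding="unicode")

print(make_gxl([("n1", "main"), ("n2", "helper")], [("n1", "n2")]))
```

Because both the instance graph and (in full GXL) its schema graph are ordinary XML documents over the same DTD, any XML-capable tool can read the exchanged data, which is the interoperability point the abstract makes.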

Parsing Expression Grammars (PEGs) are a formalism that can describe all deterministic context-free languages through a set of rules that specify a top-down parser for some language. PEGs are easy to use, and there are efficient implementations of PEG libraries in several programming languages.

The day-to-day management of human resources during the development and maintenance of software systems is the responsibility of project leads and managers, who usually perform this task empirically. Moreover, rotation and distributed software development hinder the establishment of long-term relationships between project managers and software projects, as well as between project managers and companies. It is also common for project leads and managers to face human-resource decisions without the necessary prior knowledge. In this context, applying visual analytics to software evolution supports software project leads and managers with analysis methods and a shared knowledge space for decision-making by means of visualization and interaction techniques. This approach makes it possible to determine which programmer has led a project or contributed most to the development and maintenance of a software system in terms of revisions. Moreover, it helps to elucidate both the software items that have been changed in common by a group of programmers and who has changed which software items. With this information, software project leads and managers can make decisions regarding task assignment to developers and staff substitutions due to unexpected situations or staff turnover. Consequently, this research aims to support software practitioners in tasks related to human resources management through the application of Visual Analytics to Software Evolution.
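The two questions the approach answers — who contributed most in terms of revisions, and which items were changed in common by several programmers — reduce to simple aggregations over a revision log. As a minimal sketch (the log entries and file names are invented; the paper's visual-analytics layer is not reproduced here):

```python
from collections import Counter, defaultdict

# Hypothetical revision log mined from a version control system:
# each entry is an (author, changed_item) pair.
revisions = [
    ("alice", "core/parser.py"), ("alice", "core/lexer.py"),
    ("bob",   "core/parser.py"), ("alice", "ui/view.py"),
    ("bob",   "ui/view.py"),     ("carol", "docs/readme.md"),
]

# Who has contributed most, in terms of revisions?
by_author = Counter(author for author, _ in revisions)
lead = by_author.most_common(1)[0]

# Which software items were changed in common by a group of programmers?
authors_per_item = defaultdict(set)
for author, item in revisions:
    authors_per_item[item].add(author)
shared = {item: sorted(a) for item, a in authors_per_item.items() if len(a) > 1}

print(lead)    # ('alice', 3)
print(shared)  # {'core/parser.py': ['alice', 'bob'], 'ui/view.py': ['alice', 'bob']}
```

Aggregates like these are the raw material that the visualization and interaction techniques then put in front of project leads and managers for task assignment and substitution decisions.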