Robert Feldt | Chalmers University of Technology
Papers by Robert Feldt
Mutation analysis can effectively capture the dependency between source code and test results. This has been exploited by Mutation Based Fault Localisation (MBFL) techniques. However, MBFL techniques suffer from the need to expend the high cost of mutation analysis after the observation of failures, which may present a challenge for their practical adoption. We introduce SIMFL (Statistical Inference for Mutation-based Fault Localisation), an MBFL technique that allows users to perform the mutation analysis in advance, before a failure is observed, allowing the amortisation of the analysis cost. SIMFL uses mutants as artificial faults and aims to learn the failure patterns among test cases against different locations of mutations. Once a failure is observed, SIMFL requires little to no additional analysis cost, depending on the inference model used. An empirical evaluation using DEFECTS4J shows that SIMFL can successfully localise up to 113 out of 203 studied fault...
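The core idea can be sketched in a few lines. This is an illustrative simplification, not the paper's actual statistical model: here a location's suspiciousness is the best Jaccard match between the observed failing tests and the failure patterns its mutants produced during the offline mutation analysis; all names and data are invented.

```python
# kill_matrix maps a code location to the sets of tests that each of its
# mutants caused to fail, computed in advance (before any real failure).
def rank_locations(kill_matrix, observed_failures):
    """Rank locations by how closely their mutants' historical failure
    patterns match the currently observed set of failing tests."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    scores = {
        loc: max((jaccard(set(f), set(observed_failures)) for f in fails),
                 default=0.0)
        for loc, fails in kill_matrix.items()
    }
    # Most suspicious location first.
    return sorted(scores.items(), key=lambda kv: -kv[1])

kill_matrix = {
    "Foo.bar:12": [{"t1", "t2"}, {"t1"}],
    "Foo.baz:30": [{"t3"}],
}
ranking = rank_locations(kill_matrix, {"t1", "t2"})
# "Foo.bar:12" ranks first: one of its mutants failed exactly t1 and t2,
# so no new mutation analysis is needed once the failure is observed.
```

The point the amortisation argument rests on is visible here: the expensive part (building `kill_matrix`) happens before any failure, and localisation afterwards is a cheap lookup-and-score pass.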
Proceedings of the IEEE/ACM 42nd International Conference on Software Engineering Workshops, 2020
While Search-Based Software Testing (SBST) has improved significantly in the last decade, we propose that more flexible, probabilistic models can be leveraged to improve it further. Rather than searching for an individual test case or datum, or even sets of them, that fulfil specific needs, the goal can be to learn a generative model tuned to output a useful family of values. Such generative models can naturally be decomposed into a structured generator and a probabilistic model that determines how to make non-deterministic choices during generation. While the former constrains the generation process to produce valid values, the latter allows learning and tuning to specific goals. SBST techniques differ in their level of integration of the two but, regardless of how close it is, we argue that the flexibility and power of the probabilistic model will be a main determinant of success. In this short paper, we present how some existing SBST techniques can be viewed from this perspective a...
2018 25th Asia-Pacific Software Engineering Conference (APSEC)
Diversity has been used as an effective criterion to optimise test suites for cost-effective testing. In particular, diversity-based (alternatively referred to as similarity-based) techniques have the benefit of being generic and applicable across different Systems Under Test (SUT), and have been used to automatically select or prioritise large sets of test cases. However, it is a challenge to feed diversity information back to developers and testers, since the results are typically many-dimensional. Furthermore, the generality of diversity-based approaches makes it harder to choose when and where to apply them. In this paper we address these challenges by investigating: i) the trade-offs in using different sources of diversity (e.g., diversity of test requirements or test scripts) to optimise large test suites, and ii) how visualisation of test diversity data can assist testers in test optimisation and improvement. We perform a case study on three industrial projects and present quantitative results on the fault detection capabilities and redundancy levels of different sets of test cases. Our key result is that test similarity maps, based on pair-wise diversity calculations, helped industrial practitioners identify issues with their test repositories and decide on actions to improve. We conclude that the visualisation of diversity information can assist testers in their maintenance and optimisation activities.
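A minimal sketch of the pair-wise diversity calculation that underlies such a similarity map, assuming a simple Jaccard distance over test-script tokens (the paper's actual distance measures and data sources may differ; the test scripts here are invented):

```python
from itertools import combinations

def jaccard_distance(a, b):
    """1.0 = nothing in common, 0.0 = identical token sets."""
    a, b = set(a.split()), set(b.split())
    return 1.0 - len(a & b) / len(a | b)

tests = {
    "t1": "open login page enter user submit",
    "t2": "open login page enter admin submit",
    "t3": "upload file check size limit",
}
# All pair-wise distances; a similarity map plots these in 2D so that
# near-duplicate tests cluster together.
distances = {
    (x, y): jaccard_distance(tests[x], tests[y])
    for x, y in combinations(sorted(tests), 2)
}
# t1 and t2 differ by one token, so their distance is small; such pairs
# stand out on the map as redundancy candidates.
```

The many-dimensionality challenge mentioned above comes from exactly this structure: n test cases yield n(n-1)/2 pair-wise distances, which only become actionable for testers once projected into a visual map.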
IEEE Transactions on Software Engineering
Statistics comes in two main flavors: frequentist and Bayesian. For historical and technical reasons, frequentist statistics have traditionally dominated empirical data analysis, and certainly remain prevalent in empirical software engineering. This situation is unfortunate because frequentist statistics suffer from a number of shortcomings, such as lack of flexibility and results that are unintuitive and hard to interpret, which curtail their effectiveness when dealing with the heterogeneous data that is increasingly available for empirical analysis of software engineering practice. In this paper, we pinpoint these shortcomings and present Bayesian data analysis techniques that provide tangible benefits, as they can provide clearer results that are simultaneously robust and nuanced. After a short, high-level introduction to the basic tools of Bayesian statistics, we present the reanalysis of two empirical studies on the effectiveness of automatically generated tests and the performance of programming languages. By contrasting the original frequentist analyses with our new Bayesian analyses, we demonstrate the concrete advantages of the latter. To conclude, we advocate a more prominent role for Bayesian statistical techniques in empirical software engineering research and practice.
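The flavor of result the Bayesian approach enables can be shown with a toy beta-binomial comparison of two test-generation tools. This is a generic illustration of the style of analysis, not the paper's reanalysis; the counts and the uniform Beta(1, 1) prior are made up:

```python
import random

random.seed(0)  # fixed seed so the Monte Carlo estimate is reproducible

def posterior_samples(found, trials, n=10_000):
    """Samples from the posterior fault-detection rate: Beta(1, 1) prior,
    conjugate update with binomial data Beta(1 + found, 1 + misses)."""
    return [random.betavariate(1 + found, 1 + trials - found)
            for _ in range(n)]

tool_a = posterior_samples(found=42, trials=60)
tool_b = posterior_samples(found=30, trials=60)

# A direct, interpretable probability statement about the hypothesis itself,
# which a frequentist p-value does not give:
p_a_better = sum(a > b for a, b in zip(tool_a, tool_b)) / len(tool_a)
```

With 42/60 versus 30/60 detections, `p_a_better` lands near 0.99: the analysis answers "how likely is tool A actually better?" rather than "how surprising is this data under a null hypothesis?".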
Journal of Systems and Software
Context: Search-Based Software Testing (SBST), and the wider area of Search-Based Software Engineering (SBSE), is the application of optimization algorithms to problems in software testing and software engineering, respectively. New algorithms, methods, and tools are being developed and validated on benchmark problems. In previous work, we have also implemented and evaluated Interactive Search-Based Software Testing (ISBST) tool prototypes, with the goal of successfully transferring the technique to industry. Objective: While SBST and SBSE solutions are often validated on benchmark problems, there is a need to validate them in an operational setting and to assess their performance in practice. The present paper discusses the development and deployment of SBST tools for use in industry, and reflects on the transfer of these techniques to industry. Method: In addition to previous work discussing the development and validation of an ISBST prototype, a new version of the prototype ISBST system was evaluated in the laboratory and in industry. This evaluation is based on an industrial System under Test (SUT) and was carried out with industrial practitioners. The Technology Transfer Model is used as a framework to describe the progression of the development and evaluation of the ISBST system as it progresses through the first five of its seven steps. Results: The paper presents a synthesis of previous work developing and evaluating the ISBST prototype, as well as an evaluation, in both academia and industry, of that prototype's latest version. In addition to the evaluation, the paper also discusses the lessons learned from this transfer. Conclusions: This paper presents an overview of the development and deployment of the ISBST system in an industrial setting, using the framework of the Technology Transfer Model.
We conclude that the ISBST system is capable of evolving useful test cases for that setting, though improvements in the means the system uses to communicate that information to the user are still required. In addition, a set of lessons learned from the project are listed and discussed. Our objective is to help other researchers who wish to validate search-based systems in industry, and to provide more information about the benefits and drawbacks of these systems.
Journal of Systems and Software
Software engineering research is evolving and papers are increasingly based on empirical data from a multitude of sources, using statistical tests to determine if and to what degree empirical evidence supports their hypotheses. To investigate the practices and trends of statistical analysis in empirical software engineering (ESE), this paper presents a review of a large pool of papers from top-ranked software engineering journals. First, we manually reviewed 161 papers; in the second phase of our method, we conducted a more extensive semi-automatic classification of 5,196 papers spanning the years 2001-2015. Results from both review steps were used to: i) identify and analyse the predominant practices in ESE (e.g., using t-test or ANOVA), as well as relevant trends in the usage of specific statistical methods (e.g., nonparametric tests and effect size measures), and ii) develop a conceptual model for a statistical analysis workflow, with suggestions on how to apply different statistical methods as well as guidelines to avoid pitfalls. Lastly, we confirm existing claims that current ESE practices lack a standard to report the practical significance of results. We illustrate how practical significance can be discussed in terms of both the statistical analysis and the practitioner's context.
Empirical Software Engineering
Empirical Software Engineering, 2016
Employees' attitudes towards organizational change are a critical determinant in the change process. Researchers have therefore tried to determine which underlying concepts affect them. These extensive efforts have resulted in the identification of several antecedents. However, no studies have been conducted in a software engineering context, and the research has provided little information on the relative impact and importance of the identified concepts. In this study, we have combined results from previous social science research with results from software engineering research, and thereby identified three underlying concepts with an expected significant impact on software engineers' attitudes towards organizational change: their knowledge about the intended change outcome, their understanding of the need for change, and their feelings of participation in the change process. The results of two separate multiple regression analyses, using industrial questionnaire data (N=56), showed that the attitude concept openness to change is predicted by all three concepts, while the attitude concept readiness for change is predicted by need for change and participation. Our research provides an empirical baseline for an important area of software engineering and the results can be a starting-point for future organizational
The powerful information processing capabilities of computers have made them an indispensable part of our modern societies. As we become more reliant on computers and want them to handle more critical and difficult tasks, it becomes important that we can depend on the software that controls them. Methods that help ensure software dependability are thus of utmost importance. While we struggle to keep our software dependable despite its increasing complexity, even the smallest biological system in nature shows features of dependability. This thesis applies ideas from, and algorithms modeled after, biological systems in the research for and development of dependable software.
Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation, 2014
This is an author-produced version of a conference paper. The paper has been peer-reviewed but may not include the final publisher proof-corrections or pagination of the proceedings.
Case studies are used in software engineering (SE) research for the detailed study of phenomena in their real-world context. There are guidelines listing important factors to consider when designing case studies, but there is a lack of advice on how to structure the collected information and ensure its breadth. Without considering multiple perspectives, such as business and organization, there is a risk that too few perspectives are covered. The objective of this paper is to develop a framework to give structure to, and ensure breadth of, a SE case study. For an analysis of the verification and validation practices of a Swedish software company, we developed an analytical framework based on two dimensions. The matrix spanned by the dimensions (perspective and time) helped structure data collection and connect different findings. A six-step process was defined to adapt and execute the framework at the company, and we exemplify its use and describe its perceived advantages and disadvantages. The framework simplified the analysis and gave a broader understanding of the studied practices, but there is a trade-off with the depth of the results, making the framework more suitable for explorative, open-ended studies.
2011 18th Asia-Pacific Software Engineering Conference, 2011
Robustness of a software system is defined as the degree to which the system can behave ordinarily and in conformance with the requirements in extraordinary situations. By increasing robustness, many failures which decrease the quality of the system can be avoided or masked. When it comes to specifying, testing and assessing software robustness in an efficient manner, the methods and techniques are not yet mature. This paper presents RobusTest, a framework for testing robustness properties of a system, currently focusing on timing issues. The expected robust behavior of the system is formulated as properties. The properties are then used to automatically generate robustness test cases and assess the results. An implementation of RobusTest in Java is presented here, together with results from testing different open-source implementations of the XMPP instant messaging protocol. By executing 400 test cases that were automatically generated from properties on two such implementations, we found 11 critical failures and 15 nonconformance problems as compared to the XMPP specification.
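The property-then-generate idea can be sketched as follows. This is a hypothetical miniature, not RobusTest's actual API (which is in Java and targets XMPP): a timing property is stated once, and a set of extraordinary inputs is checked against it, with the stand-in SUT and its deadline invented for illustration.

```python
import time

def timing_property(sut, payload, deadline_s=0.1):
    """Property: the SUT must respond within the deadline even for
    extraordinary inputs (the timing focus described above)."""
    start = time.perf_counter()
    sut(payload)
    return time.perf_counter() - start <= deadline_s

def sut(payload):
    # Stand-in system under test: a pathological input triggers a slow path.
    if len(payload) > 1000:
        time.sleep(0.2)

# Generated "extraordinary" inputs: empty, control characters, oversized.
extraordinary_inputs = ["", "\x00" * 10, "x" * 2000]
failures = [p for p in extraordinary_inputs if not timing_property(sut, p)]
# Only the oversized payload violates the timing property and is reported.
```

The benefit of the property formulation is that one statement of expected robust behaviour yields many concrete test cases plus an automatic verdict for each.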
2011 37th EUROMICRO Conference on Software Engineering and Advanced Applications, 2011
Cost estimation of software projects is an important management activity. Despite research efforts, the accuracy of estimates does not seem to improve. In this paper we confirm intentional distortions of estimates reported in a previous study. This study is based on questionnaire responses from 48 software practitioners from eight different companies. The results of the questionnaire suggest that the prevalence of intentional distortions is affected by the organizational type and the development process in use. Further, we extend the results with information about three companies' estimation practices and related distortions, collected in interviews with three managers. Lastly, based on these results and organizational politics theory, we describe organizational politics tactics that affect cost estimates.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2010
[Context and motivation] When developing software, coordination between different organizational units is essential in order to develop a good quality product, on time and within budget. In particular, the synchronization between the requirements and verification processes is crucial in order to assure that the developed software product satisfies customer requirements. [Question/problem] Our research question is: what are the current challenges in aligning the requirements and verification processes? [Principal ideas/results] We conducted an interview study at a large software development company. This paper presents preliminary findings of these interviews that identify key challenges in aligning the requirements and verification processes. [Contribution] The result of this study includes a range of challenges faced by the studied organization, grouped into the categories: organization and processes, people, tools, requirements process, testing process, change management, traceability, and measurement. The findings of this study can be used by practitioners as a basis for investigating alignment in their organizations, and by scientists in developing approaches for more efficient and effective management of the alignment between requirements and verification.
Proceedings of the 2010 National Software Engineering Conference - NSEC '10, 2010
Developing software for high-dependability space applications and systems is a formidable task. The industry has a long tradition of developing standards that strictly set quality goals and prescribe engineering processes and methods to fulfill them. The ECSS standards are a recent addition but, being built on PSS-05, they have a legacy of plan-driven software processes. With new political and market pressures on the space industry to deliver more software at a lower cost, alternative methods need to be investigated. In particular, the agile development processes studied and practiced in the Software Engineering field at large have tempting properties. This paper presents results from an industrial case study of a company in the European space industry that is using agile software development methods in ECSS projects. We discuss success factors based on detailed process and document analysis as well as empirical data from interviews and questionnaires.
2008 IEEE International Multitopic Conference, 2008
Software reliability growth modeling helps in deciding project release time and managing project resources. A large number of such models have been presented in the past. Given the large number of models, their inherent complexity and their accompanying assumptions, the selection of suitable models becomes a challenging task. This paper presents empirical results of using genetic programming (GP) for modeling software reliability growth based on weekly fault count data from three different industrial projects. The goodness of fit (adaptability) and predictive accuracy of the evolved model are measured using five different statistics in an attempt to present a fair evaluation. The results show that the GP-evolved model has statistically significant goodness of fit and predictive accuracy.
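At the core of such a GP run is a fitness evaluation of candidate models against the fault-count data. The sketch below shows that evaluation step only, with two hand-written candidates and invented data; a real GP run would evolve the expressions themselves and use several statistics, not just mean squared error:

```python
# Invented cumulative weekly fault counts for one project (plateauing shape
# typical of reliability growth data).
faults = [5, 9, 12, 14, 15, 16, 16]

def fitness(model, data):
    """Mean squared error of model(week) against observed cumulative counts;
    lower is a better fit."""
    errors = [(model(week) - y) ** 2 for week, y in enumerate(data, start=1)]
    return sum(errors) / len(errors)

candidates = {
    "linear": lambda w: 2.5 * w + 3,            # grows without bound
    "saturating": lambda w: 17 * (1 - 0.6 ** w) # levels off, like the data
}
best = min(candidates, key=lambda name: fitness(candidates[name], faults))
# The saturating candidate fits the plateauing counts better than the
# linear one, so selection would favour it in the next GP generation.
```

GP's advantage, as the abstract notes, is that it sidesteps choosing among the many fixed-form reliability models: the functional form itself is searched for, guided by exactly this kind of fitness score.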
2011 IEEE Fourth International Conference on Software Testing, Verification and Validation Workshops, 2011
Requirements should specify expectations on a software system and testing should ensure these expectations are met. Thus, to enable high product quality and efficient development, it is crucial that requirements and testing activities and information are aligned. A lot of research has been done in the respective fields of Requirements Engineering and Testing, but there is a lack of summaries of the current state of the art on how to link the two. This study presents a systematic mapping of the alignment of specification and testing of functional or nonfunctional requirements in order to identify useful approaches and needs for future research. In particular, we focus on results relevant for nonfunctional requirements, but since only a few studies were found on alignment in total, we also cover the ones on functional requirements. We summarize the 35 relevant papers found and discuss them within six major subcategories, with model-based testing and traceability being the ones with most prior results.
Journal of Software: Evolution and Process, 2013
Software Process Improvement (SPI) encompasses the analysis and modification of the processes within software development, aimed at improving key areas that contribute to the organization's goals. The task of evaluating whether the selected improvement path meets these goals is challenging. Based on the results of a systematic literature review on SPI measurement and evaluation practices, we developed a framework (SPI-MEF) that supports the planning and implementation of SPI evaluations. SPI-MEF guides the practitioner in scoping the evaluation, determining measures and performing the assessment. SPI-MEF does not assume a specific approach to process improvement and can be integrated into existing measurement programs, refocusing the assessment on evaluating the improvement initiative's outcome. Sixteen industry and academic experts evaluated the framework's usability and capability to support practitioners, providing additional insights that were integrated into the application guidelines of the framework.
Ruby Developer's Guide, 2002
This chapter explains the need for writing a Ruby extension module in C/C++. Like regular Ruby modules, C extension modules can expose constants, methods, and classes to the Ruby interpreter. Several modules in the standard Ruby library (including socket, tk, and Win32API) are implemented as C extensions, and a survey through the Ruby Application Archive reveals a number of other popular Ruby extensions implemented using C/C++ code. A subject closely related to writing Ruby extensions in C is the practice of embedding the Ruby interpreter into C/C++ applications. This is an increasingly popular choice for application developers who want to provide a scripting language for their application end-users, to allow them to easily write “plug-in” code modules that can run alongside the main application and extend its functionality. Ruby is a natural fit for this kind of application, because the Ruby interpreter is already packaged as a C library with APIs to facilitate embedding. The wide variation in compilers, development environments and platforms makes it difficult for individual developers to come up with a consistent build and installation process. The Ruby standard library provides a useful Mkmf module for this very purpose, and Ruby's developer has outlined the standard procedure for making use of this module's functionality: the extconf.rb script.
Ruby Developer's Guide, 2002
This chapter develops a sample application with four different GUI toolkits available for Ruby: Tk, Gtk, Fox, and VRuby. Ruby is an excellent tool for writing low-level scripts for system administration tasks, but it is equally useful for writing end-user applications. One of the benefits of Ruby programming is that it enables rapid application development. In contrast to the time-consuming code-compile-test cycle of traditional programming languages, changes can be quickly made to Ruby scripts to try out new ideas. This benefit becomes even more evident when developing GUI applications with Ruby. It is both instructive and rewarding to build up the user interface incrementally, adding new elements and then rerunning the program to see how the user interface has changed as a result. Tk was one of the first cross-platform GUIs, and the easy application development afforded by Tcl/Tk opened up the world of GUI programming to a lot of programmers who were struggling with earlier C-based GUI libraries, like Motif and the Windows Win32 API. For developers who work primarily on the Linux operating system and are already familiar with GTK+ and GNOME-based applications in that environment, Gtk is an obvious choice. FXRuby is a strong cross-platform GUI toolkit for Ruby, and it works equally well under Unix and Windows. As for poor documentation, SWin/VRuby is a hard sell for anyone other than experienced Windows programmers.
Mutation analysis can effectively capture the dependency between source code and test results. Th... more Mutation analysis can effectively capture the dependency between source code and test results. This has been exploited by Mutation Based Fault Localisation (MBFL) techniques. However, MBFL techniques suffer from the need to expend the high cost of mutation analysis after the observation of failures, which may present a challenge for its practical adoption. We introduce SIMFL (Statistical Inference for Mutation-based Fault Localisation), an MBFL technique that allows users to perform the mutation analysis in advance before a failure is observed, allowing the amortisation of the analysis cost. SIMFL uses mutants as artificial faults and aims to learn the failure patterns among test cases against different locations of mutations. Once a failure is observed, SIMFL requires either almost no or very small additional cost for analysis, depending on the used inference model. An empirical evaluation using DEFECTS4J shows that SIMFL can successfully localise up to 113 out of 203 studied fault...
Proceedings of the IEEE/ACM 42nd International Conference on Software Engineering Workshops, 2020
While Search-Based Software Testing (SBST) has improved significantly in the last decade we propo... more While Search-Based Software Testing (SBST) has improved significantly in the last decade we propose that more flexible, probabilistic models can be leveraged to improve it further. Rather than searching for an individual, or even sets of, test case(s) or datum(s) that fulfil specific needs the goal can be to learn a generative model tuned to output a useful family of values. Such generative models can naturally be decomposed into a structured generator and a probabilistic model that determines how to make non-deterministic choices during generation. While the former constrains the generation process to produce valid values the latter allows learning and tuning to specific goals. SBST techniques differ in their level of integration of the two but, regardless of how close it is, we argue that the flexibility and power of the probabilistic model will be a main determinant of success. In this short paper, we present how some existing SBST techniques can be viewed from this perspective a...
2018 25th Asia-Pacific Software Engineering Conference (APSEC)
Diversity has been used as an effective criteria to optimise test suites for cost-effective testi... more Diversity has been used as an effective criteria to optimise test suites for cost-effective testing. Particularly, diversity-based (alternatively referred to as similarity-based) techniques have the benefit of being generic and applicable across different Systems Under Test (SUT), and have been used to automatically select or prioritise large sets of test cases. However, it is a challenge to feedback diversity information to developers and testers since results are typically many-dimensional. Furthermore, the generality of diversity-based approaches makes it harder to choose when and where to apply them. In this paper we address these challenges by investigating: i) what are the trade-off in using different sources of diversity (e.g., diversity of test requirements or test scripts) to optimise large test suites, and ii) how visualisation of test diversity data can assist testers for test optimisation and improvement. We perform a case study on three industrial projects and present quantitative results on the fault detection capabilities and redundancy levels of different sets of test cases. Our key result is that test similarity maps, based on pair-wise diversity calculations, helped industrial practitioners identify issues with their test repositories and decide on actions to improve. We conclude that the visualisation of diversity information can assist testers in their maintenance and optimisation activities.
IEEE Transactions on Software Engineering
Statistics comes in two main flavors: frequentist and Bayesian. For historical and technical reas... more Statistics comes in two main flavors: frequentist and Bayesian. For historical and technical reasons, frequentist statistics have traditionally dominated empirical data analysis, and certainly remain prevalent in empirical software engineering. This situation is unfortunate because frequentist statistics suffer from a number of shortcomings-such as lack of flexibility and results that are unintuitive and hard to interpret-that curtail their effectiveness when dealing with the heterogeneous data that is increasingly available for empirical analysis of software engineering practice. In this paper, we pinpoint these shortcomings, and present Bayesian data analysis techniques that provide tangible benefits-as they can provide clearer results that are simultaneously robust and nuanced. After a short, high-level introduction to the basic tools of Bayesian statistics, we present the reanalysis of two empirical studies on the effectiveness of automatically generated tests and the performance of programming languages. By contrasting the original frequentist analyses with our new Bayesian analyses, we demonstrate the concrete advantages of the latter. To conclude we advocate a more prominent role for Bayesian statistical techniques in empirical software engineering research and practice.
Journal of Systems and Software
Context: Search-Based Software Testing (SBST), and the wider area of Search-Based Software Engine... more Context: Search-Based Software Testing (SBST), and the wider area of Search-Based Software Engineering (SBSE), is the application of optimization algorithms to problems in software testing, and software engineering, respectively. New algorithms, methods, and tools are being developed and validated on benchmark problems. In previous work, we have also implemented and evaluated Interactive Search-Based Software Testing (ISBST) tool prototypes, with a goal to successfully transfer the technique to industry. Objective: While SBST and SBSE solutions are often validated on benchmark problems, there is a need to validate them in an operational setting, and to assess their performance in practice. The present paper discusses the development and deployment of SBST tools for use in industry, and reflects on the transfer of these techniques to industry. Method: In addition to previous work discussing the development and validation of an ISBST prototype, a new version of the prototype ISBST system was evaluated in the laboratory and in industry. This evaluation is based on an industrial System under Test (SUT) and was carried out with industrial practitioners. The Technology Transfer Model is used as a framework to describe the progression of the development and evaluation of the ISBST system, as it progresses through the first five of its seven steps. Results: The paper presents a synthesis of previous work developing and evaluating the ISBST prototype, as well as presenting an evaluation, in both academia and industry, of that prototype's latest version. In addition to the evaluation, the paper also discusses the lessons learned from this transfer. Conclusions: This paper presents an overview of the development and deployment of the ISBST system in an industrial setting, using the framework of the Technology Transfer Model. 
We conclude that the ISBST system is capable of evolving useful test cases for that setting, though improvements in the means the system uses to communicate that information to the user are still required. In addition, a set of lessons learned from the project are listed and discussed. Our objective is to help other researchers who wish to validate search-based systems in industry, and to provide more information about the benefits and drawbacks of these systems.
Journal of Systems and Software
Software engineering research is evolving and papers are increasingly based on empirical data from a multitude of sources, using statistical tests to determine if, and to what degree, empirical evidence supports their hypotheses. To investigate the practices and trends of statistical analysis in empirical software engineering (ESE), this paper presents a review of a large pool of papers from top-ranked software engineering journals. First, we manually reviewed 161 papers; in the second phase of our method, we conducted a more extensive semi-automatic classification of 5,196 papers spanning the years 2001-2015. Results from both review steps were used to: i) identify and analyse the predominant practices in ESE (e.g., using t-test or ANOVA), as well as relevant trends in the usage of specific statistical methods (e.g., nonparametric tests and effect size measures), and ii) develop a conceptual model for a statistical analysis workflow, with suggestions on how to apply different statistical methods as well as guidelines to avoid pitfalls. Lastly, we confirm existing claims that current ESE practices lack a standard for reporting the practical significance of results. We illustrate how practical significance can be discussed in terms of both the statistical analysis and the practitioner's context.
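To make the workflow concrete, the sketch below runs both a parametric and a nonparametric test on the same data and reports an effect size alongside the p-values. The data are simulated, and the specific tests shown (t-test, Mann-Whitney U, Cliff's delta) are common choices rather than a prescription from the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical measurements, e.g. defect-fix times (hours) under two processes.
a = rng.lognormal(mean=2.0, sigma=0.5, size=40)
b = rng.lognormal(mean=2.3, sigma=0.5, size=40)

# Parametric route: Student's t-test assumes roughly normal data.
t_stat, t_p = stats.ttest_ind(a, b)

# Nonparametric route: Mann-Whitney U makes no normality assumption,
# which suits the skewed distributions common in software data.
u_stat, u_p = stats.mannwhitneyu(a, b, alternative="two-sided")

# Effect size (Cliff's delta) speaks to practical significance,
# independently of sample size.
greater = sum(x > y for x in a for y in b)
less = sum(x < y for x in a for y in b)
delta = (greater - less) / (len(a) * len(b))

print(f"t-test p={t_p:.4f}, Mann-Whitney p={u_p:.4f}, Cliff's delta={delta:.2f}")
```

Reporting the effect size next to the test statistic is one simple way to address the practical-significance gap the abstract identifies.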
Empirical Software Engineering
Empirical Software Engineering, 2016
Employees' attitudes towards organizational change are a critical determinant in the change process. Researchers have therefore tried to determine which underlying concepts affect them. These extensive efforts have resulted in the identification of several antecedents. However, no studies have been conducted in a software engineering context, and the research has provided little information on the relative impact and importance of the identified concepts. In this study, we have combined results from previous social science research with results from software engineering research, and thereby identified three underlying concepts with an expected significant impact on software engineers' attitudes towards organizational change: their knowledge about the intended change outcome, their understanding of the need for change, and their feelings of participation in the change process. The results of two separate multiple regression analyses, using industrial questionnaire data (N=56), showed that the attitude concept openness to change is predicted by all three concepts, while the attitude concept readiness for change is predicted by need for change and participation. Our research provides an empirical baseline to an important area of software engineering and the result can be a starting point for future organizational
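A minimal sketch of the kind of multiple regression analysis described above, on simulated data rather than the study's questionnaire responses (the predictor names follow the three concepts, but the coefficients and scales are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 56  # matches the study's questionnaire sample size

# Hypothetical predictor scores (e.g., 1-5 Likert scales) for the three
# concepts: knowledge of the intended outcome, understanding of the need
# for change, and participation in the change process.
knowledge = rng.uniform(1, 5, n)
need = rng.uniform(1, 5, n)
participation = rng.uniform(1, 5, n)

# Simulated response: "openness to change" driven by all three predictors
# (coefficients chosen arbitrarily for the example).
openness = (0.5 + 0.3 * knowledge + 0.4 * need + 0.2 * participation
            + rng.normal(0, 0.3, n))

# Ordinary least squares with an intercept column in the design matrix.
X = np.column_stack([np.ones(n), knowledge, need, participation])
coef, *_ = np.linalg.lstsq(X, openness, rcond=None)
print("intercept and coefficients:", np.round(coef, 2))
```

With real data, the size and significance of each fitted coefficient indicate the relative impact of each concept, which is exactly the comparison the study performs.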
The powerful information processing capabilities of computers have made them an indispensable part of our modern societies. As we become more reliant on computers and want them to handle more critical and difficult tasks, it becomes important that we can depend on the software that controls them. Methods that help ensure software dependability are thus of utmost importance. While we struggle to keep our software dependable despite its increasing complexity, even the smallest biological system in nature shows features of dependability. This thesis applies ideas from, and algorithms modeled after, biological systems in the research for and development of dependable software.
Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation, 2014
This is an author produced version of a conference paper. The paper has been peer-reviewed but may not include the final publisher proof-corrections or pagination of the proceedings.
Case studies are used in software engineering (SE) research for the detailed study of phenomena in their real-world context. There are guidelines listing important factors to consider when designing case studies, but there is a lack of advice on how to structure the collected information and ensure its breadth. Without considering multiple perspectives, such as business and organization, there is a risk that too few perspectives are covered. The objective of this paper is to develop a framework to give structure to, and ensure the breadth of, a SE case study. For an analysis of the verification and validation practices of a Swedish software company, we developed an analytical framework based on two dimensions. The matrix spanned by the dimensions (perspective and time) helped structure data collection and connect different findings. A six-step process was defined to adapt and execute the framework at the company, and we exemplify its use and describe its perceived advantages and disadvantages. The framework simplified the analysis and gave a broader understanding of the studied practices, but there is a trade-off with the depth of the results, making the framework more suitable for explorative, open-ended studies.
2011 18th Asia-Pacific Software Engineering Conference, 2011
Robustness of a software system is defined as the degree to which the system can behave ordinarily and in conformance with the requirements in extraordinary situations. By increasing robustness, many failures that decrease the quality of the system can be avoided or masked. When it comes to specifying, testing, and assessing software robustness in an efficient manner, the methods and techniques are not yet mature. This paper presents RobusTest, a framework for testing robustness properties of a system, currently focusing on timing issues. The expected robust behavior of the system is formulated as properties. The properties are then used to automatically generate robustness test cases and assess the results. An implementation of RobusTest in Java is presented here, together with results from testing different open-source implementations of the XMPP instant messaging protocol. By executing 400 test cases that were automatically generated from properties on two such implementations, we found 11 critical failures and 15 nonconformance problems as compared to the XMPP specification.
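A property-based check in the spirit of the approach can be sketched as follows. RobusTest itself is implemented in Java and targets XMPP implementations; this is a simplified, hypothetical Python analogue of a timing property, with an invented stand-in for the system under test:

```python
import random
import time

# Hypothetical system under test: an echo handler that we require to
# answer within 50 ms for any message (a simple timing robustness property).
def echo_handler(msg: str) -> str:
    return msg  # stand-in for a real networked component

def check_timing_property(handler, trials=100, budget_s=0.05):
    """Generate random (including hostile) inputs and collect any input
    for which the handler is too slow or violates the echo invariant."""
    rng = random.Random(0)
    failures = []
    for _ in range(trials):
        msg = "".join(rng.choice("abc\x00\xff ") for _ in range(rng.randint(0, 64)))
        start = time.perf_counter()
        reply = handler(msg)
        elapsed = time.perf_counter() - start
        if elapsed > budget_s or reply != msg:
            failures.append(msg)
    return failures

print("failing inputs:", len(check_timing_property(echo_handler)))
```

The key idea the abstract describes is the same: the property is stated once, and test cases are generated and judged against it automatically.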
2011 37th EUROMICRO Conference on Software Engineering and Advanced Applications, 2011
Cost estimation of software projects is an important management activity. Despite research efforts, the accuracy of estimates does not seem to improve. In this paper we confirm intentional distortions of estimates reported in a previous study. This study is based on questionnaire responses from 48 software practitioners from eight different companies. The results of the questionnaire suggest that the prevalence of intentional distortions is affected by the organizational type and the development process in use. Further, we extend the results with information about three companies' estimation practices and related distortions, collected in interviews with three managers. Lastly, based on these results and additional organizational politics theory, we describe organizational politics tactics that affect cost estimates.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2010
[Context and motivation] When developing software, coordination between different organizational units is essential in order to develop a good quality product, on time and within budget. Particularly, the synchronization between requirements and verification processes is crucial in order to assure that the developed software product satisfies customer requirements. [Question/problem] Our research question is: what are the current challenges in aligning the requirements and verification processes? [Principal ideas/results] We conducted an interview study at a large software development company. This paper presents preliminary findings of these interviews that identify key challenges in aligning requirements and verification processes. [Contribution] The result of this study includes a range of challenges faced by the studied organization, grouped into the categories: organization and processes, people, tools, requirements process, testing process, change management, traceability, and measurement. The findings of this study can be used by practitioners as a basis for investigating alignment in their organizations, and by scientists in developing approaches for more efficient and effective management of the alignment between requirements and verification.
Proceedings of the 2010 National Software Engineering Conference on - NSEC '10, 2010
Developing software for high-dependability space applications and systems is a formidable task. The industry has a long tradition of developing standards that strictly set quality goals and prescribe engineering processes and methods to fulfill them. The ECSS standards are a recent addition but, being built on PSS-05, they carry a legacy of plan-driven software processes. With new political and market pressures on the space industry to deliver more software at a lower cost, alternative methods need to be investigated. In particular, the agile development processes studied and practiced in the Software Engineering field at large have tempting properties. This paper presents results from an industrial case study of a company in the European space industry that is using agile software development methods in ECSS projects. We discuss success factors based on detailed process and document analysis, as well as empirical data from interviews and questionnaires.
2008 IEEE International Multitopic Conference, 2008
Software reliability growth modeling helps in deciding project release time and managing project resources. A large number of such models have been presented in the past. Given the large number of models, their inherent complexity, and their accompanying assumptions, the selection of suitable models becomes a challenging task. This paper presents empirical results of using genetic programming (GP) for modeling software reliability growth based on weekly fault count data from three different industrial projects. The goodness of fit (adaptability) and predictive accuracy of the evolved model are measured using five different statistics in an attempt to present a fair evaluation. The results show that the GP-evolved model has statistically significant goodness of fit and predictive accuracy.
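GP evolves the model form itself, which is too involved to show here. As an illustration of the fit-and-evaluate step only, the sketch below fits a classical Goel-Okumoto growth curve to fabricated weekly fault counts and reports one goodness-of-fit statistic (the paper uses five, on real industrial data):

```python
import numpy as np
from scipy.optimize import curve_fit

# Fabricated weekly cumulative fault counts for a 12-week project.
weeks = np.arange(1, 13)
faults = np.array([7, 13, 17, 21, 24, 26, 27, 28, 29, 30, 31, 31])

# Goel-Okumoto growth curve: cumulative faults approach a total of `a`
# at detection rate `b`. The paper evolves model forms with GP instead
# of assuming a fixed shape like this one.
def goel_okumoto(t, a, b):
    return a * (1.0 - np.exp(-b * t))

params, _ = curve_fit(goel_okumoto, weeks, faults, p0=(35.0, 0.2))
pred = goel_okumoto(weeks, *params)

# R^2 as one goodness-of-fit statistic.
ss_res = np.sum((faults - pred) ** 2)
ss_tot = np.sum((faults - faults.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"a={params[0]:.1f}, b={params[1]:.2f}, R^2={r2:.3f}")
```

The fitted asymptote `a` estimates the total fault content, which is what makes such models useful for release-time decisions.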
2011 IEEE Fourth International Conference on Software Testing, Verification and Validation Workshops, 2011
Requirements should specify expectations on a software system and testing should ensure these expectations are met. Thus, to enable high product quality and efficient development, it is crucial that requirements and testing activities and information are aligned. A lot of research has been done in the respective fields of Requirements Engineering and Testing, but there is a lack of summaries of the current state of the art on how to link the two. This study presents a systematic mapping of the alignment of specification and testing of functional or nonfunctional requirements in order to identify useful approaches and needs for future research. In particular, we focus on results relevant for nonfunctional requirements, but since only a few studies were found on alignment in total, we also cover the ones on functional requirements. We summarize the 35 relevant papers found and discuss them within six major subcategories, with model-based testing and traceability being the ones with most prior results.
Journal of Software: Evolution and Process, 2013
Software Process Improvement (SPI) encompasses the analysis and modification of the processes within software development, aimed at improving key areas that contribute to the organizations' goals. The task of evaluating whether the selected improvement path meets these goals is challenging. Based on the results of a systematic literature review on SPI measurement and evaluation practices, we developed a framework (SPI-MEF) that supports the planning and implementation of SPI evaluations. SPI-MEF guides the practitioner in scoping the evaluation, determining measures and performing the assessment. SPI-MEF does not assume a specific approach to process improvement and can be integrated in existing measurement programs, refocusing the assessment on evaluating the improvement initiative's outcome. Sixteen industry and academic experts evaluated the framework's usability and capability to support practitioners, providing additional insights that were integrated in the application guidelines of the framework.
Ruby Developer's Guide, 2002
This chapter explains the need for writing a Ruby extension module in C/C++. Like regular Ruby modules, C extension modules can expose constants, methods, and classes to the Ruby interpreter. Several modules in the standard Ruby library (including socket, tk, and Win32API) are implemented as C extensions, and a survey through the Ruby Application Archive reveals a number of other popular Ruby extensions implemented using C/C++ code. A subject closely related to writing Ruby extensions in C is the practice of embedding the Ruby interpreter into C/C++ applications. This is an increasingly popular choice for application developers who want to provide a scripting language for their application end-users, to allow them to easily write "plug-in" code modules that can run alongside the main application and extend its functionality. Ruby is a natural fit for this kind of application, because the Ruby interpreter is already packaged as a C library with APIs to facilitate embedding. The wide variation in compilers, development environments, and platforms makes it difficult for individual developers to come up with a consistent build and installation process. The Ruby standard library provides a useful Mkmf module for this very purpose, and Ruby's developer has outlined the standard procedure for making use of this module's functionality: the extconf.rb script.
Ruby Developer's Guide, 2002
This chapter develops a sample application with four different GUI toolkits available for Ruby: Tk, Gtk, Fox, and VRuby. Ruby is an excellent tool for writing low-level scripts for system administration tasks, but it is equally useful for writing end-user applications. One of the benefits of Ruby programming is that it enables rapid application development. In contrast to the time-consuming code-compile-test cycle of traditional programming languages, changes can be quickly made to Ruby scripts to try out new ideas. This benefit becomes more evident while developing GUI applications with Ruby. It is both instructive and rewarding to build up the user interface incrementally, adding new elements and then rerunning the program to see how the user interface has changed as a result. Tk was one of the first cross-platform GUIs, and the easy application development afforded by Tcl/Tk opened up the world of GUI programming to a lot of programmers who were struggling with earlier C-based GUI libraries, like Motif and the Windows Win32 API. For developers who work primarily on the Linux operating system and are already familiar with GTK+ and GNOME-based applications in that environment, this is an obvious choice. FXRuby is a strong cross-platform GUI toolkit for Ruby, and it works equally well under Unix and Windows. Speaking of poor documentation, SWin/VRuby is a hard sell for anyone other than experienced Windows programmers.