Daniel Mendez - Academia.edu

Papers by Daniel Mendez

Causality in requirements artifacts: prevalence, detection, and impact

Requirements Engineering

Causal relations in natural language (NL) requirements convey strong semantic information. Automatically extracting such causal information enables multiple use cases, such as test case generation, but it first requires reliably detecting causal relations. Currently, this is still a cumbersome task, as causality in NL requirements is barely understood and, thus, barely detectable. In our empirically informed research, we aim at better understanding the notion of causality and at supporting the automatic extraction of causal relations in NL requirements. In a first case study, we investigate 14,983 sentences from 53 requirements documents to understand the extent and form in which causality occurs. Second, we present and evaluate a tool-supported approach, called CiRA, for causality detection. We conclude with a second case study in which we demonstrate the applicability of our tool and investigate the impact of causality on NL requirements. The first case study sho…

An initial Theory to Understand and Manage Requirements Engineering Debt in Practice

Cornell University - arXiv, Nov 11, 2022

Context: Advances in technical debt research demonstrate the benefits of applying the financial debt metaphor to support decision-making in software development activities. Although decision-making during requirements engineering has significant consequences, the debt metaphor in requirements engineering is inadequately explored. Objective: We aim to conceptualize how the debt metaphor applies to requirements engineering by organizing concepts related to practitioners' understanding and management of requirements engineering debt (RED). Method: We conducted two in-depth expert interviews to identify key requirements engineering debt concepts and construct a survey instrument. We surveyed 69 practitioners worldwide regarding their perception of the concepts and developed an initial analytical theory. Results: We propose a RED theory that aligns key concepts from technical debt research but emphasizes the specific nature of requirements engineering. In particular, the theory consists of 23 falsifiable propositions derived from the literature, the interviews, and survey results. Conclusions: The concepts of requirements engineering debt are perceived to be similar to their technical debt counterparts. Nevertheless, measuring and tracking requirements engineering debt are immature in prac…

PerSpecML: A Perspective-Based Approach for Specifying Machine Learning-Enabled Systems

CERN European Organization for Nuclear Research - Zenodo, Oct 11, 2022

This section presents an overview of the six practitioners who participated in the focus group. It also contains the transcriptions and codes involved in the static validation study. We include the explanation of our approach and the discussion with the participants.

Discussion with the participants of the static validation (M = moderator, P2-P6 = participants; each comment is followed by its assigned code):

M: If you don't understand some of the elements of PerSpecML (diagram + template), please let me know how I can help you. I can go back and explain again. [Here, we introduce to the participants some concepts about requirements engineering (RE) and machine learning (ML), and we detail our solution proposal called PerSpecML.] Now we want to hear from you about requirements in the context of ML-enabled systems. Based on your experience and the problems/challenges you face in practice regarding requirements in the context of ML, does PerSpecML address a relevant industry problem?

P6: I've been working for just over two years with ML systems. I'm not an expert, but to the best of my knowledge, there is a lack of tools and methods that help in understanding and detailing these types of requirements. My feeling is that solution proposals of this type do not reach industry. This leads to mismatches between practitioners and literature. [Code: 01) Lack of methods for supporting RE for ML]

P3: I agree with P6; in my opinion, there is a lack of methods following this line. I've never seen a standard or guideline shedding light on how to specify and document ML-enabled systems. [Code: 01) Lack of methods for supporting RE for ML]

P2: From my side, I see lean inceptions as the closest thing to focused requirements for ML. I know this approach is not focused on ML-based systems, but in my opinion it has worked well. I also know that it has limitations, but this could be improved. [Code: 02) Other techniques for supporting RE for ML]

P4: I'm curious to see a formal specification of an ML component. Based on my experience, these definitions are informal and emerge as the project progresses. [Code: 01) Lack of methods for supporting RE for ML]

M: Does anyone have any comments?

P4: ML is a process of continuous experimentation; this is well known. Requirements could not be different in this context. I see that the process of designing an ML-enabled system is based on empirical knowledge, which makes it highly dependent on trial-and-error experimentation. Initial requirements are defined at the beginning of a project, but by the end they are very different; as projects progress, the requirements change. I think this uncertainty could be reduced to avoid rework.

Automatic creation of acceptance tests by extracting conditionals from requirements: NLP approach and case study

Journal of Systems and Software

Acceptance testing is crucial to determine whether a system fulfills end-user requirements. However, the creation of acceptance tests is a laborious task entailing two major challenges: (1) practitioners need to determine the right set of test cases that fully covers a requirement, and (2) they need to create test cases manually due to insufficient tool support. Existing approaches for automatically deriving test cases require semi-formal or even formal notations of requirements, though unrestricted natural language is prevalent in practice. In this paper, we present our tool-supported approach CiRA (Conditionals in Requirements Artifacts), capable of creating the minimal set of required test cases from conditional statements in informal requirements. We demonstrate the feasibility of CiRA in a case study with three industry partners. In our study, out of 578 manually created test cases, 71.8% can be generated automatically. Additionally, CiRA discovered 80 relevant test cases that were missed in manual test case design. CiRA is publicly available at www.cira.bth.se/demo/.
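The minimal-test-set idea can be sketched for the simplest case, a purely conjunctive conditional. This is a hypothetical simplification of ours, not CiRA's actual algorithm (which parses arbitrary conditionals from natural language); the function and field names are invented for illustration. One positive case plus one case falsifying each condition in turn suffices to cover a conjunction:

```python
def minimal_test_cases(conditions, effect):
    """Derive a minimal set of test cases for a requirement of the form
    'If c1 and c2 and ... then effect'. One positive case plus one
    negative case per condition (falsified one at a time) covers a
    pure conjunction, MC/DC-style."""
    cases = []
    # Positive case: all conditions hold, so the effect is expected.
    cases.append({**{c: True for c in conditions}, "expected": effect})
    # Negative cases: falsify exactly one condition at a time.
    for c in conditions:
        cases.append({**{k: k != c for k in conditions},
                      "expected": "not " + effect})
    return cases

# "If the user is valid and the password is correct, then grant access."
for case in minimal_test_cases(["valid user", "correct password"], "grant access"):
    print(case)
```

For n conjunctive conditions this yields n + 1 test cases instead of the 2^n of a full truth table, which is the kind of reduction a "minimal set of required test cases" aims at.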

Model-based Analysis and Specification of Functional Requirements and Tests for Complex Automotive Systems

Cornell University - arXiv, Sep 3, 2022

The specification of requirements and tests is a crucial activity in automotive development projects. However, due to the increasing complexity of automotive systems, practitioners fail to specify requirements and tests for distributed and evolving systems with complex interactions when following traditional development processes. To address this research gap, we propose a technique that starts with the early identification of validation concerns from a stakeholder perspective, which we use to systematically design tests that drive a scenario-based modeling and automated analysis of system requirements. We discover that Natural Language Processing (NLP) techniques are suitable to automate the test-case design and hence enable the application of our technique to real-world stakeholder requirements. To ensure complete and consistent requirements and test specifications in the form required in automotive development projects, we develop a Model-Based Systems Engineering (MBSE) methodology. This methodology supports system architects and test designers in the collaborative application of our technique and in maintaining a central system model, in order to automatically derive the required specifications. We evaluate our methodology by applying it at KOSTAL (a Tier-1 supplier) and within student projects as part of the master's program Embedded Systems Engineering. Our study corroborates that our methodology is applicable and improves existing requirements and test specification processes by supporting the integrated and stakeholder-focused modeling of product and validation systems, where the early definition of stakeholder and validation concerns fosters problem-oriented, iterative, and test-driven requirements modeling.

A Study About the Knowledge and Use of Requirements Engineering Standards in Industry

IEEE Transactions on Software Engineering

Context. The use of standards is considered a vital part of any engineering discipline, so one could expect that standards play an important role in Requirements Engineering (RE) as well. However, little is known about the actual knowledge and use of RE-related standards in industry. Objective. In this article, we investigate to which extent standards and related artifacts such as templates or guidelines are known and used by RE practitioners. Method. To this end, we conducted a questionnaire-based online survey. We could analyze the replies from 90 RE practitioners using a combination of closed and open-text questions. Results. Our results indicate that the knowledge and use of standards and related artifacts in RE is less widespread than one might expect from an engineering perspective. For example, about 47% of the respondents working as requirements engineers or business analysts do not know the core standard in RE, ISO/IEC/IEEE 29148. Participants in our study mostly use standards by personal decision rather than having them imposed by their respective company, customer, or regulator. Beyond insufficient knowledge, we also found cultural and organizational factors impeding the widespread adoption of standards in RE. Conclusions. Overall, our results provide empirically informed insights into the actual use of standards and related artifacts in RE practice and, indirectly, into the value that the current standards create for RE practitioners.

Only Time Will Tell: Modelling Information Diffusion in Code Review with Time-Varying Hypergraphs

ACM / IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM)

Background: Modern code review is expected to facilitate knowledge sharing: All relevant information, the collective expertise, and meta-information around the code change and its context become evident, transparent, and explicit in the corresponding code review discussion. The discussion participants can leverage this information in the following code reviews; the information diffuses through the communication network that emerges from code review. Traditional time-aggregated graphs fall short in rendering information diffusion as those models ignore the temporal order of the information exchange: Information can only be passed on if it is available in the first place. Aim: This manuscript presents a novel model based on time-varying hypergraphs for rendering information diffusion that overcomes the inherent limitations of traditional, time-aggregated graph-based models. Method: In an in-silico experiment, we simulate an information diffusion within the internal code review at Microsoft and show the empirical impact of time on a key characteristic of information diffusion: the number of reachable participants. Results: Time-aggregation significantly overestimates the paths of information diffusion available in communication networks and, thus, is neither precise nor accurate for modelling and measuring the spread of information within communication networks that emerge from code review. Conclusion: Our model overcomes the inherent limitations of traditional, static or time-aggregated, graph-based communication models and sheds the first light on information diffusion through code review. We believe that our model can serve as a foundation for understanding, measuring, managing, and improving knowledge sharing in code review in particular and information diffusion in software engineering in general.
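The overestimation caused by ignoring temporal order can be illustrated with a small sketch (our illustration, not the paper's model; the event data is invented). Each hyperedge is a (time, participants) pair, e.g. one review discussion; information spreads only along time-respecting chains:

```python
def reachable(events, seed):
    """Time-respecting reachability: process hyperedges in temporal order;
    information spreads to an event's participants only if an already
    informed person takes part in it."""
    informed = {seed}
    for _, participants in sorted(events, key=lambda e: e[0]):
        if informed & set(participants):
            informed |= set(participants)
    return informed

def reachable_aggregated(events, seed):
    """Time-aggregated baseline: ignore temporal order and take the
    connected component of the seed, which overestimates diffusion."""
    informed = {seed}
    changed = True
    while changed:
        changed = False
        for _, participants in events:
            if informed & set(participants) and not set(participants) <= informed:
                informed |= set(participants)
                changed = True
    return informed

# Bob talks to Carol at t=1, Alice to Bob at t=2: Alice's information can
# never reach Carol, but the time-aggregated model claims that it does.
events = [(1, ["bob", "carol"]), (2, ["alice", "bob"])]
print(reachable(events, "alice"))             # time-respecting: alice, bob
print(reachable_aggregated(events, "alice"))  # aggregated: alice, bob, carol
```

The gap between the two reachable sets is exactly the "number of reachable participants" effect the in-silico experiment measures.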

Understanding the Implementation of Technical Measures in the Process of Data Privacy Compliance: A Qualitative Study

ACM / IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM)

Background: Modern privacy regulations, such as the General Data Protection Regulation (GDPR), address privacy in software systems in a technologically agnostic way by mentioning general "technical measures" for data privacy compliance rather than dictating how these should be implemented. An understanding of the concept of technical measures and how exactly these can be handled in practice, however, is not trivial due to its interdisciplinary nature and the necessary technical-legal interactions. Aims: We aim to investigate how the concept of technical measures for data privacy compliance is understood in practice as well as the technical-legal interaction intrinsic to the process of implementing those technical measures. Methods: We follow a research design that is 1) exploratory in nature, 2) qualitative, and 3) interview-based, with 16 selected privacy professionals in the technical and legal domains. Results: Our results suggest that there is no clear mutual understanding and commonly accepted approach to handling technical measures. Both technical and legal roles are involved in the implementation of such measures. While they still often operate in separate spheres, a predominant opinion amongst the interviewees is to promote more interdisciplinary collaboration. Conclusions: Our empirical findings confirm the need for better interaction between legal and engineering teams when implementing technical measures for data privacy. We posit that interdisciplinary collaboration is paramount to a more complete understanding of technical measures, which currently lacks a mutually accepted notion. Yet, as strongly suggested by our results, there is still a lack of systematic approaches to such interaction. Therefore, the results strengthen our confidence in the need for further investigations into the technical-legal dynamic of data privacy compliance.

Theories in Agile Software Development: Past, Present, and Future. Introduction to the XP 2020 Special Section

Information and Software Technology

Assets in Software Engineering: What are they after all?

Journal of Systems and Software

During the development and maintenance of software-intensive products or services, we depend on various assets. These assets are important to the feasibility of the project and influence the product's final quality. However, despite their central role in the software development process, little thought has yet been invested into what assets eventually are, often resulting in many terms and underlying concepts being mixed and used inconsistently. A precise terminology of assets and related concepts, such as asset degradation, is crucial for setting up a new generation of cost-effective software engineering practices. In this position paper, we critically reflect upon the resulting notion of assets in software engineering. As a starting point, we define the terminology and concepts of assets and extend the reasoning behind them. We explore assets' characteristics, such as value and persistence. We discuss what asset degradation is, its various types, and the implications that asset degradation might bring for the planning, realisation, and evolution of software-intensive products and services over time. With our work, we aspire to contribute to a more standardised definition of assets in software engineering and to foster research endeavours and their practical dissemination in a common, more unified direction.

ICSE 2021 Artefact Evaluation - Submission and Reviewing Guidelines

This document outlines, briefly, the submission and reviewing process for the artifact evaluation (AE) track of ICSE 2021. It aims at providing authors and reviewers with pragmatic insights into the process and expected criteria to merit awarding the respective badges.

Using Process Models to understand Security Standards

Process Models of the IEC 62443-4-1 Standard. Online material that supports the article "Using Process Models to understand Security Standards", SOFSEM 2021.

NaPiRE Trouble Predictor -- Reproduction Package

Based on data from the 2014 and 2018 runs of the NaPiRE survey (http://www.napire.org), this machine-learning-based tool implements a RESTful service predicting problems, causes, and their effects as potentially occurring in software development projects. To this end, we use Bayesian networks, which are easily configurable from a web interface and can reach reasonable prediction recall and precision.

Automatic Detection of Causality in Requirement Artifacts: The CiRA Approach

Requirements Engineering: Foundation for Software Quality, 2021

System behavior is often expressed by causal relations in requirements (e.g., "If event 1, then event 2"). Automatically extracting this embedded causal knowledge supports not only reasoning about requirements dependencies, but also various automated engineering tasks such as the seamless derivation of test cases. However, causality extraction from natural language (NL) is still an open research challenge, as existing approaches fail to extract causality with reasonable performance. [Question/problem:] We understand causality extraction from requirements as a two-step problem: First, we need to detect whether requirements have causal properties or not. Second, we need to understand and extract their causal relations. At present, though, we lack knowledge about the form and complexity of causality in requirements, which is necessary to develop a suitable approach addressing these two problems. [Principal ideas/results:] We conduct an exploratory case study with 14,983 sentences from 53 requirements documents originating from 18 different domains and shed light on the form and complexity of causality in requirements. Based on our findings, we develop a tool-supported approach for causality detection (CiRA, standing for Causality in Requirement Artifacts). This constitutes a first step towards causality extraction from NL requirements. [Contribution:] We report on a case study and the resulting tool-supported approach for causality detection in requirements. Our case study corroborates, among other things, that causality is, in fact, a widely used linguistic pattern to describe system behavior, as about a third of the analyzed sentences are causal. We further demonstrate that our tool CiRA achieves a macro-F1 score of 82% on real-world data and that it outperforms related approaches with an average gain of 11.06% in macro-Recall and 11.43% in macro-Precision. Finally, we disclose our open data sets as well as our tool to foster the discourse on the automatic detection of causality in the RE community.
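The detection step (causal vs. non-causal sentence) can be illustrated with a naive cue-phrase baseline. This is our illustration only: the actual CiRA tool uses a trained classifier rather than keyword matching, and the cue list below is an assumption of ours:

```python
import re

# Flag sentences containing common causal cue phrases. A lexical baseline
# like this misses implicit causality and misfires on non-causal uses of
# the cues, which is why learned classifiers outperform it.
CAUSAL_CUES = re.compile(
    r"\b(if|when|because|since|due to|as soon as|therefore|hence|so that)\b",
    re.IGNORECASE,
)

def is_causal(sentence: str) -> bool:
    """Naive lexical detector for causal requirement sentences."""
    return bool(CAUSAL_CUES.search(sentence))

print(is_causal("If the temperature exceeds 90 C, the system shall shut down."))  # True
print(is_causal("The system shall store all measurements."))                       # False
```

Such a baseline is useful as a sanity check when evaluating a learned detector on the ~one third of requirement sentences that are causal.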

Data-driven Risk Management for Requirements Engineering: An Automated Approach based on Bayesian Networks

2020 IEEE 28th International Requirements Engineering Conference (RE), 2020

Requirements Engineering (RE) is a means to reduce the risk of delivering a product that does not fulfill the stakeholders' needs. Therefore, a major challenge in RE is to decide how much RE is needed and which RE methods to apply. The quality of such decisions is strongly based on the RE expert's experience and expertise in carefully analyzing the context and current state of a project. Recent work, however, shows that a lack of experience and qualification are common causes for problems in RE. We trained a series of Bayesian Networks on data from the NaPiRE survey to model relationships between RE problems, their causes, and their effects in projects with different contextual characteristics. These models were used to conduct (1) a post-mortem (diagnostic) analysis, deriving probable causes of suboptimal RE performance, and (2) a preventive analysis, predicting probable issues a young project might encounter. The method was subject to a rigorous cross-validation procedure for both use cases before assessing its applicability to real-world scenarios with a case study.

Thrusting Pressure and Supersonic Exhaust Velocity in a Rotating Detonation Engine

2018 AIAA Aerospace Sciences Meeting, 2018

Security Compliance in Agile Software Development: A Systematic Mapping Study

2020 46th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), 2020

Companies adopting agile development tend to face challenges in complying with security norms. Existing research either focuses on how to integrate security into agile methods or on discussing compliance issues of agile methods but independently of the regulation type, in particular of security standards. A comprehensive overview of this scattered field is still missing and we know little about how to achieve security compliance in agile software development. Existing secondary studies (mapping studies and literature reviews) analyze publications on secure agile development, but they do not analyze implications of security standard compliance, e.g., integration of specific standard requirements or compliance assessments. To close this gap, we report on a systematic mapping study. Starting with a set of 2,383 papers, our work distills 11 relevant publications addressing security compliance in agile software development. With this study, we contribute by describing the maturity of the field, as well as domains where security compliant agile software engineering was investigated. Moreover, we make explicit which phases of a secure development process are covered by the field and which agile principles are analyzed when aiming at compliance with international security standards, country-specific security regulations, industry-specific security standards, and other well-known security frameworks.

Challenges in Survey Research

Contemporary Empirical Methods in Software Engineering, 2020

While survey research is an important and often used research method, it has been discussed less often on a methodological level in empirical software engineering than other types of research. This chapter compiles a set of important and challenging issues in survey research based on experiences with several large-scale international surveys. The chapter covers theory building, sampling, invitation and follow-up, statistical as well as qualitative analysis of survey data, and the usage of psychometrics in software engineering surveys.

Integration of Security Standards in DevOps Pipelines: An Industry Case Study

Product-Focused Software Process Improvement, 2020

In the last decade, companies adopted DevOps as a fast path to deliver software products according to customer expectations, with well-aligned teams and in continuous cycles. As a basic practice, DevOps relies on pipelines that simulate factory swim-lanes. The more automation in the pipeline, the shorter the lead time is supposed to be. However, applying DevOps is challenging, particularly for industrial control systems (ICS) that support critical infrastructures and that must obey rigorous requirements from security regulations and standards. Current research on security-compliant DevOps presents open gaps for this particular domain and, in general, for the systematic application of security standards. In this paper, we present a systematic approach to integrate standard-based security activities into DevOps pipelines and highlight their automation potential. Our intention is to share our experiences and help practitioners overcome the trade-off between adding security activities into the development process and keeping a short lead time. We conducted an evaluation of our approach at a large industrial company considering the IEC 62443-4-1 security standard that regulates ICS. The results strengthen our confidence in the usefulness of our approach and artefacts, and in their ability to support practitioners in achieving security compliance while preserving agility, including short lead times.

Research paper thumbnail of Causality in requirements artifacts: prevalence, detection, and impact

Requirements Engineering

Causal relations in natural language (NL) requirements convey strong, semantic information. Autom... more Causal relations in natural language (NL) requirements convey strong, semantic information. Automatically extracting such causal information enables multiple use cases, such as test case generation, but it also requires to reliably detect causal relations in the first place. Currently, this is still a cumbersome task as causality in NL requirements is still barely understood and, thus, barely detectable. In our empirically informed research, we aim at better understanding the notion of causality and supporting the automatic extraction of causal relations in NL requirements. In a first case study, we investigate 14.983 sentences from 53 requirements documents to understand the extent and form in which causality occurs. Second, we present and evaluate a tool-supported approach, called CiRA, for causality detection. We conclude with a second case study where we demonstrate the applicability of our tool and investigate the impact of causality on NL requirements. The first case study sho...

Research paper thumbnail of An initial Theory to Understand and Manage Requirements Engineering Debt in Practice

Cornell University - arXiv, Nov 11, 2022

Context: Advances in technical debt research demonstrate the benefits of applying the financial d... more Context: Advances in technical debt research demonstrate the benefits of applying the financial debt metaphor to support decision-making in software development activities. Although decision-making during requirements engineering has significant consequences, the debt metaphor in requirements engineering is inadequately explored. Objective: We aim to conceptualize how the debt metaphor applies to requirements engineering by organizing concepts related to practitioners' understanding and managing of requirements engineering debt (RED). Method: We conducted two in-depth expert interviews to identify key requirements engineering debt concepts and construct a survey instrument. We surveyed 69 practitioners worldwide regarding their perception of the concepts and developed an initial analytical theory. Results: We propose a RED theory that aligns key concepts from technical debt research but emphasizes the specific nature of requirements engineering. In particular, the theory consists of 23 falsifiable propositions derived from the literature, the interviews, and survey results. Conclusions: The concepts of requirements engineering debt are perceived to be similar to their technical debt counterpart. Nevertheless, measuring and tracking requirements engineering debt are immature in prac

Research paper thumbnail of PerSpecML: A Perspective-Based Approach for Specifying Machine Learning-Enabled Systems

CERN European Organization for Nuclear Research - Zenodo, Oct 11, 2022

This section presents an overview of the six practitioners who participated in the focus group. It also contains the transcriptions and codes involved in the static validation study. We include the explanation of our approach and the discussion with the participants.

Discussion with the participants of the static validation (M = moderator, P1–P6 = participants; assigned codes in brackets):

M: If you don't understand some of the elements of PerSpecML (diagram + template), please let me know how I can help you. I can go back and explain again. [Here, we introduce to the participants some concepts about requirements engineering (RE) and machine learning (ML), and we detail our solution proposal called PerSpecML.] Now we want to hear from you about requirements in the context of ML-enabled systems. Based on your experience and the problems/challenges you face in practice regarding requirements in the context of ML, does PerSpecML address a relevant industry problem?

P6: I've been working for just over 2 years with ML systems. I'm not an expert, but to the best of my knowledge, there is a lack of tools and methods that help in understanding and detailing these types of requirements. My feeling is that solution proposals of this type do not reach the industry. This leads to mismatches between practitioners and literature. [01) Lack of methods for supporting RE for ML]

P3: I agree with P6. In my opinion, there is a lack of methods following this line. I've never seen a standard or guideline shedding light on how to specify and document ML-enabled systems. [01) Lack of methods for supporting RE for ML]

P2: From my side, I see lean inceptions as the closest thing to focused requirements for ML. I know this approach is not focused on ML-based systems, but in my opinion it has worked well. I also know that it has limitations, but this could be improved. [02) Other techniques for supporting RE for ML]

P4: I'm curious to see a formal specification of an ML component. Based on my experience, these definitions are informal and emerge as the project progresses. [01) Lack of methods for supporting RE for ML]

M: Does anyone have any comments?

P4: ML is a process of continuous experimentation. This is well known. Requirements could not be different in this context. I see that the process of designing an ML-enabled system is based on empirical knowledge, which makes it highly dependent on trial-and-error experimentation. At the beginning of the projects, initial requirements are defined, but at the end of the project these requirements are very different. As projects progress, these requirements change. I think this uncertainty could be reduced to avoid rework.

Research paper thumbnail of Automatic creation of acceptance tests by extracting conditionals from requirements: NLP approach and case study

Journal of Systems and Software

Acceptance testing is crucial to determine whether a system fulfills end-user requirements. However, the creation of acceptance tests is a laborious task entailing two major challenges: (1) practitioners need to determine the right set of test cases that fully covers a requirement, and (2) they need to create test cases manually due to insufficient tool support. Existing approaches for automatically deriving test cases require semi-formal or even formal notations of requirements, though unrestricted natural language is prevalent in practice. In this paper, we present our tool-supported approach CiRA (Conditionals in Requirements Artifacts), capable of creating the minimal set of required test cases from conditional statements in informal requirements. We demonstrate the feasibility of CiRA in a case study with three industry partners. In our study, out of 578 manually created test cases, 71.8% can be generated automatically. Additionally, CiRA discovered 80 relevant test cases that were missed in manual test case design. CiRA is publicly available at www.cira.bth.se/demo/.
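The core idea, deriving one positive and one negative test case per conditional, can be sketched in a few lines. The regular expression, the example requirement, and the function names below are simplified illustrations, not CiRA's actual extraction pipeline:

```python
import re

def extract_conditional(requirement: str):
    """Split a requirement of the form 'If <condition>, then <outcome>.'
    into its two parts. A deliberately simplified pattern; real
    requirements use many more conditional phrasings."""
    match = re.match(r"If (.+?),? then (.+?)\.?$", requirement, re.IGNORECASE)
    return (match.group(1), match.group(2)) if match else None

def minimal_test_cases(condition: str, outcome: str):
    """Derive the minimal test set for one conditional: a positive case
    (condition holds, outcome expected) and a negative case."""
    return [
        {"given": condition, "holds": True,  "then": outcome},
        {"given": condition, "holds": False, "then": f"NOT ({outcome})"},
    ]

req = "If the user enters a valid password, then the system grants access."
cond, outcome = extract_conditional(req)
cases = minimal_test_cases(cond, outcome)
```

The minimal-set aspect matters because a single conditional needs exactly two cases, while nested or combined conditions would require enumerating only the value combinations that are actually distinguishable.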

Research paper thumbnail of Model-based Analysis and Specification of Functional Requirements and Tests for Complex Automotive Systems

Cornell University - arXiv, Sep 3, 2022

The specification of requirements and tests is a crucial activity in automotive development projects. However, due to the increasing complexity of automotive systems, practitioners fail to specify requirements and tests for distributed and evolving systems with complex interactions when following traditional development processes. To address this research gap, we propose a technique that starts with the early identification of validation concerns from a stakeholder perspective, which we use to systematically design tests that drive a scenario-based modeling and automated analysis of system requirements. We discover that Natural Language Processing (NLP) techniques are suitable to automate the test-case design and hence enable the application of our technique to real-world stakeholder requirements. To ensure complete and consistent requirements and test specifications in the form required in automotive development projects, we develop a Model-Based Systems Engineering (MBSE) methodology. This methodology supports system architects and test designers in the collaborative application of our technique and in maintaining a central system model, in order to automatically derive the required specifications. We evaluate our methodology by applying it at KOSTAL (a Tier-1 supplier) and within student projects as part of the master's program Embedded Systems Engineering. Our study corroborates that our methodology is applicable and improves existing requirements and test specification processes by supporting the integrated and stakeholder-focused modeling of product and validation systems, where the early definition of stakeholder and validation concerns fosters problem-oriented, iterative, and test-driven requirements modeling.

Research paper thumbnail of A Study About the Knowledge and Use of Requirements Engineering Standards in Industry

IEEE Transactions on Software Engineering

Context. The use of standards is considered a vital part of any engineering discipline, so one could expect that standards play an important role in Requirements Engineering (RE) as well. However, little is known about the actual knowledge and use of RE-related standards in industry. Objective. In this article, we investigate to which extent standards and related artifacts, such as templates or guidelines, are known and used by RE practitioners. Method. To this end, we conducted a questionnaire-based online survey. We analyzed the replies of 90 RE practitioners using a combination of closed and open-text questions. Results. Our results indicate that the knowledge and use of standards and related artifacts in RE is less widespread than one might expect from an engineering perspective. For example, about 47% of the respondents working as requirements engineers or business analysts do not know the core standard in RE, ISO/IEC/IEEE 29148. Participants in our study mostly use standards by personal decision rather than having them imposed by their respective company, customer, or regulator. Beyond insufficient knowledge, we also found cultural and organizational factors impeding the widespread adoption of standards in RE. Conclusions. Overall, our results provide empirically informed insights into the actual use of standards and related artifacts in RE practice and, indirectly, into the value that the current standards create for RE practitioners.

Research paper thumbnail of Only Time Will Tell: Modelling Information Diffusion in Code Review with Time-Varying Hypergraphs

ACM / IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM)

Background: Modern code review is expected to facilitate knowledge sharing: all relevant information, the collective expertise, and meta-information around the code change and its context become evident, transparent, and explicit in the corresponding code review discussion. The discussion participants can leverage this information in subsequent code reviews; the information diffuses through the communication network that emerges from code review. Traditional time-aggregated graphs fall short in rendering information diffusion, as those models ignore the temporal order of the information exchange: information can only be passed on if it is available in the first place. Aim: This manuscript presents a novel model based on time-varying hypergraphs for rendering information diffusion that overcomes the inherent limitations of traditional, time-aggregated graph-based models. Method: In an in-silico experiment, we simulate information diffusion within the internal code review at Microsoft and show the empirical impact of time on a key characteristic of information diffusion: the number of reachable participants. Results: Time-aggregation significantly overestimates the paths of information diffusion available in communication networks and, thus, is neither precise nor accurate for modelling and measuring the spread of information within communication networks that emerge from code review. Conclusion: Our model overcomes the inherent limitations of traditional, static or time-aggregated, graph-based communication models and sheds a first light on information diffusion through code review. We believe that our model can serve as a foundation for understanding, measuring, managing, and improving knowledge sharing in code review in particular and information diffusion in software engineering in general.
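The aggregation pitfall can be illustrated in a few lines: in a time-respecting model, information only travels along contacts that occur after it has arrived. The contact list and node names below are made up for illustration; plain directed contacts suffice to show the effect, whereas the paper's model uses hypergraphs over real review data:

```python
def temporal_reach(contacts, start, t0=0):
    """Earliest-arrival reachability in a temporal network: a node can
    only forward information it has already received. Contacts are
    (time, source, target) triples, processed in time order."""
    informed = {start: t0}
    for t, src, dst in sorted(contacts):
        if src in informed and informed[src] <= t:
            informed[dst] = min(informed.get(dst, float("inf")), t)
    return set(informed)

# A static, time-aggregated view would report the path A -> B -> C -> D,
# but the B -> C contact happens before A -> B, so C and D are never
# actually informed.
contacts = [(2, "A", "B"), (1, "B", "C"), (3, "C", "D")]
reached = temporal_reach(contacts, "A")  # {"A", "B"}
```

Comparing the time-respecting result with the static transitive closure over the same edges makes the overestimation of reachable participants directly visible.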

Research paper thumbnail of Understanding the Implementation of Technical Measures in the Process of Data Privacy Compliance: A Qualitative Study

ACM / IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM)

Background: Modern privacy regulations, such as the General Data Protection Regulation (GDPR), address privacy in software systems in a technologically agnostic way by mentioning general "technical measures" for data privacy compliance rather than dictating how these should be implemented. An understanding of the concept of technical measures and how exactly these can be handled in practice, however, is not trivial due to its interdisciplinary nature and the necessary technical-legal interactions. Aims: We aim to investigate how the concept of technical measures for data privacy compliance is understood in practice, as well as the technical-legal interaction intrinsic to the process of implementing those technical measures. Methods: We follow a research design that is (1) exploratory in nature, (2) qualitative, and (3) interview-based, with 16 selected privacy professionals in the technical and legal domains. Results: Our results suggest that there is no clear mutual understanding and commonly accepted approach to handling technical measures. Both technical and legal roles are involved in the implementation of such measures. While they still often operate in separate spheres, a predominant opinion amongst the interviewees is to promote more interdisciplinary collaboration. Conclusions: Our empirical findings confirm the need for better interaction between legal and engineering teams when implementing technical measures for data privacy. We posit that interdisciplinary collaboration is paramount to a more complete understanding of technical measures, which currently lacks a mutually accepted notion. Yet, as strongly suggested by our results, there is still a lack of systematic approaches to such interaction. Therefore, the results strengthen our confidence in the need for further investigations into the technical-legal dynamic of data privacy compliance.

Research paper thumbnail of Theories in Agile Software Development: Past, Present, and Future Introduction to the XP 2020 Special Section

Information and Software Technology

Research paper thumbnail of Assets in Software Engineering: What are they after all?

Journal of Systems and Software

During the development and maintenance of software-intensive products or services, we depend on various assets. These assets are important to the feasibility of the project and influence the product's final quality. However, despite their central role in the software development process, little thought has yet been invested into what assets actually are, often resulting in many terms and underlying concepts being mixed and used inconsistently. A precise terminology of assets and related concepts, such as asset degradation, is crucial for setting up a new generation of cost-effective software engineering practices. In this position paper, we critically reflect upon the resulting notion of assets in software engineering. As a starting point, we define the terminology and concepts of assets and extend the reasoning behind them. We explore assets' characteristics, such as value and persistence. We discuss what asset degradation is, its various types, and the implications that asset degradation might bring for the planning, realisation, and evolution of software-intensive products and services over time. With our work, we aspire to contribute to a more standardised definition of assets in software engineering and to foster research endeavours and their practical dissemination in a common, more unified direction.

Research paper thumbnail of ICSE 2021 Artefact Evaluation - Submission and Reviewing Guidelines

This document briefly outlines the submission and reviewing process for the artifact evaluation (AE) track of ICSE 2021. It aims at providing authors and reviewers with pragmatic insights into the process and the criteria expected to merit awarding the respective badges.

Research paper thumbnail of Using Process Models to understand Security Standards

Process Models of the IEC 62443-4-1 Standard. Online material that supports the article "Using Process Models to understand Security Standards", SOFSEM 2021.

Research paper thumbnail of NaPiRE Trouble Predictor -- Reproduction Package

Based on data from the 2014 and 2018 runs of the NaPiRE survey (http://www.napire.org), this machine-learning-based tool implements a RESTful service predicting problems, causes, and their effects as potentially occurring in software development projects. To this end, we use Bayesian networks, which are easily configurable from a web interface and can reach reasonable prediction recall and precision.

Research paper thumbnail of Automatic Detection of Causality in Requirement Artifacts: The CiRA Approach

Requirements Engineering: Foundation for Software Quality, 2021

System behavior is often expressed by causal relations in requirements (e.g., if event 1, then event 2). Automatically extracting this embedded causal knowledge supports not only reasoning about requirements dependencies, but also various automated engineering tasks such as the seamless derivation of test cases. However, causality extraction from natural language (NL) is still an open research challenge, as existing approaches fail to extract causality with reasonable performance. [Question/problem:] We understand causality extraction from requirements as a two-step problem: first, we need to detect whether requirements have causal properties or not; second, we need to understand and extract their causal relations. At present, though, we lack knowledge about the form and complexity of causality in requirements, which is necessary to develop a suitable approach addressing these two problems. [Principal ideas/results:] We conduct an exploratory case study with 14,983 sentences from 53 requirements documents originating from 18 different domains and shed light on the form and complexity of causality in requirements. Based on our findings, we develop a tool-supported approach for causality detection (CiRA, standing for Causality in Requirement Artifacts). This constitutes a first step towards causality extraction from NL requirements. [Contribution:] We report on a case study and the resulting tool-supported approach for causality detection in requirements. Our case study corroborates, among other things, that causality is, in fact, a widely used linguistic pattern to describe system behavior, as about a third of the analyzed sentences are causal. We further demonstrate that our tool CiRA achieves a macro-F1 score of 82% on real-world data and that it outperforms related approaches with an average gain of 11.06% in macro-Recall and 11.43% in macro-Precision. Finally, we disclose our open data sets as well as our tool to foster the discourse on the automatic detection of causality in the RE community.
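To make the detection step concrete, a purely lexical baseline can be sketched as follows. The cue-phrase list is a hypothetical illustration and this is not CiRA's approach, which relies on a trained classifier precisely because such surface checks miss implicit causality and over-trigger on non-causal uses of the cues:

```python
# Hypothetical list of causal cue phrases (illustration only).
CAUSAL_CUES = ("if", "when", "because", "in case", "as soon as",
               "provided that", "unless", "due to")

def is_causal(sentence: str) -> bool:
    """Flag a requirement sentence as potentially causal when it
    contains a causal cue phrase, matched as a whole word or phrase."""
    padded = f" {sentence.lower()} "
    return any(f" {cue} " in padded for cue in CAUSAL_CUES)
```

A baseline like this is useful mainly as a point of comparison: the reported macro-F1 gains of a learned detector are measured against exactly this kind of pattern-based approach.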

Research paper thumbnail of Data-driven Risk Management for Requirements Engineering: An Automated Approach based on Bayesian Networks

2020 IEEE 28th International Requirements Engineering Conference (RE), 2020

Requirements Engineering (RE) is a means to reduce the risk of delivering a product that does not fulfill the stakeholders' needs. Therefore, a major challenge in RE is to decide how much RE is needed and which RE methods to apply. The quality of such decisions strongly depends on the RE expert's experience and expertise in carefully analyzing the context and current state of a project. Recent work, however, shows that a lack of experience and qualification are common causes for problems in RE. We trained a series of Bayesian Networks on data from the NaPiRE survey to model relationships between RE problems, their causes, and effects in projects with different contextual characteristics. These models were used (1) to conduct a post-mortem (diagnostic) analysis, deriving probable causes of suboptimal RE performance, and (2) to conduct a preventive analysis, predicting probable issues a young project might encounter. The method was subject to a rigorous cross-validation procedure for both use cases before assessing its applicability to real-world scenarios with a case study.
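The diagnostic use case boils down to Bayesian inversion: given an observed RE problem, infer the most probable cause. A minimal sketch with a hand-specified two-node network follows; the variable names and all probabilities are invented for illustration, whereas the NaPiRE models are learned from survey data and cover many more problem, cause, and effect variables:

```python
# Prior P(cause) and conditional P(problem observed | cause);
# all numbers are made up for illustration.
P_CAUSE = {"lack_of_experience": 0.3, "other_cause": 0.7}
P_PROBLEM_GIVEN_CAUSE = {"lack_of_experience": 0.8, "other_cause": 0.2}

def p_cause_given_problem(cause: str) -> float:
    """Diagnostic query via Bayes' rule: P(cause | problem observed)."""
    joint = {c: P_CAUSE[c] * P_PROBLEM_GIVEN_CAUSE[c] for c in P_CAUSE}
    return joint[cause] / sum(joint.values())

posterior = p_cause_given_problem("lack_of_experience")  # ~0.632
```

Observing the problem raises the belief in the weak prior cause from 0.3 to roughly 0.63; the preventive use case runs the same network in the other direction, predicting problems from assumed causes.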

Research paper thumbnail of Thrusting Pressure and Supersonic Exhaust Velocity in a Rotating Detonation Engine

2018 AIAA Aerospace Sciences Meeting, 2018

Research paper thumbnail of Security Compliance in Agile Software Development: A Systematic Mapping Study

2020 46th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), 2020

Companies adopting agile development tend to face challenges in complying with security norms. Existing research either focuses on how to integrate security into agile methods or discusses compliance issues of agile methods independently of the regulation type, in particular of security standards. A comprehensive overview of this scattered field is still missing, and we know little about how to achieve security compliance in agile software development. Existing secondary studies (mapping studies and literature reviews) analyze publications on secure agile development, but they do not analyze the implications of security standard compliance, e.g., the integration of specific standard requirements or compliance assessments. To close this gap, we report on a systematic mapping study. Starting with a set of 2,383 papers, our work distills 11 relevant publications addressing security compliance in agile software development. With this study, we contribute by describing the maturity of the field, as well as the domains in which security-compliant agile software engineering was investigated. Moreover, we make explicit which phases of a secure development process are covered by the field and which agile principles are analyzed when aiming at compliance with international security standards, country-specific security regulations, industry-specific security standards, and other well-known security frameworks.

Research paper thumbnail of Challenges in Survey Research

Contemporary Empirical Methods in Software Engineering, 2020

While being an important and often used research method, survey research has been discussed less often on a methodological level in empirical software engineering than other types of research. This chapter compiles a set of important and challenging issues in survey research, based on experiences with several large-scale international surveys. The chapter covers theory building, sampling, invitation and follow-up, statistical as well as qualitative analysis of survey data, and the usage of psychometrics in software engineering surveys.

Research paper thumbnail of Integration of Security Standards in DevOps Pipelines: An Industry Case Study

Product-Focused Software Process Improvement, 2020

In the last decade, companies adopted DevOps as a fast path to deliver software products according to customer expectations, with well-aligned teams and in continuous cycles. As a basic practice, DevOps relies on pipelines that simulate factory swim-lanes. The more automation in the pipeline, the shorter the lead time is supposed to be. However, applying DevOps is challenging, particularly for industrial control systems (ICS) that support critical infrastructures and must obey rigorous requirements from security regulations and standards. Current research on security-compliant DevOps presents open gaps for this particular domain and, in general, for the systematic application of security standards. In this paper, we present a systematic approach to integrate standard-based security activities into DevOps pipelines and highlight their automation potential. Our intention is to share our experiences and help practitioners overcome the trade-off between adding security activities to the development process and keeping a short lead time. We conducted an evaluation of our approach at a large industrial company, considering the IEC 62443-4-1 security standard that regulates ICS. The results strengthen our confidence in the usefulness of our approach and artefacts, and in their ability to support practitioners in achieving security compliance while preserving agility, including short lead times.