Ioannis Stamelos - Academia.edu
Papers by Ioannis Stamelos
2010 Seventh International Conference on the Quality of Information and Communications Technology, 2010
Nowadays, a key question for most organizations is which agile practices should be implemented to improve product quality. This systematic literature review surveys studies published up to and including 2009 and attempts to present and evaluate the empirical findings regarding quality in agile practices. The studies were classified into three groups: test-driven or test-first development, pair programming, and miscellaneous agile practices and methods. The findings of most studies suggest that agile practices can improve quality if they are implemented correctly. The significant findings of this study, in conjunction with previous research, can be used as guidelines by practitioners in their own settings and situations.
Information and Software Technology, 2001
Although a software development organisation is typically involved in more than one project simultaneously, the available tools in the area of software cost estimation deal mostly with single software projects. In order to calculate the possible cost of an entire project portfolio, one must combine the single-project estimates while taking into account the uncertainty involved. In this paper, statistical simulation techniques are used to calculate confidence intervals for the effort needed for a project portfolio. The overall approach is illustrated through the adaptation of the analogy-based method for software cost estimation to cover multiple projects.
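To illustrate the idea of combining single-project estimates under uncertainty, the sketch below runs a small Monte Carlo simulation in Python. It is not the paper's analogy-based procedure: the project list, the triangular effort distributions and the 90% level are assumptions made purely for illustration.

```python
import numpy as np

# Hypothetical per-project estimates: (low, most likely, high) effort in person-months.
projects = [(10, 14, 22), (30, 38, 55), (5, 8, 15)]

def portfolio_interval(projects, n_iter=10_000, level=0.90, seed=0):
    """Monte Carlo confidence interval for total portfolio effort.

    Each project's uncertain effort is drawn from a triangular distribution
    (an assumption made for this sketch; the paper combines analogy-based
    single-project estimates instead).
    """
    rng = np.random.default_rng(seed)
    totals = np.zeros(n_iter)
    for low, mode, high in projects:
        totals += rng.triangular(low, mode, high, size=n_iter)
    alpha = (1.0 - level) / 2.0
    return np.percentile(totals, [100 * alpha, 100 * (1 - alpha)])

low, high = portfolio_interval(projects)
print(f"90% interval for portfolio effort: [{low:.1f}, {high:.1f}] person-months")
```

Summing the sampled efforts per iteration, rather than summing the individual intervals, is what lets the portfolio interval reflect how the single-project uncertainties combine.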
Empirical Software Engineering, 2006
In this paper we discuss our empirical study of the advantages and difficulties that 15 Greek software companies experienced in applying Extreme Programming (XP) as a holistic system in software development. Based on a generic XP system including feedback influences, and using a cause-effect model of socio-technical factors as our research tool, the study statistically evaluates the application of XP practices in the companies studied. Data were collected from 30 managers and developers over a period of six months, using the sample-survey technique with questionnaires and interviews. Practices were analysed individually, using Descriptive Statistics (DS), and as a whole by building different models using stepwise Discriminant Analysis (DA). The results show that companies facing various problems with common code ownership, on-site customer, the 40-hour week and metaphor prefer to develop their own tailored XP method and way of working, with practices that meet their requirements. Pair programming and test-driven development were found to be the most significant success factors. Interactions and hidden dependencies among the majority of the practices, as well as communication and synergy between skilled personnel, were found to be other significant success factors. The contribution of this preliminary research work is to provide some evidence that may assist companies in evaluating whether the XP system as a holistic framework would suit their current situation.
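As a rough illustration of what a discriminant analysis over practice-adoption data looks like, the following Python sketch fits a linear discriminant model on hypothetical survey scores. The practice names, the scores and the success label are invented for the example, and the paper's stepwise procedure is not reproduced.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical survey data: rows = respondents, columns = adoption scores (1-5)
# for a few XP practices; y = whether the respondent rated the XP adoption a success.
rng = np.random.default_rng(1)
X = rng.integers(1, 6, size=(30, 4)).astype(float)  # pair prog., TDD, on-site customer, metaphor
y = (X[:, 0] + X[:, 1] + rng.normal(0, 1, 30) > 6).astype(int)

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

# The coefficients indicate how strongly each practice separates "successful"
# adoptions from the rest in this made-up sample.
print(dict(zip(["pair_programming", "tdd", "on_site_customer", "metaphor"],
               lda.coef_[0].round(2))))
print("training accuracy:", lda.score(X, y))
```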
Communications of the ACM, 2004
A study of almost six million lines of code tracks how freely accessible source code holds up against time and multiple iterations.
25th Pan-Hellenic Conference on Informatics, 2021
This work describes a research collaboration between universities and industry with the aim of providing a low-cost prototype based on Augmented Reality technologies that assists with maintaining correct information in Warehouse Management Systems (WMS). The component interacts with the central server of an existing commercial WMS to provide up-to-date information on the actual state of the warehouse. The low-cost requirement restricts the solution to smartphones and other readily available, inexpensive equipment, such as drones, as well as mostly Open Source Software. This requirement also introduces several interesting architectural issues, which we discuss in this work. A prototype was built for the proposed architecture and several tests were carried out.
Proceedings of the XP2017 Scientific Workshops, 2017
Technical debt (TD) impedes software projects by reducing the velocity of development teams during software evolution. Although TD is usually assessed on either the entire system or individual software artifacts, it is the actual craftsmanship of developers that causes the accumulation of TD. In light of extremely high maintenance costs, efficient software project management cannot occur without recognizing the relation between developer characteristics and the tendency to introduce violations that lead to TD. In this paper, we investigate three research questions related to the distribution of TD among the developers of a software project, the types of violations caused by each developer, and the relation between developers' maturity and the tendency to accumulate TD. The study has been performed on four widely employed PHP open-source projects. All developers' personal characteristics have been anonymized in the study. CCS CONCEPTS: Software and its engineering → Software creation and management → Software post-development issues → Maintaining software; Social and professional topics → Management of computing and information systems → Software management → Software maintenance.
2017 43rd Euromicro Conference on Software Engineering and Advanced Applications (SEAA), 2017
One of the first steps towards effective Technical Debt (TD) management is the quantification and continuous monitoring of the TD principal. In the current state of research and practice, the most common ways to assess TD principal are the use of: (a) structural proxies, i.e., most commonly quality metrics; and (b) monetized proxies, i.e., most commonly the SQALE (Software Quality Assessment based on Lifecycle Expectations) method. Although both approaches have merit, they rely on different viewpoints of TD, and their level of agreement has not been evaluated so far. Therefore, in this paper, we empirically explore this relation by analyzing data obtained from 20 open-source software projects and build a regression model that establishes a relationship between the two. The results of the study suggest that a model of seven structural metrics, quantifying different aspects of quality (i.e., coupling, cohesion, complexity, size, and inheritance), can accurately estimate TD principal as appraised by SonarQube. The results of this case study are useful to both academia and industry. In particular, academia can gain knowledge on: (a) the reliability and agreement of TD principal assessment methods and (b) the structural characteristics of software that contribute to the accumulation of TD, whereas practitioners are provided with an alternative evaluation model, with a reduced number of parameters, that can accurately assess TD through traditional software quality metrics and tools.
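The kind of model described above can be sketched as an ordinary least-squares regression from structural metrics to a monetized TD value. The metric set (CBO, LCOM, WMC, LOC, DIT), the sample values and the TD figures below are illustrative assumptions, not the paper's seven-metric model or its data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical dataset: one row per class with structural metrics (coupling,
# cohesion, complexity, size, inheritance proxies) and the TD principal
# reported by SonarQube in minutes. Values are invented for this sketch.
metrics = np.array([
    # CBO, LCOM, WMC,  LOC, DIT
    [12, 0.80, 45, 1200, 3],
    [ 5, 0.40, 20,  400, 1],
    [20, 0.90, 70, 2500, 4],
    [ 8, 0.55, 30,  800, 2],
    [15, 0.75, 55, 1800, 3],
])
td_principal = np.array([960, 240, 2100, 520, 1400])  # minutes, as appraised by the tool

model = LinearRegression().fit(metrics, td_principal)
print("coefficients:", model.coef_.round(1))
print("estimated TD for a new class (minutes):",
      model.predict([[10, 0.6, 40, 1000, 2]])[0].round())
```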
Proceedings of the 13th International Conference on Evaluation of Novel Approaches to Software Engineering, 2018
Game development is one of the fastest-growing industries in IT. For a game to be successful, it should engage the player through a solid and interesting scenario, which not only describes the state of the game but also outlines the main characters and their interactions. Considering the increasing complexity of game scenarios, we review existing approaches for scenario representation and, based on the most popular one, provide tool support for assisting the game design process. To evaluate the usefulness of the developed tool, we performed a case study aiming to assess its usability. The results of the case study suggest that, after some interaction with end-users, the tool has reached a highly usable state that to some extent guarantees its applicability in practice.
2017 8th International Conference on Information, Intelligence, Systems & Applications (IISA), 2017
Modeling Big Data applications is a key research topic for designing, analyzing, programming and deploying data-intensive applications, with high value and long-term trade-offs. Unified perspectives, architectures and requirements techniques are needed. The current approach proposes the use of Feature Models to fill this gap by extending present model-driven engineering practices, with the purpose of defining a reusable, extensible and highly configurable design approach for Big Data applications.
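As a loose illustration of what a feature model is, the sketch below encodes a tiny, hypothetical Big Data application feature tree and checks whether a configuration is valid. The feature names and constraints are assumptions for the example, not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str
    mandatory: bool = True
    children: list = field(default_factory=list)
    xor_group: bool = False  # if True, exactly one child may be selected

# Hypothetical feature tree for a Big Data application.
BIG_DATA_APP = Feature("BigDataApp", children=[
    Feature("Ingestion", children=[
        Feature("Batch", mandatory=False),
        Feature("Streaming", mandatory=False),
    ]),
    Feature("Storage", xor_group=True, children=[
        Feature("DataLake", mandatory=False),
        Feature("DataWarehouse", mandatory=False),
    ]),
    Feature("Analytics", mandatory=False),
])

def valid(feature, selection):
    """Check a configuration (set of selected feature names) against the model."""
    if feature.name not in selection:
        return not feature.mandatory
    chosen = [c for c in feature.children if c.name in selection]
    if feature.xor_group and len(chosen) != 1:
        return False
    return all(valid(c, selection) for c in feature.children)

print(valid(BIG_DATA_APP, {"BigDataApp", "Ingestion", "Streaming", "Storage", "DataLake"}))
```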
Proceedings of the 35th Annual ACM Symposium on Applied Computing, 2020
Game development is one of the fastest-growing industries. Since a game's success is mostly related to users' enjoyment, one of the cornerstones of quality assessment is evaluation from the user perspective. According to the literature, the game scenario constitutes a key factor that leads to users' enjoyment. Despite their importance, scenarios are currently evaluated through heuristics in a subjective way. The aim of this paper is to develop an objective model (i.e., a set of quality attributes and metrics) for evaluating game scenarios with respect to users' satisfaction. The proposed model can be applied to flow charts and character models (i.e., common game scenario representation mechanisms). To achieve this goal, we: (a) gathered game scenario characteristics that are related to users' satisfaction, (b) proposed several metrics for quantifying these characteristics, and (c) performed a case study on three interactive scenarios to evaluate the model. As a r...
Proceedings of the Evaluation and Assessment on Software Engineering, 2019
Context: Technical Debt (TD) quantification has been studied in the literature and is supported by various tools; however, there is no common ground on what information should be presented to stakeholders. As in other quality monitoring processes, it is desirable to provide several views of quality through a dashboard, in which metrics concerning the phenomenon of interest are displayed. Objective: The aim of this study is to investigate the indicators that should be presented in such a dashboard, so that they: (a) are meaningful for industrial stakeholders, (b) present all necessary information, and (c) are simple enough for stakeholders to use. Method: We explore TD Management (TDM) activities (i.e., measurement, prioritization, repayment) and choose the main concepts that need to be visualized, based on existing literature and tool support. Next, we perform a survey with 60 software engineers (i.e., architects, developers, etc.) working for 11 software development companies located in 9 countries, to understand their needs for TDM. Results / Conclusions: The results of the study suggest that different stakeholders need different views of the quality dashboard, but some commonalities can also be identified. For example, managers are mostly interested in financial concepts, whereas developers are more interested in the nature of the problems that exist in the code. The outcomes of this study can be useful to both researchers and practitioners, in the sense that the former can focus their efforts on aspects that are meaningful to industry, whereas the latter can develop meaningful dashboards with multiple views.
2018 44th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), 2018
Empirical evidence has pointed out that Extract Method refactorings are among the refactorings most commonly applied by software developers. The identification of Long Method code smells and the ranking of the associated refactoring opportunities are largely based on the use of metrics, primarily measures of cohesion, size and coupling. Despite the relevance of these properties to the presence of large, complex and non-cohesive pieces of code, the empirical validation of these metrics has exhibited relatively low accuracy (max precision: 66%) regarding their predictive power for long methods or Extract Method opportunities. In this work we perform an empirical validation of the ability of cohesion, coupling and size metrics to predict the existence and the intensity of Long Method occurrences. According to the statistical analysis, the existence and the intensity of the Long Method smell can be effectively predicted by two size (LoC and NoLV), two coupling (MPC and RFC), and four cohesion (LCOM1, LCOM2, Coh, and CC) metrics. Furthermore, the integration of these metrics into a multiple logistic regression model can predict whether a method should be refactored with a precision of 89% and a recall of 91%. The model yields suggestions whose ranking is strongly correlated to the ranking based on the effect of the corresponding refactorings on source code (correlation coefficient 0.520). The results are discussed by providing interpretations and implications for research and practice.
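A minimal sketch of such a classifier is shown below: a logistic regression over the eight metrics named in the abstract, trained on invented method-level measurements. The numbers, and therefore the resulting probability, are purely illustrative and do not reproduce the study's model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical method-level measurements in the order
# LOC, NoLV, MPC, RFC, LCOM1, LCOM2, Coh, CC; y = 1 if the method
# was judged to need an Extract Method refactoring.
X = np.array([
    [120, 14, 25, 40, 60, 30, 0.20, 0.10],
    [ 15,  3,  4,  8,  2,  1, 0.80, 0.70],
    [ 90, 10, 18, 30, 45, 22, 0.30, 0.20],
    [ 20,  4,  5,  9,  3,  1, 0.70, 0.60],
    [200, 20, 35, 55, 90, 50, 0.10, 0.05],
    [ 25,  5,  6, 10,  5,  2, 0.60, 0.50],
])
y = np.array([1, 0, 1, 0, 1, 0])

clf = LogisticRegression(max_iter=1000).fit(X, y)
candidate = [[110, 12, 20, 35, 50, 25, 0.25, 0.15]]
print("refactor probability:", clf.predict_proba(candidate)[0, 1].round(2))
```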
Lecture Notes in Computer Science, 2016
In this paper, we focus on source code quality assessment for SharePoint applications; SharePoint is a powerful framework for developing software by combining imperative and declarative programming. In particular, we present an industrial case study conducted in a software consulting/development company in the Netherlands, which aimed at identifying the most common SharePoint quality rule violations and their severity. The results indicate that the most frequent rule violations are identified in the JavaScript part of the applications, and that the most severe ones are related to correctness, security and deployment. The aforementioned results can be exploited by both researchers and practitioners, in terms of future research directions and to inform the quality assurance process.
Proceedings of the 18th International Academic MindTrek Conference: Media Business, Management, Content & Services, 2014
Requirements engineering is an extremely crucial phase in the software development lifecycle, because mishaps in this stage are usually expensive to fix in later development phases. In the domain of computer games, requirements engineering is a heavily studied research field (39.3% of published papers deal with requirements [1]), since it is considered substantially different from traditional software requirements engineering (see [1] and [14]). The main point of differentiation is that almost all computer games share a common key driver as a requirement, i.e. user satisfaction. In this paper, we investigate the most important user satisfaction factors for computer games through a survey of regular gamers. The results of the study suggest that user satisfaction factors are not uniform across different types of games (game genres) but are heavily dependent on them. Therefore, this study underlines the most important non-functional requirements that developers and researchers should focus on while dealing with game engineering.
Estimation of software project effort based on project analogies is a promising method in the area of software cost estimation. Projects in a historical database that are analogous (similar) to the project under examination are detected, and their effort data are used to produce estimates. As in all software cost estimation approaches, important decisions must be made regarding certain parameters, in order to calibrate with local data and obtain reliable estimates. In this paper, we present a statistical simulation tool, namely the bootstrap method, which helps the user tune the analogy approach before applying it to real projects. This is an essential step of the method, because if inappropriate values for the parameters are selected in the first place, the estimate will inevitably be wrong. Additionally, we show how measures of accuracy, and in particular confidence intervals, may be computed for the analogy-based estimates using the bootstrap method, with different assumptions about the population distribution of the data set. Estimate confidence intervals are necessary in order to assess point estimate accuracy and to assist risk analysis and project planning. Examples of bootstrap confidence intervals and a comparison with regression models are presented on well-known cost data sets.
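The percentile bootstrap behind such intervals can be sketched in a few lines of Python. The vector of analogue project efforts and the 95% level below are assumptions for illustration; the paper works with well-known cost data sets and with several distributional assumptions not reproduced here.

```python
import numpy as np

def bootstrap_ci(values, statistic=np.mean, n_boot=5000, level=0.95, seed=0):
    """Percentile bootstrap confidence interval for a statistic of `values`.

    A minimal sketch: we resample, with replacement, a vector of hypothetical
    effort values for the projects most similar to the one being estimated,
    and take percentiles of the resampled statistic.
    """
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    stats = np.array([
        statistic(rng.choice(values, size=len(values), replace=True))
        for _ in range(n_boot)
    ])
    alpha = (1.0 - level) / 2.0
    return np.percentile(stats, [100 * alpha, 100 * (1 - alpha)])

# Efforts (person-months) of the projects most analogous to the one being estimated.
analogue_efforts = [14.0, 18.5, 12.0, 21.0, 16.5]
low, high = bootstrap_ci(analogue_efforts)
print(f"95% bootstrap interval for the effort estimate: [{low:.1f}, {high:.1f}]")
```

The same resampling loop can be reused to compare parameter settings of the analogy approach, by bootstrapping an accuracy measure instead of the mean.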
2012 Eighth International Conference on the Quality of Information and Communications Technology, 2012
Refactoring aims to improve the design of existing code to cope with foreseen software architecture evolution. The selection of the optimum refactoring strategy can be a daunting task, involving the identification of refactoring candidates, the determination of which refactorings to apply, and the assessment of the refactoring impact on software product quality characteristics. As such, the benefits of refactorings are measured in terms of the quality improvements achieved, through the application of state-of-the-art structural quality assessments on the refactored code. Perceiving refactoring through the lens of value creation, the optimum strategy should be the one that maximizes the endurance of the architecture against future imposed changes. We argue that an alternative measurement and examination of refactoring success is possible, one that focuses on the balance between effort spent and anticipated cost minimization. In this arena, traditional quality evaluation methods fall short in examining the financial implications of the uncertainties imposed by frequent updates/modifications and by the dynamics of XP programming. In this paper we apply simple Real Options Analysis techniques and perceive the selection of the optimum refactoring strategy as an option capable of generating value (cost minimization) upon adoption. In doing so, we link the endurance of the refactored architecture to its true monetary value. To estimate the expected cost of applying the considered refactorings, and their effect on the cost of future adaptations, we conducted a case study. The results of the case study suggest that every refactoring can be associated with different benefit levels during system extension.
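To give a flavour of the real-options view, the sketch below values a refactoring decision with a one-step binomial model: the option is worth exercising only in scenarios where the anticipated maintenance savings exceed the refactoring cost. The scenario values, probability and discount rate are hypothetical, and this is not the specific analysis used in the paper.

```python
# A minimal sketch of valuing a refactoring decision as a real option with a
# one-step binomial model. All numbers are hypothetical.

def binomial_option_value(savings_up, savings_down, p_up, refactoring_cost, discount_rate):
    """Expected value of the option to refactor, given uncertain future
    maintenance-cost savings (two scenarios) and the effort to apply the
    refactoring. The option is only 'exercised' when savings exceed cost."""
    payoff_up = max(savings_up - refactoring_cost, 0.0)
    payoff_down = max(savings_down - refactoring_cost, 0.0)
    expected_payoff = p_up * payoff_up + (1 - p_up) * payoff_down
    return expected_payoff / (1 + discount_rate)

# Hypothetical inputs: savings in person-days if many vs. few extensions arrive.
value = binomial_option_value(savings_up=40, savings_down=5, p_up=0.6,
                              refactoring_cost=12, discount_rate=0.05)
print(f"option value of the refactoring: {value:.1f} person-days")
```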
Proceedings of the 17th Panhellenic Conference on Informatics - PCI '13, 2013
Design patterns were introduced in the field of software engineering in the mid-1990s as common solutions to common design problems. Since then, the effect of design patterns on software quality attributes has been studied by many researchers. However, the results are not always the expected ones, in the sense that several studies suggest there are cases when a design pattern is not the optimum way of designing a system. In this paper, we present the findings of a systematic literature review that aims at cataloging published design solutions, referred to as alternative design solutions, which are equivalent to design patterns and can be used when a design pattern instance is not the optimum design solution for a specific design problem.
IEEE International Conference on Computer Systems and Applications, 2006.
In this paper we apply a machine learning approach to the problem of estimating the number of defects, called Regression via Classification (RvC). RvC first automatically discretizes the number of defects into a number of fault classes, then learns a model that predicts the fault class of a software system. Finally, RvC transforms the class output of the model back into a numeric prediction. This approach incorporates uncertainty into the models because, apart from a certain number of faults, it also outputs an associated interval of values within which this estimate lies, with a certain confidence. To evaluate this approach we perform a comparative experimental study of the effectiveness of several machine learning algorithms on a software dataset. The data was collected by Pekka Forselius and involves applications maintained by a bank in Finland.
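A minimal sketch of the RvC pipeline (discretize, classify, map back to a number) is given below, assuming equal-width binning and a decision-tree classifier; the paper compares several learners and discretization choices, and the dataset here is invented.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def rvc_fit_predict(X_train, defects_train, X_new, n_bins=3):
    # 1. Discretize the numeric target (defect counts) into fault classes.
    edges = np.linspace(defects_train.min(), defects_train.max(), n_bins + 1)
    classes = np.clip(np.digitize(defects_train, edges[1:-1]), 0, n_bins - 1)
    # 2. Learn a classifier that predicts the fault class of a system.
    clf = DecisionTreeClassifier(random_state=0).fit(X_train, classes)
    predicted_class = clf.predict(X_new)
    # 3. Map each class back to a numeric prediction (midpoint of its interval),
    #    so every estimate also comes with an interval of plausible values.
    midpoints = (edges[:-1] + edges[1:]) / 2.0
    return midpoints[predicted_class], edges

# Hypothetical data: rows = systems, columns = size/complexity measures.
X_train = np.array([[10, 2], [40, 5], [80, 9], [25, 3], [60, 7]], dtype=float)
defects_train = np.array([3, 12, 30, 7, 20], dtype=float)
estimate, bin_edges = rvc_fit_predict(X_train, defects_train, [[50, 6]])
print("predicted defects:", estimate[0], "with class boundaries", bin_edges.round(1))
```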