Software Complexity Research Papers - Academia.edu

Software maintenance claims a large proportion of organizational resources. It is thought that many maintenance problems derive from inadequate software design and development practices. Poor design choices can result in complex software that is costly to support and difficult to change. However, it is difficult to assess the actual maintenance performance effects of software development practices because their impact is

Object-oriented metrics have been validated empirically as measures of design complexity. These metrics can be used to mitigate potential problems arising from software complexity. However, few studies have been conducted to formulate guidelines, expressed as threshold values, for interpreting the complexity of a software design using metrics. Classes can be clustered into low- and high-risk levels using threshold values. In this paper, we use a statistical model, derived from logistic regression, to identify threshold values for the Chidamber and Kemerer (CK) metrics. The methodology is validated empirically on a large open-source system, the Eclipse project. The empirical results indicate that the CK metrics have threshold effects at various risk levels. We have validated the use of these thresholds on the next release of the Eclipse project, Version 2.1, using decision trees. In addition, the selected threshold values were more accurate than those selected based on either intuitive perspectives or data distribution parameters. Furthermore, the proposed model can be exploited to find the risk level for an arbitrary threshold value. These findings suggest that there is a relationship between risk levels and object-oriented metrics and that risk levels can be used to identify threshold effects.
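
The threshold-derivation step can be sketched as follows: fit a univariate logistic regression of fault-proneness on one CK metric and invert it at a chosen risk level (this follows the common VARL-style formulation; the data, metric name, and risk level below are illustrative, not the paper's).

```python
# Hypothetical sketch of deriving a threshold for one CK metric (e.g. WMC)
# from a univariate logistic regression. WMC values, labels and the 0.1
# risk level are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

def metric_threshold(metric_values, faulty, risk_level=0.1):
    """Return the metric value at which the estimated fault risk equals
    `risk_level` (the VARL-style inversion of the fitted logistic curve)."""
    X = np.asarray(metric_values, dtype=float).reshape(-1, 1)
    y = np.asarray(faulty, dtype=int)
    model = LogisticRegression().fit(X, y)
    beta0 = model.intercept_[0]
    beta1 = model.coef_[0][0]
    # p = 1 / (1 + exp(-(beta0 + beta1 * x)))  ->  solve p = risk_level for x
    return (np.log(risk_level / (1 - risk_level)) - beta0) / beta1

wmc = [3, 5, 8, 12, 20, 25, 30, 40, 4, 6, 15, 35]          # made-up values
faulty = [0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1]
print(metric_threshold(wmc, faulty, risk_level=0.1))
```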

It is becoming increasingly difficult to ignore the complexity of software products. Software metrics have been proposed to indicate the quality, size, complexity, and other attributes of software products. In this paper, software metrics related to complexity are developed and evaluated. A dataset of many open source projects is built to assess the value of the developed metrics. Comparisons and correlations are conducted among the different tested projects. A classification is proposed to classify software code into different levels of ...

The complexity of a program or software can create many difficulties during its lifetime. This complexity entails increased time and effort to maintain the code and to discover errors and defects, all of which leads to an increase in the overall cost of the project. For this reason, software engineers and developers measure the complexity of program code before they start any project. This paper proposes a novel weighted complexity metric that measures code complexity using six main attributes, two of which are a mixture of the Cyclomatic, Halstead, and Shao and Wang metrics. The dataset of this research consists of 15 programs written in the Java programming language and collected from different websites. The programs were ranked by seven experts in the Java programming language. Our metric achieved 94% accuracy.
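
A weighted metric of this general shape can be sketched as a weighted sum over per-program attributes; the attribute names and weights below are placeholders, since the paper's exact six attributes and weighting are not reproduced here.

```python
# Illustrative sketch of a weighted code-complexity score. The attributes
# and weights are placeholders, not the ones proposed in the paper.
def weighted_complexity(attrs, weights):
    """Combine attribute values into one score via a weighted sum."""
    assert set(attrs) == set(weights)
    return sum(weights[name] * attrs[name] for name in attrs)

attrs = {
    "cyclomatic": 7,         # decision points + 1
    "halstead_volume": 420,  # N * log2(n)
    "cognitive_weight": 12,  # Shao & Wang style control-structure weights
    "loc": 150,
    "nesting_depth": 3,
    "operands": 60,
}
weights = dict(zip(attrs, (0.3, 0.002, 0.2, 0.01, 0.4, 0.005)))
print(weighted_complexity(attrs, weights))
```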

Language Migration is a highly risky and complex process. Many authors have proposed different ways to tackle the problem, but it is still not completely resolved; indeed, in many circumstances it is considered almost impossible. Despite the approaches and solutions available, no work has been done on measuring the risks and complexity of a migration process based on the technological gap. In this article we contribute a first iteration of Language Migration complexity metrics, and we apply and interpret these metrics on an industrial project. We end the article with a discussion and proposals for future work.

Advanced business applications like enterprise resource planning (ERP) systems are characterized by a high degree of complexity in data, functionality, and processes. This paper examines some decisive causes of this complexity and their implications for software configuration and user interaction. A case study of SAP®'s R/3® Sales & Distribution module exemplifies complexity in order management systems and documents its impact on the user experience. We emphasize the need to shield users appropriately from underlying system complexity in order to provide a convenient and simple-to-use software tool. We discuss several approaches to addressing this.

Many wireless sensor network applications require a gateway device to interface with services running on the Internet. Because of the software complexity involved in this device, it is often realized using a real-time operating system running on an application processor. Most systems burden the user with developing the protocol handling and device configuration and management inside the application. In this paper, we present the Angelos Gateway, a turnkey, low-cost, Linux-powered WSN gateway that provides a socket-based environment for rapid network-enabled application development. Experimental results demonstrate that the proposed device is capable of high-throughput packet I/O, confirming the efficacy of the proposed implementation.

Business Process Models (BPMs), often created using a modeling language such as UML activity diagrams, the Event-Driven Process Chain Markup Language (EPML), or Yet Another Workflow Language (YAWL), serve as a base for communication between the stakeholders in the software development process. In order to fulfill this purpose, they should be easy to understand and easy to maintain. For

A model for the emerging area of software complexity measurement of OO systems is required to integrate the measures defined by various researchers and to provide a framework for continued investigation. We present a model based on the literature of OO systems and of software complexity for structured systems. The model defines the software complexity of OO systems at the variable, method, object, and system levels. At each level, measures are identified that account for the cohesion and coupling aspects of the system. OO practitioners' perceptions of complexity provide support for the levels and measures.

This paper reports on a modest study which relates seven different software complexity metrics to the experience of maintenance activities performed on a medium-sized software system. Three different versions of the system, which evolved over a period of three years, were analyzed in this study. A major revision of the system, while still in its design phase, was also analyzed.

This paper reports on experiences in transitioning a capstone course from a single-quarter to a three-quarter format. A single-quarter project course in software design and development had been offered by our department for over twenty years. More recently, upon formation of a new undergraduate degree in Informatics, this course was transformed into a three-quarter capstone course taken by students in their final year. Correspondingly, some aspects of the course projects, such as the business scope and software complexity, grew in proportion to the increase in the project duration. At the same time, a number of "costs" were reduced, including the time and effort required to set up the development infrastructure and development environments and to learn new tools and languages. Most importantly, several other factors experienced substantial growth. The longer project duration allowed a significant increase in the effort and attention paid to usability engineering and user-centered design, leading to systems that were deployable and more usable for the target users. It also enabled better software testing, deployment, and release management. As a result, the final outcome was much closer to production quality than the prototypes and proof-of-concept systems typical of earlier single-quarter projects.

This paper introduces a novel eigenstructure-based algorithm, uni-vector-sensor ESPRIT, that yields closed-form direction-of-arrival (DOA) estimates and polarization estimates using one electromagnetic vector sensor. A vector sensor is composed of six spatially co-located nonisotropic polarization-sensitive antennas, measuring all six electromagnetic field components of the incident wave field. Uni-vector-sensor ESPRIT is based on a matrix-pencil pair of temporally displaced data sets collected from a single electromagnetic vector sensor. The closed-form parameter estimates are obtained through a vector cross-product operation on each decoupled signal-subspace eigenvector of the data correlation matrix. This method exploits the electromagnetic sources' polarization diversity in addition to their spatial diversity, requires no a priori knowledge of signal frequencies, suffers no frequency-DOA ambiguity, automatically pairs the x-axis direction cosines with the y-axis direction cosines, eliminates array interelement calibration, and can resolve up to five completely polarized uncorrelated monochromatic sources from the near field or far field. It impressively outperforms an array of spatially displaced, identically polarized antennas of comparable array-manifold size and computational load.

Build systems are responsible for transforming static source code artifacts into executable software. Although build systems play a crucial role in software development and maintenance, they have been largely ignored by software evolution researchers. However, a firm understanding of build system aging processes is needed in order to allow project managers to allocate personnel and resources to build system maintenance tasks effectively, and to reduce the build maintenance overhead on regular development activities. In this paper, we study the evolution of build systems based on two popular Java build languages (i.e., ANT and Maven) from two perspectives: (1) a static perspective, where we examine the complexity of build system specifications using software metrics adopted from the source code domain; and (2) a dynamic perspective, where the complexity and coverage of representative build runs are measured. Case studies of the build systems of six open source projects with a combined history of 172 releases show that build system and source code size are highly correlated, with source code restructurings often requiring build system restructurings. Furthermore, we find that Java build systems evolve dynamically in terms of duration and recursive depth of the directory hierarchy.
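
As an illustration of treating build specifications as measurable artifacts, the sketch below computes two simple size indicators for an Ant build.xml (the number of targets and of declared target dependencies) with the Python standard library; these are stand-in measures, not the metric suite used in the study.

```python
# Rough sketch: measuring the "size" of an Ant build specification by
# counting targets and their declared dependencies. This only illustrates
# the idea of applying source-code-style metrics to build files.
import xml.etree.ElementTree as ET

def ant_build_stats(path):
    root = ET.parse(path).getroot()          # the <project> element
    targets = root.findall("target")
    dependency_edges = sum(len(t.get("depends", "").split(","))
                           for t in targets if t.get("depends"))
    return {"targets": len(targets), "dependency_edges": dependency_edges}

# Example (assuming a build.xml is available in the working directory):
# print(ant_build_stats("build.xml"))
```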

The power of high-level languages lies in their abstraction over hardware and software complexity, leading to greater security, better reliability, and lower development costs. However, opaque abstractions are often show-stoppers for systems programmers, forcing them to either break the abstraction, or more often, simply give up and use a different language. This paper addresses the challenge of opening up a high-level language to allow practical low-level programming without forsaking integrity or performance. The contribution of this paper is threefold: 1) we draw together common threads in a diverse literature, 2) we identify a framework for extending high-level languages for low-level programming, and 3) we show the power of this approach through concrete case studies. Our framework leverages just three core ideas: extending semantics via intrinsic methods, extending types via unboxing and architectural-width primitives, and controlling semantics via scoped semantic regimes. We develop these ideas through the context of a rich literature and substantial practical experience. We show that they provide the power necessary to implement substantial artifacts such as a high-performance virtual machine, while preserving the software engineering benefits of the host language. The time has come for high-level low-level programming to be taken more seriously: 1) more projects now use high-level languages for systems programming, 2) increasing architectural heterogeneity and parallelism heighten the need for abstraction, and 3) a new generation of high-level languages are under development and ripe to be influenced.

In this study, two different software complexity measures were applied to breadth-first search and depth-first search algorithms. The intention is to study what kind of new information about the algorithms the complexity measures (Halstead's volume and the Cyclomatic number) are able to give, and which software complexity measure is the most useful one in algorithm comparison. The results clearly show that with respect to Program Volume, the breadth-first search algorithm is best implemented in Pascal, while depth-first search is best implemented in C. The values of Program Difficulty and Program Effort indicate that both algorithms are best implemented in Pascal. The Cyclomatic number is the same for both algorithms when programmed in Visual BASIC (i.e., 6).
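
For reference, the measures named above are computed as follows: Halstead's Program Volume is V = N·log2(n), Difficulty is D = (n1/2)·(N2/n2), and Effort is E = D·V, where n1 and n2 are the distinct operators and operands and N1 and N2 their total occurrences. The token counts in the sketch below are invented, not taken from the study.

```python
# Worked sketch of Halstead's measures from operator/operand counts.
import math

def halstead(n1, n2, N1, N2):
    n, N = n1 + n2, N1 + N2
    volume = N * math.log2(n)                 # Program Volume
    difficulty = (n1 / 2) * (N2 / n2)         # Program Difficulty
    return {"volume": volume,
            "difficulty": difficulty,
            "effort": difficulty * volume}    # Program Effort

# e.g. a small BFS implementation with these (made-up) token counts:
print(halstead(n1=14, n2=19, N1=48, N2=57))
# The Cyclomatic number is independent of these counts: decisions + 1.
```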

Knowledge work is generally regarded as involving complex cognition, and few types of knowledge work are as important in the modern economy as software engineering (SE). A large number of measures have been developed to analyze software and its concomitant processes with the goals of evaluating, predicting and controlling its complexity. While many effective measures can be used to achieve these goals, there is no firm theoretical basis for choosing among measures. The first research question for this paper is: how to theoretically determine a parsimonious subset of software measures to use in software complexity analysis? To answer this question, task complexity is studied; specifically Wood's model of task complexity is examined for relevant insights. The result is that coupling and cohesion stand out as comprising one such parsimonious subset. The second research question asks: how to resolve potential conflicts between coupling and cohesion? Analysis of the information processing view of cognition results in a model of cohesion as a moderator on a main relationship between coupling and complexity. The theory-driven approach taken in this research considers both the task complexity model and cognition and lends significant support to the developed model for software complexity. Furthermore, examination of the task complexity model steers this paper towards considering complexity in the holistic sense of an entire program, rather than of a single program unit, as is conventionally done. Finally, it is intended that by focusing software measurement on coupling and cohesion, research can more fruitfully aid both the practice and pedagogy of software complexity management.
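
The moderation structure described above can be written as a linear model with an interaction term, complexity ≈ β0 + β1·coupling + β2·(coupling × cohesion); the sketch below fits such a model with ordinary least squares. The variable names and fitting procedure are illustrative assumptions, not the paper's analysis.

```python
# Sketch of a moderation model: cohesion moderates the effect of
# coupling on complexity via an interaction term. Inputs are whatever
# program-level measurements one chooses to plug in.
import numpy as np

def fit_moderation(coupling, cohesion, complexity):
    coupling = np.asarray(coupling, dtype=float)
    cohesion = np.asarray(cohesion, dtype=float)
    complexity = np.asarray(complexity, dtype=float)
    X = np.column_stack([
        np.ones_like(coupling),   # intercept
        coupling,                 # main effect of coupling
        coupling * cohesion,      # moderation (interaction) term
    ])
    beta, *_ = np.linalg.lstsq(X, complexity, rcond=None)
    return dict(zip(("intercept", "coupling", "coupling_x_cohesion"), beta))

# Example call with placeholder measurement vectors:
# print(fit_moderation(coupling=[4, 9, 2], cohesion=[0.8, 0.3, 0.9],
#                      complexity=[10, 35, 6]))
```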

Multiple-antenna systems, also known as multiple-input multiple-output (MIMO) radio, improve the capacity and reliability of radio communication systems. Of considerable concern, however, is the huge complexity involved in the implementation of such systems. Therefore, the design of low-complexity, low-cost MIMO systems that keep most of the advantages and benefits of the full-complexity system has gained significant attention recently. In this paper, we design and implement, on a field-programmable gate array (FPGA) board, a reduced-complexity MIMO maximum likelihood detection (MLD) system whose performance is as close as possible to the optimal (full-complexity) MLD system while making a significant cutback in the overall hardware/software complexity (and therefore the operating cost) of the system.

This study demonstrates an objective method used to evaluate the 'enhanceability' of commercial software. It examines the relationship between enhancement and repair, and suggests that enhancement be considered when developing formal models of defect cause. Another definition of 'defect-prone software' is presented that concentrates attention on software requiring unusually high repair relative to the magnitude of planned enhancement.

Predicting software complexity can save millions in maintenance costs, but while current measures can be used to some degree, most are not sufficiently sensitive or comprehensive.

Quality factors, namely testability, reliability, and maintainability, are considered vulnerable to software complexity. Analyzing the complexity of code is difficult, though. Many techniques have been invented, including the control flow graph (CFG), to aid program complexity analysis. However, the 'web' structures exploited in CFG representations of code make human comprehension difficult. Drawing on granular computing, which has recently emerged from cognitive theories, this research proposes a novel approach to representing source code with "granular hierarchical structures". Instead of representing a program as a 'web', the method uses multiple 'trees' to obtain more understanding during source code analysis. Preliminary experiments showed that representing source code with granular hierarchical structures yielded a more competent analysis of program complexity. The results were evaluated using the proposed complexity measure, called SCIM, which satisfies more of the "basic needs of good software measures" than McCabe's Cyclomatic complexity derived from the control flow graph.
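
The sketch below shows one way to derive a tree-shaped, hierarchical view of nested control structures from source code, in the spirit of the granular representation described above; it does not implement the SCIM measure itself, and the sample function is invented.

```python
# Sketch: representing code as a hierarchy ("tree") of nested control
# structures rather than a flat control-flow "web".
import ast

BLOCKS = (ast.FunctionDef, ast.If, ast.For, ast.While, ast.Try, ast.With)

def nesting_tree(node):
    """Return (label, children) keeping only block-forming AST nodes."""
    children = []
    for child in ast.iter_child_nodes(node):
        if isinstance(child, BLOCKS):
            children.append(nesting_tree(child))
        else:
            children.extend(nesting_tree(child)[1])  # skip non-block nodes
    return (type(node).__name__, children)

src = """
def search(items, key):
    for i, x in enumerate(items):
        if x == key:
            return i
    return -1
"""
print(nesting_tree(ast.parse(src)))
# -> ('Module', [('FunctionDef', [('For', [('If', [])])])])
```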

Task complexity is a construct widely used in the behavioral sciences to explore and predict the relationship between task characteristics and information processing. Because the creation and use of IT in the performance of tasks is a central area of informing science (IS) research, it follows that better understanding of task complexity should be of great potential benefit to IS researchers and practitioners. Unfortunately, applying task complexity to IS is difficult because no complete, consistent definition exists. Furthermore, the most commonly adopted definition, objective task complexity, tends to be of limited use in situations where discretion or learning is present, or where information technology (IT) is available to assist the task performer. These limitations prove to be severe in many common IS situations. The paper presents a literature review identifying thirteen distinct definitions of task complexity, then synthesizes these into a new five-class framework, referred to as the Comprehensive Task Complexity Classes (CTCC). It then shows the potential relevance of the CTCC to IS, focusing on different ways it could be applied throughout a hypothetical information systems lifecycle. In the course of doing so, the paper also illustrates how the interaction between different classes of task complexity can serve as a rich source of questions for future investigations.

Calculating the complexity of software projects is important to software engineering, as it helps in estimating the likely locations of bugs as well as the resources required to modify certain program areas. Cyclomatic complexity is one of the primary estimators of software complexity; it operates by counting branch points in software code. However, cyclomatic complexity assumes that all branch points are equally complex. Some types of branch points require more creativity and foresight to understand and program correctly than others. Specifically, when knowledge of the behavior of a loop or recursion requires solving a problem similar to the halting problem, that loop has intrinsically more complexity than other types of loops or conditions. Halting-problem-like problems can be detected by looking for loops whose termination conditions are not intrinsically bound in the looping construct. These types of loops are counted to find the program complexity. This metric is orthogonal to cyclomatic complexity (which remains useful) rather than a substitute for it.
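
A minimal sketch of the detection idea, transposed to Python: a `for` loop over a sequence terminates by construction, whereas a `while` loop's termination depends on how the body updates its condition, which is the halting-problem-like case the proposed metric counts. This heuristic is an approximation, not the paper's implementation.

```python
# Heuristic: count loops whose termination is not bound by the looping
# construct itself (here, all `while` loops), as opposed to `for` loops
# that iterate once per element of a finite sequence.
import ast

def unbounded_loop_count(source):
    tree = ast.parse(source)
    return sum(isinstance(node, ast.While) for node in ast.walk(tree))

src = """
def collatz_steps(n):
    steps = 0
    while n != 1:            # termination not bound by the construct
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

def total(xs):
    s = 0
    for x in xs:             # bounded: one iteration per element
        s += x
    return s
"""
print(unbounded_loop_count(src))   # -> 1
```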

McCabe's Cyclomatic Complexity (MCC) is a widely used metric for the complexity of control flow. Common usage decrees that functions should not have an MCC above 50, and preferably much less. However, the Linux kernel includes more than 800 functions with MCC values above 50, and over the years 369 functions have had an MCC of 100 or more. Moreover, some of these functions undergo extensive evolution, indicating that developers are successful in coping with the supposed high complexity. We attempt to explain this by analyzing the structure of such functions and showing that in many cases they are in fact well-structured. At the same time, we observe cases where developers indeed refactor the code in order to reduce complexity. These observations indicate that a high MCC is not necessarily an impediment to code comprehension, and support the notion that complexity cannot be fully captured using simple syntactic code metrics.
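
For readers unfamiliar with the metric, the sketch below computes an MCC approximation for a single Python function as decision points plus one; production tools compute it from the control-flow graph of C code such as the kernel's, so this is only illustrative.

```python
# Approximate McCabe's Cyclomatic Complexity as 1 + number of decision
# points, counting each `and`/`or` operand pair as an extra branch.
import ast

DECISIONS = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def mcc(func_source):
    count = 1
    for node in ast.walk(ast.parse(func_source)):
        if isinstance(node, ast.BoolOp):
            count += len(node.values) - 1   # each and/or adds a branch
        elif isinstance(node, DECISIONS):
            count += 1
    return count

src = """
def classify(x):
    if x < 0 or x > 100:
        return "out of range"
    for d in (2, 3, 5):
        if x % d == 0:
            return "divisible"
    return "other"
"""
print(mcc(src))   # 1 + if + or + for + if = 5
```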

In recent years, the complexity of software in embedded products has increased significantly, to the point that the verification of Embedded Software (ESW) now plays an important role in ensuring product quality. Embedded systems engineers usually face the problems of verifying properties that have to meet the application's deadlines, access memory regions, handle concurrency, and control hardware registers. This work proposes a semiformal verification approach that combines dynamic and static verification to stress the system and exhaustively cover its state space. We perform a case study on embedded software used in the medical devices domain. We conclude that the proposed approach improves coverage and substantially reduces verification time.

Three software complexity measures (Halstead's E, McCabe's v(G), and the length as measured by number of statements) were compared to programmer performance on two software maintenance tasks. In an experiment on understanding, length and v(G) correlated with the percent of statements correctly recalled. In an experiment on modification, most significant correlations were obtained with metrics computed on modified rather than

The separation of concerns principle is aimed at the ability to separately modularize those different parts of software that are relevant to a particular concept, goal, task, or purpose. Appropriate separation of application concerns reduces software complexity, improves comprehensibility, and facilitates concern reuse. Considering persistence as a common application concern, its separation from a program's main code implies that applications can be developed without taking persistence requirements into consideration; persistence aspects may then be plugged in at a later stage. This separation lets the developer handle persistence attributes regardless of the application functionality. We have analyzed different approaches to accomplishing a complete separation of persistence features, finding that computational reflection achieves full transparency of persistence concerns and offers a high level of adaptability. We present the implementation of a research-oriented prototype that illustrates how computational reflection can be used in future persistence systems to completely separate and adapt application persistence attributes at runtime.

Due to the tremendous complexity and sophistication of software, improving software reliability is an enormously difficult task. We study the software defect prediction problem, which focuses on predicting which modules will experience a failure during operation. Numerous studies have applied machine learning to software defect prediction; however, skewness in defect-prediction datasets usually undermines the learning algorithms. The resulting classifiers will often never predict the faulty (minority) class. This problem is well known in machine learning and is often referred to as learning from imbalanced datasets. We examine stratification, a widely used technique for learning from imbalanced data that has received little attention in software defect prediction. Our experiments are focused on the SMOTE technique, which is a method of over-sampling minority-class examples. Our goal is to determine if SMOTE can improve recognition of defect-prone modules, and at what cost. Our experiments demonstrate that after SMOTE resampling, we have a more balanced classification. We found an improvement of at least 23% in the average geometric mean classification accuracy on four benchmark datasets.
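
The resampling step described above can be reproduced in outline with the imbalanced-learn library; the synthetic dataset below stands in for the benchmark defect datasets, which are not reproduced here.

```python
# Sketch of SMOTE over-sampling of the minority (faulty) class on a
# synthetic, skewed dataset.
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
print("before:", Counter(y))                       # ~9:1 imbalance
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print("after: ", Counter(y_res))                   # balanced classes
```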

Evolution creates structures of increasing order and power; in this process the stronger prevail over the weaker and carry the evolution further. Technology is an artificial creation that often threatens life and evolution conceived of as natural phenomena; but technology also supports life, and it works together with evolution. However, there are claims that technology will do much more than that, and bring about an entirely new epoch of evolution. Technology will replace the fragile biological carriers of evolution with a new kind of nonbiological carrier of immense intelligence and power. The present paper discusses the plausibility and weaknesses of such fascinating projections, which some people proudly announce as the final liberation of the Mind, while others fear them as signs of the final self-annihilation of Man.

We perform a theoretical and empirical analysis of a set of Cascading Style Sheets (CSS) document complexity metrics. The metrics are validated using a practical framework that demonstrates their viability. The theoretical analysis is performed using Weyuker's properties, a widely adopted approach to validating metrics proposals. The empirical analysis is conducted using visual and statistical analysis of the distribution of metric values, Cliff's delta, Chi-square and Lilliefors statistical normality tests, and correlation analysis on our own dataset of CSS documents. The results show that the metrics satisfy five of the nine Weyuker's properties (56%), except for the Number of Attributes Defined per Rule Block (NADRB) metric, which satisfies six of the nine (67%). In addition, the results from the statistical analysis show good statistical distribution characteristics (only the Number of Extended Rule Blocks (NERB) metric exceeds the rule-of-thumb threshold value of Cliff's delta). The correlation between the metric values and the size of the CSS documents is insignificant, suggesting that the presented metrics are indeed complexity rather than size metrics. The practical application of the presented CSS complexity metric suite is to assess the risk of CSS documents. The proposed CSS complexity metrics suite allows identification of CSS files that require immediate attention from software maintenance personnel.
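
As a rough illustration of what rule-block-level CSS metrics measure, the sketch below counts rule blocks and the average number of declarations per block (in the spirit of NADRB) with a regular expression; a real metric suite would use a proper CSS parser, and the sample stylesheet is invented.

```python
# Regex-based sketch of two CSS document measures: number of rule blocks
# and average declarations per rule block.
import re

def css_block_stats(css_text):
    blocks = re.findall(r"\{([^}]*)\}", css_text)
    decls_per_block = [len([d for d in b.split(";") if d.strip()])
                       for b in blocks]
    return {
        "rule_blocks": len(blocks),
        "avg_declarations_per_block":
            sum(decls_per_block) / len(blocks) if blocks else 0.0,
    }

css = "h1 { color: red; margin: 0 } p { font-size: 14px }"
print(css_block_stats(css))
# -> {'rule_blocks': 2, 'avg_declarations_per_block': 1.5}
```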

To support the practical development of intelligent agents, several programming languages have been introduced that incorporate concepts from agent logics: on the one hand, we have languages that incorporate beliefs and plans (i.e., procedural goals), and on the other hand, languages that implement the concepts of beliefs and (declarative) goals. We propose the agent programming language Dribble, in which these features of procedural and declarative goals are combined. The language Dribble thus incorporates beliefs and goals as well as planning features. The idea is that a Dribble agent should be able to select a plan to reach a goal from where it is at a certain point in time. In order to do that, the agent has beliefs, goals, and rules to select plans and to create and modify plans. Dribble comes with a formally defined operational semantics and, on top of this semantics, a dynamic logic is constructed that can be used to specify and verify properties of Dribble agents. The correspondence between the logic and the operational semantics is established.

Complexity metrics play an important role in software development; they reduce costs during almost the whole development process. There is a growing demand for measuring the complexity of large systems while keeping the results consistent regardless of the diversity of the programming languages involved. In this article we present a general .NET-based software measurement process

Applications of intelligent software systems are proliferating. As these systems proliferate, understanding and measuring their complexity becomes vital, especially in safety-critical environments. This paper proposes a model for assessing the impacts of complexity for a particular type of intelligent software system, embedded intelligent real-time systems (EIRTS), and answers two research questions: (1) How should the complexity of embedded intelligent real-time systems be measured? (2) What are the impacts of differing levels of EIRTS complexity on software, operator, and system performance when EIRTS are deployed in a safety-critical large-scale system? The model is tested and operationalized using an operational EIRTS in a safety-critical environment. The results suggest that users significantly prefer simple decision support and user interfaces, even when sophisticated user interfaces and complex decision support capabilities have been embedded in the system. These results have interesting implications for operators using complex EIRTS in safety-critical settings.

This work details the performance evaluation of simulated annealing (SA) and genetic algorithm (GA) in terms of their software complexity measurement and simulation time in solving a typical university examination timetabling problem (ETP). Preparation of a timetable consists basically of allocating a number of events to a finite number of time periods (also called slots) in such a way that a certain set of constraints is satisfied. The developed software was used to schedule the first-semester examinations of Ladoke Akintola University of Technology, Ogbomoso, Nigeria during the 2010/2011 session, a task involving 20,100 students, 652 courses, and 52 examination venues over 17 days, excluding Saturdays and Sundays. The use of the software resulted in significant time savings in the scheduling of the timetable, a shortening of the examination period, and well-spread examinations for the students. Also, none of the lecturers / examination invigilators was double booked or booked successively. It was clearly evident that simulated annealing performed better than genetic algorithm in most of the evaluated parameters.
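
As an illustration of the kind of assignment problem and simulated-annealing search described above, the following minimal Python sketch places exams into slots while minimizing student clashes; the exam names, enrolment data, cooling schedule, and cost function are invented for the example and are not taken from the study.

```python
# Tiny simulated-annealing sketch for exam timetabling: assign each exam
# to one of n_slots while minimizing clashes (a student with two exams
# in the same slot).
import math
import random

def clashes(assignment, enrolments):
    """Count students with two or more exams in the same slot."""
    cost = 0
    for exams in enrolments:                  # one set of exams per student
        slots = [assignment[e] for e in exams]
        cost += len(slots) - len(set(slots))  # repeated slots are clashes
    return cost

def anneal(exams, n_slots, enrolments, temp=10.0, cooling=0.995, steps=20000):
    assign = {e: random.randrange(n_slots) for e in exams}
    cost = clashes(assign, enrolments)
    for _ in range(steps):
        exam = random.choice(exams)
        old_slot = assign[exam]
        assign[exam] = random.randrange(n_slots)
        new_cost = clashes(assign, enrolments)
        # Always accept improvements; accept worsening moves with a
        # probability that shrinks as the temperature cools.
        if new_cost > cost and random.random() > math.exp((cost - new_cost) / temp):
            assign[exam] = old_slot           # reject the worsening move
        else:
            cost = new_cost
        temp = max(temp * cooling, 1e-9)
    return assign, cost

exams = ["MTH101", "PHY101", "CSC201", "STA111"]
enrolments = [{"MTH101", "PHY101"}, {"CSC201", "STA111"}, {"MTH101", "CSC201"}]
print(anneal(exams, n_slots=3, enrolments=enrolments))
```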

This work introduces a self-consistent model for software complexity, based on information that can typically be collected at early stages of the software lifecycle, e.g. in the functional specification phase, when a functional size measurement is also usually performed. The proposed model considers software complexity as structured into three bottom-up stages of the software architecture (from the internal complexity of each base functional component to the overall structural complexity of the software system). Conceptually, complexity can be considered proportional to dimensionality = scale × diversity; hence any complexity factor can be seen as an issue either of scale or of diversity, giving rise to a corresponding specific factor of implementation difficulty, or complexity driver. This mapping suggests which factors can be attributed to software structure and size, while other factors are more adequately attributed to the software development process. For each stage of the model, ...
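
One way to make the proportionality above concrete, assuming a simple weighted aggregation over the model's three stages (the symbols and weights here are ours, not the paper's):

```latex
% Illustrative only: s_k and v_k are the scale and diversity measured at
% stage k of the three-stage model, w_k a stage weight.
\[
  d_k = s_k \times v_k, \qquad
  C \;\propto\; \sum_{k=1}^{3} w_k \, d_k \;=\; \sum_{k=1}^{3} w_k \, s_k v_k .
\]
```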