Modelling group processes and effort estimation in project management using the Choquet integral: An MCDM approach
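The title names the Choquet integral as the aggregation operator for multi-criteria decision making. As background only, here is a minimal sketch of the discrete Choquet integral with respect to a fuzzy measure; the criteria names and the example measure are illustrative assumptions, not taken from the paper:

```python
from itertools import combinations

def choquet_integral(scores, mu):
    """Discrete Choquet integral of `scores` w.r.t. fuzzy measure `mu`.

    scores: dict criterion -> value in [0, 1]
    mu: dict frozenset(subset of criteria) -> measure in [0, 1],
        with mu[frozenset()] == 0 and mu[full set] == 1 (monotone).
    """
    items = sorted(scores.items(), key=lambda kv: kv[1])  # ascending by score
    total, prev = 0.0, 0.0
    remaining = set(scores)  # criteria whose score is >= the current threshold
    for criterion, value in items:
        total += (value - prev) * mu[frozenset(remaining)]
        prev = value
        remaining.discard(criterion)
    return total

# Illustrative (hypothetical) criteria and an additive, non-interacting
# measure: each criterion weighs 1/3, so the Choquet integral reduces
# to the ordinary arithmetic mean.
criteria = ["cost", "quality", "time"]
mu = {frozenset(s): len(s) / 3
      for r in range(4) for s in combinations(criteria, r)}

scores = {"time": 0.2, "cost": 0.5, "quality": 0.9}
print(choquet_integral(scores, mu))  # -> 0.5333... (the plain mean)
```

With a non-additive measure (e.g. mu({cost, time}) > 2/3), the same function models interaction between criteria, which is the feature that makes the Choquet integral attractive for group aggregation.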
Related papers
Project Effort Estimation: Or, When Size Makes a Difference
European Conference on Software Process Improvement, 2004
The motivation for this work is derived from the current interest in speeding up development schedules. A key implication of the shift to more rapid development methods is the growing emphasis on fixed time and fixed effort delivered during such projects. However, there appears to be little work that addresses the impacts of dealing with bound effort levels.
Estimators Characteristics and Effort Estimation of Software Projects
Proceedings of the 9th International Conference on Software Engineering and Applications, 2014
Effort estimation is an important part of software project management. Accurate estimates ensure planned project execution and compliance with the set time and budget constraints. Despite attempts to produce accurate estimates using formal models, there is no substantial evidence that these methods guarantee better estimates than those experts make. In order to improve the effort estimation process, it is crucial to enhance understanding of the human estimator. When producing estimates, each expert exerts mental effort. In such a situation, the estimator relies on personal characteristics, some of which are, in the context of effort estimation, more important than others. This research tries to identify these characteristics and their relative influences. Data for the research have been collected from projects executed in a large company specialized in the development of IT solutions in the telecom domain. For the identification of expert characteristics, a data mining approach is used (the multilayer perceptron neural network). We considered the use of this method as it is similar to the way the human brain operates. The data sets used in modelling contain more than 2000 samples collected from the analysed projects. The obtained results are highly intuitive and could later be used in the assessment of the reliability of each estimator and the estimates they produce.
Human judgement in effort estimation of software projects
2000
This paper summarises earlier attempts to improve effort estimation in software projects and discusses what we should learn from these attempts. Based on that discussion, we recommend more research on how to combine estimation models and human judgement. In order to combine estimation models and human judgement, we should identify the most useful findings and methods from earlier empirical research on human judgement. We give a few examples of potentially useful findings and evaluate the use of a human judgement analysis model, the Lens Model. Finally, we describe a recently started, Lens Model based, empirical study on human judgement in software estimation.
A new approach in the engineering project analysis: the aggregation game
International Journal of Project Management, 1988
The main variables describing the progress of a project are believed to depend on a 'universal law of becoming' according to which the preliminary organization of the project, the gathering of materials, the means of execution etc. lead to a univocal result; consequently, a correct forecast may only be affected by unforeseeable events caused by man or by meteoclimatic natural phenomena. Usually, one is interested in the exterior behaviour of the project, neglecting random causes.
Evaluating Expert Estimators Based on Elicited Competences
Journal of information and organizational sciences, 2015
Utilization of the expert effort estimation approach shows promising results when it is applied to the software development process. It is based on judgment and decision-making processes and, due to its comparative advantages, is extensively used, especially in situations where classic models cannot be applied. This becomes even more accentuated in today's highly dynamic project environment. Confronted with these facts, companies are placing ever greater focus on their employees, specifically on their competences. Competences are defined as the knowledge, skills and abilities required to perform job assignments. During the effort estimation process, different underlying expert competences influence the outcome, i.e. the judgments they express. A special problem here is the elicitation, from an input collection, of those competences that are responsible for accurate estimates. Based on these findings, different measures can be taken to enhance the estimation process. The approach used in the study presented in this paper ...
Software Development Effort Estimation: Formal Models or Expert Judgment?
IEEE Software, 2000
Which is better for estimating software project resources: formal models, as instantiated in estimation tools, or expert judgment? Two luminaries debate this question in this paper. For this debate, they're taking opposite sides and trying to help software project managers figure out when, and under what conditions, each method would be best.
Are the Experts Really Experts? a Cognitive Ergonomics Investigation for Project Estimations
Jurnal Teknik Industri, 2011
Uniqueness is a major characteristic of any project system. Hence it is virtually infeasible for project analysts to utilize data from past projects as references for subsequent project planning and scheduling. Most project analysts then depend on intuition, gut feeling and experience to develop quantitative models for project scheduling and analysis, which, according to past studies, is prone to systematic errors. This study attempts to investigate the performance of both 'experts' and 'non-experts' when utilizing their cognitive capability to estimate project durations in group/non-group settings. A cognitive ergonomics perspective, which views the human capability to make judgments as rationally bounded, is utilized in this investigation. An empirical approach is used to acquire data from 'projects' on which 'experts' and 'non-experts' are required to provide prior estimates of project durations. The estimates are then gauged against the actual durations. Results show that some systematic cognitive judgmental errors (biases) are observable for both experts and non-experts. The identified biases include anchoring bias as well as accuracy bias.
Determining Confidence When Integrating Contributions from Multiple Agents
Integrating contributions received from other agents is an essential activity in multi-agent systems (MASs). Not only must related contributions be integrated together, but the confidence in each integrated contribution must be determined. In this paper we look specifically at the issue of confidence determination and its effect on developing "principled," highly collaborating MASs. Confidence determination is often masked by ad hoc contribution-integration techniques, viewed as being addressed by agent trust and reputation models, or simply assumed away. We present a domain-independent analysis model that can be used to measure the sensitivity of a collaborative problem-solving system to potentially incorrect confidence-integration assumptions. In analyses performed using our model, we focus on the typical assumption of independence among contributions and the effect that unaccounted-for dependencies have on the expected error in the confidence that the answers produced by the MAS are correct. We then demonstrate how the analysis model can be used to determine confidence bounds on integrated contributions and to identify where efforts to improve contribution-dependency estimates lead to the greatest improvement in solution-confidence accuracy.