A parsimonious explanation of observed biases when forecasting one's own performance
Pessimistic Bias in Predictions of Performance Results
Psychological Reports, 1995
Ben-Gurion University of the Negev, Israel. Summary.—Realism in the performance predictions of 60 university students was investigated. After a practice trial on a task of creativity, one group of subjects was asked to state their expectations, and the other group their hopes, for their performance scores on the first and second test trials before each one. Both groups were unrealistically pessimistic about their performance: the first- and second-trial predictions of the expectation group as well as of the hope group were lower than their actual performance scores. In all cases (except the second-trial prediction of the hope group) the differences reached significance. Results are explained from the functional perspective. It is suggested that unrealistically low predictions may serve an affective function (feeling better).
Seven components of judgmental forecasting skill: Implications for research and the improvement of forecasts
Journal of Forecasting, 1994
A decomposition of the Brier skill score shows that the performance of judgmental forecasts depends on seven components: environmental predictability, fidelity of the information system, match between environment and forecaster, reliability of information acquisition, reliability of information processing, conditional bias, and unconditional bias. These components provide a framework for research on the forecasting process. Selected literature addressing each component is reviewed, and implications for improving judgmental forecasting are discussed.

KEY WORDS: Judgmental forecasting; Brier skill score; Lens model equation; Bias; Reliability

In any field requiring judgmental forecasts, the performance of professional forecasters depends jointly on (1) the environment about which forecasts are made, (2) the information system that brings data about the environment to the forecaster, and (3) the cognitive system of the forecaster. For example, in weather forecasting the environment includes the atmosphere and the land, ocean, and solar features that affect weather. The information system includes the instruments, observations, and algorithms that produce information about past and current weather, and the communication and display systems that bring that information to the forecaster. The cognitive system consists of the perceptual and judgmental processes that the forecaster uses to acquire information, aggregate it, and produce the forecast. This paper describes how certain properties of these three systems combine to determine forecasting performance. A commonly used measure of skill (the 'skill score' based on the mean-square-error) is analyzed into seven components. Since each component describes a different aspect of forecast performance, the decomposition suggests a framework for research on judgmental forecasting.
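The seven components build on a decomposition of the mean-square-error skill score attributed to Murphy, refined through the lens model equation (both appear in the keywords above). Below is a minimal sketch of the underlying skill score and its three-term decomposition; it is an illustration under assumed inputs, not code from the paper, and the function and variable names are invented for the example.

```python
# Sketch of the MSE-based skill score and Murphy's three-term decomposition:
# SS = r**2 - (r - s_f/s_x)**2 - ((mean_f - mean_x)/s_x)**2,
# i.e. association minus conditional bias minus unconditional bias.
import numpy as np

def skill_score_components(forecasts, observations):
    f = np.asarray(forecasts, dtype=float)
    x = np.asarray(observations, dtype=float)

    mse = np.mean((f - x) ** 2)
    mse_reference = np.mean((x.mean() - x) ** 2)     # reference forecast: observed mean
    ss = 1.0 - mse / mse_reference                   # overall skill score

    r = np.corrcoef(f, x)[0, 1]                      # association; r**2 is potential skill
    conditional_bias = (r - f.std() / x.std()) ** 2  # regression (calibration) bias
    unconditional_bias = ((f.mean() - x.mean()) / x.std()) ** 2

    return {"skill_score": ss,
            "potential_skill": r ** 2,
            "conditional_bias": conditional_bias,
            "unconditional_bias": unconditional_bias}
```

In the paper's framework, the association term is further broken down, via the lens model equation, into the first five components named in the abstract; the two bias terms appear directly.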
Statistical correction of judgmental point forecasts and decisions
Omega, 1996
In many organizations point estimates labelled as 'forecasts' are produced by human judgment rather than statistical methods. However, when these estimates are subject to asymmetric loss they are, in fact, decisions because they involve the selection of a value with the objective ...
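The link between asymmetric loss and decision-like point estimates can be made concrete with a standard result: under linear asymmetric loss, expected loss is minimized at a quantile of the outcome distribution rather than at its mean. The sketch below only illustrates that general point; the distribution and cost parameters are assumptions for the example and are not taken from the paper.

```python
# With unit cost `a` for under-forecasting and `b` for over-forecasting,
# expected linear asymmetric loss is minimized at the a/(a+b) quantile of the
# outcome distribution, so the loss-minimizing "forecast" sits above the mean
# whenever under-forecasting is the costlier error.
import numpy as np

rng = np.random.default_rng(0)
outcomes = rng.lognormal(mean=3.0, sigma=0.5, size=100_000)  # skewed outcomes (illustrative)

a, b = 3.0, 1.0                                      # under-forecasting costs three times more
optimal_point = np.quantile(outcomes, a / (a + b))   # the 0.75 quantile

def expected_loss(point):
    err = outcomes - point
    return np.mean(np.where(err > 0, a * err, -b * err))

print(expected_loss(outcomes.mean()) > expected_loss(optimal_point))  # True: the mean is suboptimal
```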
Improving Reliability of Judgmental Forecasts
International Series in Operations Research & Management Science, 2001
All judgmental forecasts will be affected by the inherent unreliability, or inconsistency, of the judgment process. Psychologists have studied this problem extensively, but forecasters rarely address it. Researchers and theorists describe two types of unreliability that can reduce the accuracy of judgmental forecasts: (1) unreliability of information acquisition, and (2) unreliability of information processing. Studies indicate that judgments are less reliable when the task is more complex; when the environment is more uncertain; when the acquisition of information relies on perception, pattern recognition, or memory; and when people use intuition instead of analysis. Five principles can improve reliability in judgmental forecasting:
1. Organize and present information in a form that clearly emphasizes relevant information.
2. Limit the amount of information used in judgmental forecasting. Use a small number of really important cues.
3. Use mechanical methods to process information.
4. Combine several forecasts.
5. Require justification of forecasts.
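Principles 3 and 4 are the most directly mechanizable. The fragment below is a minimal sketch of both under assumed inputs; the cue weights and the individual forecasts are illustrative numbers, not examples from the chapter.

```python
# Principles 3 and 4 in miniature: apply a fixed linear rule to a small set of
# important cues, and combine several judgmental forecasts by a simple average.
# All weights and numbers are illustrative assumptions.
import numpy as np

def mechanical_forecast(cues, weights):
    """Fixed linear rule over a few important cues (principles 2 and 3)."""
    return float(np.dot(cues, weights))

def combine_forecasts(individual_forecasts):
    """Equal-weight average of several judgmental forecasts (principle 4)."""
    return float(np.mean(individual_forecasts))

# A mechanical forecast from two cues, and three forecasters' estimates combined.
print(mechanical_forecast([0.6, 1.2], [50.0, 20.0]))   # 54.0
print(combine_forecasts([102.0, 96.0, 110.0]))         # ~102.7
```

The equal-weight average is used here only because it requires no estimation; other combination schemes are possible.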
A Decomposition of the Correlation Coefficient and its Use in Analyzing Forecasting Skill
Weather and Forecasting, 1990
Estimates of several components of forecasting skill can be obtained by combining a skill-score decomposition developed by Allan Murphy with techniques for decomposing correlation coefficients that have been employed in research on human judgment. The decomposition of the correlation coefficient requires knowledge of the information or "cues" used by the forecaster. When the cues are known, it is possible to estimate the effects of uncertainty and the forecaster's consistency and use of the cues.
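When the cues are recorded alongside the forecasts and outcomes, the quantities named in the abstract can be estimated by regressing both the forecasts and the outcomes on the same cues. The sketch below shows one way to do that under those assumptions; the function and variable names are invented for the illustration and are not taken from the paper.

```python
# Estimate lens-model components from a cue matrix and paired forecasts/outcomes:
# R_s measures the forecaster's consistency in using the cues, R_e the
# predictability of the environment from those cues, and G the match between
# the two linear models. The lens model equation relates them to achievement:
# r_a = G*R_e*R_s + C*sqrt(1 - R_e**2)*sqrt(1 - R_s**2), with C the residual correlation.
import numpy as np

def lens_model_components(cues, forecasts, outcomes):
    X = np.column_stack([np.ones(len(forecasts)), np.asarray(cues, dtype=float)])

    beta_s, *_ = np.linalg.lstsq(X, forecasts, rcond=None)    # linear model of the forecaster
    beta_e, *_ = np.linalg.lstsq(X, outcomes, rcond=None)     # linear model of the environment
    fhat, yhat = X @ beta_s, X @ beta_e

    return {
        "achievement": np.corrcoef(forecasts, outcomes)[0, 1],
        "R_s": np.corrcoef(forecasts, fhat)[0, 1],   # consistency of cue use
        "R_e": np.corrcoef(outcomes, yhat)[0, 1],    # environmental predictability
        "G": np.corrcoef(fhat, yhat)[0, 1],          # match between forecaster and environment
    }
```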
Limits of predictability in forecasting in the behavioral sciences
International Journal of Forecasting, 1988
A series of methodological problems in forecasting which arise out of the humanness of the predictor and/or predictee are reviewed. These include (1) perceptual disordering, in which the imperfect nature of data collected by human sensation is investigated, (2) model disordering, in which the imperfect nature of models and theories arising out of human information processing limitations is investigated, and lastly, (3) obtrusive reactive disordering, in which the human tendencies of the predictee, to guess the forecast, and to alter his or her behavior so as to reinforce or interfere with the forecast, are explored. The implications of these methodological dilemmas for forecasting are discussed.
Judgmental forecasting: A review of progress over the last 25 years
International Journal of Forecasting, 2006
The past 25 years have seen phenomenal growth of interest in judgemental approaches to forecasting and a significant change of attitude on the part of researchers to the role of judgement. While previously judgement was thought to be the enemy of accuracy, today judgement is recognised as an indispensable component of forecasting, and much research attention has been directed at understanding and improving its use. Human judgement can be demonstrated to provide a significant benefit to forecasting accuracy, but it can also be subject to many biases. Much of the research has been directed at understanding and managing these strengths and weaknesses. The explosion of research interest in this area can be gauged from the fact that over 200 studies are referenced in this review.
Accuracy, error, and bias in predictions for real versus hypothetical events
Journal of Personality and Social Psychology, 2006
Participants made predictions about performance on tasks that they did or did not expect to complete. In three experiments, participants in task-unexpected conditions were unrealistically optimistic: They overestimated how well they would perform, often by a large margin, and their predictions were not correlated with their performance. By contrast, participants assigned to task-expected conditions made predictions that were not only less optimistic but strikingly accurate. Consistent with predictions from construal level theory, data from a fourth experiment suggest that it is the uncertainty associated with hypothetical tasks, and not a lack of cognitive processing, that frees people to make optimistic prediction errors. Unrealistic optimism, when it occurs, may be truly unrealistic; however, it may be less ubiquitous than has been previously suggested.
The Complex Relationship between Forecast Skill and Forecast Value: A Real-World Analysis
Weather and Forecasting, 1996
For routine forecasts of temperature and precipitation, the relative skill advantage of human forecasters with respect to the numerical-statistical guidance is small (and diminishing). Since the relationship between forecast skill and the value of those forecasts is complex, the authors have examined their value across a range of real-world user contexts. It is found that although in most cases the meteorological information possessed considerable value to the users, human intervention in making those forecasts (as measured by National Weather Service forecasts) has generally led to minimal gains in value beyond that which is obtainable through direct use of numerical-statistical guidance. An important exception is the use of meteorological information by gas utilities during peak wintertime periods; in those circumstances, the value of human intervention was considerable. The presence of information in the National Weather Service forecasts independent of that contained in the numerical-statistical guidance was also established. Despite this, application of the additional information through a combined National Weather Service/guidance forecast provided only a small gain in value in most cases. In the most successful forecast context (the gas utility), the combined approach led to a loss of value relative to the unaltered National Weather Service forecasts.