Crowdsourcing the future
Related papers
Debunking Three Myths About Crowd-Based Forecasting
Collective Intelligence, NYU Tandon, 2017
Ever since the publication of "The Wisdom of Crowds" (Surowiecki, 2004), the accuracy of crowd-based forecasting has served as a prime example of the practical value of collective intelligence. Prediction markets were long believed to be the gold standard for eliciting the highest-quality forecast from a crowd (Arrow et al., 2008). However, over the last decade, this hypothesis has come under increasingly heavy fire from two new approaches claiming to push the boundaries of "the art and science of prediction". First arose the big-data statistical models, popularized by Nate Silver's FiveThirtyEight and the New York Times' Upshot. Then came a new generation of "prediction polls" (Atanasov et al., 2016) and "superforecasters" (Tetlock & Gardner, 2015), fresh from winning a multiyear geopolitical crowd-forecasting tournament sponsored by the U.S. government's Intelligence Advanced Research Projects Activity (IARPA). Now confusion reigns supreme. Who really is better at forecasting, crowds or models? Which crowd-based methods are more reliable, polls or markets? And which types of markets, those based on real money or on play money?
Using meta-predictions to identify experts in the crowd when past performance is unknown
PLOS ONE, 2020
A common approach to improving probabilistic forecasts is to identify and leverage the forecasts from experts in the crowd based on forecasters' performance on prior questions with known outcomes. However, such information is often unavailable to decision-makers on many forecasting problems, and thus it can be difficult to identify and leverage expertise. In the current paper, we propose a novel algorithm for aggregating probabilistic forecasts using forecasters' meta-predictions about what other forecasters will predict. We test the performance of an extremised version of our algorithm against current forecasting approaches in the literature and show that our algorithm significantly outperforms all other approaches on a large collection of 500 binary decision problems varying in five levels of difficulty. The success of our algorithm demonstrates the potential of using meta-predictions to leverage latent expertise in environments where forecasters' expertise cannot otherwise be easily identified.
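As a rough illustration of the kind of aggregation described above, the sketch below combines binary-event forecasts using each forecaster's meta-prediction and then applies a standard extremizing transform. The weighting heuristic (up-weighting forecasters whose own forecast departs from what they expect the crowd to say) and the parameter alpha are illustrative assumptions, not the authors' actual algorithm.

```python
import numpy as np

def aggregate_with_meta(forecasts, meta_predictions, alpha=2.0):
    """Illustrative aggregation sketch (not the paper's exact algorithm).

    forecasts        : each forecaster's probability for the event.
    meta_predictions : each forecaster's prediction of the crowd's average forecast.
    alpha            : extremizing exponent (> 1 pushes the aggregate away from 0.5).
    """
    forecasts = np.asarray(forecasts, dtype=float)
    meta_predictions = np.asarray(meta_predictions, dtype=float)

    # Heuristic weight (assumption): forecasters whose own forecast departs from
    # what they expect the crowd to say are treated as holding extra private signal.
    weights = np.abs(forecasts - meta_predictions) + 1e-6
    weights /= weights.sum()

    p = float(np.dot(weights, forecasts))

    # Standard extremization step: p^a / (p^a + (1 - p)^a).
    return p**alpha / (p**alpha + (1.0 - p)**alpha)


print(aggregate_with_meta([0.7, 0.6, 0.4], [0.5, 0.55, 0.5]))
```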
Written Justifications are Key to Aggregate Crowdsourced Forecasts
ArXiv, 2021
This paper demonstrates that aggregating crowdsourced forecasts benefits from modeling the written justifications provided by forecasters. Our experiments show that the majority and weighted vote baselines are competitive, and that the written justifications are beneficial for calling a question throughout its life, except in the last quarter. We also conduct an error analysis shedding light on the characteristics that make a justification unreliable.
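For context, here is a minimal sketch of the weighted-vote baseline mentioned above; the idea of deriving each forecaster's weight from some score on their written justification is a hypothetical choice for illustration, not the paper's model.

```python
from collections import defaultdict

def weighted_vote(forecasts):
    """forecasts: list of (answer, weight) pairs; returns the answer with the
    largest total weight. With all weights equal to 1 this is a majority vote."""
    totals = defaultdict(float)
    for answer, weight in forecasts:
        totals[answer] += weight
    return max(totals, key=totals.get)

# Example: three forecasters, each weighted by a hypothetical score assigned
# to their written justification.
print(weighted_vote([("yes", 1.0), ("yes", 0.4), ("no", 2.1)]))
```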
Identifying Expertise to Extract the Wisdom of Crowds
Management Science, 2015
Statistical aggregation is often used to combine multiple opinions within a group. Such aggregates outperform individuals, including experts, in various prediction and estimation tasks. This result is attributed to the “wisdom of crowds.” We seek to improve the quality of such aggregates by eliminating poorly performing individuals from the crowd. We propose a new measure of contribution to assess the judges' performance relative to the group and use positive contributors to build a weighting model for aggregating forecasts. In Study 1, we analyze 1,233 judges forecasting almost 200 current events to illustrate the superiority of our model over unweighted models and models weighted by measures of absolute performance. In Study 2, we replicate our findings by using economic forecasts from the European Central Bank and show how the method can be used to identify smaller crowds of the top positive contributors. We show that the model derives its power from identifying experts who c...
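A minimal sketch of the general idea, assuming a leave-one-out contribution measure based on Brier scores; the exact estimator and weighting scheme used in the paper may differ.

```python
import numpy as np

def brier(p, outcome):
    """Squared error of a probability forecast p for a binary outcome (0/1)."""
    return (p - outcome) ** 2

def contributions(forecasts, outcomes):
    """Leave-one-out contribution sketch (illustrative).

    forecasts : (n_judges, n_events) array of probability forecasts.
    outcomes  : (n_events,) array of realized 0/1 outcomes.

    A judge's contribution is the improvement in the aggregate's mean Brier
    score when the judge is included (score without the judge minus score
    with everyone); positive values mark positive contributors.
    """
    forecasts = np.asarray(forecasts, float)
    outcomes = np.asarray(outcomes, float)
    full = brier(forecasts.mean(axis=0), outcomes).mean()
    contrib = []
    for j in range(forecasts.shape[0]):
        rest = np.delete(forecasts, j, axis=0).mean(axis=0)
        contrib.append(brier(rest, outcomes).mean() - full)
    return np.array(contrib)

def contribution_weighted_forecast(contrib, new_forecasts):
    """Aggregate new forecasts using only positive contributors, weighted by
    their contribution; falls back to the simple mean if none are positive."""
    w = np.clip(contrib, 0.0, None)
    new_forecasts = np.asarray(new_forecasts, float)
    if w.sum() == 0:
        return new_forecasts.mean(axis=0)
    return np.average(new_forecasts, axis=0, weights=w)
```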
Future Indices: How Crowd Forecasting Can Inform the Big Picture
Foretell is CSET's crowd forecasting pilot project focused on technology and security policy. It connects historical and forecast data on near-term events with the big-picture questions that are most relevant to policymakers. This issue brief uses recent forecast data to illustrate Foretell’s methodology.
Aggregation Mechanisms for Crowd Predictions
SSRN Electronic Journal
When the information of many individuals is pooled, the resulting aggregate often is a good predictor of unknown quantities or facts ("wisdom of crowds"). This aggregate predictor frequently outperforms the forecasts of experts or even the best individual forecast included in the aggregation process. However, an appropriate aggregation mechanism is considered crucial to reaping the benefits of a "wise crowd". Of the many possible ways to aggregate individual forecasts, we compare (uncensored and censored) mean and median, continuous double auction market prices and sealed bid-offer call market prices in a controlled experiment. We use an asymmetric information structure where subjects know different subsets of the total information needed to exactly calculate the asset value to be estimated. We find that prices from continuous double auction markets clearly outperform all alternative approaches for aggregating dispersed information and that information is only useful to the best-informed subjects.
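For reference, the simple statistical aggregators compared in the experiment can be sketched as follows; here "censored" is read as a trimmed statistic that drops a fraction of extreme estimates, which is an assumption about the paper's definition. The market mechanisms (continuous double auction and sealed bid-offer call market) are not reproduced here.

```python
import numpy as np

def uncensored_mean(x):
    return float(np.mean(x))

def uncensored_median(x):
    return float(np.median(x))

def censored_mean(x, trim=0.2):
    """Trimmed mean: drop the most extreme `trim` fraction of estimates on each
    side before averaging (one plausible reading of 'censored')."""
    x = np.sort(np.asarray(x, float))
    k = int(len(x) * trim)
    return float(np.mean(x[k:len(x) - k])) if len(x) > 2 * k else float(np.mean(x))

# Hypothetical individual estimates of an asset value, including one outlier.
estimates = [98, 102, 95, 100, 180, 99]
print(uncensored_mean(estimates), uncensored_median(estimates), censored_mean(estimates))
```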
Crowdsourcing of Economic Forecast: Combination of Forecasts Using Bayesian Model Averaging
SSRN Electronic Journal, 2015
Economic forecasts play an essential role in daily life, which is why many research institutions periodically produce and publish forecasts of key economic indicators. We ask (1) whether combining multiple forecasts of the same variable consistently yields a better prediction and (2) if so, what the optimal method of combination is. We linearly combine multiple linear combinations of existing forecasts to form a new forecast ("combination of combinations"), with the weights given by Bayesian model averaging. In the case of forecasts of Germany's real GDP growth rate, this new forecast dominates any single forecast in terms of root-mean-square prediction error.
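A minimal sketch of the weighting step, assuming model weights are approximated from BIC values in the standard way; the forecasts and BIC numbers in the example are hypothetical.

```python
import numpy as np

def bma_weights(bic):
    """Approximate posterior model probabilities from BIC values:
    w_i proportional to exp(-0.5 * (BIC_i - min BIC))."""
    bic = np.asarray(bic, float)
    delta = bic - bic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

def combine(forecasts, bic):
    """Linear 'combination of combinations': weight each candidate combined
    forecast by its Bayesian model averaging weight."""
    return float(np.dot(bma_weights(bic), np.asarray(forecasts, float)))

# Hypothetical GDP-growth forecasts from three candidate combinations and
# their in-sample BIC values.
print(combine([1.4, 1.9, 1.6], [210.3, 208.7, 212.1]))
```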
Management Science, 2022
Modern forecasting algorithms use the wisdom of crowds to produce forecasts better than those of the best identifiable expert. However, these algorithms may be inaccurate when crowds are systematically biased or when expertise varies substantially across forecasters. Recent work has shown that meta-predictions (forecasts of the average forecasts of others) can be used to correct for biases even when no external information, such as forecasters' past performance, is available. We explore whether meta-predictions can also be used to improve forecasts by identifying and leveraging the expertise of forecasters. We develop a confidence-based version of the Surprisingly Popular algorithm proposed by Prelec, Seung, and McCoy. As with the original algorithm, our new algorithm is robust to bias. However, unlike the original algorithm, our version is predicted to always give more weight to forecasters with more informative private signals than to those with less informative ones. In a series of exp...
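For reference, the original Surprisingly Popular rule for a binary question can be sketched as below; the confidence-based extension developed in the paper is not reproduced here, and the example numbers are hypothetical.

```python
import numpy as np

def surprisingly_popular(votes, predicted_share_yes):
    """Original Surprisingly Popular rule for a binary question (Prelec,
    Seung & McCoy): choose the answer whose actual vote share exceeds the
    crowd's average predicted vote share for it.

    votes               : array of 0/1 answers ("no"/"yes").
    predicted_share_yes : each respondent's prediction of the fraction who
                          will answer "yes".
    """
    actual_yes = float(np.mean(votes))
    predicted_yes = float(np.mean(predicted_share_yes))
    return "yes" if actual_yes > predicted_yes else "no"

# "Yes" is endorsed by 40% of respondents, but on average respondents expect
# only 25% to say yes, so "yes" is surprisingly popular and is selected.
print(surprisingly_popular([1, 0, 0, 1, 0], [0.2, 0.3, 0.25, 0.2, 0.3]))
```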
Using prediction polling to harness collective intelligence for disease forecasting
BMC Public Health, 2021
Background: The global spread of COVID-19 has shown that reliable forecasting of public health related outcomes is important but lacking. Methods: We report the results of the first large-scale, long-term experiment in crowd-forecasting of infectious-disease outbreaks, in which a total of 562 volunteer participants competed over 15 months to make forecasts on 61 questions with a total of 217 possible answers regarding 19 diseases. Results: Consistent with the "wisdom of crowds" phenomenon, we found that crowd forecasts aggregated using best-practice adaptive algorithms are well-calibrated, accurate, and timely, and that they outperform all individual forecasters. Conclusions: Crowd forecasting efforts in public health may be a useful addition to traditional disease surveillance, modeling, and other approaches to evidence-based decision making for infectious disease outbreaks.
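As a small illustration of how such probabilistic forecasts are typically scored for accuracy, a Brier-score sketch is shown below; the numbers are hypothetical and this is not the study's aggregation algorithm.

```python
import numpy as np

def brier_score(probabilities, outcomes):
    """Mean Brier score over a set of binary-question forecasts; lower is
    better, and 0.25 matches an uninformative 50/50 forecast."""
    p = np.asarray(probabilities, float)
    y = np.asarray(outcomes, float)
    return float(np.mean((p - y) ** 2))

# Hypothetical aggregated crowd probabilities vs. realized outcomes.
print(brier_score([0.8, 0.3, 0.9, 0.1], [1, 0, 1, 0]))
```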