Using screeners to measure respondent attention on self-administered surveys: Which items and how many? | Political Science Research and Methods | Cambridge Core

Abstract

Inattentive respondents introduce noise into data sets, weakening correlations between items and increasing the likelihood of null findings. “Screeners” have been proposed as a way to identify inattentive respondents, but questions remain regarding their implementation. First, what is the optimal number of Screeners for identifying inattentive respondents? Second, what types of Screener questions best capture inattention? In this paper, we address both questions. Using item-response theory to aggregate individual Screeners, we find that four Screeners are sufficient to identify inattentive respondents. Moreover, two grid and two multiple-choice questions work well. Our findings have relevance for applied survey research in political science and other disciplines. Most importantly, our recommendations enable the standardization of Screeners on future surveys.
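The aggregation step described above can be illustrated with a minimal sketch. The abstract says individual Screener results are combined via item-response theory; below is a hypothetical two-parameter logistic (2PL) scoring routine, where each Screener is a binary pass/fail item and a latent "attentiveness" score is estimated by grid-search maximum likelihood. The item parameters, the `ITEMS` table, and the function names are illustrative assumptions, not taken from the paper.

```python
import math

# Hypothetical (discrimination a_j, difficulty b_j) parameters for four
# Screeners: two grid items and two multiple-choice items. Values are
# illustrative, not estimates from the paper.
ITEMS = [(1.5, -0.5), (1.2, 0.0), (1.8, 0.3), (1.0, -1.0)]

def likelihood(theta, responses):
    """Likelihood of a pass/fail pattern given latent attentiveness theta."""
    ll = 1.0
    for (a, b), y in zip(ITEMS, responses):
        # 2PL item response function: probability of passing item j.
        p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
        ll *= p if y == 1 else (1.0 - p)
    return ll

def estimate_theta(responses):
    """Maximum-likelihood estimate of theta over a coarse grid on [-4, 4]."""
    grid = [x / 10.0 for x in range(-40, 41)]
    return max(grid, key=lambda t: likelihood(t, responses))

# A respondent passing all four Screeners scores higher than one failing all.
attentive = estimate_theta([1, 1, 1, 1])
inattentive = estimate_theta([0, 0, 0, 0])
print(attentive, inattentive)
```

In practice the item parameters would themselves be estimated from the full response matrix (e.g. with Bayesian IRT software rather than fixed values as here), and respondents below some cutoff on the latent scale would be flagged as inattentive.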

Type

Research Note

Copyright

Copyright © The European Political Science Association 2019
