The Effect of Answering in a Preferred Versus a Non-Preferred Survey Mode on Measurement

Implementation of the forced answering option within online surveys: Do higher item response rates come at the expense of participation and answer quality?

Online surveys have become a popular method of data collection for many reasons, including low costs and the ability to collect data rapidly. However, online data collection is often conducted without adequate attention to implementation details. One example is the frequent use of the forced answering option, which requires respondents to answer each question in order to proceed through the questionnaire. Forced answering is typically used to avoid missing data, but we suggest that the costs of a reactance effect, in terms of reduced answer quality and unit nonresponse, may be high because respondents often have plausible reasons for not answering questions. The objective of the study reported in this paper was to test the influence of forced answering on dropout rates and data quality. The results show that requiring participants to answer every question increases dropout rates and decreases answer quality. Our findings suggest that the desire for a complete data set has to be balanced against the consequences of reduced data quality.

Handling Do-Not-Know Answers: Exploring New Approaches in Online and Mixed-Mode Surveys

Social Science Computer Review

An important decision in online and mixed-mode questionnaire design is whether and how to include a “do-not-know” (DK) option. Mandatory response is often the default, but methodologists have advised against this. Several solutions for handling the DK category have been suggested, including (1) not offering a DK option explicitly but allowing respondents to skip questions, (2) explicitly offering a DK option that is visually separated from the substantive response options, and (3) using the interactivity of the web to emulate interviewer probing after a DK answer. To test these solutions, experimental data were collected in a probability-based online panel. Not offering a DK option but allowing respondents to skip questions, followed by a polite probe when a question was skipped, resulted in the lowest amount of missing information. To assess the effect of probing across different modes, a second experiment was carried out that compared explicitly and implicitly offered DK options in web and telephone surveys.

Test of a Web and Paper Employee Satisfaction Survey: Comparison of Respondents and Non-Respondents

This study examined whether administering an employee satisfaction survey over the Internet affected the rates or quality of employees’ participation. A total of 644 hospital employees were randomly assigned to complete a satisfaction survey using either a Web survey or a traditional paper questionnaire. Response rates were relatively high in both modes, and there was no evidence of a substantial difference in response rates between them. A plurality of respondents expressed no preference for survey mode, while the remainder tended to prefer the mode to which they had been randomly assigned in this study. Respondents did not differ from nonrespondents by sex, race, or education; other differences (such as age and employment status) are likely a function of the survey topic. Overall, Web and mail respondents did not differ in the level of employee satisfaction reported, the primary outcome being measured.

Mode effect or question wording? Measurement error in mixed mode surveys

Members of a high-quality, probability-based Internet panel were randomly assigned to one of two modes: (1) computer-assisted telephone interview or (2) web survey. Within each mode, the same series of split-ballot experiments on question format was conducted. We tested the effect of unfolding opinion questions in multiple steps versus asking a complete question in one step, and the effect of fully verbal labelling versus end-point labelling of response categories, both within and between the two modes. We found small direct mode effects but no interaction. Unfolding (the two-step question format) had a larger effect. When using a mixed-mode design, it is advisable to avoid the unfolding format in the telephone interview and to use the complete (one-step) question format in both modes. Full labelling is mildly preferred over labelling only the endpoints. The absence of an interaction effect is an encouraging result for mixed-mode surveys.

International Handbook of Survey Methodology

The purpose of the EAM book series is to advance the development and application of methodological and statistical research techniques in social and behavioral research. Each volume in the series presents cutting-edge methodological developments in a way that is accessible to a broad audience.

The Stability of Mode Preferences: Implications for Tailoring in Longitudinal Surveys

One suggested tailoring strategy for longitudinal surveys is giving respondents their preferred mode. Mode preference could be collected at earlier waves and used when introducing a mixed-mode design. The utility of mode preference is in question, however, due to a number of findings suggesting that preference is an artefact of the mode of survey completion and is heavily affected by contextual factors. Conversely, recent findings suggest that tailoring on mode preference may lead to improved response outcomes and data quality. The current study aims to ascertain whether mode preference is a meaningful construct with utility in longitudinal surveys through analysis of data providing three important features: multiple measurements of mode preference over time; an experiment in mode preference question order; and repeated measures within respondents collected both prior to and after the introduction of mixed-mode data collection. Results show that mode preference is not a stable attitude for a large percentage of respondents and that these responses are affected by contextual factors. However, a substantial percentage of respondents do provide stable responses over time, which may explain the positive findings elsewhere. Using mode preference to tailor longitudinal surveys should be done with caution, but may become useful with further understanding.

Human-survey interaction: Usability and nonresponse in online surveys

2009

Response rates are a key quality indicator of surveys. The human-survey interaction framework developed in this book provides new insight into what makes respondents abandon or complete an online survey. Many respondents have difficulty answering survey questions, which results in omitted answers and abandoned questionnaires. Lars Kaczmirek explains how applying usability principles to surveys increases response rates; central aspects addressed in the studies include error tolerance and useful feedback. Recommendations are drawn from seven studies and experiments whose results cover more than 33,000 respondents sampled from many different populations, such as students, people over forty, visually impaired and blind people, and survey panel members. The results show that improved usability significantly boosts response rates and accessibility. This work demonstrates that attention to human-survey interaction is a cost-effective approach within the overall context of survey methodology.

I would like to especially thank my supervisors, Prof. Dr. Michael Bošnjak and Prof. Dr. Werner W. Wittmann. Without Michael Bošnjak I might never have ventured into the depths of survey methodology; my research has always profited from his strong focus on quality and coherent argumentation. Werner W. Wittmann's teaching fundamentally shaped my methodological thinking, and his insight that methods have broad applicability encouraged me to concentrate on methodological issues in my own research.