Revisiting "yes/no" versus "check all that apply": Results from a mixed modes experiment (original) (raw)

Does "Yes or No" on the Telephone Mean the Same as "Check-All-That-Apply" on the Web?

Public Opinion Quarterly, 2008

Recent experimental research has shown that respondents to forced-choice questions endorse significantly more options than respondents to check-all questions. This research has challenged the common assumption that these two question formats can be used interchangeably but has been limited to comparisons within a single survey mode. In this paper we use data from a 2004 random sample survey of university students to compare the forced-choice and check-all question formats across web self-administered and telephone interviewer-administered surveys as they are commonly used in survey practice. We find that the within-mode question format effects revealed by previous research and reaffirmed in the current study appear to persist across modes as well; the telephone forced-choice format produces higher endorsement than the web check-all format. These results provide further support for the argument that the check-all and forced-choice question formats do not produce comparable results and are not interchangeable formats. Additional comparisons show that the forced-choice format performs similarly across telephone and web modes.

Yes–no Answers versus Check-all in Self-Administered Modes: A Systematic Review and Analyses

International Journal of Market Research

When writing questions with dichotomous response options, those administering surveys on the web or on paper can choose from a variety of formats, including a check-all-that-apply or a forced-choice format (e.g. yes-no) in self-administered questionnaires. These two formats have been compared and evaluated in many experimental studies. In this paper, we conduct a systematic review and a few meta-analyses of different aspects of the available research that compares these two formats. We find that endorsement levels increase by a factor of 1.42 when questions are posed in a forced-choice rather than check-all format. However, when comparing across a battery of questions, the rank order of endorsement rates remains the same for both formats. While most authors hypothesise that respondents endorse more alternatives presented in a forced-choice (versus check-all-that-apply) format because they process that format at a deeper cognitive level, we introduce the acquiescence bias hypothesis as an alternative and complementary explanation. Further research is required to identify which format elicits answers closer to the 'true level' of endorsement, since the few validation studies have proved inconclusive.
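As a quick illustration of what the reported factor of 1.42 implies (the 30% check-all endorsement rate below is an assumed example for arithmetic only, not a figure from the review), in LaTeX notation:

p_{\text{forced-choice}} \approx 1.42 \times p_{\text{check-all}}, \qquad p_{\text{check-all}} = 0.30 \;\Rightarrow\; p_{\text{forced-choice}} \approx 0.43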

Multiple Answer Questions in Self-Administered Surveys: The Use of Check-All-That-Apply and Forced-Choice Question Formats

2003

This paper reports results from a series of experimental manipulations of check-all-that-apply questions in an Internet survey. One purpose of the experiments was to determine whether reversing the order of the presentation of response options resulted in order effects, including primacy and anchoring. A second purpose was to determine the effects of converting check-all-that-apply questions to a forced-choice format (e.g. Yes/No for each item). We found that the existence of order effects in check-all-that-apply questions appears to depend on whether questions require respondents to use temporarily or chronically accessible information, as has been previously reported for items requiring selection of only one response option. The conversion to a forced-choice format increased the percentage of respondents answering affirmatively to each response option. Additionally, forced-choice formatted questions were unaffected by the use of more active answer categories (e.g. fan/not a fan) as opposed to the common yes/no format. Finally, the results from this survey of a random sample of 1503 university students were quite similar to results from a previous mail survey experiment, suggesting that the response patterns observed in the current experiment result from self-administration in general, and not from a unique characteristic of web surveys. Respondents to self-administered surveys are often asked to answer a question by checking only those response options that apply to them. For example, they might be instructed, "Please indicate which of the following sources of information you have used to find employment in the last month by checking all that apply." Each information source in the list would then be marked only if it had been used.

Mode effect or question wording? Measurement error in mixed mode surveys

Members of a high-quality, probability-based Internet panel were randomly assigned to one of two modes: (1) computer-assisted telephone interview or (2) web survey. Within each mode the same series of split-ballot experiments on question format was conducted. We tested the effect of unfolding opinion questions in multiple steps versus asking a complete question in one step, and the effect of fully verbal labelling versus end-point labelling of response categories within and between the two modes. We found small direct mode effects, but no interaction. Unfolding (the two-step question format) had a larger effect. When using a mixed-mode design it is advisable to avoid unfolding formats in the telephone interview and to use the complete (one-step) question format in both modes. Full labelling is mildly preferred over labelling endpoints only. The absence of an interaction effect is an encouraging result for mixed-mode surveys.

The Effects of Mode and Format on Answers to Scalar Questions in Telephone and Web Surveys

Advances in Telephone Survey Methodology, 2007

The proliferation of mixed-mode surveys, where data are collected from respondents using different survey modes, raises concern about whether respondent characteristics are being measured equivalently across modes, since the data are often combined for analysis. Previous research indicates that three types of factors differentiate survey modes: technical and cultural factors related to the mode or medium itself, the impact of interviewer presence or absence, and how information is transmitted or conveyed during the survey (De Leeuw 1992). These differentiating factors can help us understand why variations in survey responses may arise based on the mode of data collection.

Handling Do-Not-Know Answers: Exploring New Approaches in Online and Mixed-Mode Surveys

Social Science Computer Review

An important decision in online and mixed-mode questionnaire design is whether and how to include a “do-not-know” (DK) option. Mandatory response is often the default option, but methodologists have advised against this. Several solutions for the DK category have been suggested. These include (1) not explicitly offering a DK option, while allowing respondents to skip questions; (2) explicitly offering a DK option with visual separation from the substantive responses; and (3) using the interactivity of the web to emulate interviewer probing after a DK answer. To test these solutions, experimental data were collected in a probability-based online panel. Not offering DK but allowing respondents to skip questions, followed by a polite probe when skips occurred, resulted in the lowest amount of missing information. To assess the effect of probing across different modes, a second experiment was carried out that compared explicitly and implicitly offering the DK option in web and telephone surveys.

Agree or Disagree? Cognitive Processes in Answering Contrastive Survey Questions

Discourse Processes, 2011

Survey designers have long assumed that respondents who disagree with a negative question ("This policy is bad.": Yes or No; 2-point scale) will agree with an equivalent positive question ("This policy is good.": Yes or No; 2-point scale). However, experimental evidence has proven otherwise: Respondents are more likely to disagree with negative questions than to agree with positive ones. To explain these response effects for contrastive questions, the cognitive processes underlying question answering were examined. Using eye tracking, the authors show that the first reading of the question and the answers takes the same amount of time for contrastive questions. This suggests that the wording effect does not arise in the cognitive stages of question comprehension and attitude retrieval. Rereading a question and its answering options also takes the same amount of time, but happens more often for negative questions. This effect is likely to indicate a mapping difference: Fitting an opinion to the response options is more difficult for negative questions.

Assessing the Influence of Importance Prompt and Box Size on Response to Open-ended Questions in Mixed Mode Surveys: Evidence on Response Rate and Response Quality

2016

To understand the thinking behind respondents’ answers, researchers occasionally use open-ended questions. Getting a quality response to open-ended questions can be challenging, but attending to the visual design of the question and using a motivational statement in the question can increase item response and data quality. To understand the use of open-ended questions in surveys further, we designed an experiment testing the effect of an importance statement (present/absent) and box size (large/small) on item response rate and response quality in a mixed-mode (web and mail) survey. Data for the study came from a survey of Florida Cooperative Extension Service (FCES) clients. The results showed that item response was improved with the importance prompt, irrespective of box size. The combination of importance statement and larger answer box also resulted in more words. Web responses produced more words than those on paper, and word counts were significantly improved with an importance prompt.

Degree of cognitive interviewer involvement in questionnaire pretesting on trending survey modes

Modern technology allows surveying through different media (e.g. Internet, mobile phones, tablets), which may influence the quality of collected data through mode-specific effects; questionnaires should therefore be pretested to avoid effects that would deteriorate data quality. In the present study, we analysed the technological development of surveying tools by testing the applicability of cognitive interviews to several different survey modes. We focused on the importance of the cognitive interviewer and the effect of reducing their degree of involvement on the quality of the interview results. We carried out personal interviews, interviews using voice-over-Internet protocol, interviews using instant-messaging programs, and web-based interviews; these enabled us to analyse the quality of each survey mode and recognize their advantages and deficiencies. Through the comparison of these modes and their assigned techniques, we showed that the role of a cognitive interviewer is important for the quality of interviews regardless of the degree of their involvement in the survey process. However, the requirement that the pretesting situation resemble the actual final process makes it necessary to develop new, enhanced approaches to cognitive interviewing for trending survey modes.