Influence of Type of Question on Skip Pattern Compliance in Self-Administered Questionnaires

Skip-Pattern Compliance in Three Test Forms: A Theoretical and Empirical Evaluation

2000

In self-administered paper questionnaires, all questions, applicable or not, can be seen by each respondent, and the respondent must decide which questions to answer. Application of this five-step model leads to the identification of ways that the skip-pattern compliance process may break down. (1) Perceive the existence of skip instructions. If instructions are not seen, it is highly unlikely that they will be followed. When the respondent attends to the content of the questions, the skip instructions may simply not be seen. This appears to be the problem with current practice: skip instructions remain unseen because they fall outside the respondent's visual field while attention is on the task of answering questions. The typical respondent, while reading, can only comprehend a span of about 8-10 characters at a time. An enlarged font, bolder type, directional arrows, and placement of skip directions closer to the answer box are some of the visual design tools with the potential to attract the respondent's attention (Jenkins and Dillman, 1997). (2) Comprehend the meaning of the skip instructions.

Factors that influence reading and comprehension of branching instructions in self-administered questionnaires

Allgemeines Statistisches Archiv, 2005

In this paper we examine a particular type of item non-response that is especially perplexing to users of self-administered paper questionnaires. It is the tendency for branching instructions to be ignored, misread, or otherwise not appropriately followed so that item non-response (or, on other occasions, erroneous completion) occurs for follow-up questions that only some respondents are expected to answer. We hypothesize that seven features of question complexity may affect the reading and comprehension of branching instructions: a high number of question words; a high number of answer categories; last categories branch; all categories branch; write-in responses; location at the bottom of a page; and a high distance between the answer box and the branching instruction. Largely, these variables reflect the proposition that complexity increases competition for the respondents' attention, making it less likely that the branching instructions will be read and processed correctly. A logistic regression analysis revealed that, as predicted, question complexity tended to increase errors of commission (that is, respondents answering questions other than the ones they were directed to answer). Five of the seven characteristics demonstrated this effect (high number of answer categories, all categories branch, write-in responses, bottom of the page, and high distance). But contrary to prediction, complexity did not increase errors of omission (respondents leaving questions blank). Only two of the six characteristics demonstrated this effect (write-in response and bottom of the page), the reasons for which are explored in this paper. The results of this research confirm our general expectation that the visual and verbal complexity of information on a questionnaire affects what respondents read, the order in which they read it, and ultimately, their comprehension of the information.
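As a concrete illustration of the style of analysis this abstract describes, here is a minimal sketch of a logistic regression of commission errors on question-complexity features. The file name, column names, and binary coding are assumptions made for illustration, not the authors' actual data or variables.

```python
# Hypothetical sketch: logistic regression of commission errors on
# question-complexity features, in the spirit of the analysis above.
# The data file and all column names are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

# One row per respondent-question pair; commission_error is 1 if the
# respondent answered a question they had been directed to skip, else 0.
df = pd.read_csv("branching_errors.csv")

# Predictors are binary indicators for the complexity features the
# paper hypothesizes (simplified here to dummy variables).
model = smf.logit(
    "commission_error ~ many_answer_categories + all_categories_branch"
    " + write_in_response + bottom_of_page + high_distance",
    data=df,
).fit()
print(model.summary())
```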

Factors that Influence Reading and Comprehension in Self-Administered Questionnaires

2000

In this paper we examine a particular type of item non-response that is especially perplexing to users of self-administered paper questionnaires. It is the tendency for branching instructions to be ignored, misread, or otherwise not appropriately followed so that item non-response (or, on other occasions, erroneous completion) occurs for follow-up questions that only some respondents are expected to answer. We...

The Questionnaire Looks Self-Explanatory, so to Save Time I’ll Just Skip the Skip Instructions

Survey Practice

Students in a first-year-experience course provided a Likert-scale rating of how much they learned about topics presented in that course. For comparative purposes, in another question they were asked to rate on the same four-point scale the academic courses taken that semester. The introduction instructed the respondent to leave a blank for the rating of any academic course not taken. With some exceptions, the number of students providing a rating for each of the academic courses exceeded the course's roster. The number of respondents in excess of the course census was positively related to how many had selected the lowest option: "almost nothing." Apparently, many respondents answered in a "satisficing" mode and did not bother to read the instructions to skip non-applicable items. We speculate that their thinking pattern was: I didn't take the course, so I didn't learn anything. It is recommended that if general skip instructions are to be used, they be made prominent by use of bolding, capitalization, underlining, etc., to focus respondents' attention. Alternatively, one may use a "does not apply" option for each item on the list.
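The comparison at the heart of this finding is straightforward to reproduce. Below is a sketch with invented numbers (the course names, counts, and rosters are all hypothetical) that computes each course's excess of raters over its roster and correlates it with the count of "almost nothing" responses.

```python
# Illustrative sketch of the comparison described above. All figures
# are invented; only the logic mirrors the abstract.
import pandas as pd
from scipy.stats import pearsonr

courses = pd.DataFrame({
    "course":         ["ENG101", "MAT110", "BIO105", "HIS120"],
    "ratings_given":  [212, 198, 240, 175],  # respondents who rated the course
    "roster":         [180, 190, 200, 170],  # students actually enrolled
    "almost_nothing": [40, 12, 55, 8],       # ratings at the lowest option
})

# Excess raters = people who rated a course they never took.
courses["excess"] = courses["ratings_given"] - courses["roster"]
r, p = pearsonr(courses["excess"], courses["almost_nothing"])
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```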

The Effects of Altering the Design of Branching Instructions on Navigational Performance in Census 2000

2001

It has been hypothesized that a number of languages (verbal, symbolic, and graphic paralanguage) combine to affect respondent perception and comprehension of branching instructions, and consequently, the navigational path respondents follow when completing a questionnaire. A pilot study with college students, in which these languages were altered in two distinct ways (the prevention and detection methods) and tested against the Census 2000 method of branching, provided evidence for this proposition (Redline and Dillman, in press). This paper reports on an experiment conducted in Census 2000 in which the two branching instructions from the classroom experiment were revised, and two additional instructions were developed (reverse printing the instruction and substituting the words “go to” for “skip to”) and tested against the Census 2000 version. In this paper, we report whether altering the languages of the branching instructions in the Census 2000 long form had an effect on mail respon...

Cautionary Tale: Skipping the Skip Instructions

Students in an orientation-to-college course first provided ratings of how much they had learned about the nonacademic topics addressed in that course. In another question, they were then asked to rate on the same Likert-type scale the academic courses they had taken that semester. The introduction instructed the respondent to leave a blank for any academic course that was not taken. Generally, the number of students providing a rating for each of the academic courses exceeded the course’s roster. The number of respondents in excess of the course census was positively related to how many respondents had selected the lowest option: “almost nothing.” Apparently, a considerable number of respondents failed to read the instructions to skip non-applicable items, and they selected that option because the course was not taken, and hence, they did not learn anything. It is recommended that general skip instructions not be used unless they are made extremely prominent.

Skipping Questions in School Exams: The Role of Non-Cognitive Skills on Educational Outcomes

2018

Economists, educators and policy-makers have become increasingly interested in the importance of socio-emotional skills for students’ performance. Conscientiousness, openness to experience, and emotional stability, among others, have been shown to be related to taking harder classes, graduating from high school, and earning higher grades. Understanding the nature of the accumulation of these skills and identifying education interventions that could boost them, however, has been restricted by the availability of objective and inexpensive measures of socio-emotional skills. This paper proposes an objective and relatively inexpensive proxy for students’ socio-emotional skills directly derived from test-taking behavior. The measure is the incidence of skipping questions on a statewide standardized test. This exam has no penalties for guessing and gives students as much time as they need to answer. We believe that skipping questions is related to a reduced level of important socio-emoti...
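The proxy itself is simple to compute from an item-response matrix. Below is a toy version with an invented response matrix (NaN marking a question left blank); only the skip-rate calculation reflects the measure described in the abstract.

```python
# Toy version of the skip-incidence proxy: share of items left blank
# per student. The response matrix here is an invented example.
import numpy as np
import pandas as pd

responses = pd.DataFrame(
    [[1, 0, 1, np.nan, 1],
     [np.nan, np.nan, 0, 1, np.nan],
     [1, 1, 1, 1, 0]],
    columns=[f"q{i}" for i in range(1, 6)],
)

# Proportion of skipped (blank) questions for each student.
skip_rate = responses.isna().mean(axis=1)
print(skip_rate)
```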

Prior Exposure to Instructional Manipulation Checks does not Attenuate Survey Context Effects Driven by Satisficing or Gricean Norms

Instructional manipulation checks (IMCs) are frequently included in unsupervised online surveys and experiments to assess whether participants pay close attention to the questions. However, IMCs are more than mere measures of attention; they also change how participants approach subsequent tasks, increasing attention and systematic reasoning. We test whether these previously documented changes in information processing moderate the emergence of response effects in surveys by presenting an IMC either before or after questions known to produce classic survey context effects. When the items precede an IMC, familiar satisficing as well as conversational effects replicate. More importantly, their pattern and size do not change when the items follow an IMC, in contrast to experiments with reasoning tasks. Given a power of 82% to 98% to detect an effect of d = .3, we conclude that prior exposure to an IMC is unlikely to increase or attenuate these types of context effects in surveys.
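The reported sensitivity (82% to 98% power to detect d = .3) can be sanity-checked with a standard power calculation. The sketch below does this for a two-group comparison; the per-group sample sizes are assumptions chosen only to bracket the reported range, not the study's actual cell sizes.

```python
# Back-of-the-envelope power check for detecting d = 0.3 in a
# two-group design. Group sizes below are hypothetical.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n_per_group in (200, 400):
    power = analysis.power(effect_size=0.3, nobs1=n_per_group,
                           ratio=1.0, alpha=0.05)
    print(f"n = {n_per_group} per group -> power = {power:.2f}")
```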

The impact of next and back buttons on time to complete and measurement reliability in computer-based surveys

2010

Purpose: To assess the impact of including next and back buttons on response burden and measurement reliability of computer-based surveys. Methods: A sample of 807 participants (mean age of 53; 64% women; 83% non-Hispanic white; 81% some college or college graduates) from the YouGov Polimetrix panel was administered 56 items assessing performance of social/role activities and 56 items measuring satisfaction with social/role activities. Participants were randomly assigned to either (1) automatic advance to the next question with no opportunity to go back (auto/no back); (2) automatic advance to the next question with an opportunity to go back (auto/back); (3) a next button to go to the next question with no opportunity to go back (next/no back); or (4) a next button to go to the next question with an opportunity to go back (next/back). Results: We found no difference in missing data, internal consistency reliability, or domain scores by group. Time to complete the survey was about 50% longer when respondents were required to use a next button to go on. Conclusions: Given the similarity in missing data, reliability, and mean scale scores with or without use of the next button, we recommend automatic advancement to the next item with the option to go back to the previous item.
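The internal-consistency comparison described in the Results can be illustrated with Cronbach's alpha computed separately per randomized condition. The sketch below assumes a hypothetical data layout (one row per respondent, item columns prefixed "item_", a "condition" column); it is not the study's actual analysis code.

```python
# Sketch: Cronbach's alpha per navigation condition.
# File name and column layout are assumptions.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal-consistency reliability for a respondent x item matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

df = pd.read_csv("navigation_experiment.csv")
item_cols = [c for c in df.columns if c.startswith("item_")]
for condition, grp in df.groupby("condition"):  # e.g. auto/back, next/no back
    print(condition, round(cronbach_alpha(grp[item_cols]), 3))
```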