Integrating Responses across Surveys Using Item Response Theory

Evaluating Surveys and Questionnaires

Critical Thinking in Psychology, 2006

Much of what we know about human behavior is based on self-reports. When we want to learn about individuals' health behaviors, consumer habits, family problems, media consumption, values or political beliefs, we ask appropriate questions. The answers provided to these questions serve as input into scientific analyses and provide the basis of statistical indicators used to describe the state of a society. Obviously, these data are only as meaningful as the questions we ask and the answers we receive. Moreover, whom we ask is of crucial importance to our ability to draw conclusions that extend beyond the particular people who answered our questions. Accordingly, the processes underlying question answering and the appropriate selection of respondents are of great importance to many areas of social research.

Approaches for Analyzing Survey Data: a Discussion

An increasing number of researchers are analyzing survey microdata to describe relationships in a target population. Researchers face many statistical issues, some of which seem controversial because of differing approaches advocated in the statistical literature. Because these researchers come from a variety of disciplines, they often do not have a good understanding of the statistical underpinnings of the alternative methods for analyzing survey data obtained from a complex design. In this paper we discuss a model-design-based framework within which the alternative methods may be clarified. The integration of data from more than one survey is provided as an example.

Development of an international survey attitude scale: measurement equivalence, reliability, and predictive validity

Measurement Instruments for the Social Sciences

Declining response rates worldwide have stimulated interest in understanding what may be influencing this decline and how it varies across countries and survey populations. In this paper, we describe the development and validation of a short 9-item survey attitude scale that measures three constructs thought by many scholars to be related to decisions to participate in surveys: survey enjoyment, survey value, and survey burden. The survey attitude scale is based on a literature review of earlier work by multiple authors. Our overarching goal with this study is to develop and validate a concise and effective measure of how individuals feel about responding to surveys that can be implemented in surveys and panels to understand willingness to participate in surveys and improve survey effectiveness. The research questions relate to the factor structure, measurement equivalence, reliability, and predictive validity of the survey attitude scale. The data came from three...

A Critique of the Conventional Methods of Survey Item Transformations, with an Eye to Quantification

The Pope of Happiness, 2021

Our contribution to the Festschrift for Professor Ruut Veenhoven in celebration of his many contributions to happiness and social indicators research over the past fifty years focuses on the theme of making valid claims based on questionnaire response data from social surveys and assessments. In a series of publications since the early 1990s, Veenhoven has been calling for the development of survey research methods, as well as statistical and mathematical techniques, aimed at constructing reliable and valid measures to assess progress in societies using large-scale questionnaire studies among samples of the general population in different countries (Veenhoven 1993, 2008). The motivation for his call to action stems from an observation many of us have made: various surveys ask about the same topic, but the survey questions may not be the same. This difference could be due to personal preferences or styles for writing survey questions within the same country, or to cross-cultural or cross-lingual differences between countries. Even a cursory review of different social surveys shows that there is a mélange of ways of asking about and recording a survey respondent's characteristics such as age, not to mention the jumble of survey questions and response formats for measures of wellbeing and quality of life. Veenhoven remarked on several occasions that this methodological question was motivated by his need to prepare the data in his World Database of Happiness for a research synthesis such as a meta-analysis. The fundamental problem comes down to the fact that the questions that make up social surveys have response formats (also referred to as response rating scales) that may differ for different questionnaires within or between countries. The different response options of the rating scales for various surveys are not a problem from a statistical point of view when analyses are...
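A conventional transformation of the kind this chapter critiques is the linear stretch, which maps each rating scale's range onto a common range (often 0-10) so that items with different response formats can be pooled. A minimal sketch, with invented example responses (this is a generic illustration, not the chapter's own procedure):

```python
def linear_stretch(x: float, lo: float, hi: float,
                   new_lo: float = 0.0, new_hi: float = 10.0) -> float:
    """Linearly map a response x from its native scale [lo, hi] onto [new_lo, new_hi]."""
    return new_lo + (x - lo) * (new_hi - new_lo) / (hi - lo)

# The top category of a 1-4 verbal happiness scale vs. a 9 on a 0-10 numeric scale:
print(linear_stretch(4, 1, 4))   # → 10.0
print(linear_stretch(9, 0, 10))  # → 9.0
```

The transformation assumes the categories are equally spaced and the scale endpoints mean the same thing across instruments, which is precisely the kind of assumption a quantification-minded critique calls into question.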

Mechanics of Survey Data Analysis

Employee Surveys and Sensing, 2020

In this chapter the authors outline important issues to consider when setting up a survey so that the administration team or a third party vendor is able to conduct meaningful analyses afterward. They discuss decisions surrounding data structure, confidential versus anonymous surveys, and missing data and reporting elements. They also discuss understanding variability, item history, and using norms and benchmark data. The latter part of the chapter focuses on model-building. In summary, it is critical to establish a purpose for a survey and to have a clear strategy for data analysis at the outset to facilitate leaders’ understanding of the data and their ability to act afterward.

Survey research: Process and limitations

International Journal of Therapy and Rehabilitation, 2009

Survey research is a non-experimental research approach used to gather information about the incidence and distribution of, and the relationships that exist between, variables in a predetermined population. Its uses include the gathering of data related to attitudes, behaviours and the incidence of events. Survey research in one form or another has existed for over two millennia with the population census of Caesar Augustus (St. Luke's Gospel) being an early example. For most modern researchers sample surveys are more cost effective and easier to undertake than population surveys when gathering information; however, this increases the risk of both representation and measurement errors. There are a number of different forms of survey research; however they all share common steps and common limitations. The purpose of this article is to discuss these steps with a view to highlighting some of the common difficulties.

The impact of administration mode on response effects in survey measurement

Applied Cognitive Psychology, 1991

The major differences between face-to-face and telephone interviews as well as self-administered questionnaires are reviewed and are related to the cognitive and communicative processes assumed to underlie the process of question answering. Based on these considerations, the impact of administration mode on the emergence of well-known response effects in survey measurement is discussed, and relevant experimental evidence is reported. It is concluded that administration mode affects the emergence of question order and context effects; the emergence of response order effects; the validity of retrospective reports; and the degree of socially desirable responding. The emergence of question wording and question form effects, on the other hand, appears to be relatively independent of administration mode. That the results of public opinion surveys can be significantly affected by the way in which questions are worded, the form in which they are presented, and the order or context in which they are asked is well known. While a considerable number of these influences have been documented in the literature (cf. Dijkstra and van der Zouwen, 1982; Payne, 1951; Schuman and Presser, 1981; Sudman and Bradburn, 1974 for reviews), the underlying cognitive processes have only recently received systematic attention (cf. Hippler, Schwarz, and Sudman, 1987; Jabine, Straf, Tanur, and Tourangeau, 1984; Schwarz and Sudman, in press for examples). Not surprisingly, all researchers agree that answering a survey question requires that respondents solve several tasks (see Strack and Martin, 1987; Tourangeau, 1984, 1987; Tourangeau and Rasinski, 1988 for detailed discussions). As a first step, respondents have to interpret the question to understand what is meant. If the question is an opinion question, they subsequently have to 'generate' an opinion on the issue.
To do so, they need to retrieve relevant information from memory to form a judgement.

Chapter 19 Statistical analysis of survey data

The fact that survey data are obtained from units selected with complex sample designs needs to be taken into account in the survey analysis: weights need to be used in analyzing survey data, and variances of survey estimates need to be computed in a manner that reflects the complex sample design. This chapter outlines the development of weights and their use in computing survey estimates and provides a general discussion of variance estimation for survey data. It deals first with what are termed "descriptive" estimates, such as the totals, means, and proportions that are widely used in survey reports. It then discusses three forms of "analytic" uses of survey data that can be used to examine relationships between survey variables, namely multiple linear regression models, logistic regression models, and multilevel models. These models form a set of valuable tools for analyzing the relationships between a key response variable and a number of other factors. In this chapter we give examples to illustrate the use of these modeling techniques and also provide guidance on the interpretation of the results.
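The weighted descriptive estimates the chapter opens with can be sketched in a few lines. This is a generic illustration with made-up values, not the chapter's own examples: each sampled unit carries a weight equal to the number of population members it represents, and totals and means are computed with those weights.

```python
import numpy as np

def weighted_total(y, w) -> float:
    """Weighted estimator of a population total (sum of w_i * y_i)."""
    return float(np.sum(np.asarray(w, float) * np.asarray(y, float)))

def weighted_mean(y, w) -> float:
    """Weighted (ratio) estimator of a population mean."""
    y, w = np.asarray(y, float), np.asarray(w, float)
    return float(np.sum(w * y) / np.sum(w))

# Hypothetical sample of 4 units: y is the study variable, w the design weight
# (each unit "represents" w_i population members).
y = np.array([100.0, 200.0, 300.0, 400.0])
w = np.array([10.0, 10.0, 20.0, 20.0])
print(weighted_total(y, w))           # → 17000.0
print(round(weighted_mean(y, w), 2))  # → 283.33
```

Note the weighted mean (283.33) differs from the unweighted sample mean (250.0) because the higher-valued units carry larger weights; variance estimation for such estimates additionally requires the stratification and clustering information of the design, which this sketch omits.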

Adjusting for mode of administration effect in surveys using mailed questionnaire and telephone interview data

Psychometric scales and ordinal categorical items are often used in surveys with different modes of administration like mailed questionnaires and telephone interviews. If items are perceived differently by people over the telephone this can be a source of bias. We describe how such bias can be tested using item response theory and propose a method to correct for such differences using multiple imputation and item response theory. The method is motivated and illustrated by analyzing data from an occupational health study using mailed questionnaire and telephone interview data to evaluate job influence.
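The paper's actual method combines item response theory with multiple imputation. As a simplified, hypothetical stand-in for the bias test it describes, one can check whether administration mode still predicts an item response after conditioning on the underlying trait: here a likelihood-ratio test on simulated binary data where telephone administration inflates endorsement by 0.8 logits (all data, effect sizes, and the use of the true simulated trait rather than an estimated score are assumptions of this sketch).

```python
import numpy as np

def fit_logistic(X, y, n_iter=50):
    """Maximum-likelihood logistic regression via Newton-Raphson; returns (beta, loglik)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-X @ beta))
        W = p * (1 - p)
        H = (X * W[:, None]).T @ X                      # observed information
        beta += np.linalg.solve(H + 1e-8 * np.eye(X.shape[1]), X.T @ (y - p))
    p = 1 / (1 + np.exp(-X @ beta))
    loglik = np.sum(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    return beta, loglik

rng = np.random.default_rng(0)
n = 2000
theta = rng.normal(size=n)             # latent trait (e.g. perceived job influence)
mode = rng.integers(0, 2, size=n)      # 0 = mailed questionnaire, 1 = telephone
# Item endorsed more easily over the telephone: a built-in mode bias of +0.8 logits.
y = (rng.random(n) < 1 / (1 + np.exp(-(theta + 0.8 * mode)))).astype(float)

ones = np.ones(n)
_, ll0 = fit_logistic(np.column_stack([ones, theta]), y)            # no mode term
beta1, ll1 = fit_logistic(np.column_stack([ones, theta, mode]), y)  # with mode term
lr = 2 * (ll1 - ll0)          # ~ chi-square(1) under "no mode effect"
print(lr > 3.84)              # → True: the built-in bias is detected at the 5% level
```

In the paper's setting the detected shift would then be corrected, e.g. by imputing the responses that would have been observed under the reference mode; this sketch only covers the detection step.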