Integrating Responses across Surveys Using Item Response Theory

Approaches for Analyzing Survey Data: a Discussion

An increasing number of researchers are analyzing survey microdata to describe relationships in a target population. Researchers face many statistical issues, some of which seem controversial because of the differing approaches advocated in the statistical literature. Because these researchers come from a variety of disciplines, they often do not have a good understanding of the statistical underpinnings of the alternative methods for analyzing survey data obtained from a complex design. In this paper we discuss a model-design-based framework within which the alternative methods may be clarified. The integration of data from more than one survey is provided as an example.

Evaluating and Improving Item Response Theory Models for Cross-National Expert Surveys

SSRN Electronic Journal, 2015

With a core team that comprises almost ten staff members, and a project team across the world with four Principal Investigators, fifteen Project Managers, 30+ Regional Managers, 170 Country Coordinators, Research Assistants, and 2,500 Country Experts, the V-Dem project is one of the largest-ever social science research-oriented data collection programs.

Development of an international survey attitude scale: measurement equivalence, reliability, and predictive validity

Measurement Instruments for the Social Sciences

Declining response rates worldwide have stimulated interest in understanding what may be influencing this decline and how it varies across countries and survey populations. In this paper, we describe the development and validation of a short 9-item survey attitude scale that measures three constructs thought by many scholars to be related to decisions to participate in surveys: survey enjoyment, survey value, and survey burden. The survey attitude scale is based on a literature review of earlier work by multiple authors. Our overarching goal with this study is to develop and validate a concise and effective measure of how individuals feel about responding to surveys that can be implemented in surveys and panels to understand the willingness to participate in surveys and improve survey effectiveness. The research questions relate to factor structure, measurement equivalence, reliability, and predictive validity of the survey attitude scale. The data came from three...

Mechanics of Survey Data Analysis

Employee Surveys and Sensing, 2020

In this chapter the authors outline important issues to consider when setting up a survey so that the administration team or a third party vendor is able to conduct meaningful analyses afterward. They discuss decisions surrounding data structure, confidential versus anonymous surveys, and missing data and reporting elements. They also discuss understanding variability, item history, and using norms and benchmark data. The latter part of the chapter focuses on model-building. In summary, it is critical to establish a purpose for a survey and to have a clear strategy for data analysis at the outset to facilitate leaders’ understanding of the data and their ability to act afterward.

Survey research: Process and limitations

International Journal of Therapy and Rehabilitation, 2009

Survey research is a non-experimental research approach used to gather information about the incidence and distribution of, and the relationships that exist between, variables in a predetermined population. Its uses include the gathering of data related to attitudes, behaviours, and the incidence of events. Survey research in one form or another has existed for over two millennia, with the population census of Caesar Augustus (St. Luke's Gospel) being an early example. For most modern researchers, sample surveys are more cost-effective and easier to undertake than population surveys when gathering information; however, this increases the risk of both representation and measurement errors. There are a number of different forms of survey research; however, they all share common steps and common limitations. The purpose of this article is to discuss these steps with a view to highlighting some of the common difficulties.

The impact of administration mode on response effects in survey measurement

Applied Cognitive Psychology, 1991

The major differences between face-to-face and telephone interviews as well as self-administered questionnaires are reviewed and are related to the cognitive and communicative processes assumed to underlie the process of question answering. Based on these considerations, the impact of administration mode on the emergence of well-known response effects in survey measurement is discussed, and relevant experimental evidence is reported. It is concluded that administration mode affects the emergence of question order and context effects; the emergence of response order effects; the validity of retrospective reports; and the degree of socially desirable responding. The emergence of question wording and question form effects, on the other hand, appears to be relatively independent of administration mode. That the results of public opinion surveys can be significantly affected by the way in which questions are worded, the form in which they are presented, and the order or context in which they are asked is well known. While a considerable number of these influences have been documented in the literature (cf. Dijkstra and van der Zouwen, 1982; Payne, 1951; Schuman and Presser, 1981; Sudman and Bradburn, 1974 for reviews), the underlying cognitive processes have only recently received systematic attention (cf. Hippler, Schwarz, and Sudman, 1987; Jabine, Straf, Tanur, and Tourangeau, 1984; Schwarz and Sudman, in press for examples). Not surprisingly, all researchers agree that answering a survey question requires that respondents solve several tasks (see Strack and Martin, 1987; Tourangeau, 1984, 1987; Tourangeau and Rasinski, 1988 for detailed discussions). As a first step, respondents have to interpret the question to understand what is meant. If the question is an opinion question, they subsequently have to 'generate' an opinion on the issue. To do so, they need to retrieve relevant information from memory to form a judgement.

Chapter 19 Statistical analysis of survey data

The fact that survey data are obtained from units selected with complex sample designs needs to be taken into account in the survey analysis: weights need to be used in analyzing survey data and variances of survey estimates need to be computed in a manner that reflects the complex sample design. This chapter outlines the development of weights and their use in computing survey estimates and provides a general discussion of variance estimation for survey data. It deals first with what are termed "descriptive" estimates, such as the totals, means, and proportions that are widely used in survey reports. It then discusses three forms of "analytic" uses of survey data that can be used to examine relationships between survey variables, namely multiple linear regression models, logistic regression models and multi-level models. These models form a set of valuable tools for analyzing the relationships between a key response variable and a number of other factors. In this chapter we give examples to illustrate the use of these modeling techniques and also provide guidance on the interpretation of the results.
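The weighted "descriptive" estimates discussed in this chapter can be sketched as follows. This is a minimal illustration with hypothetical toy values, where each design weight is interpreted as the number of population units a respondent represents:

```python
import numpy as np

# Hypothetical toy data: a binary outcome y and design weights w,
# where each weight is the number of population units the respondent represents.
y = np.array([1.0, 0.0, 1.0, 1.0, 0.0])
w = np.array([120.0, 80.0, 150.0, 100.0, 50.0])

# Horvitz-Thompson-style weighted estimates of the population total and mean
total_hat = np.sum(w * y)          # estimated number of population units with y = 1
mean_hat = total_hat / np.sum(w)   # estimated population proportion
```

Note that variance estimation for such estimates must additionally reflect the stratification and clustering of the sample design, which this sketch omits.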

Adjusting for mode of administration effect in surveys using mailed questionnaire and telephone interview data

Psychometric scales and ordinal categorical items are often used in surveys with different modes of administration like mailed questionnaires and telephone interviews. If items are perceived differently by people over the telephone this can be a source of bias. We describe how such bias can be tested using item response theory and propose a method to correct for such differences using multiple imputation and item response theory. The method is motivated and illustrated by analyzing data from an occupational health study using mailed questionnaire and telephone interview data to evaluate job influence.
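The kind of mode effect the authors test for can be sketched with a standard two-parameter logistic (2PL) item response function, where a mode-related bias appears as a shift in an item's difficulty parameter. The parameter values below are hypothetical, chosen only to show how the same respondent can have different endorsement probabilities under the two modes:

```python
import numpy as np

def p_endorse(theta, a, b):
    # 2PL item response function: probability that a respondent with
    # latent trait theta endorses an item with discrimination a and difficulty b
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = 0.5            # hypothetical respondent's latent trait level
a, b_mail = 1.2, 0.0   # hypothetical item parameters calibrated on mailed questionnaires
mode_shift = 0.4       # hypothetical mode effect: item is "harder" over the telephone

p_mail = p_endorse(theta, a, b_mail)
p_phone = p_endorse(theta, a, b_mail + mode_shift)
```

In an actual analysis such a shift would be detected as differential item functioning across modes and then corrected, e.g., via the multiple-imputation approach the paper proposes.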

International Handbook of Survey Methodology

The purpose of the EAM book series is to advance the development and application of methodological and statistical research techniques in social and behavioral research. Each volume in the series presents cutting-edge methodological developments in a way that is accessible to a broad audience.

Attitudes toward Surveys: Development of a Measure and Its Relationship to Respondent Behavior

Organizational Research Methods, 2001

Attitudes toward surveys were conceptualized as having two relatively independent components: feelings about the act of completing a survey, called survey enjoyment, and perceptions of the value of survey research, called survey value. After developing a psychometrically sound measure, the authors examined how the measure related to respondent behaviors that directly impact the quality and quantity of data collected in surveys. With the exception of a response distortion index, survey enjoyment was generally related to all the respondent behaviors studied (item response rates, following directions, volunteering to participate in other survey research, timeliness of a response to a survey request, and willingness to participate in additional survey research). Survey value was related to item response rates, following directions, and willingness to participate in additional survey research. A respondent motivation and intentions explanation is provided. Although the identified effect ...

The Use of Item Response Theory in Survey Methodology: Application in Seat Belt Data

American Journal of Operations Research, 2018

Problem: Several approaches to analyzing survey data have been proposed in the literature. One method that is not popular in survey research methodology is item response theory (IRT). Because accurate prediction of behavior must be based on observed data, the design model must overcome computational challenges while also accommodating calibration and proficiency estimation; the IRT model offers these options. We review the model and apply it to observational survey data, then compare the findings with those of the more popular weighted logistic regression. Method: The IRT model is applied to data observed at 136 sites within the Commonwealth of Virginia over five years, collected under a two-stage systematic stratified probability-proportional-to-size sampling plan. Results: A relationship within the data is found and is confirmed by weighted logistic regression model selection. Practical Application: The IRT method may allow simplicity and better fit in prediction within a complex methodology; the model provides tools for survey analysis.
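The weighted logistic regression used as the comparison method above can be fit by Newton-Raphson (iteratively reweighted least squares) with the survey weights entering the score and Hessian. This is a minimal sketch on hypothetical toy data, not the paper's actual estimation procedure:

```python
import numpy as np

def weighted_logistic(X, y, w, iters=25):
    """Fit logistic regression with survey weights via Newton-Raphson (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = w * p * (1.0 - p)              # weighted IRLS working weights
        grad = X.T @ (w * (y - p))         # weighted score vector
        H = X.T @ (X * W[:, None])         # weighted information matrix
        beta += np.linalg.solve(H, grad)
    return beta

# Hypothetical toy data: intercept plus one covariate, equal weights
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 0.0, 1.0, 0.0, 1.0, 1.0])
X = np.column_stack([np.ones_like(x), x])
beta = weighted_logistic(X, y, np.ones_like(y))
```

With unequal design weights, the fitted coefficients shift toward the respondents representing more population units, which is the point of using the weights in design-based analysis.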

Evaluating survey questions: a comparison of methods

2010

This study compares five techniques for evaluating survey questions: expert reviews, cognitive interviews, quantitative measures of reliability and validity, and error rates from latent class models. It is the first such comparison that includes both quantitative and qualitative methods. We examined several sets of items, each consisting of three questions intended to measure the same underlying construct. We found low consistency across the methods in how they rank-ordered the items within each set. Still, there was considerable agreement between the expert ratings and the latent class method and between the cognitive interviews and the validity estimates. Overall, the methods yield different and sometimes contradictory conclusions with regard to the 15 items pretested. The findings raise the issue of whether results from different testing methods should agree.

Aggregate item response analysis

Psychometrika, 1988

A stochastic postulate is given for the multiple-item, successive-intervals scaling of populations. The logistic equivalent of this postulate provides an aggregate item response model in which a unidimensional submodel may be nested. This reduction provides a subtractive conjoint measurement of several items and stimuli on the same latent scale. Generalized-least-squares methods are used to estimate and test the multiple-item model, and its unidimensional reduction, on aggregate survey responses. The entire procedure is illustrated with an analysis of semantic-differential attitude data. This analysis exhibits an item selection procedure that is applicable to various social constructs.

Survey Mode as a Source of Instability in Responses across Surveys

Field Methods, 2005

Changes in survey mode for conducting panel surveys may contribute significantly to survey error. This article explores the causes and consequences of such changes in survey mode. The authors describe how and why the choice of survey mode often causes changes to be made to the wording of questions, as well as the reasons that identically worded questions often produce different answers when administered through different modes. The authors provide evidence that answers may change as a result of different visual layouts for otherwise identical questions and suggest ways to keep measurement the same despite changes in survey mode.

Using Mixed-Model Item Response Theory to Analyze Organizational Survey Responses: An Illustration Using the Job Descriptive Index

Organizational Research Methods, 2011

In this article, the authors illustrate the use of mixed-model item response theory (MM-IRT) and explain its usefulness for analyzing organizational surveys. The authors begin by giving an overview of MM-IRT, focusing on both technical aspects and previous organizational applications. Guidance is provided on how researchers can use MM-IRT to check scoring assumptions, identify the influence of systematic responding that is unrelated to item content (i.e., response sets), and evaluate individual and group difference variables as predictors of class membership. After summarizing the current body of research using MM-IRT to address problems relevant to organizational researchers, the authors present an illustration of the use of MM-IRT with the Job Descriptive Index (JDI), focusing on the use of the "?" response option. Three classes emerged: one most likely to respond in the positive direction, one most likely to respond in the negative direction, and another most likely to use the "?" response. Trust in management, job tenure, age, race, and sex were considered as correlates of class membership. Results are discussed in terms of the applicability of MM-IRT and future research endeavors.

An Empirical Test of Alternative Theories of Survey Response Behaviour

Market Research Society. Journal., 1999

This study examines the extent to which the theories of exchange, cognitive dissonance, self-perception, and commitment/involvement, when used to design surveys, can influence potential respondents to participate in a survey. The results from an experiment involving a total of 403 subjects in Hong Kong and Australia expand what is known about the role played by theory by examining consumer responses to participation requests made on the basis of each theoretical framework. Specific results support the relatively high positive impact of two of the frameworks that had been reported in a study of research practitioners.