Kristen Olson | University of Nebraska–Lincoln
Papers by Kristen Olson
Public Opinion Quarterly
Kirchner, Olson, & Smyth (2017). Survey interviewers are often tasked with assessing the quality of respondents' answers after completing a survey interview. These interviewer observations have been used to proxy for measurement error in interviewer-administered surveys. How interviewers formulate these evaluations, and how well they proxy for measurement error, have received little empirical attention. According to dual-process theories of impression formation, individuals form impressions about others based on the social categories of the observed person (e.g., sex, race) and on individual behaviors observed during an interaction. Although initial impressions start with heuristic, rule-of-thumb evaluations, systematic processing is characterized by extensive incorporation of available evidence. In a survey context, if interviewers default to heuristic information processing when evaluating respondent engagement, then we expect their evaluations to be based primarily on respondent characteristics and the stereotypes associated with those characteristics. Under systematic processing, on the other hand, interviewers process and evaluate respondents based on observable respondent behaviors occurring during the question-answering process. We use the Work and Leisure Today Survey, including survey data and behavior codes, to examine proxy measures of heuristic and systematic processing by interviewers as predictors of interviewer postsurvey evaluations of respondents' cooperativeness, interest, friendliness, and talkativeness. Our results indicate that CATI interviewers base their evaluations on actual behaviors during an interview (i.e., systematic processing) rather than on perceived characteristics of the respondent or the interviewer (i.e., heuristic processing).
These results are reassuring for the many surveys that collect interviewer observations as proxies for data quality.
This paper identifies new opportunities for innovation and expansion on current survey practice in the design of a new household panel survey, including increased use of new and mobile technologies, more frequent data collection, modified clustering, and use of non-traditional survey measures such as administrative data, planned missing/matrix sampling questionnaire designs, real-time data collection, and biomarkers. These innovative data collection methods require rethinking traditional panel survey methods, but they can help reduce respondent burden and expand current social science knowledge. The paper concludes that a new household panel survey would improve knowledge about important social, economic, and health issues facing the US, and would provide a useful test bed for new hypotheses and innovative methods of data collection.
In this paper, we evaluate the joint effects of question, respondent, and interviewer characteristics on response time in a telephone survey. We include question features traditionally examined, such as the length of the question and the format of response options, as well as features that have yet to be examined, related to the layout and format of interviewer-administered questions. We examine how these question features affect the time to ask and answer survey questions and how different interviewers vary in their administration of these questions. The paper uses paradata from the Work and Leisure Today survey and cross-classified random effects models. Overall, most of the variation in response time is due to question characteristics rather than respondent or interviewer attributes. Additionally, we find that question characteristics related to necessary survey design features and respondent confusion are the primary predictors of response time, with little effect of the visual design features of the question. We also find modest differences in the effects of question characteristics by interviewer experience.
"A feasibility test of using smartphones to collect GPS information in face-to-face surveys" (2015).
The University of Michigan Dioxin Exposure Study (UMDES) was undertaken in response to concerns among residents in the Midland, MI area that the historic discharge of polychlorinated dibenzo-p-dioxins (PCDDs) and polychlorinated dibenzofurans (PCDFs) from the Dow Chemical Company facilities had resulted in soil contamination in the Tittabawassee River floodplain and in the City of Midland, leading to an increase in residents' body burdens of these compounds. Dow Chemical has operated in Midland, MI ...
Environmental Science & Technology, 2008
The University of Michigan dioxin exposure study was undertaken to address concerns that the industrial discharge of dioxin-like compounds in the Midland, MI area had resulted in contamination of soils in the Tittabawassee River floodplain and downwind of the incinerator. The study was designed in a rigorously statistical manner, comprising soil measurements of 29 polychlorinated dibenzo-p-dioxin (PCDD), polychlorinated dibenzofuran (PCDF), and polychlorinated biphenyl (PCB) congeners from 766 residential properties, selected probabilistically, in the Midland area and in Jackson and Calhoun Counties (Michigan) as a background comparison. A statistical comparison determined that the geometric mean toxic equivalent (TEQ) levels in samples from the target populations were statistically significantly above background. In addition, the probabilities of being above the 75th and 95th percentiles of background were also greater. Congener contributions to the TEQ were dominated by 2,3,4,7,8-PeCDF and 2,3,7,8-TCDF in the floodplain and by 2,3,7,8-TCDD in the incinerator plume. However, PCB 126 was the top congener contributing to the background TEQ.
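The TEQ summaries described above rest on a simple weighted sum: each congener concentration is multiplied by its toxic equivalency factor (TEF) and the products are summed. A minimal sketch, using the WHO 2005 TEF values for the four congeners named above; the soil concentrations are invented purely for illustration, not study data:

```python
import math

# WHO 2005 toxic equivalency factors for the congeners discussed above.
TEF = {
    "2,3,7,8-TCDD": 1.0,
    "2,3,4,7,8-PeCDF": 0.3,
    "2,3,7,8-TCDF": 0.1,
    "PCB 126": 0.1,
}

def teq(conc_ppt):
    """TEQ = sum over congeners of (concentration x TEF)."""
    return sum(c * TEF[name] for name, c in conc_ppt.items())

# Invented soil concentrations (ppt), for illustration only.
samples = [
    {"2,3,7,8-TCDD": 2.0, "2,3,4,7,8-PeCDF": 40.0, "2,3,7,8-TCDF": 15.0, "PCB 126": 5.0},
    {"2,3,7,8-TCDD": 1.0, "2,3,4,7,8-PeCDF": 10.0, "2,3,7,8-TCDF": 4.0, "PCB 126": 8.0},
]
teqs = [teq(s) for s in samples]  # first sample: 2 + 12 + 1.5 + 0.5 = 16.0

# Soil contaminant data are right-skewed, hence the geometric mean summary
# used in the comparisons above.
geo_mean_teq = math.exp(sum(math.log(t) for t in teqs) / len(teqs))
```

The geometric mean, rather than the arithmetic mean, is the natural summary here because log-normal-like concentration data would otherwise be dominated by a few extreme properties.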
Journal of the Royal Statistical Society: Series A (Statistics in Society), 2010
Non-response weighting is a commonly used method to adjust for bias due to unit non-response in surveys. Theory and simulations show that, to reduce bias effectively without increasing variance, a covariate that is used for non-response weighting adjustment needs to be highly associated with both the response indicator and the survey outcome variable. In practice, these requirements pose a challenge that is often overlooked, because such covariates are often not observed or may not exist. Surveys have recently begun to collect supplementary data, such as interviewer observations and other proxy measures of key survey outcome variables. To the extent that these auxiliary variables are highly correlated with the actual outcomes, they are promising candidates for non-response adjustment. In the present study, we examine traditional covariates and new auxiliary variables for the National Survey of Family Growth. We provide empirical estimates of the association between proxy measures and response to the survey request, as well as the actual survey outcome variables. We also compare unweighted and weighted estimates under various non-response models. Our results from multiple surveys, with multiple recruitment protocols, from multiple organizations, on multiple topics, show the difficulty of finding suitable covariates for non-response adjustment and the need to improve the quality of auxiliary data.
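The mechanics behind such weighted estimates can be illustrated with a toy weighting-class adjustment (this is not the NSFG design; the single covariate, sample size, and logistic response model below are all invented for the sketch). An auxiliary variable x drives both response and the outcome, which is exactly the condition under which weighting on x reduces bias:

```python
import math
import random
import statistics

random.seed(42)

# Invented population: auxiliary covariate x is associated with both the
# outcome y and the propensity to respond.
N = 10_000
pop = []
for _ in range(N):
    x = random.gauss(0, 1)
    y = 2.0 + 1.5 * x + random.gauss(0, 1)
    p = 1 / (1 + math.exp(-(0.3 + 0.8 * x)))   # response propensity
    pop.append((x, y, random.random() < p))

respondents = [(x, y) for x, y, r in pop if r]
true_mean = statistics.mean(y for _, y, _ in pop)
unweighted = statistics.mean(y for _, y in respondents)  # biased upward

# Weighting-class adjustment: coarse classes on x; each respondent gets
# weight (full-sample count in class) / (respondent count in class).
def cls(x):
    return min(4, max(0, int(x + 2)))

full = [0] * 5
resp = [0] * 5
for x, _, r in pop:
    full[cls(x)] += 1
    if r:
        resp[cls(x)] += 1

num = den = 0.0
for x, y in respondents:
    w = full[cls(x)] / resp[cls(x)]
    num += w * y
    den += w
weighted = num / den  # much closer to the full-sample mean
```

If x were associated with response but not with y, the same weights would add variance without removing bias, which is the practical tension the abstract describes.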
Previous research has shown that offering respondents their preferred mode can increase response rates, but the effect of doing so on how respondents process and answer survey questions (i.e., measurement) is unclear. In this paper, we evaluate whether changes in question format have different effects on data quality for those responding in their preferred mode than for those responding in a non-preferred mode for three question types (multiple answer, open-ended, and grid). Respondents were asked about their preferred mode in a 2008 survey and were recontacted in 2009. In the recontact survey, respondents were randomly assigned to one of two modes such that some responded in their preferred mode and others did not. They were also randomly assigned to one of two questionnaire forms in which the format of individual questions was varied. On the multiple answer and open-ended items, those who answered in a non-preferred mode seemed to take advantage of opportunities to satisfice when the question format allowed or encouraged it (e.g., selecting fewer items in the check-all than in the forced-choice format and being more likely to skip the open-ended item when it had a larger answer box), while those who answered in a preferred mode did not. There was no difference on a grid-formatted item between those who did and did not respond by their preferred mode, but results indicate that a fully labeled grid reduced item missing rates vis-à-vis a grid with only column-heading labels. Results provide insight into the effect of tailoring to mode preference on commonly used questionnaire design features.
To increase the likelihood of response, many survey organizations attempt to provide sample members with a mode they are thought to prefer. Mode assignment is typically based on conventional wisdom or on results from mode choice studies that presented only limited options. In this paper, we draw heavily on research and theory from the mode effects and survey participation literatures to develop a framework for understanding which characteristics should predict mode preferences. We then test these characteristics using data from two different surveys. We find that measures of familiarity with and access to a mode are the strongest predictors of mode preference, while measures of safety concerns, physical abilities, and normative concerns are unexpectedly weak predictors. Our findings suggest that variables that may exist on sample frames can be used to inform the assignment of "preferred" modes to sample members.
Household surveys are increasingly moving toward self-administered modes of data collection. To maintain a probability sample of the population, researchers must use probability methods to select adults within households. However, very little experimental methodological work has been conducted on within-household selection in mail surveys. In this study, we experimentally examine four methods—the next-birthday method, the last-birthday method, selection of the youngest adult in the household, and selection of the oldest adult in the household—in two mail surveys of Nebraska residents (n = 2,498, AAPOR RR1 36.3 percent, and n = 947, AAPOR RR1 31.6 percent). To evaluate how accurately respondents were selected from among all adults in the household, we also included a household roster in the questionnaire for one of the surveys. We evaluated response rates, the completed sample composition resulting from the different within-household selection methods, and the accuracy of within-household selection. The analyses indicate that key demographics differed little across the selection methods, and that all of the within-household selection methods tend to underrepresent key demographic groups such as Hispanics and persons with lower levels of education. Rates of selection accuracy were low among the four selection methods analyzed, and the rates were similar across all four methods.
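Why birthday methods count as probability selection can be seen in a short simulation (a toy model, not the study's design: three-adult households, uniformly random birthdays, and a random survey date are all assumptions). With random birthdays, the next-birthday rule gives each adult roughly an equal chance of selection:

```python
import collections
import random

random.seed(11)

# Households of three adults, each with a uniformly random birthday
# (day-of-year 0..364). The next-birthday rule selects the adult whose
# birthday falls soonest after the (also random) survey date.
counts = collections.Counter()
trials = 30_000
for _ in range(trials):
    birthdays = [random.randrange(365) for _ in range(3)]
    survey_day = random.randrange(365)
    days_until = [(b - survey_day) % 365 for b in birthdays]
    counts[days_until.index(min(days_until))] += 1

# Each adult's share of selections should be near 1/3; this equal-probability
# property is what makes birthday methods quasi-random. Youngest-adult and
# oldest-adult rules, by contrast, are deterministic within a household.
shares = [counts[i] / trials for i in range(3)]
```

Note that this equal-probability property holds only if the household actually applies the rule; the accuracy problem studied in the abstract is precisely that many households do not.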
Household surveys are moving from interviewer-administered modes to self-administered modes for data collection, but many households do not accurately follow within-household selection procedures in mail surveys. In this article, we examine the accuracy of within-household selection using an oldest adult/youngest adult method in web, mail, and mixed-mode surveys. The frame for this study comes from a telephone survey conducted with Nebraska residents in which the oldest adult/youngest adult method is used to select the initial respondent. One year later, these telephone participants are followed up using identical household selection methods. This article examines characteristics of people who followed the selection procedures compared to those who did not.
West & Olson, "How Much of Interviewer Variance Is Really Nonresponse Error Variance?", Public Opinion Quarterly, 2010. The classical intra-interviewer correlation (ρ_int) provides survey researchers with an estimate of the effect of interviewers on variation in measurements of a survey variable of interest. This correlation is an undesirable product of the data collection process that can arise when answers from respondents interviewed by the same interviewer are more similar to each other than answers from other respondents, decreasing the precision of survey estimates. Estimation of this parameter, however, uses only respondent data. The potential contribution of variance in nonresponse errors between interviewers to the estimation of ρ_int has been largely ignored. Responses within interviewers may appear correlated because the interviewers successfully obtain cooperation from different pools of respondents, not because of systematic response deviations. This study takes a first step in filling this gap in the literature on interviewer effects by analyzing a unique survey data set, collected using computer-assisted telephone interviewing (CATI) from a sample of divorce records. This data set, which includes both true values and reported values for respondents and a CATI sample assignment that approximates interpenetrated assignment of subsamples to interviewers, enables the decomposition of interviewer variance in means of respondent reports into nonresponse error variance and measurement error variance across interviewers. We show that in cases where there is substantial interviewer variance in reported values, the interviewer variance may arise from nonresponse error variance across interviewers. (Additional interviewer-level information, such as interviewing experience, is not available.)
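The classical ρ_int can be estimated with a one-way ANOVA decomposition of respondent reports by interviewer. A minimal sketch on simulated data (the balanced workload, effect sizes, and random assignment below are assumptions for illustration, not the paper's divorce-records design):

```python
import random
import statistics

random.seed(7)

# Simulated CATI data: each interviewer contributes a shared shift u_i to
# the reports of respondents they interview, so reports from the same
# interviewer are correlated.
n_int, m = 50, 20            # interviewers, respondents per interviewer
sigma_u, sigma_e = 0.5, 1.0  # true rho_int = 0.25 / (0.25 + 1.0) = 0.2

groups = []
for _ in range(n_int):
    u = random.gauss(0, sigma_u)
    groups.append([5.0 + u + random.gauss(0, sigma_e) for _ in range(m)])

# One-way ANOVA (method-of-moments) estimator:
# rho_int = between-interviewer variance / total variance.
grand = statistics.mean(y for g in groups for y in g)
ms_between = m * sum((statistics.mean(g) - grand) ** 2 for g in groups) / (n_int - 1)
ms_within = sum((y - statistics.mean(g)) ** 2 for g in groups for y in g) / (n_int * (m - 1))
var_between = max(0.0, (ms_between - ms_within) / m)
rho_int = var_between / (var_between + ms_within)
```

The paper's point is that this estimator cannot tell whether the shared shift u_i reflects measurement error (interviewers eliciting different answers) or nonresponse error (interviewers recruiting different pools of respondents); separating the two requires true values, as in the divorce-records data set.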
Are we keeping the people who used to stay? Changes in correlates of panel survey attrition over time, 2011
As survey response rates decline, correlates of survey participation may also be changing. Panel studies provide an opportunity to study a rich set of correlates of panel attrition over time. We look at changes in attrition rates in the American National Election Studies from 1964 to 2004, a repeated panel survey with a two-wave pre-post election design implemented over multiple decades. We examine changes in attrition rates by three groups of variables: sociodemographic and ecological characteristics of the respondent and household, party affiliation and political and social attitudes recorded at the first interview, and paradata about the first wave interview. We find relatively little overall change in the pre-post election panel attrition rates, but important changes in demographic correlates of panel attrition over time. We also examine contact and cooperation rates from 1988 to 2004.
Analyzing paradata for measurement error evaluation, 2013
Analyzing Paradata to Investigate Measurement Error
Survey research has long grappled with the concept of survey mode preference: the idea that a respondent may prefer to participate in one survey mode over another. This article experimentally examines the effect of mode preference on response, contact, and cooperation rates; mode choice; and data collection efficiency. Respondents to a 2008 telephone survey (n = 1,811; AAPOR RR3 = 38 percent) were asked their mode preference for future survey participation. These respondents were subsequently followed up in 2009 with two independent survey requests. The first follow-up survey request was another telephone survey (n = 548; AAPOR RR2 = 55.5 percent). In the second follow-up survey (n = 565; AAPOR RR2 = 46.0 percent), respondents were randomly assigned to one of four mode treatments: Web only, mail only, Web followed by mail, and mail followed by Web. We find that mode preference predicts participation in Web and phone modes, cooperation in phone mode (where contact and cooperation can be disentangled), and the selection of a mode when given the option of two modes. We find weak and mixed evidence about the relationship between mode preference and reduction of field effort. We discuss the important implications these findings have for mixed mode surveys.
Improving Surveys with Paradata: Analytic Uses of Process Information, 2013
An Examination of Within-Person Variation in Response Propensity over the Data Collection Field Period, 2012
Statistical examinations of deterministic and stochastic response propensity assert that a sample case's propensity is determined by fixed respondent characteristics. The perspective of this article, that of dynamic response propensities, differs, viewing sample cases' propensities as evolving over the course of data collection. Each sample case begins the data collection period with a "base" response propensity. Each change in the data collection protocol that the survey organization subsequently makes might change that base propensity. This article examines four questions, including: (1) Is there any evidence that the average response propensities of sampled individuals vary over the data collection? (2) Is there any evidence that propensities are influenced by specific actions taken in the survey recruitment protocol?
Multiple Auxiliary Variables in Nonresponse Adjustment, 2011
Prior work has shown that effective survey nonresponse adjustment variables should be highly correlated with both the propensity to respond to a survey and the survey variables of interest. In practice, propensity models are often used for nonresponse adjustment with multiple auxiliary variables as predictors. These auxiliary variables may be positively or negatively associated with survey participation, they may be correlated with each other, and they can have positive or negative relationships with the survey variables. Yet the consequences of these conditions for nonresponse adjustment are not known to survey practitioners. Simulations are used here to examine the effects of multiple auxiliary variables with opposite relationships with survey participation and the survey variables. The results show that the bias and mean square error of adjusted respondent means differ substantially when the predictors' relationships with either propensity or the survey variables run in the same direction compared to when they run in opposite directions. Implications for nonresponse adjustment and responsive designs are discussed.
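The direction effects can be seen in a toy simulation (an invented data-generating process, not the paper's simulation design): two auxiliary variables both predict the outcome, but when they pull response propensity in opposite directions their biases offset, and adjusting on only one of them can make the estimate worse than no adjustment at all:

```python
import math
import random
import statistics

random.seed(3)

def simulate(a1, a2, n=20_000):
    """Return (true mean, unadjusted respondent mean, mean adjusted on x1 only)."""
    rows = []
    for _ in range(n):
        x1, x2 = random.gauss(0, 1), random.gauss(0, 1)
        y = x1 + x2 + random.gauss(0, 1)          # both covariates predict y
        p = 1 / (1 + math.exp(-(a1 * x1 + a2 * x2)))  # response propensity
        rows.append((x1, y, random.random() < p))
    true_mean = statistics.mean(y for _, y, _ in rows)
    resp = [(x1, y) for x1, y, r in rows if r]
    unadj = statistics.mean(y for _, y in resp)

    # Weighting-class adjustment using x1 alone (five coarse classes).
    cls = lambda v: min(4, max(0, int(v + 2)))
    full, rc = [0] * 5, [0] * 5
    for x1, _, r in rows:
        full[cls(x1)] += 1
        if r:
            rc[cls(x1)] += 1
    num = den = 0.0
    for x1, y in resp:
        w = full[cls(x1)] / rc[cls(x1)]
        num += w * y
        den += w
    return true_mean, unadj, num / den

same = simulate(1.0, 1.0)    # x1, x2 push propensity in the same direction
opp = simulate(1.0, -1.0)    # opposite directions: their biases offset
```

In the same-direction scenario, adjusting on x1 removes part of the bias; in the opposite-direction scenario, the unadjusted mean is nearly unbiased because the two covariates' effects cancel, and balancing x1 alone exposes the remaining x2 bias.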
Public Opinion Quarterly
Survey interviewers are often tasked with assessing the quality of respondents' answers after com... more Survey interviewers are often tasked with assessing the quality of respondents' answers after completing a survey interview. These interviewer observations have been used to proxy for measurement error in interviewer-administered surveys. How interviewers formulate these evaluations and how well they proxy for measurement error has received little empirical attention. According to dual-process theories of impression formation, individuals form impressions about others based on the social categories of the observed person (e.g., sex, race) and individual behaviors observed during an interaction. Although initial impressions start with heuristic, rule-of-thumb evaluations, systematic processing is characterized by extensive incorporation of available evidence. In a survey context, if interviewers default to heuristic information processing Kirchner, Olson, & Smyth POQ 2017 DO INTERVIEWER POSTSURVEY 2 when evaluating respondent engagement, then we expect their evaluations to be primarily based on respondent characteristics and stereotypes associated with those characteristics. Under systematic processing, on the other hand, interviewers process and evaluate respondents based on observable respondent behaviors occurring during the question-answering process. We use the Work and Leisure Today Survey, including survey data and behavior codes, to examine proxy measures of heuristic and systematic processing by interviewers as predictors of interviewer postsurvey evaluations of respondents' cooperativeness, interest, friendliness, and talkativeness. Our results indicate that CATI interviewers base their evaluations on actual behaviors during an interview (i.e., systematic processing) rather than perceived characteristics of the respondent or the interviewer (i.e., heuristic processing). 
These results are reassuring for the many surveys that collect interviewer observations as proxies for data quality.
This paper identifies new opportunities for innovation and expansion on current survey practice i... more This paper identifies new opportunities for innovation and expansion on current survey practice in the design of a new household panel survey, including an increased use of new and mobile technologies, more frequent data collection, modified clustering, and use of non-traditional survey measures such as administrative data, planned missing/matrix sampling questionnaire design, real-time data collection, and biomarkers. These innovative data collection methods require rethinking traditional panel survey methods, but can help reduce respondent burden and expand on current social science knowledge. The paper concludes that a new household panel survey would improve knowledge about important social, economic and health issues facing the US, and would provide a useful test bed for new hypotheses and innovative methods of data collection.
In this paper, we evaluate the joint effects of question, respondent, and interviewer characteris... more In this paper, we evaluate the joint effects of question, respondent, and interviewer characteristics on response time in a telephone survey. We include question features traditionally examined, such as the length of the question and format of response options, and features that have yet to be examined that are related to the layout and format of intervieweradministered questions. We examine how these question features affect the time to ask and answer survey questions and how different interviewers vary in their administration of these questions. This paper uses paradata from the Work and Leisure Today survey and uses cross-classified random effects models. Overall, most of the variation in response time is due to question characteristics, rather than respondent or interviewer attributes. Additionally, we find that question characteristics related to necessary survey design features and respondent confusion are the primary predictors of response time, with little effect of visual design features of the question.We also find modest differences in the effects of question characteristics by interviewer experience.
feasibility test of using smartphones to collect GPS information in face-to-face surveys" (2015).
The University of Michigan dioxin exposure study (UMDES) was undertaken in response to concerns a... more The University of Michigan dioxin exposure study (UMDES) was undertaken in response to concerns among residents in the Midland, MI area that the historic discharge of polychlorinated dibenzo-p-dioxins (PCDDs) and polychlorinated dibenzofurans (PCDFs) from the Dow Chemical Company facilities had resulted in soil contamination in the Tittabawassee River floodplain and in the City of Midland leading to an increase in residents' body burdens of these compounds. Dow Chemical has operated in Midland, MI ...
Environmental Science & Technology, 2008
The University of Michigan dioxin exposure study was undertaken to address concerns that the indu... more The University of Michigan dioxin exposure study was undertaken to address concerns that the industrial discharge of dioxin-like compounds in the Midland, MI area had resulted in contamination of soils in the Tittabawassee River floodplain and downwind of the incinerator. The study was designed in a rigorously statistical manner comprising soil measurements of 29 polychlorinated dibenzo-p-dioxins (PCDDs), polychlorinated dibenzofurans (PCDFs), and polychlorinated biphenyls (PCBs) from 766 residential properties, selected probabilistically, in the Midland area and in Jackson and Calhoun Counties (Michigan) as a background comparison. A statistical comparison determined that the geometric mean toxic equivalent (TEQ) levels in samples from the target populations were statistically significantly above background. In addition, the probabilities of being above the 75th and 95th percentiles of background were also greater. Congener contributions to the TEQ were dominated by 2,3,4,7,8-PeCDF and 2,3,7,8-TCDF in the floodplain and by 2,3,7,8-TCDD in the incinerator plume. However, PCB 126 was the top congener contributing to the background TEQ.
Journal of the Royal Statistical Society: Series A (Statistics in Society), 2010
Non-response weighting is a commonly used method to adjust for bias due to unit non-response in s... more Non-response weighting is a commonly used method to adjust for bias due to unit non-response in surveys. Theory and simulations show that, to reduce bias effectively without increasing variance, a covariate that is used for non-response weighting adjustment needs to be highly associated with both the response indicator and the survey outcome variable. In practice, these requirements pose a challenge that is often overlooked, because those covariates are often not observed or may not exist. Surveys have recently begun to collect supplementary data, such as interviewer observations and other proxy measures of key survey outcome variables. To the extent that these auxiliary variables are highly correlated with the actual outcomes, these variables are promising candidates for non-response adjustment. In the present study, we examine traditional covariates and new auxiliary variables for the National Survey of Family We provide empirical estimates of the association between proxy measures and response to the survey request as well as the actual survey outcome variables.We also compare unweighted and weighted estimates under various non-response models. Our results from multiple surveys with multiple recruitment protocols from multiple organizations on multiple topics show the difficulty of finding suitable covariates for non-response adjustment and the need to improve the quality of auxiliary data.
Previous research has shown that offering respondents their preferred mode can increase response ... more Previous research has shown that offering respondents their preferred mode can increase response rates, but the effect of doing so on how respondents process and answer survey questions (i.e., measurement) is unclear. In this paper, we evaluate whether changes in question format have different effects on data quality for those responding in their preferred mode than for those responding in a non-preferred mode for three question types (multiple answer, open-ended, and grid). Respondents were asked about their preferred mode in a 2008 survey and were recontacted in 2009. In the recontact survey, respondents were randomly assigned to one of two modes such that some responded in their preferred mode and others did not. They were also randomly assigned to one of two questionnaire forms in which the format of individual questions was varied. On the multiple answer and open-ended items, those who answered in a non-preferred mode seemed to take advantage of opportunities to satisfice when the question format allowed or encouraged it (e.g., selecting fewer items in the check-all than the forced-choice format and being more likely to skip the open-ended item when it had a larger answer box), while those who answered in a preferred mode did not. There was no difference on a grid formatted item across those who did and did not respond by their preferred mode, but results indicate that a fully labeled grid reduced item missing rates vis-à-vis a grid with only column heading labels. Results provide insight into the effect of tailoring to mode preference on commonly used questionnaire design features.
- See more at: https://ojs.ub.uni-konstanz.de/srm/article/view/5750#sthash.hLq5cLB0.dpuf
To increase the likelihood of response, many survey organizations attempt to provide sample membe... more To increase the likelihood of response, many survey organizations attempt to provide sample members with a mode they are thought to prefer. Mode assignment is typically based on conventional wisdom or results from mode choice studies that presented only limited options. In this paper we draw heavily on research and theory from the mode effects and the survey participation literatures to develop a framework for understanding what characteristics should predict mode preferences. We then test these characteristics using data from two different surveys. We find that measures of familiarity with and access to a mode are the strongest predictors of mode preference and measures of safety concerns, physical abilities, and normative concerns are unexpectedly weak predictors. Our findings suggest that variables that may exist on sample frames can be used to inform the assignment of “preferred” modes to sample members.
Household surveys are increasingly moving toward self-administered modes of data collection. To m... more Household surveys are increasingly moving toward self-administered modes of data collection. To maintain a probability sample of the population, researchers must use probability methods to select adults within households. However, very little experimental methodological work has been conducted on within-household selection in mail surveys. In this study, we experimentally examine four methods—the next-birthday method, the last-birthday method, selection of the youngest adult in the household, and selection of the oldest adult in the household—in two mail surveys of Nebraska residents (n = 2,498, AAPOR RR1 36.3 percent, and n = 947, AAPOR RR1 31.6 percent). To evaluate how accurately respondents were selected from among all adults in the household, we also included a household roster in the questionnaire for one of the surveys. We evaluated response rates, the completed sample composition resulting from the different within-household selection methods, and the accuracy of within-household selection. The analyses indicate that key demographics differed little across the selection methods, and that all of the within-household selection methods tend to underrepresent key demographic groups such as Hispanics and persons with lower levels of education. Rates of selection accuracy were low among the four selection methods analyzed, and the rates were similar across all four methods.
Household surveys are moving from interviewer-administered modes to self-administered modes for data collection, but many households do not accurately follow within-household selection procedures in mail surveys. In this article, we examine accuracy of within-household selection using an oldest adult/youngest adult method in web, mail, and mixed-mode surveys. The frame for this study comes from a telephone survey conducted with Nebraska residents in which the oldest adult/youngest adult method is used to select the initial respondent. One year later, these telephone participants are followed up using identical household selection methods. This article examines characteristics of people who followed the selection procedures compared to those who did not.
The classical intra-interviewer correlation (ρint) provides survey researchers with an estimate of the effect of interviewers on variation in measurements of a survey variable of interest. This correlation is an undesirable product of the data collection process that can arise when answers from respondents interviewed by the same interviewer are more similar to each other than answers from other respondents, decreasing the precision of survey estimates. Estimation of this parameter, however, uses only respondent data. The potential contribution of variance in nonresponse errors between interviewers to the estimation of ρint has been largely ignored. Responses within interviewers may appear correlated because the interviewers successfully obtain cooperation from different pools of respondents, not because of systematic response deviations. This study takes a first step in filling this gap in the literature on interviewer effects by analyzing a unique survey data set, collected using computer-assisted telephone interviewing (CATI) from a sample of divorce records. This data set, which includes both true values and reported values for respondents and a CATI sample assignment that approximates interpenetrated assignment of subsamples to interviewers, enables the decomposition of interviewer variance in means of respondent reports into nonresponse error variance and measurement error variance across interviewers. We show that in cases where there is substantial interviewer variance in reported values, the interviewer variance may arise from nonresponse error variance across interviewers.
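As a rough illustration of the quantity being estimated, ρint is conventionally computed from a one-way ANOVA decomposition of respondent answers by interviewer. The sketch below is the generic textbook estimator, not the decomposition used in this paper, and it assumes roughly equal interviewer workloads:

```python
import numpy as np

def rho_int(groups):
    """Estimate the intra-interviewer correlation rho_int as
    between-interviewer variance / (between + within variance),
    from a one-way ANOVA table. `groups` is a list of arrays, one
    per interviewer, holding that interviewer's respondents' answers.
    Assumes roughly similar workloads (a common simplification)."""
    k = len(groups)                       # number of interviewers
    n = sum(len(g) for g in groups)       # total respondents
    grand = np.mean(np.concatenate(groups))
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((g - np.mean(g)) ** 2).sum() for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    n0 = n / k  # average workload (an exact formula adjusts for imbalance)
    var_between = max((ms_between - ms_within) / n0, 0.0)
    return var_between / (var_between + ms_within)
```

Because this estimator sees only respondent data, interviewer-to-interviewer differences in who responds and interviewer-induced response deviations are confounded in `var_between` — exactly the ambiguity the paper's decomposition addresses.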
Are we keeping the people who used to stay? Changes in correlates of panel survey attrition over time, 2011
As survey response rates decline, correlates of survey participation may also be changing. Panel studies provide an opportunity to study a rich set of correlates of panel attrition over time. We look at changes in attrition rates in the American National Election Studies from 1964 to 2004, a repeated panel survey with a two-wave pre-post election design implemented over multiple decades. We examine changes in attrition rates by three groups of variables: sociodemographic and ecological characteristics of the respondent and household, party affiliation and political and social attitudes recorded at the first interview, and paradata about the first wave interview. We find relatively little overall change in the pre-post election panel attrition rates, but important changes in demographic correlates of panel attrition over time. We also examine contact and cooperation rates from 1988 to 2004.
Analyzing paradata for measurement error evaluation, 2013
Analyzing Paradata to Investigate Measurement Error
Survey research has long grappled with the concept of survey mode preference: the idea that a respondent may prefer to participate in one survey mode over another. This article experimentally examines the effect of mode preference on response, contact, and cooperation rates; mode choice; and data collection efficiency. Respondents to a 2008 telephone survey (n = 1,811; AAPOR RR3 = 38 percent) were asked their mode preference for future survey participation. These respondents were subsequently followed up in 2009 with two independent survey requests. The first follow-up survey request was another telephone survey (n = 548; AAPOR RR2 = 55.5 percent). In the second follow-up survey (n = 565; AAPOR RR2 = 46.0 percent), respondents were randomly assigned to one of four mode treatments: Web only, mail only, Web followed by mail, and mail followed by Web. We find that mode preference predicts participation in Web and phone modes, cooperation in phone mode (where contact and cooperation can be disentangled), and the selection of a mode when given the option of two modes. We find weak and mixed evidence about the relationship between mode preference and reduction of field effort. We discuss the important implications these findings have for mixed mode surveys.
Improving Surveys with Paradata: Analytic Uses of Process Information, 2013
An Examination of Within-Person Variation in Response Propensity over the Data Collection Field Period, 2012
Statistical examinations of deterministic and stochastic response propensity assert that a sample case's propensity is determined by fixed respondent characteristics. The perspective of this article, that of dynamic response propensities, differs, viewing sample cases' propensities as evolving over the course of the data collection. Each sample case begins the data collection period with a "base" response propensity. Each change in the data collection protocol that the survey organization subsequently makes might change that base propensity. This article examines four questions: (1) Is there any evidence that the average response propensities of sampled individuals vary over the data collection? (2) Is there any evidence that propensities are influenced in accordance with specific actions taken by the survey recruitment protocol?
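The idea of a base propensity that shifts with protocol changes can be illustrated with a toy simulation (all numbers are invented for illustration, not taken from the article): high-propensity cases respond early and leave the active pool, so the average propensity among remaining cases drifts down until a protocol change pushes it back up.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cases, n_attempts = 5000, 6

# Each sample case starts the field period with a latent "base" propensity.
base = rng.beta(2, 8, size=n_cases)

# A hypothetical protocol change before the 4th attempt (e.g. an incentive
# mailing) shifts every remaining case's propensity upward.
boost = np.where(np.arange(n_attempts) >= 3, 0.05, 0.0)

responded = np.zeros(n_cases, dtype=bool)
attempt_rates = []
for t in range(n_attempts):
    active = ~responded
    p = np.clip(base + boost[t], 0.0, 1.0)
    success = rng.random(n_cases) < p
    # Realized response rate among cases still in the active pool.
    attempt_rates.append(success[active].mean())
    responded |= active & success
```

In this setup the per-attempt rate declines over the first three attempts purely through selective attrition of cooperative cases, then jumps at attempt 4 — the two mechanisms the article's first two questions try to separate.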
Multiple Auxiliary Variables in Nonresponse Adjustment, 2011
Prior work has shown that effective survey nonresponse adjustment variables should be highly correlated with both the propensity to respond to a survey and the survey variables of interest. In practice, propensity models are often used for nonresponse adjustment with multiple auxiliary variables as predictors. These auxiliary variables may be positively or negatively associated with survey participation, they may be correlated with each other, and they can have positive or negative relationships with the survey variables. Yet the consequences of these conditions for nonresponse adjustment are not known to survey practitioners. Simulations are used here to examine the effects of multiple auxiliary variables with opposite relationships with survey participation and the survey variables. The results show that the bias and mean square error of adjusted respondent means differ substantially when the predictors' relationships with either the propensity or the survey variables run in the same direction compared to when they run in opposite directions. Implications for nonresponse adjustment and responsive designs are discussed.
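A stripped-down version of the kind of simulation described might look like the following sketch (the coefficients, sample size, and use of true rather than estimated propensities are simplifying assumptions, not the study's actual design):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Two auxiliary variables observed for the full sample (e.g. frame data).
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)

# Survey variable: both auxiliaries are positively related to y.
y = 1.0 + 0.8 * x1 + 0.8 * x2 + rng.normal(size=n)
full_mean = y.mean()

def simulate(b2):
    """Draw response indicators with x1 always pushing participation up
    and the sign of x2's effect set by b2; return the unadjusted and the
    propensity-weighted respondent means."""
    p = 1.0 / (1.0 + np.exp(-(0.5 * x1 + b2 * x2)))
    r = rng.random(n) < p
    unadj = y[r].mean()
    # Weight by the inverse of the true propensity; in practice the
    # propensity would be estimated, e.g. with logistic regression.
    adj = np.average(y[r], weights=1.0 / p[r])
    return unadj, adj

same = simulate(+0.5)      # both auxiliaries raise participation
opposite = simulate(-0.5)  # x2 lowers participation
```

When the two auxiliaries push participation in the same direction, the unadjusted respondent mean is badly biased and adjustment removes most of the bias; when they push in opposite directions, the biases partly cancel before any adjustment, so what adjustment accomplishes changes — the kind of contrast the simulations above are designed to quantify.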