What would you do? An investigation of stated-response data
Related papers
2005
This paper uses data from a social experiment in Honduras to estimate the educational impact on children participating in PRAF II, a social program with two components: a demand-side intervention – conditional cash transfers to families whose children attend school regularly – and a supply-side intervention – cash transfers to improve the quality of schools. The results of the difference-in-differences and cross-sectional regressions indicate that the conditional cash transfers increase school attendance and reduce dropout rates, but they show no impact from the supply-side intervention. The results of a Markov schooling transition model, used to assess the impact of the demand-side intervention, show that the program effectively facilitates progression through the grades. When we used a simulation method to evaluate the long-term impact of exposure to the program and a bootstrap method to test the statistical significance of our estimations, we found t...
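The difference-in-differences logic used in this abstract can be sketched as follows. All numbers here are illustrative, not taken from the PRAF II study: the estimator is simply the change in the treated group minus the change in the comparison group, which nets out time trends common to both groups.

```python
# Difference-in-differences sketch with made-up attendance rates.
# The DiD estimate is (treated change) - (comparison change), netting out
# any time trend shared by both groups.

def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
    """Return the difference-in-differences estimate."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical attendance proportions before and after the transfers.
did = diff_in_diff(treat_pre=0.70, treat_post=0.82,
                   control_pre=0.71, control_post=0.75)
print(round(did, 2))  # 0.08: an 8-percentage-point attendance gain
```

In a regression setting this same quantity is the coefficient on a treated-group × post-period interaction term, which also allows adding controls and clustered standard errors.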
Policy Research Working Papers, 2013
The Impact Evaluation Series has been established in recognition of the importance of impact evaluation studies for World Bank operations and for development in general. The series serves as a vehicle for the dissemination of findings of those studies. Papers in this series are part of the Bank's Policy Research Working Paper Series. The papers carry the names of the authors and should be cited accordingly. The findings, interpretations, and conclusions expressed in this paper are entirely those of the authors. They do not necessarily represent the views of the International Bank for Reconstruction and Development/World Bank and its affiliated organizations, or those of the Executive Directors of the World Bank or the governments they represent.
Non-response biases in surveys of school children: the case of the English PISA samples
2012
We analyse response patterns to an important survey of school children, exploiting rich auxiliary information on respondents' and non-respondents' cognitive ability that is correlated both with response and the learning achievement that the survey aims to measure. The survey is the Programme for International Student Assessment (PISA), which sets response thresholds in an attempt to control data quality. We analyse the case of England for 2000, when response rates were deemed high enough by the PISA organisers to publish the results, and 2003, when response rates were a little lower and deemed of sufficient concern for the results not to be published. We construct weights that account for the pattern of non-response using two methods: propensity scores and the GREG estimator. There is clear evidence of biases, but there is no indication that the slightly higher response rates in 2000 were associated with higher-quality data. This underlines the danger of using response rate thresholds as a guide to data quality.
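Propensity-score non-response weighting, one of the two methods the abstract names, can be sketched in a minimal form. This is an illustration of the general technique (weighting-class adjustment by an auxiliary variable), not the authors' exact specification: respondents are up-weighted by the inverse of the estimated response probability within cells of an auxiliary ability measure.

```python
# Inverse response-propensity weighting sketch (illustrative only; not the
# authors' exact model). Each respondent's weight is 1 / P(response), where
# the propensity is estimated within cells of an auxiliary ability band.

from collections import defaultdict

def response_propensity_weights(sample):
    """sample: list of (ability_band, responded) tuples.
    Returns one weight per responding unit: 1 / response rate in its band."""
    counts = defaultdict(lambda: [0, 0])  # band -> [respondents, total]
    for band, responded in sample:
        counts[band][1] += 1
        if responded:
            counts[band][0] += 1
    rates = {band: r / n for band, (r, n) in counts.items()}
    return [1.0 / rates[band] for band, responded in sample if responded]

# Toy data: low-ability pupils respond at 50%, high-ability at 80%, so
# low-band respondents receive larger weights to offset under-coverage.
sample = [("low", True), ("low", False), ("high", True), ("high", True),
          ("high", True), ("high", False), ("high", True)]
print(response_propensity_weights(sample))  # [2.0, 1.25, 1.25, 1.25, 1.25]
```

In practice the propensity would be estimated with a logistic regression on several auxiliary variables rather than a single banded cell, and the GREG estimator would further calibrate the weights to known population totals.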
Journal of the Royal Statistical Society Series A, 2012
The Zomba conditional cash transfer experiment: An assessment of its methodology
The effectiveness of using conditions in cash transfer programmes is currently one of the most contentious policy debates in social protection. However, to date, there has been no robust evidence on whether the use of conditions provides any additional benefit over providing cash alone. To address this question, between 2007 and 2009 the World Bank carried out an experiment in Zomba, Malawi, to test whether conditions made any difference to school enrolment and attendance among adolescent girls. Based on their findings, in 2010 the researchers – Sarah Baird, Craig McIntosh and Berk Ozler – published a paper indicating that conditions made no difference. By 2011 this conclusion had changed, with a revised paper claiming that the use of conditions not only increased enrolment and attendance but also improved learning outcomes. Unsurprisingly, the change in findings has generated a significant amount of interest. A careful reading of the 2010 and 2011 papers suggests that the conclusion that conditions work cannot be substantiated. It would appear that there were significant flaws in the methodology used by the researchers. While in both the 2010 and 2011 papers the original methodology indicated that conditions had no impact on enrolment, another methodology – which used a much smaller sample – indicated that they did. We suggest that it is not possible to conclude that the revised methodology is superior to the original one. Furthermore, given that the methodology used to measure impacts on attendance employed a very small sample and a flawed data source, no robust conclusions can be drawn on the impact of conditions on either enrolment or attendance.
Further potential flaws include: the research probably assessed the impact of conditions mainly on better-off families, rather than among the poor, which is of limited relevance for policy discussions; extreme scenarios were designed for the conditional and unconditional transfers, which do not reflect reality; the experiment incorporated a test of free education against fee-paying education, which will have distorted the results; there were relatively significant differences between the groups of girls in the conditional and unconditional programmes, and it is unclear whether this was reliably controlled for; and, finally, the methodology used to test whether conditions had an impact on educational attainment did not use a baseline, and the results appear merely to reflect inherent differences in ability between the girls in the conditional and unconditional groups. It is our view, therefore, that the study by Baird, McIntosh and Ozler provides no robust evidence on whether conditions work. It does, however, indicate that the use of conditions may negatively affect the mental health of adolescent girls and that conditional transfers are less effective than unconditional transfers in reducing child marriage. Both findings should be addressed more seriously in debates on the use of conditions.
Do conditional cash transfers increase schooling among adolescents?
International Economics and Economic Policy, 2021
In several Latin American countries, conditional cash transfer programmes are a proven means of alleviating poverty in the short term and promoting the education of children from disadvantaged families in the longer run. While the effectiveness of the Brazilian Bolsa Família for children's education outcomes up to 15 years of age has been widely documented, its contribution to the promotion of students of secondary school age has not been fully explored in light of the programme's expansion to 16-17-year-olds in 2008. In this paper, I draw on Brazilian National Household Sample Survey data and use a difference-in-differences approach already applied in research in the context of the Bolsa Família extension. Whereas these data were previously examined to detect intent-to-treat (ITT) effects due to insufficient information on treatment status, in this study I rely on a classifier method to additionally estimate average treatment effects on the treated who belong to families supposedly receiv...
Experimental Tests of Survey Responses to Expenditure Questions*
Fiscal Studies, 2009
This paper tests for a number of survey effects in the elicitation of expenditure items. In particular, we examine the extent to which individuals use features of the expenditure question to construct their answers. We test whether respondents interpret question wording as researchers intend and examine the extent to which prompts, clarifications and seemingly arbitrary features of survey design influence expenditure reports. We find that over one quarter of respondents have difficulty distinguishing between "you" and "your household" when making expenditure reports; that respondents report higher pro-rata expenditure when asked to give responses on a weekly as opposed to a monthly or annual time scale; that respondents give higher estimates when using a scale with a higher mid-point; and that respondents report higher aggregated expenditure when categories are presented in a disaggregated form. In summary, expenditure reports are constructed using convenient rules of thumb and available information, which will depend on the characteristics of the respondent, the expenditure domain and features of the survey question. It is crucial to account for these features in ongoing surveys. JEL Classification: D03, D12, C81, C93