Hertin, J., J. Turnpenny, A. Jordan, M. Nilsson, B. Nykvist, and D. Russel (2009). Rationalising the policy mess? The role of ex ante policy assessment and the utilisation of knowledge in the policy process, Environment and Planning A, 41(5), 1185-1200.

Rationalising the policy mess? Ex ante policy assessment and the utilisation of knowledge in the policy process

Environment and Planning A, 2009

Procedures for the ex ante assessment of public policies are currently in vogue across the OECD. Their design is typically informed by an instrumentally rational model of problem solving, which assumes that knowledge is collected, evaluated and then translated straightforwardly into 'better policies'. This model has, it seems, been little affected by more than three decades of academic research which has demonstrated that the reality of everyday policy-making is far messier. This paper analyses whether the uptake of ex ante assessment of policies is nonetheless capable of providing new opportunities for knowledge to inform processes of policy deliberation and learning. Drawing on an analysis of policy assessment procedures in three countries and the European Commission, it finds that there are several ways in which assessment knowledge is used in the policy process. Moreover, it argues that policy learning occurs despite, rather than because of, the instrumental design of the new assessment procedures, which tends to act as a barrier to open deliberation and knowledge utilisation.

The use and non-use of policy appraisal tools in public policy making: an analysis of three European countries and the European Union

Policy Sciences, 2008

The increasing complexity of policy problems, coupled with the political desire to base new policies on the foundation of firm evidence, has accelerated the development of policy assessment tools. These range from complex computer models and cost benefit analysis through simple checklists and decision trees. In the last decade, many governments have established formal policy assessment systems to harness these tools in order to facilitate more evidence-based policy making. These tools are potentially widely available, but to what extent are they used by policy makers and what becomes of the evidence that they generate? This paper addresses these questions by studying the empirical patterns of tool use across 37 cases in three European countries and the European Commission. It uses a simple classification of tools into advanced, formal and simple types. It finds that even when tools are embedded in policy assessment systems, their use is differentiated and on the whole very limited, in particular when it comes to more advanced tools. It then explores these patterns from contrasting theoretical perspectives to shed light on why, when and how different policy assessment tools are used in the policy process.

Policy evaluation meets harsh reality: Instrument of integration or preserver of disintegration?

Evaluation and Program Planning, 1994

It has become clear that the major problem facing policymakers is finding a means of integrating public policies so as to profit from their synergetic effects and thus square the impossible circle of the state's having to do more with less: more, thereby ensuring the continuing legitimacy of the social order in a context of crisis and global restructuring, with less money, due to the requirements of budgetary restraint. Over the last year CEOPS has conducted an inventory of the spread of evaluation practices in the EEC with reference to structural-type policies. Our objective has been to identify in each country the institutional and professional resources currently available for the production of evaluations. Structural policies are a good illustration of this quest for integrated, multidimensional policies that interweave, for example, economic, technological, and social policies. As such, they are attempts to replace traditional, sectorial industrial policies with an approach whereby the state reverts to articulating broad policy orientations, formulating the substance of policies, and surveying their implementation through partnerships with actors from decentralized tiers of government and sections of the private sector. Such policies, obviously, call for effective systems of coordination. These systems in turn require effective strategies of evaluation. This article is an attempt to draw lessons from a necessarily inductive exercise by using this material to construct a more scientific set of arguments and hypotheses.
From concrete examples of the particular problems posed by evaluations of structural policies, the authors build an argument to show that the question researchers in public policy need to pose is whether policy evaluation is being used as an instrument for the improved integration of policies, or rather as an agent useful to administrative systems for preserving the disintegration of policies, which has allowed these systems to reproduce themselves in the past.

Policy formulation, policy advice and policy appraisal: the distribution of analytical tools

The Tools of Policy Formulation, 2015

At its heart, policy analysis is what Gill and Saunders (1992, pp. 6-7) have characterized as 'a method for structuring information and providing opportunities for the development of alternative choices for the policymaker'. An important part of the process of policy formulation, policy analysis involves policy appraisal: providing information or advice to policymakers concerning the relative advantages and disadvantages of alternative policy choices (Mushkin 1977; Wildavsky 1979; Sidney 2007; Howlett et al. 2009). Such advice comes from a variety of different actors operating in a wide range of venues both internal and external to government. And policy workers operating in these venues employ many different types of analytical techniques or 'policy formulation tools' in this effort (Mayer et al. 2004; Colebatch et al. 2011). These tools generally are designed to help evaluate current or past practices and aid decision making by clarifying or eliminating some of the many possible alternative courses of action mooted in the course of policy formulation. They play a significant role in structuring policy-making activity and in determining the content of policy outputs and thus policy outcomes (Sidney 2007), and are a worthy subject of investigation in their own right. Unfortunately, although many works have made recommendations and suggestions for how formulation should be conducted (Vining and Weimer 2010; Dunn 2004), very few works have studied how it is actually practiced on the ground (Colebatch 2005 and 2006; Colebatch and Radin 2006; Noordegraaf 2011). This lack of knowledge is generally true of many of the tasks and activities involved in policy formulation (DeLeon 1992;

Governing policy evaluation? Towards a new typology

Evaluation, 2017

As policy evaluation matures, thoughts are turning to its governance. However, few scholars have combined insights from the evaluation and governance literatures to shed new light on this matter. In order to address this important gap, this article develops a new typology of ways to comprehend and perhaps ultimately govern ex-post policy evaluation activities. The article then explores its validity in the context of climate policy evaluation activities, a vibrant policy area in which the demand for and practices of evaluation have grown fast, particularly in Europe. The analysis reveals that the typology usefully guides new thinking, but also highlights important gaps in our empirical knowledge of the various modes of governing policy evaluation. The article identifies a need for a new research agenda that simultaneously develops a fuller understanding of these evaluation practices and the options for governing them.

The paradox of policy analysis: If it is not used, why do we produce so much of it?

Journal of Policy Analysis and Management, 1999

This article explores the apparent paradox that our society invests heavily in policy analysis when empirical studies, political science theory, and common wisdom all suggest that analysis is not used by policymakers to make better policy decisions. It offers a critique of the traditional view of policy analysis and presents an alternative view derived from contemporary literature on the policy process and decisionmaking. The alternative view suggests that there are legitimate uses for analysis other than the problem-solving use originally envisioned but apparently rarely attained. The two views imply different patterns of use of analysis by legislative committees, a contrast that I subject to an empirical test. An examination of quantitative data on policy analysis use by congressional committees from 1985 to 1994 lends support for the alternative view. The research has two implications. First, despite its scientific origins, policy analysis may be a more effective instrument of the democratic process than of the problem-solving process. Second, the profession of policy analysis may be in better shape than many who are calling for fundamental changes to its practice seem to believe.

Dunlop, C.A. and Radaelli, C.M. (2017) ‘If Evaluation is the Solution, What’s the Problem?’, in Buonanno, L. and Zahariadis, N. (eds) The Routledge Handbook of European Public Policy, Taylor & Francis.

Over the years, there has been a proliferation of initiatives, methods and tools for evaluation in the European Union (EU). In 2015, the Commission produced a set of integrated guidelines and a single toolbox for better regulation, with the ambitious aim of closing the policy cycle, that is, to draw on evaluation methods systematically from the stage of policy formulation to (a) the end of a project or (b) the moment of ex-post regulatory review. The idea of 'closing the policy cycle' is intuitively attractive, but in practice it raises issues of who is exercising control and oversight of different evaluation approaches and tools inside the Commission, the relationship between the Member States and the Commission, and the inter-institutional relations that define power within 'better regulation'. We examine across time the emergence of different types of evaluation (ex ante and ex post, regulatory evaluations and more traditional approaches to expenditure evaluation) as 'solutions', and associate them with problems. We find that the goal of closing the policy cycle is a very tall order for the Commission and the EU more generally, given the historical development of different problem-solution combinations. The rise of 'better regulation' provides the ideational cement for this re-configuration of evaluation 'to close the policy cycle', but there are critical issues with the tools, methods and scope of evaluation. In the end, the pieces do not yet fall into place, and the puzzle of 'evaluation for whom and for what purposes' has not been solved. This less-than-Cartesian puzzle, with its odd de-coupled pieces of different evaluations, is not efficient if the problem is to close the policy cycle. But ambiguity is organizationally acceptable if the problem is to generate local power equilibria that can be exploited within the Commission and externally.
Evaluation, in fact, is also a frame of reference and praxis in which the Member States, the Secretariat-General, the Directorates-General of the Commission, and the European Parliament test and constantly re-define the question of who has control over EU policy.